In statistics, both parametric and non-parametric tests are widely used. One of the best-known non-parametric tests is the Kolmogorov-Smirnov test, which allows you to check whether or not sample scores follow a normal distribution.
It belongs to the group of so-called goodness-of-fit tests. In this article we will look at its characteristics, what it is used for and how it is applied.
The Kolmogorov-Smirnov test is a non-parametric test. Non-parametric tests (also called distribution-free tests) are used in inferential statistics and have the following characteristics:
- They pose hypotheses about goodness of fit, independence …
- The level of measurement of the variables is low (ordinal).
- They don’t have excessive restrictions.
- They apply to small samples.
- They are robust.
Kolmogorov-Smirnov test: features
The Kolmogorov-Smirnov test belongs to the field of statistics, more precisely to inferential statistics. Inferential statistics aims to extract information about populations from samples.
It is a goodness-of-fit test: it is used to check whether the scores we obtained from the sample follow a normal distribution. In other words, it measures the degree of agreement between the distribution of a data set and a specific theoretical distribution. Its purpose is to indicate whether the data come from a population that has the specified theoretical distribution; that is, it tests whether the observations could reasonably have come from that distribution.
The Kolmogorov-Smirnov test answers the following question: Are the observations in the sample taken from a hypothetical distribution?
Null hypothesis and alternative hypothesis
As a goodness-of-fit test, it answers the question: “Does the (empirical) distribution of the sample match the (theoretical) distribution of the population?” Here the null hypothesis (H0) states that the empirical distribution matches the theoretical distribution (the null hypothesis is the one assumed true unless the data provide evidence against it). In other words, the null hypothesis is that the observed frequency distribution is consistent with the theoretical distribution (a good fit).
The alternative hypothesis (H1), on the other hand, states that the observed frequency distribution is not consistent with the theoretical distribution (a lack of fit). As in other hypothesis tests, the symbol α (alpha) indicates the significance level of the test.
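As a hedged sketch of how this decision rule plays out in practice (using Python with SciPy; the data below are simulated for illustration and are not taken from the article):

```python
import numpy as np
from scipy import stats

# Simulated, clearly non-normal data (exponential), for illustration only
rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=100)

alpha = 0.05  # significance level chosen before running the test

# KS test of the data against a standard normal distribution
result = stats.kstest(data, "norm", args=(0.0, 1.0))

# Decision rule: reject H0 when the p-value falls below alpha
if result.pvalue < alpha:
    print("Reject H0: the data do not fit the theoretical distribution")
else:
    print("Fail to reject H0: the fit is acceptable")
```

Because the simulated data are exponential rather than normal, the p-value here is far below any conventional α and H0 is rejected.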
How is it calculated?
The result of the Kolmogorov-Smirnov test is represented by the letter Z. Z is calculated from the largest difference (in absolute value) between the theoretical and observed (empirical) cumulative distribution functions.
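This largest-gap calculation can be sketched directly (a minimal illustration in Python with NumPy/SciPy; the sample is simulated and the reference distribution is a standard normal, both assumptions of this example rather than details from the article):

```python
import numpy as np
from scipy import stats

# Hypothetical sample, for illustration only
rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0, scale=1.0, size=50)

# Empirical CDF evaluated around each sorted sample point
x = np.sort(sample)
n = len(x)
ecdf_after = np.arange(1, n + 1) / n   # ECDF value just after each jump
ecdf_before = np.arange(0, n) / n      # ECDF value just before each jump

# Theoretical CDF (standard normal, parameters fixed in advance)
cdf = stats.norm.cdf(x, loc=0.0, scale=1.0)

# D is the largest absolute gap between empirical and theoretical CDFs,
# checked on both sides of each jump of the step-function ECDF
D = max(np.max(ecdf_after - cdf), np.max(cdf - ecdf_before))

# Cross-check against SciPy's implementation of the same statistic
D_scipy = stats.kstest(sample, "norm", args=(0.0, 1.0)).statistic
print(round(D, 6), round(D_scipy, 6))
```

The two values agree, since SciPy computes the same supremum of the gap between the two distribution functions.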
In order to apply the Kolmogorov-Smirnov test correctly, a number of assumptions must be met. Strictly, the test assumes that the parameters of the test distribution have been specified in advance; in practice, however, the procedure often estimates those parameters from the sample.
For example, the mean and standard deviation of the sample serve as the parameters of a normal distribution, the minimum and maximum values of the sample define the range of a uniform distribution, and the sample mean is the parameter of both the Poisson and the exponential distributions.
When parameters are estimated from the sample in this way, the ability of the Kolmogorov-Smirnov test to detect deviations from the hypothesized distribution may be severely impaired. For testing against a normal distribution with estimated parameters, the Lilliefors correction (KS-Lilliefors test) should be considered.
The Kolmogorov-Smirnov test can be applied to a sample to check whether a variable (for example, academic grades or income in €) is normally distributed. This is often worth knowing, because many parametric tests require that the variables they use follow a normal distribution.
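As a hedged sketch, a normality check of this kind might look as follows in Python with SciPy (the grade values are invented for illustration; note that plugging the sample's own mean and standard deviation into the KS test is exactly the situation where the Lilliefors correction mentioned above applies):

```python
import numpy as np
from scipy import stats

# Hypothetical academic grades (0-10 scale), invented for illustration
grades = np.array([5.1, 6.3, 7.0, 4.8, 6.9, 5.5, 7.4, 6.1, 5.8, 6.6,
                   4.9, 7.2, 6.0, 5.3, 6.7, 5.9, 6.4, 7.1, 5.6, 6.2])

# KS test against a normal distribution; strictly the parameters should be
# fixed in advance, but here they are estimated from the sample itself
result = stats.kstest(grades, "norm",
                      args=(grades.mean(), grades.std(ddof=1)))
print(f"D = {result.statistic:.4f}, p = {result.pvalue:.4f}")

# If statsmodels is available, the Lilliefors-corrected version can be used:
# from statsmodels.stats.diagnostic import lilliefors
# D_lf, p_lf = lilliefors(grades, dist="norm")
```

A large p-value here means we fail to reject H0 and may treat the variable as approximately normal for the purposes of a subsequent parametric test.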
Some of the advantages of the Kolmogorov-Smirnov test are:
- It is more powerful than the Chi-square (χ²) test (which is also a goodness-of-fit test).
- It is easy to calculate and use and does not require any data grouping.
- The statistic does not depend on the expected frequency distribution; it only depends on the sample size.
Differences from parametric tests
Parametric tests, unlike nonparametric tests such as the Kolmogorov-Smirnov test, have the following characteristics:
- They make assumptions about the parameters.
- The level of measurement of the variables is at least quantitative.
- A number of assumptions must be respected.
- They don’t lose information.
- They have high statistical power.
Some examples of parametric tests are the t test for mean differences and ANOVA.