Nonparametric statistics are alternative approaches that do not require most, or in some cases any, of these assumptions. They
are desirable in situations where your data has distributional issues, or where your
data is not continuous in the first place (e.g. where your data is fundamentally ordinal
in nature).
Nonparametric approaches do not concern themselves with the underlying measurement metric of
the dependent variable. Instead, most (but not all) nonparametric tests simply calculate the relative
rank of each observation’s dependent variable score among those of the other
observations. For instance, in
Figure 14.2 (Ranking of dependent variable for nonparametric comparison) I give a ranking of ten sales scores, where “1” means the highest sales level in
the group.
Nonparametric tests of this nature then simply compare the means or medians of the
ranks within the subgroups, rather than the averages of the original data.
Figure 14.2 also illustrates this process, showing the mean ranks for the ten salespeople. There are other
nonparametric tests that are not based on ranks; however, the key point to note is that these
tests often bypass the assumptions of parametric tests. Having said this, each nonparametric
test has its own assumptions, which you should consider.
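The rank-then-compare idea can be sketched in a few lines of Python. The sales figures and subgroup labels below are invented for illustration (they are not the data from Figure 14.2), and the two-group split is an assumption made purely to show the mechanics:

```python
# A minimal sketch of rank-based comparison, using invented sales
# figures for two hypothetical subgroups (not the Figure 14.2 data).

sales = [12, 45, 23, 67, 34, 89, 21, 56, 40, 15]  # hypothetical sales scores
group = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# Rank so that "1" means the highest sales level, matching the text.
order = sorted(range(len(sales)), key=lambda i: sales[i], reverse=True)
ranks = [0] * len(sales)
for r, i in enumerate(order, start=1):
    ranks[i] = r

# Compare the mean ranks within each subgroup, rather than the
# means of the raw sales data.
mean_rank_a = sum(r for r, g in zip(ranks, group) if g == "A") / group.count("A")
mean_rank_b = sum(r for r, g in zip(ranks, group) if g == "B") / group.count("B")
print(mean_rank_a, mean_rank_b)  # 5.8 and 5.2 for these invented figures
```

If the groups were drawn from identical distributions, the mean ranks would tend to be similar; a large gap in mean ranks is what rank-based tests formalise. (In practice, a library routine such as `scipy.stats.rankdata` also handles tied scores, which this sketch ignores.)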
If you intend to use a parametric (normal) comparison of categories at all, I always suggest (1) bootstrapping (so long as you have reasonably sized samples) and (2)
also running a nonparametric test, if one is available. You can then compare and contrast the
results. If the results differ markedly, you may have had a data problem
to begin with, and you should check. This is not always the case, but at the very
least it will force you to dig deeper into your analyses.