A $U$-classifier for high-dimensional data under non-normality

30 Jul 2016  ·  M. Rauf Ahmad, Tatjana Pavlenko

A classifier for two or more samples is proposed for high-dimensional data whose underlying distributions may be non-normal. The classifier is constructed as a linear combination of two easily computable and interpretable components, the $U$-component and the $P$-component. The $U$-component is a linear combination of $U$-statistics, which are averages of bilinear forms of pairwise distinct vectors from two independent samples. The $P$-component is the discriminant score and is a function of the projection of the $U$-component on the observation to be classified. Together, the two components constitute an inherently bias-adjusted classifier valid for high-dimensional data. The simplicity of the classifier makes it convenient to study its properties, including its asymptotic normal limit, and to extend it to the multi-sample case. The classifier is linear, but its linearity does not rest on the assumption of homoscedasticity. Probabilities of misclassification and asymptotic properties of their empirical versions are discussed in detail. Simulation results show that the proposed classifier is accurate for sample sizes as small as 5 or 7 and for arbitrarily large dimensions. Applications to real data sets are also demonstrated.
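As a rough illustration of this construction, the sketch below computes, for each sample, a within-sample $U$-statistic (the average of the bilinear forms $x_i' x_j$ over pairwise distinct observations) and combines it with the projection of a new observation on the mean difference to form a bias-adjusted linear score. The specific score formula, function names, and simulated data are illustrative assumptions only; the abstract does not give the paper's exact $U$- and $P$-components.

import numpy as np

def u_within(X):
    # Average of bilinear forms x_i' x_j over pairwise distinct i != j
    # within one sample X (rows = observations). This U-statistic is an
    # unbiased estimator of ||mu||^2, unlike ||x_bar||^2.
    n = X.shape[0]
    G = X @ X.T                       # n x n matrix of bilinear forms
    return (G.sum() - np.trace(G)) / (n * (n - 1))

def u_score(z, X1, X2):
    # Illustrative two-sample score (assumed form, not the paper's exact
    # definition): project the new observation z on the mean difference
    # and subtract a bias correction built from within-sample U-statistics.
    xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)
    return z @ (xbar1 - xbar2) - 0.5 * (u_within(X1) - u_within(X2))

def classify(z, X1, X2):
    # Assign z to sample 1 if the score is positive, else to sample 2.
    return 1 if u_score(z, X1, X2) > 0 else 2

# Small high-dimensional example: n = 5 per group, p = 200 (p >> n),
# heavy-tailed (non-normal) data with a mean shift in group 1.
rng = np.random.default_rng(0)
p = 200
X1 = rng.standard_t(df=5, size=(5, p)) + 0.5
X2 = rng.standard_t(df=5, size=(5, p))
z = rng.standard_t(df=5, size=p) + 0.5
print(classify(z, X1, X2))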


Categories


Statistics Theory