Dataset Concordia (targetcl. 1)

Basic characteristics Concordia (targetcl. 1)

400 target objects, 3600 outlier objects, 256 features.

The Concordia CENPARMI handwritten digits (16x16 pixels per image). Used in "Neural-Network Classifiers for Recognizing Totally Unconstrained Handwritten Numerals" by Cho, Sung-Bae. Download the mat-file with the Prtools dataset.

Unsupervised PCA Concordia (targetcl. 1)

On the left, the PCA scatterplot is shown; on the right, the retained variance for a varying number of features.
On the left, the PCA scatterplot of the data rescaled to unit variance; on the right, the corresponding retained variance.
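The retained-variance curve can be reproduced in a few lines. The sketch below is illustrative only: it uses random Gaussian data as a stand-in for the 400 digit images, with the same 256-feature dimensionality.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the 400 target digits: 400 samples, 256 features
# (the real data are 16x16 pixel images flattened to 256 values).
X = rng.normal(size=(400, 256))

# Centre the data; the singular values give the per-component variances.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
var = s**2 / (s**2).sum()

# Retained variance for a growing number of principal components,
# and the number of components needed to keep 95% of the variance.
retained = np.cumsum(var)
k95 = int(np.searchsorted(retained, 0.95)) + 1
print(f"components for 95% variance: {k95}")
```

The same 95%-variance cutoff is used as the "PCA 95%" preprocessing in the results table below.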

Supervised Fisher Concordia (targetcl. 1)

On the left, the Fisher scatterplot is shown; on the right, the ROC curve along this direction.
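A minimal sketch of this construction, assuming the standard Fisher direction w = Sw^{-1}(m_target - m_outlier) and an AUC computed from the projections via the Mann-Whitney rank statistic; the data, dimensionality, and class separation below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for the target and outlier classes.
Xt = rng.normal(loc=1.0, size=(400, 16))   # target class
Xo = rng.normal(loc=0.0, size=(3600, 16))  # outlier class

# Fisher direction: w = Sw^{-1} (mean_t - mean_o),
# with Sw the pooled within-class scatter.
mt, mo = Xt.mean(axis=0), Xo.mean(axis=0)
Sw = np.cov(Xt, rowvar=False) + np.cov(Xo, rowvar=False)
w = np.linalg.solve(Sw, mt - mo)

# Project both classes onto the Fisher direction.
st, so = Xt @ w, Xo @ w

# AUC along this direction via the Mann-Whitney rank statistic.
scores = np.concatenate([st, so])
order = np.argsort(scores)
ranks = np.empty(len(scores))
ranks[order] = np.arange(1, len(scores) + 1)
nt, no = len(st), len(so)
auc = (ranks[:nt].sum() - nt * (nt + 1) / 2) / (nt * no)
print(f"AUC along Fisher direction: {auc:.3f}")
```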

Results Concordia (targetcl. 1)

The experiments are performed using dd_tools. A rudimentary explanation of the classifiers is given in the classifier section.

531 target objects, 0 outliers; AUC (x100), 5x stratified 10-fold cross-validation.

Classifier               Preprocessing
                         none          unit var      PCA 95%
Gauss                    97.7 (0.0)    95.6 (0.0)    98.2 (0.0)
Min.Cov.Determinant       NaN (0.0)     NaN (0.0)    87.6 (0.0)
Mixture of Gaussians     99.6 (0.1)    91.8 (0.7)    99.3 (0.1)
Naive Parzen             95.6 (0.0)    95.6 (0.0)    98.5 (0.0)
Parzen                   98.9 (0.0)    92.2 (0.0)    99.1 (0.0)
k-means                  97.3 (0.1)    93.3 (0.1)    92.1 (0.2)
1-Nearest Neighbor       99.5 (0.0)    95.8 (0.0)    99.1 (0.0)
k-Nearest Neighbors      99.5 (0.0)    95.8 (0.0)    99.1 (0.0)
Nearest-neighbor dist    97.6 (0.0)    93.9 (0.0)    91.4 (0.0)
Principal comp.          99.7 (0.0)    96.3 (0.0)    98.5 (0.0)
Self-Organ. Map          98.7 (0.1)    93.9 (0.1)    96.4 (0.8)
Auto-enc network          NaN (0.0)     NaN (0.0)    93.9 (0.0)
MST                      99.5 (0.0)    95.8 (0.0)    99.1 (0.0)
L_1-ball                 11.1 (0.0)    11.1 (0.0)    97.6 (0.0)
k-center                 98.3 (0.3)    93.3 (0.1)    96.1 (0.4)
Support vector DD        99.5 (0.0)    41.1 (2.1)    99.1 (0.0)
Minimax Prob. DD         99.5 (0.0)    94.0 (0.0)    99.1 (0.0)
LinProg DD               98.8 (0.0)    93.9 (0.2)    97.5 (0.0)
Lof DD                   99.5 (0.0)    97.5 (0.0)    98.0 (0.0)
Lof range DD             99.4 (0.0)    96.0 (0.0)    98.5 (0.0)
Loci DD                  94.1 (0.0)    93.0 (0.0)    88.7 (0.0)
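As a rough sketch of the evaluation protocol (not the actual dd_tools code), the following fits a Gaussian data description on target training folds only and reports the mean and standard deviation of the AUC over 5 repetitions of stratified 10-fold cross-validation. The data, dimensionality, and regularisation constant are synthetic stand-ins:

```python
import numpy as np

def auc(scores_t, scores_o):
    """AUC via the Mann-Whitney rank statistic (targets should score higher)."""
    s = np.concatenate([scores_t, scores_o])
    r = np.empty(len(s))
    r[np.argsort(s)] = np.arange(1, len(s) + 1)
    nt, no = len(scores_t), len(scores_o)
    return (r[:nt].sum() - nt * (nt + 1) / 2) / (nt * no)

rng = np.random.default_rng(2)
# Synthetic stand-ins for the target and outlier objects.
Xt = rng.normal(loc=0.5, size=(400, 8))
Xo = rng.normal(loc=0.0, size=(3600, 8))

runs = []
for rep in range(5):                      # 5 repetitions ...
    it = rng.permutation(len(Xt))
    io = rng.permutation(len(Xo))
    fold_aucs = []
    for k in range(10):                   # ... of stratified 10-fold CV
        test_t, test_o = it[k::10], io[k::10]
        train_t = np.setdiff1d(it, test_t)
        # Gaussian data description: fit mean/covariance on the training
        # targets only, score by negated squared Mahalanobis distance.
        mu = Xt[train_t].mean(axis=0)
        C = np.cov(Xt[train_t], rowvar=False)
        Ci = np.linalg.inv(C + 1e-6 * np.eye(C.shape[0]))
        maha = lambda X: -np.einsum('ij,jk,ik->i', X - mu, Ci, X - mu)
        fold_aucs.append(auc(maha(Xt[test_t]), maha(Xo[test_o])))
    runs.append(np.mean(fold_aucs))

print(f"AUC (x100): {100 * np.mean(runs):.1f} ({100 * np.std(runs):.1f})")
```

This mirrors the table's "mean (std)" format: the standard deviation is taken over the 5 repetitions.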

Classifier projection spaces The first classifier projection spaces are obtained by computing the classifier label disagreements (setting the threshold at 10% target error) and applying MDS to the resulting distance matrix between classifiers:

[Figures: Original | Unit variance | PCA mapped]
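A minimal sketch of this construction, assuming binary accept/reject labels per classifier, a disagreement distance equal to the fraction of objects labelled differently, and classical (eigendecomposition-based) MDS; the classifier outputs and noise levels below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n_objects, n_classifiers = 500, 6
# Stand-in classifier outputs: accept/reject labels per test object,
# generated as increasingly noisy copies of a common base labelling.
base = rng.integers(0, 2, size=n_objects)
labels = np.array([np.where(rng.random(n_objects) < 0.05 * (i + 1),
                            1 - base, base) for i in range(n_classifiers)])

# Disagreement distance: fraction of objects on which two classifiers differ.
D = np.array([[np.mean(a != b) for b in labels] for a in labels])

# Classical MDS: double-centre the squared distances and take the
# top-2 eigenvectors as a 2-D embedding of the classifiers.
J = np.eye(n_classifiers) - np.ones((n_classifiers, n_classifiers)) / n_classifiers
B = -0.5 * J @ (D**2) @ J
vals, vecs = np.linalg.eigh(B)          # eigenvalues in ascending order
emb = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))
print(emb.shape)
```

Each row of `emb` is one classifier's position in the projection space; nearby classifiers make similar accept/reject decisions.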

Classifier projection spaces The second versions of the classifier projection spaces are obtained by computing the classifier ranking disagreements and applying MDS to the resulting distance matrix between classifiers:

[Figures: Original | Unit variance | PCA mapped]
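One way to measure the ranking disagreement is one minus the Spearman rank correlation between the classifiers' continuous output scores; the resulting distance matrix can then be embedded with the same classical MDS step. The sketch below uses synthetic scores with illustrative noise levels:

```python
import numpy as np

rng = np.random.default_rng(4)
n_objects, n_classifiers = 500, 6
# Stand-in continuous scores (e.g. one-class classifier outputs),
# built as increasingly noisy versions of a shared underlying score.
common = rng.normal(size=n_objects)
scores = np.array([common + 0.3 * (i + 1) * rng.normal(size=n_objects)
                   for i in range(n_classifiers)])

def ranks(x):
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(len(x))
    return r

# Ranking disagreement: one minus the Spearman rank correlation
# (Pearson correlation of the ranks) between each pair of classifiers.
R = np.array([ranks(s) for s in scores])
D = 1 - np.corrcoef(R)
print(np.round(D, 2))
```

Unlike the label disagreement above, this distance needs no threshold: it compares how the classifiers order the test objects.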