
Table 1 Performance comparison (including RF) of classifier ensembles derived from different pruning methods, i.e., best, chull, mvo, pfront, and rand, and of the single best classifier. For each method, the top base/ensemble classifier combination is used (see Fig. 2). Numbers are the mean performance over a 100-fold Monte Carlo cross-validation; the standard deviation (SD) is given in brackets. Mean and SD are rounded to 2 decimal places. Classifier ensembles are significantly better than the single best classifiers; in all but one case, Pareto frontier pruning (pfront) generates the best ensembles. Significance levels are as follows: ** p ≤ 0.001, * p ≤ 0.01, and . p ≤ 0.05; ns denotes not significant. Refer to Table 1 for the MCC values reported in the original studies

From: Unsupervised encoding selection through ensemble pruning for biomedical classification

| Dataset | best | chull | mvo | pfront | rand | single |
|---|---|---|---|---|---|---|
| acp_mlacp | 0.69 (±0.09) | 0.68 (±0.10) | 0.82. (±0.03) | 0.70 (±0.09) | 0.70 (±0.10) | 0.70 (±0.09) |
| aip_antiinflam | 0.48 (±0.07) | 0.47 (±0.07) | 0.45 (±0.04) | 0.48. (±0.06) | 0.45 (±0.06) | 0.47 (±0.05) |
| amp_antibp2 | 0.89 (±0.04) | 0.88 (±0.03) | 0.90ns (±0.01) | 0.89 (±0.03) | 0.89 (±0.03) | 0.87 (±0.04) |
| atb_antitbp | 0.70 (±0.11) | 0.69 (±0.10) | 0.72* (±0.07) | 0.72 (±0.09) | 0.72 (±0.10) | 0.70 (±0.11) |
| avp_amppred | 0.78 (±0.04) | 0.78 (±0.04) | 0.78 (±0.04) | 0.79** (±0.04) | 0.78 (±0.04) | 0.77 (±0.05) |
| cpp_mlcpp-complete | 0.76 (±0.04) | 0.77 (±0.04) | 0.78 (±0.06) | 0.78** (±0.04) | 0.78 (±0.04) | 0.76 (±0.04) |
| hem_hemopi | 0.87 (±0.05) | 0.87 (±0.05) | 0.84 (±0.04) | 0.88 (±0.05) | 0.89** (±0.05) | 0.87 (±0.05) |
| isp_il10pred | 0.56 (±0.08) | 0.56 (±0.07) | 0.57 (±0.07) | 0.57 (±0.07) | 0.57 (±0.07) | 0.58ns (±0.08) |
| nep_neuropipred | 0.79 (±0.05) | 0.79 (±0.05) | 0.80 (±0.05) | 0.81** (±0.04) | 0.80 (±0.04) | 0.79 (±0.04) |
| pip_pipel | 0.50 (±0.05) | 0.51 (±0.05) | 0.48 (±0.05) | 0.53** (±0.05) | 0.48 (±0.05) | 0.51 (±0.05) |
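The "mean (±SD) over a 100-fold Monte Carlo cross-validation" reported above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the toy data, the nearest-centroid stand-in classifier, and all parameter choices (100 resplits, 20% test fraction) are assumptions for demonstration; only the MCC metric and the Monte Carlo resampling scheme come from the caption.

```python
# Sketch: mean (±SD) of the Matthews correlation coefficient (MCC)
# over a 100-fold Monte Carlo cross-validation, i.e., 100 random
# train/test resplits of the same dataset.
import numpy as np

def mcc(y_true, y_pred):
    # Matthews correlation coefficient for binary labels in {0, 1}.
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def monte_carlo_cv(X, y, n_splits=100, test_frac=0.2, seed=0):
    # Repeatedly resplit the data at random; collect one MCC per split.
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_splits):
        idx = rng.permutation(len(y))
        n_test = int(len(y) * test_frac)
        test, train = idx[:n_test], idx[n_test:]
        # Nearest-centroid classifier as an illustrative base model.
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(X[test] - c0, axis=1)
        d1 = np.linalg.norm(X[test] - c1, axis=1)
        y_pred = (d1 < d0).astype(int)
        scores.append(mcc(y[test], y_pred))
    return float(np.mean(scores)), float(np.std(scores))

# Separable toy data (two Gaussian clusters), 100 samples per class.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(2, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)
mean_mcc, sd_mcc = monte_carlo_cv(X, y)
print(f"{mean_mcc:.2f} (±{sd_mcc:.2f})")  # same format as the table cells
```

Reporting the SD over the 100 resplits, as the table does, captures how sensitive each pruning method is to the particular train/test partition, which a single fixed split would hide.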