Fig. 3 From: Discovering feature relevancy and dependency by kernel-guided probabilistic model-building evolution

Transforming the input space into a higher-dimensional space via a nonlinear mapping Φ(·) may resolve nonlinearities with linear discriminants. Here a binary dataset (positive class = red, negative = blue) is visualised in two different subspaces: a line in \(\mathbb {R}^{1}\) and a parabola in \(\mathbb {R}^{2}\). The original data (ovals) is not linearly separable in \(\mathbb {R}^{1}\). The transformed data (squares) is separable in \(\mathbb {R}^{2}\) by an arbitrary linear discriminant (green). The mapping used was \(\Phi : x \mapsto (x, 1+x^{2})\).
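The effect described in the caption can be sketched numerically. The snippet below uses a small hypothetical 1-D dataset (the actual points in the figure are not given) where the positive class flanks the negative class, so no single threshold on x separates them in \(\mathbb {R}^{1}\); after applying the mapping Φ(x) = (x, 1+x²), the second coordinate alone admits a linear separator.

```python
import numpy as np

# Hypothetical 1-D dataset (not the figure's actual points):
# the positive class lies on both sides of the negative class,
# so no single threshold on x separates them in R^1.
pos = np.array([-2.0, -1.5, 1.5, 2.0])   # positive class (assumed)
neg = np.array([-0.5, 0.0, 0.5])          # negative class (assumed)

def phi(x):
    """The caption's nonlinear map: x -> (x, 1 + x^2)."""
    return np.stack([x, 1 + x**2], axis=-1)

# In R^2 the transformed data falls on a parabola, and the second
# coordinate alone separates the classes: the horizontal line y = 2
# is one arbitrary linear discriminant that works here.
threshold = 2.0
print((phi(pos)[:, 1] > threshold).all())  # every positive point above
print((phi(neg)[:, 1] < threshold).all())  # every negative point below
```

The separator y = 2 is arbitrary, echoing the green discriminant in the figure: any horizontal line strictly between 1.25 and 3.25 would separate this particular sample.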