Open Access
Conservation machine learning
BioData Mining volume 13, Article number: 9 (2020)
Ensemble techniques—wherein a model is composed of multiple (possibly) weaker models—are prevalent nowadays within the field of machine learning (ML). Well-known methods such as bagging, boosting, and stacking are ML mainstays, widely (and fruitfully) deployed on a daily basis. Generally speaking, there are two types of ensemble methods: the first generates models in sequence—e.g., AdaBoost—the second in a parallel manner—e.g., random forests and evolutionary algorithms.
AdaBoost (Adaptive Boosting) is an ML meta-algorithm that is used in conjunction with other types of learning algorithms to improve performance. The output of so-called “weak learners” is combined into a weighted sum that represents the final output of the boosted classifier. Adaptivity is obtained by tweaking subsequent weak learners in favor of those instances misclassified by previous classifiers. The maximum number of estimators at which boosting is terminated is a free parameter that must be carefully set by the user. The popular Scikit-learn Python package, used extensively within the ML community, sets this default value to 50.
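The setup above can be sketched in a few lines of Scikit-learn; this is our own illustration (not code from the paper), with the default of 50 weak learners made explicit:

```python
# Minimal sketch: AdaBoost with Scikit-learn's default of 50 weak learners.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# n_estimators=50 is the Scikit-learn default; boosting may also stop
# earlier if a perfect fit is reached.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```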
A random forest is an ensemble learning method that operates by constructing a multitude of decision trees at training time and then outputting the majority class (for classification problems) or mean prediction (for regression problems) of the individual trees. The number of trees is a free parameter set by the user; the default Scikit-learn value is 100 (up from 10 in past versions).
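As an illustrative aside (not from the paper), the forest's aggregation can be recovered by hand from its individual trees; note that Scikit-learn actually averages the trees' class probabilities (“soft” voting) rather than taking a hard majority vote:

```python
# Sketch: reconstructing a random forest's prediction from its trees.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=1)
rf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Scikit-learn averages per-tree class probabilities, then takes argmax.
proba = np.mean([t.predict_proba(X) for t in rf.estimators_], axis=0)
manual = rf.classes_[proba.argmax(axis=1)]
assert (manual == rf.predict(X)).all()  # matches the forest's own output
```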
An evolutionary algorithm is a population-based approach that inherently produces a cornucopia of models over generations of evolution. Most often one seeks a single, final model (or a Pareto set of models, when multiple objectives are sought). Yet, as eloquently suggested by Gagné et al. in their paper’s title, might we not obtain “Ensemble learning for free with evolutionary algorithms?” They proposed evolutionary ensemble learning, which extracts an ensemble either from the final population only or incrementally during evolution. Recently, Rodrigues et al. focused on genetic programming—wherein the individuals evolved are computational trees—introducing an ensemble coevolutionary algorithm that maintains two subpopulations, trees and forests, with the output model being a forest built as an ensemble of trees.
The number of models within an ensemble—the ensemble size—greatly impacts performance, yet there seems to be a dearth of studies addressing this issue. One recent theoretical study suggested that there is an ideal ensemble size under certain assumptions. In practice, the ensemble size is set to some default value, or attempts are made to optimize this value either a priori, through hyperparameter tuning, or online, through dynamic sizing. And beyond ensemble approaches, any ML technique we employ is, in practice, run multiple times, producing a deluge of models.
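The a-priori route can be sketched as follows—a hedged illustration of ours treating ensemble size as an ordinary hyperparameter tuned by cross-validated grid search (the candidate values are arbitrary):

```python
# Sketch: tuning the ensemble size (n_estimators) a priori via grid search.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [10, 50, 100, 200]},  # illustrative grid
    cv=3,
)
search.fit(X, y)
print(search.best_params_)  # the cross-validated "best" ensemble size
```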
We propose herein a different outlook altogether, seeking not an optimal ensemble size, but asking what might be accomplished if one is in possession of numerous models, either as an inherent part of the learning process, or simply due to many independent runs.
What we propose is simple in nature: Why not save—and possibly make use of—all the models? In a nod toward “save the trees” we designate this idea as conservation machine learning. Since we are expending considerable effort on producing models galore and evermore, why should we consign the vast majority of them to oblivion? Instead of considering most models as “failed” attempts along the way to a glorious winner, we look upon all as partial successes, possibly to be used beneficially at some future point.
Quite likely we shall end up with a large jungle of models rather than a small forest, a situation which may well require new thinking into the design of the ultimate answer to the problem at hand. In some cases, using classical approaches—e.g., majority voting (classification) or averaging (regression)—over the entire jungle might yield a good answer. In other cases we propose that the jungle could be cultivated, producing a garden of select models. Cultivation methods could be simple: select only models that meet certain criteria, or iteratively select models that improve performance; cultivation could also be more sophisticated, unleashing the full might of ML to produce meritorious gardens.
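One simple cultivation routine of the iterative kind mentioned above might look as follows; this is our own illustrative sketch (not the paper's method), which greedily grows a garden by adding, at each step, the jungle model whose inclusion most improves majority-vote accuracy on a validation set:

```python
# Sketch: greedy cultivation of a "garden" from a small "jungle" of trees.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# A small jungle: all trees from a few independent forest runs.
jungle = []
for seed in range(3):
    rf = RandomForestClassifier(n_estimators=20, random_state=seed).fit(X_tr, y_tr)
    jungle.extend(rf.estimators_)

preds = np.stack([m.predict(X_val) for m in jungle])  # cache validation votes

def garden_accuracy(idx):
    """Majority-vote accuracy of the models indexed by idx."""
    vote = preds[idx].mean(axis=0) > 0.5
    return (vote == y_val).mean()

garden, best = [], 0.0
for _ in range(10):  # grow the garden up to 10 models
    acc, j = max((garden_accuracy(garden + [j]), j)
                 for j in range(len(jungle)) if j not in garden)
    if acc <= best:  # stop when no candidate improves the garden
        break
    garden.append(j)
    best = acc
```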
We are delving into new territory, advocating, as we are, sizeable ensembles. We believe this may well be advantageous where sizeable ensembles are generated as par for the course, a common-enough occurrence. After all, why waste a good—or even not-so-good—model?
We need not content ourselves with a per-run conservation approach, collecting only models from a single run. We can conserve models over multiple runs, and perhaps over multiple users. Consider current practice whereby cloud repositories store datasets, leaderboards, and—infrequently—a few choice models; why not store a jungle of models created by multiple users? Not only will this provide copious grist for the ML mill but, moreover, the cost of creating these models is often high—ML algorithms consume significant amounts of energy—and limbo seems a somewhat unbecoming choice for their final resting place.
To drive this point home, think of the following scenario: Several research groups have been tackling an extremely hard problem (e.g., the analysis of UK Biobank data), each group running variegated ML algorithms over several months (maybe years). It would not be hard to imagine that the number of models produced over time would run into the millions (quite easily more). Most of these models would be discarded unflinchingly, with only a minute handful retained, and possibly reported upon in the literature. The question we raise is: Could we produce better answers to the problem at hand if we had recourse to all the waste? For example, PennAI—an accessible AI system and open-source software—saves models over multiple runs (and possibly over multiple users), affording it the ability to glean insight into parameters from them all.
Using Scikit-learn it was quite straightforward to set up an exploratory experiment, with our choice of ML approach being the popular random forest. Through the make_classification function we generated 10 datasets, each comprising 1000 samples and a varying number of features, informative features, and classes. For each dataset we performed 30 replicate experiments, each with 5-fold cross validation. For each fold the dataset was split into a training set of 4 folds, and the left-out test fold. The training set was used in 100 independent runs to train a random forest of size 100. All trees across the 100 runs were saved into a jungle, whose final size was thus 10,000. We then compared the performance of the forests vs. the jungle over the test set; our results are shown in Table 1. While random forests do not necessarily attain high performance, conservation machine learning often shows significant improvement, demonstrating that the idea has at least prima facie merit. We fervently invite further exploration.
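A scaled-down sketch of this setup (our own simplification, with far fewer runs and a single dataset and train/test split for speed) might read:

```python
# Simplified sketch: compare single forests against a "jungle" pooling
# all trees from several independent runs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

n_runs, forest_scores, jungle = 10, [], []
for seed in range(n_runs):
    rf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_tr, y_tr)
    forest_scores.append(rf.score(X_te, y_te))
    jungle.extend(rf.estimators_)  # conserve every tree from every run

# Jungle prediction: average the trees' class probabilities, as a forest does.
proba = np.mean([t.predict_proba(X_te) for t in jungle], axis=0)
jungle_score = (proba.argmax(axis=1) == y_te).mean()
print(np.mean(forest_scores), jungle_score)
```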
If one embraces a conservation approach to ML, other issues beyond those raised above will likely arise in short order. For example, of considerable interest nowadays are interpretability and explainability of AI-produced problem solvers. These issues are doubly important in the biomedical and healthcare fields, and will necessitate consideration under a conservation agenda.
It seems rather befitting to conclude with rapper and songwriter will.i.am who succinctly enunciated, “Waste is only waste if we waste it.”
Availability of data and materials
Our code is available at https://github.com/EpistasisLab.
This was intended as an exploratory experiment; we used random forests and jungles in the most straightforward manner, with default settings, a fairly modest jungle size, and no garden cultivation.
Breiman L. Bagging predictors. Mach Learn. 1996; 24(2):123–40.
Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. In: European Conference on Computational Learning Theory. Berlin: Springer: 1995. p. 23–37.
Wolpert DH. Stacked generalization. Neural Netw. 1992; 5(2):241–59.
Ho TK. Random decision forests. In: Proceedings of 3rd International Conference on Document Analysis and Recognition. Piscataway: IEEE: 1995. p. 278–82.
Sipper M, Olson RS, Moore JH. Evolutionary computation: the next major transition of artificial intelligence?. BioData Min. 2017; 10(1):26.
Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay E. Scikit-learn: Machine learning in Python. J Mach Learn Res. 2011; 12:2825–30.
Gagné C, Sebag M, Schoenauer M, Tomassini M. Ensemble learning for free with evolutionary algorithms? In: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation. New York: ACM Press: 2007. p. 1782–9.
Rodrigues NM, Batista JE, Silva S. Ensemble genetic programming. In: European Conference on Genetic Programming (Part of EvoStar). Cham: Springer: 2020. p. 151–66.
Bonab H, Can F. Less is more: A comprehensive framework for the number of components of ensemble classifiers. IEEE Trans Neural Netw Learn Syst. 2019; 30(9):2735–45.
García-Martín E, Rodrigues CF, Riley G, Grahn H. Estimation of energy consumption in machine learning. J Parallel Distrib Comput. 2019; 134:75–88.
Bycroft C, Freeman C, Petkova D, Band G, Elliott LT, Sharp K, Motyer A, Vukcevic D, Delaneau O, O’Connell J, et al. The UK biobank resource with deep phenotyping and genomic data. Nature. 2018; 562(7726):203–9.
Olson RS, Sipper M, La Cava W, Tartarone S, Vitale S, Fu W, Orzechowski P, Urbanowicz RJ, Holmes JH, Moore JH. A system for accessible artificial intelligence. In: Banzhaf W, Olson R, Tozier W, Riolo R, editors. Genetic Programming Theory and Practice XV. Cham: Springer: 2018. p. 121–34.
Plumb G, Molitor D, Talwalkar AS. Model agnostic supervised local explanations. In: Advances in Neural Information Processing Systems. Red Hook: Curran Associates: 2018. p. 2515–24.
Gunning D. Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA). 2017; 2.
Hara S, Hayashi K. Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach In: Storkey A, Perez-Cruz F, editors. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, vol. 84. Playa Blanca: PMLR: 2018. p. 77–85. http://proceedings.mlr.press/v84/hara18a/hara18a.pdf.
Competing interests
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Sipper, M., Moore, J.H. Conservation machine learning. BioData Mining 13, 9 (2020). https://doi.org/10.1186/s13040-020-00220-z