How to Use Ensemble Methods to Boost Predictive Analytics Accuracy
As in the real world, so with predictive analytics models: where there is unity, there is strength. Several models can be combined in different ways to make predictions. You can then apply the combined model (called an ensemble model) at the learning stage, at the classification stage, or at both stages.
Here’s one way to use an ensemble model:
Split the training data into several sets.
Have each of the individual models that make up the ensemble process its own portion of the data and learn from it.
Have each model produce its learning outcome from that data.
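The steps above can be sketched in a few lines of Python. The round-robin split and the stand-in "learner" below (which simply memorizes the most common label in its subset) are illustrative assumptions, not a specific library's API:

```python
from collections import Counter

def split_training_data(data, n_models):
    """Deal the labeled examples round-robin into one subset per model."""
    subsets = [[] for _ in range(n_models)]
    for i, example in enumerate(data):
        subsets[i % n_models].append(example)
    return subsets

def train_majority_model(subset):
    """Stand-in learner: remember the most common label in its subset."""
    labels = [label for _, label in subset]
    return Counter(labels).most_common(1)[0][0]

# Toy labeled training data: (feature, label) pairs.
data = [("a", "spam"), ("b", "not spam"), ("c", "spam"),
        ("d", "spam"), ("e", "not spam"), ("f", "spam")]
subsets = split_training_data(data, 3)
learned = [train_majority_model(s) for s in subsets]
```

In practice each subset would go to a real learning algorithm; the point here is only that every model sees its own slice of the training data and produces its own learning outcome.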
So far, so good. But to get the ensemble model to predict a class or category label for new data, you have to run the new data through all of the trained models; each model predicts a class label. On the basis of those collective predictions, you can then generate an overall prediction.

You can generate that overall prediction with a simple voting mechanism that decides the final result. One common voting technique takes the label that the majority of the models predict as the label that the ensemble model produces as its result.
Suppose you want to build a model that predicts whether an incoming e-mail is spam. Assume that the training data consists of a set of e-mails, some of which are spam and some of which are not. You can distribute that dataset to a number of models for training.

The trained models then process an incoming e-mail. If the majority of the models classify it as spam, the ensemble model gives the e-mail the final label of spam.
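A minimal sketch of that majority vote, assuming three hypothetical keyword-based classifiers standing in for trained models:

```python
from collections import Counter

def ensemble_predict(models, email):
    """Each model casts one vote; the majority label wins."""
    votes = [model(email) for model in models]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical trained models, each keyed on a different spam signal.
def flags_free(email):
    return "spam" if "free" in email.lower() else "not spam"

def flags_winner(email):
    return "spam" if "winner" in email.lower() else "not spam"

def flags_urgent(email):
    return "spam" if "urgent" in email.lower() else "not spam"

models = [flags_free, flags_winner, flags_urgent]
label = ensemble_predict(models, "URGENT: claim your free prize, winner!")
# A majority of the models vote "spam", so the ensemble label is "spam".
```

Real models would, of course, be trained classifiers rather than hard-coded keyword checks; the voting logic stays the same either way.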
Another way to implement an ensemble model is to weight each model in the ensemble by its accuracy relative to all the other models in the set:
Assign a specific weight (based on accuracy) to each model. This weight will vary from one dataset to the next and from one business problem to the next.
After the models are trained, you can use test data for which you know the true class label of each data point.
Evaluate the prediction made by each model for each test case.
Increase the weight for the models that predicted correctly and decrease the weight for the models that classified the data incorrectly.
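The weighted scheme above can be sketched as follows. The function names, the starting weights, and the fixed step size of 0.1 are illustrative assumptions; real systems often derive weights directly from measured accuracy instead:

```python
def weighted_vote(predictions, weights):
    """Sum each model's weight behind its predicted label; highest total wins."""
    totals = {}
    for label, w in zip(predictions, weights):
        totals[label] = totals.get(label, 0.0) + w
    return max(totals, key=totals.get)

def update_weights(weights, predictions, true_label, step=0.1):
    """Reward models that predicted correctly; penalize the ones that didn't."""
    return [w + step if p == true_label else max(w - step, 0.0)
            for w, p in zip(weights, predictions)]

weights = [1.0, 1.0, 1.0]            # every model starts with equal say
predictions = ["spam", "not spam", "not spam"]

# Evaluate against a labeled test case whose true label is "spam":
weights = update_weights(weights, predictions, "spam")
# The one correct model gains weight; the two incorrect models lose weight.
```

After enough passes over the test data, a future classification such as `weighted_vote(predictions, weights)` lets historically accurate models count for more than inaccurate ones.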