How to Evaluate and Update Your Predictive Analytics Model

By Anasse Bari, Mohamed Chaouchi, Tommy Jung

Your goal, of course, is to build a predictive analytics model that actually meets the business objectives it was built for. Expect to spend some time evaluating the accuracy of your model’s predictions so you can prove its value to the decision-making process and to the bottom line.

Evaluate your model from these two distinct angles:

  • Business: The business analyst should evaluate the model’s performance and the accuracy of its predictions in terms of how well they address business objectives. Are the insights derived from the model making it easier for you to make decisions? Are you spending more time or less time in meetings because of these new insights?

  • Technical: The data scientists and IT professionals should evaluate the algorithms used and the statistical techniques and methods applied. Are the algorithms chosen optimal for the model’s purpose? Are the insights being generated fast enough to produce actionable advantages? (One way to compare candidate algorithms is sketched after this list.)
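To make the technical evaluation concrete, a common practice is to compare several candidate algorithms on the same data before settling on one. Here is a minimal sketch, assuming Python with scikit-learn; the stand-in dataset, the particular candidates, and the accuracy metric are illustrative choices, not a fixed recipe:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in data; in practice X and y come from your own prepared dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

# Candidate algorithms to evaluate for this prediction task.
candidates = {
    "logistic regression": LogisticRegression(max_iter=1_000),
    "decision tree": DecisionTreeClassifier(random_state=42),
    "random forest": RandomForestClassifier(random_state=42),
}

# Score every candidate with the same 5-fold cross-validation,
# so the comparison is apples to apples.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```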

Evaluating the model is essentially an ongoing process: re-examining the algorithms used, the data included, and the features selected for analysis, as well as constantly monitoring the accuracy of the model’s performance in a live system and a real business environment.

In addition to closely examining the data used, the variables selected for their predictive power, and the algorithms applied, the most critical test is to evaluate whether the model meets business needs and whether it adds value to the business.

Producing an actionable decision is the foremost criterion against which to judge the model’s success. If your organization can act on the output of the model and come out ahead, your model is a success.

Test your model in a test environment that closely resembles the production environment. Define the metrics for evaluating the model’s success at the beginning of the project; specifying the metrics early makes the model easier to validate later on.
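One way to put that advice into practice is to encode the agreed-upon metrics and their minimum acceptable values directly in your test harness. The following is a minimal sketch, assuming Python with scikit-learn; the threshold values and the stand-in dataset are placeholders you’d replace with the figures agreed at project kickoff:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Metrics and minimum acceptable values, fixed at the start of the project.
TARGETS = {"accuracy": 0.80, "precision": 0.75, "recall": 0.70}

# Stand-in data; in practice this is your project's prepared dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

# Hold out a test set that stands in for the production environment.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
y_pred = model.predict(X_test)

results = {
    "accuracy": accuracy_score(y_test, y_pred),
    "precision": precision_score(y_test, y_pred),
    "recall": recall_score(y_test, y_pred),
}

# Validate each metric against the threshold agreed on up front.
for metric, target in TARGETS.items():
    verdict = "PASS" if results[metric] >= target else "FAIL"
    print(f"{metric}: {results[metric]:.3f} (target {target:.2f}) -> {verdict}")
```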

Successful deployment of the model in production is no time to relax. You’ll need to closely monitor its accuracy and performance over time. A model tends to degrade over time, and it needs a fresh infusion of energy now and then to keep it up and running. To stay successful, a model must be revisited and re-evaluated in light of new data and changing circumstances.

If conditions change so that they no longer fit the model’s original training, you’ll have to retrain the model to meet the new conditions (a retraining sketch follows the list below). Such demanding new conditions include

  • An overall change in the business objective

  • The adoption of — and migration to — new and more powerful technology

  • The emergence of new trends in the marketplace

  • Evidence that the competition is catching up
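When one of these conditions arrives, retraining usually means refitting the model on data that reflects the new reality rather than the stale one. Below is a minimal sketch, assuming Python with pandas and scikit-learn; the 180-day window and the ‘label’ column name are illustrative assumptions, not a fixed schema:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def retrain_on_recent_data(history: pd.DataFrame, days: int = 180):
    """Refit the model on a recent window of labeled records so the
    training data reflects current conditions rather than stale ones.

    Assumes `history` has a datetime index, feature columns, and a
    'label' column holding the known outcomes (an illustrative schema).
    """
    # Keep only observations from the most recent window.
    cutoff = history.index.max() - pd.Timedelta(days=days)
    recent = history.loc[history.index >= cutoff]

    X = recent.drop(columns="label")
    y = recent["label"]

    # Refit from scratch on the fresh data and return the new model.
    model = RandomForestClassifier(random_state=0)
    model.fit(X, y)
    return model
```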

Your strategic plan should include staying alert for any such emergent need to refresh your model and take it to the next level, but updating your model should be an ongoing process anyway. You’ll keep tweaking its inputs, incorporating new data streams, retraining it for new conditions, and continuously refining its outputs. Keep these goals in mind:

  • Stay on top of changing conditions by retraining and testing the model regularly; enhance it whenever necessary.

  • Monitor your model’s accuracy to catch any degradation in its performance over time.

  • Automate the monitoring of your model by developing customized applications that report and track the model’s performance.

    Automated monitoring saves time and helps you avoid errors in tracking the model’s performance; one minimal tracker is sketched below.
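As one illustration of such a customized application, the sketch below tracks rolling accuracy over the most recent scored predictions and logs a warning when it drops below an agreed threshold. It is plain Python; the window size, the threshold, and the log-based alerting are assumptions you’d adapt to your own environment:

```python
import logging
from collections import deque

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model_monitor")

class AccuracyMonitor:
    """Tracks the rolling accuracy of a deployed model and logs a
    warning when performance drops below an agreed threshold."""

    def __init__(self, window_size: int = 500, alert_threshold: float = 0.75):
        # Each entry is 1 for a correct prediction, 0 for a wrong one.
        self.outcomes = deque(maxlen=window_size)
        self.alert_threshold = alert_threshold

    def record(self, prediction, actual) -> float:
        """Call this whenever a prediction's true outcome becomes known."""
        self.outcomes.append(1 if prediction == actual else 0)
        accuracy = sum(self.outcomes) / len(self.outcomes)
        logger.info("Rolling accuracy over last %d outcomes: %.3f",
                    len(self.outcomes), accuracy)
        if accuracy < self.alert_threshold:
            # In production this might page a team or open a ticket;
            # here it simply logs a warning.
            logger.warning("Accuracy %.3f is below threshold %.2f; "
                           "consider retraining.",
                           accuracy, self.alert_threshold)
        return accuracy
```

You would call record() from whatever process reconciles predictions with actual outcomes, and route the warning into your team’s alerting channel rather than leaving it in a log file.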