Apr 25, 2022

The Value Proposition for MLOps

Lawson Smith and Cole Harrison

Machine Learning Projects Do Not Consistently Deliver on Their Promised Value

Machine learning has evolved into a necessity for organizations that wish to improve customer experience, reduce costs, and build innovative solutions to complex problems. A 2021 Forbes survey found that 76% of enterprises prioritize machine learning (ML) over other IT initiatives.

However, even companies with an abundance of data scientists struggle to deliver on the promise of ML, for three major reasons:

  1. No efficient method exists to monitor and improve production models as they grow stale over time.

  2. Data scientists spend more time on maintenance than innovation.

  3. Data product teams start from scratch on new ML applications.

The common threats posed by these three pain points have led to the emergence of a new discipline in data science: machine learning operations (MLOps). MLOps is a set of practices for productionizing and maintaining machine learning models. It borrows the ideologies of version control, automation, and CI/CD from DevOps, but carries an added layer of complexity due to its use of data and artificial intelligence. One of the chief issues MLOps addresses is ensuring that models continue to work as expected in production.

Feedback Loops Allow for the Monitoring and Improvement of Production Models

What Is Model Drift?

Model drift is the tendency of model quality to degrade continually in a production environment. Getting a model to production is often the main priority of ML initiatives today, but that is only the beginning of a long journey toward value-creating ML. Once in production, model performance will degrade over time due to data drift, concept drift, or codebase changes.

Model accuracy decreases over time in a production environment

Three Types of Model Drift:

  1. Data drift: Occurs when the statistical properties of your input variables change. This can happen for many reasons, from seasonality to a shift in customer preferences (see the drift-check sketch after this list).

  2. Concept drift: Occurs when the relationship between your model's inputs and the target it predicts changes, so the patterns the model learned no longer hold. An effective way to think about this is in the context of cybersecurity: a model trying to identify phishing emails will need to adapt as the threat actors sending those emails deploy new tactics.

  3. Codebase change: Occurs when the code that creates a model is revised upstream of the model's serving environment. If an ML developer changes a model in development, the change won't reach the production model without an MLOps system to facilitate a complete retraining and deployment.
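To make data drift concrete, here is a minimal sketch of a drift check: it compares a feature's training-time distribution against a window of recent production data using a two-sample Kolmogorov-Smirnov test. The function name, the synthetic data, and the 0.05 threshold are illustrative assumptions, not a prescribed standard.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_data_drift(training_values, production_values, p_value_threshold=0.05):
    """Return True if production data appears to have drifted from training data."""
    _statistic, p_value = ks_2samp(training_values, production_values)
    # A small p-value means the samples are unlikely to come from the
    # same distribution, i.e., the feature's statistical properties changed.
    return p_value < p_value_threshold

# Example: a seasonal shift moves the mean of an input feature.
rng = np.random.default_rng(42)
training = rng.normal(loc=0.0, scale=1.0, size=5_000)
production = rng.normal(loc=0.4, scale=1.0, size=5_000)
print(detect_data_drift(training, production))  # True: drift detected
```

In practice, a check like this would run for every monitored feature; purpose-built libraries such as Evidently or alibi-detect package similar statistical tests behind ready-made drift reports.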

What Are the Effects of Model Drift?

When models degrade and become stale, they begin making suboptimal predictions, which can significantly impact an organization's bottom line.

Let’s look at a recommendation engine for a shoe company as an example. If you are implementing ML to display targeted ads, your recommendation engine will need to account for changes in fashion trends. If high-top shoes are becoming more popular this season, but your stale algorithm continues to primarily show ads for low-top shoes, your organization will not maximize the return on its investment in ML or advertising. Investing in ML without taking the necessary precautions against model drift is like buying a fancy new car, knowing a crash is inevitable, yet refusing to purchase insurance.

How Do We Combat Model Drift?

It’s clear your models are bound to fail you, but it is not all doom and gloom when it comes to productionizing ML models. The solution to this inevitable degradation (refreshing your models on a regular basis) is simple in theory. However, this continual refresh requires a robust feedback loop that includes continuous monitoring and retraining on new data from production.

With regular updates, you can counter model degradation

Performance monitoring is essential for timing model refreshes before models become stale. Monitoring methods are highly case-specific, but a two-fold approach typically works best: monitor the schema and distributions of incoming production data as well as the distributions and quality of model predictions. When monitored metrics fall outside a defined threshold, retraining is triggered automatically and the ML team is notified.
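As an illustration, that trigger logic might reduce to a check like the one below. The accuracy threshold and the trigger_retraining and notify_ml_team hooks are hypothetical stand-ins for your orchestrator and alerting tools.

```python
def trigger_retraining() -> None:
    # Stand-in for kicking off a retraining pipeline via your orchestrator.
    print("Retraining pipeline triggered")

def notify_ml_team(message: str) -> None:
    # Stand-in for an alerting integration such as email or chat.
    print(f"Alert sent to ML team: {message}")

def check_model_health(recent_accuracy: float,
                       input_drift_detected: bool,
                       accuracy_threshold: float = 0.90) -> None:
    """Trigger retraining when monitored targets fall outside their thresholds."""
    if recent_accuracy < accuracy_threshold:
        trigger_retraining()
        notify_ml_team(
            f"Accuracy {recent_accuracy:.2f} fell below {accuracy_threshold:.2f}"
        )
    elif input_drift_detected:
        trigger_retraining()
        notify_ml_team("Drift detected in production input data")

check_model_health(recent_accuracy=0.84, input_drift_detected=False)
```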

An alternative is to refresh your models on a time-based rule; for example, you may choose to refresh your model daily or weekly depending on the use case. Even with a regular refresh cadence, however, it is still important to implement performance monitoring to protect against erratic model behavior or corrupt data.
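A time-based rule is straightforward to express in a workflow orchestrator. Below is a sketch using Apache Airflow, one common choice; the DAG name, schedule, and empty retraining task are placeholders under that assumption.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def retrain_model():
    # Placeholder: load fresh production data, retrain, validate, deploy.
    ...

with DAG(
    dag_id="daily_model_refresh",
    start_date=datetime(2022, 4, 25),
    schedule_interval="@daily",  # or "@weekly", depending on the use case
    catchup=False,
) as dag:
    PythonOperator(task_id="retrain_model", python_callable=retrain_model)
```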

Automation Allows Data Scientists to Innovate

The main obstacle to delivering value through ML is how labor-intensive it is to keep models up to date and reliable. Once a model is in production, making sure it continues to accurately solve the business problem it was built for is critical; otherwise, the resources spent building it are wasted.

Your data scientists are aware of these required maintenance tasks. Unfortunately, they are the ones manually retraining the models when an MLOps system is not in place. By the time one cycle of retraining is completed, it is often time to start the retraining process over again. As a result, modern data scientists are unable to spend much time experimenting to create innovative solutions.

Model Retraining Process

With so many important steps in the retraining process (validating and checking for anomalies in data inputs, running retraining pipelines, and validating the model’s new accuracy and outputs, among many others), data scientists’ work can devolve into an endless loop of maintaining the same models, which keeps them from developing anything new. Assigning highly paid employees (not to mention some of your best problem solvers) to maintenance tasks not only costs time and money but also stunts innovation in a quickly evolving world of innovative competitors.
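To illustrate what automating that loop looks like, here is a skeleton of a retraining pipeline with a validation gate at each step. Every function is a hypothetical stub standing in for real data checks, training code, and deployment tooling.

```python
from typing import Any

def validate_inputs(raw_data: Any) -> Any:
    # Stub: schema and anomaly checks on incoming production data.
    return raw_data

def train_model(data: Any) -> Any:
    # Stub: run the retraining pipeline on the validated data.
    return {"name": "candidate-model"}

def evaluate(model: Any) -> float:
    # Stub: validate the candidate's accuracy and output distributions.
    return 0.93

def deploy(model: Any) -> None:
    print("Candidate promoted to production")

def run_retraining_pipeline(raw_data: Any, production_accuracy: float) -> None:
    data = validate_inputs(raw_data)
    candidate = train_model(data)
    # Gate deployment: only promote a candidate that beats the current model.
    if evaluate(candidate) > production_accuracy:
        deploy(candidate)
    else:
        print("Candidate underperformed; keeping the current model")

run_retraining_pipeline(raw_data=[], production_accuracy=0.90)
```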

Giving data scientists the freedom to do actual data science work has two massive benefits:

  1. Innovation capital: Data science teams will be able to deliver value with innovative solutions. Whether it is minimizing call center costs through smart chatbots or increasing revenue through cross-selling driven by a recommendation engine, every company has business problems that can be solved with data science.

  2. Halting “The Great Resignation” in your company: A study from 2021 indicated that data scientists are leaving their jobs in droves seeking better positions, with an average tenure of 1.7 years. The two leading causes: a gap between the reality and expectations of data science work and a lack of employee engagement. That means it is imperative for companies to give data scientists engaging problems to solve if they want to keep their talented employees.

Get a Head Start by Laying an ML Foundation

The value of MLOps does not end at improved model accuracy and reduced maintenance costs; MLOps also speeds up model production. With a scalable MLOps solution, models can be rapidly created, tested, and implemented in a robust production environment, compared to the largely manual process of systems without MLOps.

MLOps provides a ready-made framework to serve as a starting point for new projects. Without MLOps, each time you want to productionize a model, you must build from the ground up. With an MLOps system in place, you start with infrastructure that already supports a production-ready model, putting you well past the halfway point of an unsupported development cycle.

Furthermore, MLOps systems can centralize the process for putting models into production. In most organizations, development processes are scattered across teams in different functional groups. When each team within your analytics function has its own custom process for running production models, the knowledge required to run those models tends to be locked up with their owners.

However, with MLOps, data science teams have a standardized framework that guides how they build and deploy new models. Having a central MLOps system for running production models means that a model can continue functioning even if the data scientist who built it leaves the company without documenting their process.
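As one example of such a standardized hand-off, a team might log and register every production candidate in a shared model registry. The sketch below uses MLflow, one common choice and an assumption here, as are the experiment and model names; registering a model also assumes a tracking server backed by a model registry.

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy model standing in for a real training pipeline.
X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

mlflow.set_experiment("shoe-recommendations")  # illustrative name
with mlflow.start_run():
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering under a shared name makes the model discoverable and
    # deployable by any team, independent of whoever trained it.
    mlflow.sklearn.log_model(
        model, "model", registered_model_name="shoe-recommender"
    )
```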

Leverage MLOps Infrastructure for Minimal Maintenance

A minimal-maintenance MLOps system is extremely valuable. To help organizations along their ML journey, Credera has developed the architecture below for constructing an MLOps infrastructure that can be customized to meet your organization’s specific needs.

Credera’s MLOps Reference Architecture

This architecture lays out the MLOps feedback loop, automation, and versioning into a framework for building innovative solutions.

This architecture has been developed by an internal team of ML experts as a framework for building custom MLOps. If you are interested in having a conversation about MLOps at Credera, feel free to reach out at findoutmore@credera.com.
