Version Controlling and Model Registry in ML Deployments

Take your production environment from good to great.

Real-world ML deployment is never just about “deployment”: host the model somewhere, obtain an API endpoint, integrate it into the application, and you are done!

This is because, in reality, plenty of things must be done post-deployment to ensure the model’s reliability and performance.

Let’s understand a few of them.

On a side note: If you have never deployed a single ML model and want to learn this in the most beginner-friendly manner, check this: Deploy, Version Control, and Manage ML Models Right From Your Jupyter Notebook with Modelbit.

#1) Version control

To begin, it is crucial to version control ML deployments.

You may have noticed this while using ChatGPT, for instance. OpenAI frequently updates its model.

But updating does not simply mean overwriting the previous version.

Instead, ML models are almost always version-controlled (using Git-like tooling suited to large model artifacts).

The advantages of version-controlling ML deployments are pretty clear:

  • In case of sudden mishaps post-deployment, we can instantly roll back to an older version.

  • We can facilitate parallel development with branching, and more.
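To make the rollback benefit concrete, here is a minimal sketch of a versioned deployment with instant rollback. All names here are hypothetical; real setups use Git tags or a dedicated model registry rather than an in-memory store.

```python
# Minimal sketch of versioned deployments with instant rollback.
# Hypothetical in-memory store -- for illustration only.

class VersionedDeployment:
    def __init__(self):
        self._versions = []   # append-only history of deployed models
        self._active = None   # index of the version currently serving traffic

    def deploy(self, model):
        """Deploy a new model version; older versions are retained."""
        self._versions.append(model)
        self._active = len(self._versions) - 1
        return self._active   # version id

    def rollback(self, version_id):
        """Instantly switch traffic back to an older version."""
        if not 0 <= version_id < len(self._versions):
            raise ValueError(f"unknown version: {version_id}")
        self._active = version_id

    def predict(self, x):
        return self._versions[self._active](x)


# Usage: deploy v0, deploy a (misbehaving) v1, then roll back.
dep = VersionedDeployment()
v0 = dep.deploy(lambda x: x * 2)
v1 = dep.deploy(lambda x: x * 3)  # suppose this one misbehaves
dep.rollback(v0)                  # traffic is instantly back on v0
```

Because every version is retained, rollback is just a pointer switch, with no retraining or redeployment needed.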

#2) Model registry

Another practical idea is to maintain a model registry for deployments.

Let’s understand what it is.

Simply put, a model registry can be considered a repository of models.

See, typically, we might be inclined to version our code and the ML model together.

However, when we use a model registry, we version models separately from the code.

Let me give you an intuitive example to understand this better.

Imagine our deployed model takes three inputs to generate a prediction.

While writing the inference code, we overlooked that, at times, one of the inputs might be missing. We realized this by analyzing the model’s logs.

We may want to fix this quickly (at least for a while) before we decide on the next steps more concretely.

Thus, we may decide to update the inference code by assigning a dummy value for the missing input.

This will allow the model to still process the incoming request.
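A sketch of that quick fix might look like the following. The feature names and fallback values are hypothetical; in practice, the dummy value would come from something like the training-set median.

```python
# Hypothetical inference wrapper: if a required input is missing,
# substitute a dummy value so the model can still serve the request.

# Assumed fallback values (e.g., training-set medians) -- hypothetical.
DEFAULTS = {"age": 35.0, "income": 52_000.0, "tenure": 3.0}

def prepare_features(request: dict) -> list:
    """Fill any missing input with its default before prediction."""
    return [request.get(name, DEFAULTS[name]) for name in DEFAULTS]

def predict(model, request: dict):
    features = prepare_features(request)
    return model(features)


# Usage: a request missing "income" still gets served.
row = prepare_features({"age": 40, "tenure": 5})
```

Notice that the change lives entirely in the inference code; the model itself is untouched.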

Let me ask you a question: “Did we update the model?”

No, right?

Here, we only need to update the inference code. The model will remain the same.

But if we were to version the model and code together, this code-only fix would create a redundant copy of the model and take up extra space.

However, by maintaining a model registry:

  • We can update only the inference code.

  • We avoid pushing a new (and unneeded) model to deployment.

This makes intuitive sense as well, doesn’t it?
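To make the separation concrete, here is a toy in-memory registry where model versions advance independently of the inference code. The API is hypothetical; real registries (such as MLflow's model registry) work similarly in spirit.

```python
# Toy model registry: models are versioned separately from code.
# Hypothetical API -- for illustration only.

class ModelRegistry:
    def __init__(self):
        self._models = {}   # model name -> list of registered versions

    def register(self, name, model):
        """Push a new version of a model; returns its version number."""
        self._models.setdefault(name, []).append(model)
        return len(self._models[name])

    def load(self, name, version=None):
        """Fetch a specific version (default: the latest)."""
        versions = self._models[name]
        return versions[-1 if version is None else version - 1]


registry = ModelRegistry()
registry.register("churn", lambda x: x > 0.5)  # model v1

# The inference code can be fixed and redeployed on its own;
# it simply loads the same model version from the registry,
# and no new model is pushed.
model = registry.load("churn")  # still v1
```

The key property: redeploying the serving code does not call `register`, so the registry (and storage) stays unchanged.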

That being said, managing models in deployment is always easier said than done.

Some relevant questions are:

  • What are post-deployment considerations? How to address them?

  • What are the challenges during deployment?

  • What are the challenges post-deployment?

  • How to practically implement version control in ML deployments?

  • Why is model logging critical to identify challenges like:

    • Performance drift

    • Concept drift

    • Covariate shift

    • Non-stationarity, etc.

  • How to practically implement and maintain a model registry?

    • How do we maintain ML models separately from code?

    • How do we deploy models to a model registry?

    • How do we update only the inference code?

    • How do we only update the model?

    • What are some major advantages of model registry?

  • And most importantly, how do we reliably implement end-to-end ML deployment in our projects?

Why should you care?

Deploying a model is one thing.

Maintaining, updating, and tracking it is another.

If you intend to ship reliable models to production, learning end-to-end deployment and model management is a must-know practical skill.

I am confident you will learn a lot of practical skills from this 30-minute deep dive :)

Thanks for reading!

Whenever you are ready, here’s one more way I can help you:

Every week, I publish 1-2 in-depth deep dives (typically 20+ mins long). Here are some of the latest ones that you will surely like:

To receive all full articles and support the Daily Dose of Data Science, consider subscribing:

👉 If you love reading this newsletter, share it with friends!

👉 Tell the world what makes this newsletter special for you by leaving a review here :)
