Quantization: Run ML Models on Tiny Hardware

A must-know skill for ML engineers.

In the model compression article, we discussed various techniques to increase the practical utility of ML models.

Today, we are extending that series to explore Quantization: techniques to optimize ML models so they can run on tiny hardware.

We cover the following:

  • Motivation behind Quantization

  • How it differs from similar techniques like mixed precision training

  • Common quantization techniques for semi-large models

  • Issues with these techniques for large models

  • Some methods for quantizing large models

  • And more

The article is entirely beginner-friendly, so if you have never heard of Quantization, that’s okay.

Why care?

Typically, the parameters of a neural network (layer weights) are represented using 32-bit floating-point numbers.

The rationale is that since a model's parameters are not constrained to any specific range of values, using a data type that covers a wide range of values avoids numerical instability and maintains high precision during training and inference.

Quite clearly, a major caveat of this approach is that a wider data type also means the model consumes more memory.

Imagine if we could represent the same parameters using lower-bit representations, such as 16-bit, 8-bit, 4-bit, or even 1-bit, while retaining most of the information.

Wouldn’t that be cool?

This would significantly decrease the memory required to store the model’s parameters without substantially compromising the model’s accuracy.

Quantization techniques precisely help us do that.
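To make the idea concrete, here is a minimal sketch of one common approach, symmetric (absmax) int8 quantization, using NumPy. The variable names are illustrative and not tied to any specific framework: we map a 32-bit weight tensor onto the int8 range [-127, 127] with a single scale factor, then dequantize to recover an approximation of the original values.

```python
import numpy as np

# A hypothetical fp32 weight tensor (names are illustrative only).
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(size=(4, 4)).astype(np.float32)

# Symmetric (absmax) quantization: map [-max|w|, +max|w|]
# onto the int8 range [-127, 127] with one scale factor.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

# Dequantize at inference time to approximate the original weights.
weights_dequant = weights_int8.astype(np.float32) * scale

print(weights_fp32.nbytes)  # 64 bytes: 16 values x 4 bytes each
print(weights_int8.nbytes)  # 16 bytes: 16 values x 1 byte each
print(np.max(np.abs(weights_fp32 - weights_dequant)))
```

The int8 tensor occupies 4x less memory, and the rounding error per weight is bounded by half the scale, which is why quantization can shrink a model substantially while losing little accuracy.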

As a machine learning engineer, familiarity with techniques that save your employer money is genuinely appreciated.

Skills like these make you an indispensable asset to your team.

Thanks for reading!

Are you overwhelmed with the amount of information in ML/DS?

Every week, I publish no-fluff deep dives on topics that truly matter to your skills for ML/DS roles.


SPONSOR US

Get your product in front of 82,000 data scientists and other tech professionals.

Our newsletter puts your products and services directly in front of an audience that matters — thousands of leaders, senior data scientists, machine learning engineers, data analysts, etc., who have influence over significant tech decisions and big purchases.

To ensure your product reaches this influential audience, reserve your space here or reply to this email.
