The True Definition of a Tuple's Immutability

A common misconception.

When we say tuples are immutable, most Python programmers think the values inside a tuple cannot change.

But this is not entirely true.

For instance, in the code below, we have a tuple whose second element is a list.

Appending to the list updates the tuple’s value, and yet, Python does not raise an error:
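A minimal snippet along these lines (the variable names are illustrative):

```python
# A tuple whose second element is a mutable list
t = (1, [2, 3])

# Mutating the list held inside the tuple works without error
t[1].append(4)

print(t)  # (1, [2, 3, 4])
```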

What happened there?

If tuples were really immutable, Python should have raised an error on the append statement, no?

But as demonstrated above, it didn’t.

So, it’s clear that we are missing something here.

This brings us to an important point that many tutorials/courses overlook.

When we say a tuple is immutable, it means that during its entire lifecycle, two things can NEVER change:

  • The object IDs inside a tuple.

  • The order of object IDs inside a tuple.

For instance, say a tuple holds two objects with object IDs a and b.

Immutability says that this tuple will always continue to reference only two objects:

  • With IDs “a” and “b”.

  • And their IDs will be in the order: “a” followed by “b”.

But there is NO restriction that these individual objects cannot be modified.

Thus, even if the objects inside a tuple are mutable, we can safely modify them, and this will not violate a tuple’s immutability.

This explains the demonstration above.

Since append mutates the list in place, the collection of IDs inside the tuple never changed.

Thus, Python didn’t raise an error.

In fact, we can also verify this by printing the collection of object IDs referenced inside the tuple before and after the append operation:
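A small snippet to that effect, using id() to capture the referenced IDs before and after the mutation:

```python
t = (1, [2, 3])

# Collect the IDs of the objects the tuple references
ids_before = [id(obj) for obj in t]

t[1].append(4)

ids_after = [id(obj) for obj in t]

print(ids_before == ids_after)  # True
```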

As shown above, the object IDs are the same before and after the append operation.

Thus, immutability isn’t violated and Python never raised an error.

👉 Over to you: What are some other overlooked aspects of Python programming?

Extended piece #1

What happens under the hood when we do .cuda()?

It’s SO EASY to accelerate model training with GPUs today. All it takes is just a simple .cuda() call.

Yet, GPUs remain one of the biggest black boxes, despite being so deeply rooted in deep learning.

If you are always curious about underlying details, I have written an article about CUDA programming: Implementing Parallelized CUDA Programs From Scratch Using CUDA Programming.

We cover the end-to-end details of CUDA and do a hands-on demo of CUDA programming by implementing parallelized versions of various operations we typically perform in deep learning. The article is beginner-friendly, so if you have never written a CUDA program before, that's okay.

Extended piece #2

Many ML engineers quickly pivot to building a different model when they don’t get satisfying results with one kind of model.

They do not fully exploit the possibilities of existing models and continue to move towards complex ones.

But after building so many ML models, I have learned various techniques that uncover nuances and optimizations we can apply to significantly enhance model performance without necessarily increasing model complexity.

The article provides the clear motivation behind their usage, as well as the corresponding code, so that you can start using them right away.

👉 If you love reading this newsletter, share it with friends!

👉 Tell the world what makes this newsletter special for you by leaving a review here :)
