As most of you already know, Python is a general-purpose programming language optimized for simplicity and ease of use. While it's an incredible tool for light tasks, code execution speed can quickly become a significant bottleneck in your programs.
In this article, we'll discuss why Python is so slow compared to other programming languages. Then, we'll see how to write a basic Rust extension for Python and compare its performance to a native Python implementation.
Why Python is slow
Before we start, I'd like to point out that programming languages aren't inherently fast or slow: their implementations are. If you want to learn about the difference between a language and its implementation, check out this article:
First of all, Python is dynamically typed, meaning that variable types are only known at runtime, not at compile time. While this design choice allows for more flexible code, the Python interpreter can't make assumptions about what your variables are or how big they are. As a result, it can't make the kinds of optimizations a static compiler would.
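As a minimal sketch of what that means in practice, the snippet below rebinds the same name to values of different types and reuses the same `+` operator on all of them; the interpreter has to figure out the types at runtime on every single call:

```python
# The same name can hold values of different types, so the type (and size)
# of `x` is only known once the code actually runs.
x = 42           # int
x = "forty-two"  # now a str
x = [4, 2]       # now a list

def add(a, b):
    # `+` might mean integer addition, string concatenation, or list
    # concatenation here; the decision is made at runtime for every call.
    return a + b

print(add(1, 2))      # 3
print(add("1", "2"))  # "12"
print(add([1], [2]))  # [1, 2]
```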
Another design choice that makes Python slower than other options is the infamous GIL. The Global Interpreter Lock is a mutex that allows only one thread to execute Python code at any point in time. The GIL was originally meant to guarantee thread safety, but it has faced significant backlash from developers of multi-threaded applications.
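To make this concrete, here is a rough benchmark sketch (exact timings depend on your machine and Python version): two threads doing pure-Python, CPU-bound work end up taking roughly as long as running the same work sequentially, because the GIL prevents them from executing in parallel.

```python
import threading
import time

def count(n: int) -> None:
    # Pure-Python, CPU-bound busy loop.
    while n > 0:
        n -= 1

N = 10_000_000

# Sequential: two calls, one after the other.
start = time.perf_counter()
count(N)
count(N)
print(f"sequential: {time.perf_counter() - start:.2f}s")

# Threaded: two threads, but the GIL lets only one of them run Python code at a time.
start = time.perf_counter()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
print(f"threaded:   {time.perf_counter() - start:.2f}s")
```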
On top of that, Python code is executed by a virtual machine instead of running directly on the CPU. This extra layer of abstraction adds significant execution overhead compared to statically compiled languages.
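You can peek at this intermediate layer yourself with the standard-library `dis` module, which disassembles a function into the bytecode instructions the CPython virtual machine interprets one by one (the exact opcodes vary between Python versions):

```python
import dis

def add(a, b):
    return a + b

# Prints the bytecode the CPython VM executes for `add`, e.g. LOAD_FAST,
# BINARY_ADD (BINARY_OP on Python 3.11+) and RETURN_VALUE.
dis.dis(add)
```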
Moreover, Python objects are internally handled as dictionaries (or hashmaps), and their attributes (properties and methods, accessed via the dot operator) aren't usually accessed through a memory offset, but…