It makes no difference to the final application size.
When you import anything from a module in Python, even if it is just a single variable, the whole module runs (like any .py file): the functions, classes, and variables defined in it are created and remain available from then on.
The only difference with `from X import Y` is that, in the importing module, only the variable `Y` is created, and it refers to the same object `Y` defined in module `X`. Similarly, if you do `import X`, only the variable `X`, which refers to the entire module, is created.
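A minimal sketch of this, using the standard-library `math` module: both import forms end up binding names to the very same objects.

```python
import math            # binds the single name "math" to the module object
from math import pi    # binds the name "pi" in the importing namespace

# Both names refer to the very same object inside the math module:
print(math.pi is pi)   # True
```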
If `X` has thousands of functions and variables, all of them are loaded into memory and immediately usable. Subsequent `import` statements will never read the `X` module again: after the first `import`, the module object becomes available in `sys.modules['X']`, and later `from X import ...` statements simply take direct references from there.
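This caching is easy to observe with any standard-library module, for example `json`:

```python
import sys
import json                       # first import: runs the module body and caches it

print('json' in sys.modules)      # True: the module object is now cached

import json as json_again         # second import: just a dictionary lookup, no re-execution
print(json_again is sys.modules['json'])   # True: the same cached object
```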
On the other hand, given the amount of memory in conventional PCs, and even in the virtual servers available nowadays, code will hardly make a program heavier. Python bytecode, which is what is actually read when we import a module, is about the same size as the source code, perhaps 30% larger. And keep in mind that the entire Christian Bible, for example, counted as plain text, occupies only about 3MB: that is, a program with as much code as the Bible has text would take roughly 4MB of memory just to be imported, against PCs that typically have 8000MB of memory, or even virtual servers with 512MB.
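You can check the rough source-vs-bytecode proportion yourself by compiling a throwaway module with the standard-library `py_compile` and comparing file sizes (the generated function bodies below are arbitrary filler; exact sizes vary by Python version):

```python
import os
import py_compile
import tempfile

# Write a small throwaway module with a few hundred distinct functions.
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write("".join(f"def f{i}(x):\n    return x + {i}\n" for i in range(200)))
    src = f.name

pyc = py_compile.compile(src, cfile=src + 'c')   # compile source to a .pyc file

# The bytecode file is in the same order of magnitude as the source:
print(os.path.getsize(src), os.path.getsize(pyc))
```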
The startup time of a module may weigh a little, reaching a few seconds for something very large if the .pyc file does not exist yet. But if a library has thousands of items that could weigh on the final executable, it is up to the library's author to split it into subpackages, which have to be imported explicitly.
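The standard library's `xml` package works exactly this way: importing the top-level package does not pull in its subpackages until you ask for them.

```python
import sys
import xml                        # loads only the small top-level package

# In a fresh interpreter the subpackage has not been loaded yet:
print('xml.etree.ElementTree' in sys.modules)

import xml.etree.ElementTree      # explicit import loads just this subpackage
print(xml.etree.ElementTree.__name__)   # 'xml.etree.ElementTree'
```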