Question

I have a question about loading time for shared libraries vs. static libraries.

Assume that I have an executable foo.exe which uses liba, libb, and libc. Also, at any given time there are more than 10 instances of the executable running on the machine.

Now suppose the above 3 libraries are shared libraries.

1st instance is loaded into RAM: the time taken is the time to load main() of foo.exe into memory (assuming it is negligible) + time to load liba + time to load libb + time to load libc.

2nd instance is started: since all the libraries are already loaded into main memory, the only time taken is for loading main() into memory, which is negligible.

Now suppose the above 3 libraries are static libraries.

1st instance is loaded into RAM: the time taken is the time to load main() of foo.exe into memory (assuming it is negligible) + time to load liba + time to load libb + time to load libc (of course, it is now all part of the executable as a whole).

2nd instance is started: the time taken is again the time to load main() of foo.exe (negligible) + time to load liba + time to load libb + time to load libc, since the instances cannot share libraries when those are static libraries.

So my conclusion is that with static libraries the loading time will be higher. But I was told that shared libraries take more time during loading than static libraries, so there will be a delay and shared libraries are therefore not a good option. How is this possible?


Solution

Linking (resolving references) is not free. With static linking, the resolution is done once and for all when the binary is generated. With dynamic linking, it has to be done every time the binary is loaded. Not to mention that code compiled to run in a shared library can be less efficient than code compiled to be linked statically. The exact cost depends on the architecture and on the system's implementation of dynamic linking.
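As a rough illustration of that difference (a sketch assuming a GNU toolchain on Linux; liba and foo here are invented example names, not the asker's actual code), the same library can be linked both ways, and ldd shows the dependency whose resolution dynamic linking defers to every load:

```shell
# A tiny library and a program that uses it.
cat > liba.c <<'EOF'
int a_func(void) { return 42; }
EOF
cat > foo.c <<'EOF'
int a_func(void);
int main(void) { return a_func(); }
EOF

# Static: references are resolved once and for all at link time.
gcc -c liba.c -o liba.o
ar rcs liba.a liba.o
gcc foo.c liba.a -o foo_static

# Shared: the dynamic linker repeats the resolution at every load.
gcc -shared -fPIC liba.c -o liba.so
gcc foo.c ./liba.so -o foo_shared

# Only the shared build records a runtime dependency on liba.
ldd foo_static | grep liba || echo "liba fully resolved at link time"
ldd foo_shared | grep liba
```

Both binaries behave identically; the difference is purely in when the symbol resolution work is paid for.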

The cost of making a library dynamic can be relatively high for the 32-bit x86 instruction set: in the ELF binary format, one of the already scarce registers has to be sacrificed to make dynamically linked code relocatable. The older a.out format placed each shared library at a fixed place, but that didn't scale. I believe that Mac OS X had an intermediate system where dynamic libraries were placed in predetermined locations in the address space, but the conflicts were resolved at the scale of the individual computer (the lengthy "Optimizing system performance" phase after installing new software). In a way, this system (called pre-binding) allows you to have your cake and eat it too. I do not know if prebinding is still necessary now that Apple has pretty much switched to the amd64 architecture.

Also, on a modern OS both statically and dynamically linked code is only loaded (paged in) from disk if it is used, but this is quite orthogonal to your question.

OTHER TIPS

Static libraries are linked at compiled time, shared libraries are linked at runtime. Therefore, executables using static libraries amortize all their link time before even being written to disk.

Thanks a lot for this unbelievably fast response. We have 2 architectural scenarios:

Q1. Architecture-1: Assume an exe of size 3 GB (static libraries), where 95% is libraries and 5% is main(). With such a huge size, would loading this exe take more time (assuming static libraries), or would linking it take more time (assuming shared libraries, where if all libraries are already in memory only linking has to be done)?

Architecture-2: Assume I have an exe of size 1.5 GB (95% lib + 5% main()), and 6 instances of it are running at the same time. Once these 6 instances are up they will run for days; assume we are ready to accept the extra delay during the initial loading + linking of these 6 instances.

Q2. Now, if I am using shared objects rather than static objects, will I have a lot of free space in RAM, since all the libraries are shared among the 6 instances? Will my real-time execution not speed up because of the extra free RAM, which decreases page swapping?

Q3. If I use 'map' files to decrease the number of symbols exported (which is only possible with shared libraries), will my symbol table size not decrease, and will that not improve runtime performance?

Thanks, Sud

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow