[quote="dpJudas"][quote="Chris"]Funny you say that, since dynamic linking was much less prevalent in the 80s and early 90s (let alone 70s). Consoles, which are more memory- and storage-limited than PCs, also stuck with static linking longer than PCs did.[/quote]
ELF was introduced in 1988 as part of SVR4.[/quote]
I didn't say it wasn't possible, I said it was less prevalent. Dynamic linking was made to solve problems apps were running into in the 80s, but memory and storage concerns weren't the biggest among them. Unix-based systems (Linux wasn't around in the 80s) were designed more for mainframes and server use, which would tend to have more storage and memory than a typical PC. Customization (third-party plugins) and resource sharing (multiple apps/users accessing the same hardware at the same time) were bigger issues to solve on these systems. PCs started utilizing dynamic linking more as they started doing similar things.
[quote]Windows 95 was designed to fit into 4 MB of system memory. Think about that for a moment - 4 megabytes. We burn more than that on just one of our upload buffers in GZDoom. So while disk space had indeed gone up (compared to 80's) it doesn't change the fact that at this time in history every single bit you could save counted - and dynamic linking saves both memory and disk space.[/quote]
Consider games like Doom or other typical DOS games. When you run the setup program to select your sound and MIDI devices, those are all effectively hardware drivers the game's engine/middleware has code for. Each game that uses CGA, VGA, and/or VESA is specifically coded to use them. DOS games didn't use DLLs to share this between multiple games on the same system; each game contained the code for the hardware it supported, and they were getting by relatively fine... until you start adding in multi-tasking, a windowing system, and other shared hardware resources.
[quote]Look closely at the ELF standard and notice how much care has been put into touching as few pages as theoretically possible so that the maximum amount of memory could be shared. They wouldn't have bothered with all that if it wasn't one of the key points of the entire exercise.[/quote]
Just because dynamic linking wasn't made to solve problems with memory and storage doesn't mean it can't be memory and storage efficient. Solving problem A, and getting benefits with B as a side effect, isn't an uncommon occurrence.
[quote]First of all, this isn't how Linux distros decided to do things. They decided to link *everything* dynamically - from zlib to your favorite xml parsing library. If they truly kept dynamic linking to their platform ABI then I wouldn't have a problem with their policy, but they do not.[/quote]
It's certainly not *everything*. A fair number of -dev packages here come with static libraries. It may be the policy of various distros to prefer dynamic linking, but it's certainly not a requirement. There's nothing stopping you from static-linking everything from zlib to your favorite xml parsing library and only dynamic-linking the platform interfaces, if that's what you want for your apps. Some ideologues may complain (with or without valid arguments for a given library), but the actual system will work just fine and people can run what you provide.
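As a sketch of what that mixed approach looks like with GNU ld (the source file, library choices, and paths here are just examples, and assume the static archives are installed from the -dev packages):

```shell
# Hypothetical build line: statically link zlib into the app,
# while libc, libm, etc. stay dynamically linked against the platform ABI.
gcc -o myapp myapp.c \
    -Wl,-Bstatic -lz \
    -Wl,-Bdynamic -lm

# Alternatively, name the archive directly so there's no linker state to toggle
# (the exact path to libz.a varies by distro):
gcc -o myapp myapp.c /usr/lib/libz.a -lm
```

Running `ldd` on the result would then list libc and libm but no libz, since zlib's code ended up inside the binary.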
[quote]The linux kernel itself is also an example of how you can change things and still support decades old stuff - unlike userland the kernel itself always kept its part of the userspace compatibility bargain.[/quote]
The kernel is a pretty different beast than userland. The kernel is the bridge to the hardware, while userland needs to do system management and make sure apps play nice together. Even so, the kernel hasn't gone without breakages... when it switched from its original threading implementation (LinuxThreads) to NPTL, I remember things got rather bumpy. If you go far enough back, apps that use OSS (the old audio API) get it emulated in userland these days. It isn't handled in the kernel anymore because resources can't be shared very well at that level. OSS4 tried to add sharing, but as we can see, it never really took off on Linux -- either you're making the kernel do things it isn't meant to, or you're doing a weird kernel->userland->kernel loop.
[quote]But once again, that isn't what Linux distros are doing with their /usr/lib folders. Windows went away from this somewhere around 1994 when COM was introduced. It became all to clear what happened when everyone and everything thought they could share random utility libraries and toss it into the System32 folder.[/quote]
COM is a fancy dynamic loading interface. Under the hood, it's loading DLLs and calling functions based on the requested interface to get you a vtable, and automatically freeing DLLs when they're no longer used. COM also only works with DLLs that are registered with the system, storing that information globally in the registry, which causes its fair share of issues. IMO that's far worse than random DLLs tossed into System32: at least if you dump a random foo.dll in System32, an app won't use it unless something explicitly looks for it, but with COM, foo.dll may get used from wherever it is if it got registered for a particular interface.
[quote="dpJudas"]But if it comes bundled with the old versions the linking is effectively static.[/quote]
Not really, because I can replace the SDL and OpenAL shared libs it comes with to make it use updated versions, which can use newer modesetting and window management features, or talk directly to PulseAudio with more advanced resampling and mixing capabilities that weren't available back then. If they were actually static-linked, you'd have to resort to code-injection hacks to replace internal calls with external ones (assuming none of the library code got inlined, which would make it much more difficult). As an interesting side note, some years ago SDL implemented a mechanism where a statically linked copy of itself can be overridden: you can static-link SDL and not worry about supplying an SDL.so or whatever, but at runtime the code checks a user setting that can make it load a dynamic version of itself and forward all its calls there.