'''The New Tcl Philosophy'''

----

Small core == Good. Static executables == Bad. Core only distros == Bad. Fully loaded distros == Good.

----

What bugs me about this ''slogan'' is that we do not necessarily see it in practice. For instance, many of the string features that have been added over the past few years certainly could have been done in an extension shipped with the core distribution - but instead, they went into the core.

In fact, how much of what is currently in the core has to be there? It seems to me that a small core interpreter, able to load scripts, dynamically load extensions, and interpret code, would be possible without sockets, regular expressions, the history command, etc. in the true '''core''' code, but with all those features distributed in the core '''distribution''', so that by default Tcl would still include them - just as dynamically loaded extensions. The core itself might need some new features to support this - for example, the ability for dynamically loaded commands to be treated as byte codes.

----

''[escargo] 16 Jan 2003'' - But what are the costs and benefits of the different approaches? For example, for every feature that '''might''' be '''currently loaded''' or not, code either has to load it, or test that it is loaded, or else handle the error that results if it is not loaded. That's a burden on the programmer (writing the code) and on the execution (since it adds some run-time overhead and code volume).

Part of the problem is where the cost is paid. Is it paid when code gets loaded? Is it paid when code gets byte-compiled? Is it paid when code gets executed?

I personally think dynamic linking is great. I have liked it ever since I used it in [Multics] many years ago. But there are advantages to statically binding all parts of a program together. (In some ways this is similar to the discussions about the Linux kernel and its loadable modules.)
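
To make the burden concrete, here is a minimal Tcl sketch of the guard a script might need around a feature that is shipped only as a loadable extension. The package name ''tclre'' and the command ''re::match'' are hypothetical, used purely for illustration; only [package require], [catch], and [puts] are standard Tcl.

   # Hypothetical extension providing regular expressions as a package
   # rather than as a built-in command.
   if {[catch {package require tclre} err]} {
       # The extension is not available; either fall back or report it.
       puts stderr "regular-expression extension not available: $err"
       return
   }

   # From here on, the extension's (hypothetical) commands can be used.
   if {[re::match {^[0-9]+$} $input]} {
       puts "all digits"
   }

Note that [package require] pays its cost only once per interpreter (later calls just return the version number), so the run-time overhead is mostly the extra guard code at each entry point rather than repeated loading.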