[Starkit]s are about packaging: "wrapping" the pieces that comprise an application or a library so they can be deployed and used as a single file. Occasionally, people ask what the point is of putting even a single script into a starkit (such as [http://mini.net/sdarchive/tkcon.kit]):

   * compression: because it's smaller (zipped)
   * uniformity: because then, everything is a starkit
   * evolution: because it's a good start if more gets added later
   * packages: because you can also add all the packages it needs, if any
   * sdarchive: because it can then be submitted to the Starkit Developer Archive [http://mini.net/sdarchive]
   * stability: because the code is not editable as is (that's both a pro and a con)
   * starpacks: because it's easy to turn it into a starpack, i.e. to include Tcl/Tk itself with it (see the sketch just below)
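To make this concrete, here is a rough sketch of the usual workflow with the [sdx] utility, run from a shell. It assumes sdx and a [Tclkit] runtime are installed; the file names (and the ./tclkit-copy path) are illustrative, not prescriptive:

 # quick-wrap a single script into a starkit
 sdx qwrap tkcon.tcl                     # produces tkcon.kit

 # unwrap, add any needed packages under tkcon.vfs/lib, then re-wrap
 sdx unwrap tkcon.kit                    # explodes into tkcon.vfs/
 sdx wrap tkcon.kit                      # wraps tkcon.vfs/ back up

 # turn it into a starpack, i.e. bundle the Tcl/Tk runtime as well
 # (use a copy of tclkit, not the one currently running sdx)
 sdx wrap tkcon -runtime ./tclkit-copy

qwrap is the quick path for a single script; the unwrap/wrap round trip is how packages get added under the kit's lib directory before it is sealed again.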
----

'''A digression: wrapping vs. updates'''

The "packages" argument has two sides to it. On the one hand, developers tend to create a world they work in, where all the tools are set up and configured to be as convenient as possible, and where all packages are placed in an on-disk repository. On the other hand, there is the issue of deployment, where that development environment is absent.

The advantages cited for the first, more widespread and traditional, perspective are that it is far easier to maintain revisions and upgrades of each of the different packages used by an application, and that it is more efficient in disk space (and memory, in the case of shared libs). IMO, both arguments need to be reviewed in the light of ''today's'' technologies and trends:

'''Easier updates''' - it is of course much simpler to update everything through a single copy of a package. But IMO this misses the point completely: in practice, the ''last thing you want'' as a developer is to break stuff. Breakage is so common with frequent updates and the not-always-perfect QC of OSS software that, in my experience, I tend to delay updating a package unless there is a known problem and a resolution is needed. The reality is that I go out of my way to make sure nothing ever changes in a project once it is past the earliest development stages.

There is an analogy which applies here, IMO: when you buy a car, you don't go out and replace it with an upgrade whenever a new model comes out. Our streets are filled with cars which all have different "version numbers".

To put it differently: if it ain't broken, don't fix it. I'd like to extend that to: if it's only broken in ways you don't care that much about, don't fix it. The urge to update to the latest revision, even with lower-volume software where mishaps and imperfections are far more common and long-lived, is a bad one. We techies tend to update because we can, losing any benefits that come from keeping a software solution stable, known, and dependable (yes, even in its weaknesses).

'''Disk space''' - once a software solution has been created and tested, it needs to be "sealed" in some way, to reduce the brittleness which inevitably comes with development (try flipping one random bit in a project, and see how often such a change turns a working system into one which fails catastrophically). A copy of everything needed is the simplest possible way to do so: it helps create a new state with an identity, and by being a single file, it is marvelously identifiable, robust, and self-contained. As for disk space in terms of bits: that is by now irrelevant. Every development system can probably store hundreds or even thousands of copies without filling up. Add to that the replication to other servers and destination machines, and the issue is really moot. -[jcw]