[GPS]: While surfing I stumbled upon an article that I found very interesting. It has a lot of meat (real, useful content) to it, which was unexpected given the title. http://www.dexterity.com/articles/zero-defect-software-development.htm

----

[DKF] - Good page, and I agree with it 100%, both as a software engineer and as a Tcl developer. Now, what other Wiki pages can we link this one in with?

----

[KPV] - Interesting page, but I disagree. Software development has many aspects, including new features, bug fixes, refactoring, etc. Unfortunately you have only a limited amount of resources and time, and you have to balance all those factors. As Joel Spolsky writes in the excellent essay ''Hard-assed Bug Fixin' '' [http://www.joelonsoftware.com/articles/fog0000000014.html]: "Fixing bugs is only important when the value of having the bug fixed exceeds the cost of fixing it."

----

[CL] makes an effort to relate this convincingly to [Tcl], and comes up with this: Tcl has an advantage (over, let's say, [C]) in ZDSD 'cause there's such a short distance between idea and executable.

----

[TV] It brings to my memory the term 'correctness by construction', which one could apply to function (de)composition techniques. It breaks down at the first loop; then you'd have to do protocol analysis, maybe process algebra, for verification/construction. Of course the major advantages for error-free development with Tcl/Tk are that memory management is absent (and, as far as it is internal, near perfect), pointer dealings can be limited to elements from well-formed sets, and the user interface should be faultless, at least at the level of the elements one applies. Major advantages. I guess timing is harder.

[GPS] Tcl does have memory management issues. Global arrays are not collected, and things like http::cleanup have to be called, or references need to be counted, to clean up Tcl structures (a sketch of that pattern appears below). In particular, due to the lack of a garbage collector, memory faults and leaks are all too common in the C code. There are ways around this: the [Boehm Garbage Collector] could possibly be used in future Tcl versions, which would eliminate most issues with leaks (some languages already use it).

[TV] I'm not sure what you mean by memory faults, which is an important issue. I have no problem with globals staying around as long as I don't unset them on purpose; that's what globals are for, so that is not a fault imo. And if setting and resetting them to new values never makes them eat more than, say, twice (plus a bit) the maximum reasonable memory space required to store the content, then I don't see that as a fault either.

The idea of garbage collection, I guess, would be to have all chunks of malloc-ed or 'object-defined' memory deleted as soon as no 'official' pointer or other reference to them is present anymore. Losing the last reference without freeing is a programming error in my opinion, and is a 'fault': you should deallocate memory you no longer reference, and I'd say that is normally always possible. I didn't look into the Tcl code much, but I'd guess it by and large does not contain such errors. Having to delete global vars, as with the http library (which is written in Tcl and can therefore not be blamed for 'errors' in the Tcl core), I'd think is not a language error: you have to close files you open, too.
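To make the http::cleanup point above concrete, here is a minimal sketch of the explicit-cleanup pattern (the URL is only a placeholder): http::geturl returns a token that names a global state array, and that array stays around until you release it yourself.

   package require http

   # http::geturl returns a token naming a global state array;
   # the array is not collected automatically when the token goes away.
   set tok [http::geturl http://example.com/]
   puts [http::status $tok]

   # Explicit cleanup, analogous to closing a file you opened:
   # this unsets the global state array behind the token.
   http::cleanup $tok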
I know that in [tclhttpd] descriptors of kept-alive connections linger around as globals, which is not really a Tcl error, and not necessarily a tclhttpd error either, though it would be neater to limit how long they stay around. That leaves me with the question of whether the Tcl/Tk core code 'leaks' memory: running tclhttpd for weeks in a row doesn't make me conclude it eats lots of memory, but I don't know, and that would indeed be an error.

A whole page might be in order on what happens with the untidy and basically unpaged stack and heap(s) of a Tcl executable on, for instance, Windows, and even Linux (hopefully not Unix). Say the core allocates 1000 variables of 99999 bytes each, and the Tcl program then runs such that each variable's content is reduced to 66666 bytes; does that mean we can subsequently assign 1000 vars of 33333 bytes apiece and end up with roughly the same heap (program-related, malloc-ed) memory size?
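That question lends itself to a rough experiment. Tcl offers no script-level view of the heap (the memory command exists only in special debug builds), so the sketch below is a hedged illustration rather than a measurement tool: it pauses at each stage so you can watch the process size externally (e.g. with ps or top). Whether the freed 99999-byte blocks get reused for the 33333-byte values depends on Tcl's allocator and the platform malloc.

   # Stage 1: 1000 values of 99999 bytes each.
   for {set i 0} {$i < 1000} {incr i} {
       set v($i) [string repeat x 99999]
   }
   puts "stage 1: 1000 x 99999 bytes; check process size, then press Return"
   gets stdin

   # Stage 2: shrink each value to 66666 bytes (indices 0..66665).
   for {set i 0} {$i < 1000} {incr i} {
       set v($i) [string range $v($i) 0 66665]
   }
   puts "stage 2: shrunk to 66666 bytes; check again, then press Return"
   gets stdin

   # Stage 3: 1000 additional values of 33333 bytes each -- does the
   # heap stay roughly the same size, per the question above?
   for {set i 0} {$i < 1000} {incr i} {
       set w($i) [string repeat y 33333]
   }
   puts "stage 3: added 1000 x 33333 bytes; check one final time"
   gets stdin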