There are many things to say about performance.
I'll go into the performance of this site on the web for now, i.e. through the CGI interface:
Currently, the web site returns every page through CGI. A 2 MB standalone executable containing Tcl/Tk and MetaKit [L1 ] (called TclKit) gets launched for every page you look at. That executable then sources Don Libes' CGI package, among lots of other things. The HTML you see is generated on-the-fly for each request from the plain text (with its wiki-specific markup).
Considering what is going on, it's in fact amazing how fast the site is. But this won't scale well as the site grows. The plan is to generate static pages, and to revert to this CGI-based approach only for editing, searching, change histories, etc. It's not hard IMO, and it can be done without altering any URLs; it just hasn't been done yet... is it time (already?) to implement this "over-drive"?
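The "over-drive" idea boils down to writing a static HTML copy of a page whenever it is edited, so the web server can hand it out without launching TclKit at all. Here is a minimal sketch of that, purely illustrative: the names (saveStaticCopy, renderPage, the directory) are made up, not WiKit's actual code:

```tcl
# Sketch of the "over-drive" plan: after each edit, regenerate a static
# HTML copy so the web server can serve it directly; CGI would then only
# be needed for editing, searching, and change histories.
# All names here are hypothetical.
proc saveStaticCopy {htmlDir pageNum html} {
    set path [file join $htmlDir $pageNum.html]
    set fd [open $path w]
    puts -nonewline $fd $html
    close $fd
    return $path
}

# An edit handler would then do something like:
#   saveStaticCopy /var/www/tcl 42 [renderPage 42]
# where renderPage stands in for the existing on-the-fly HTML generator.
```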
Unfortunately, this also brings up a limitation of the current code. This version uses file locking to prevent two concurrent accesses from clobbering the database. But, while trying to figure out how to do that in the most general way, I forgot to add a mechanism which is important for CGI access: the system should wait and retry if the lock is held, instead of just giving up. As a result, the current system will occasionally return an HTTP error. This is harmless, you can simply re-fetch the page (or re-save your edits). I'll add retries in the next revision of WiKit, of course...
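A retry loop of the sort described could look like the following in Tcl. This is only a sketch under my own naming (acquireLock, releaseLock, the lock-file approach); WiKit's real locking code may differ:

```tcl
# Wait-and-retry locking sketch: try to create a lock file exclusively;
# if it already exists, sleep briefly and try again instead of giving up
# right away, so concurrent CGI requests rarely see an HTTP error.
proc acquireLock {lockFile {retries 10} {delayMs 200}} {
    for {set i 0} {$i < $retries} {incr i} {
        # EXCL makes open fail if the lock file already exists
        if {![catch {open $lockFile {WRONLY CREAT EXCL}} fd]} {
            close $fd
            return 1
        }
        after $delayMs   ;# blocking sleep, then try again
    }
    return 0   ;# only now would the caller report an HTTP error
}

proc releaseLock {lockFile} {
    file delete -force $lockFile
}
```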
-- JC
Actually, it is only 'harmless' in some cases; at other times, the browser tosses away the form data, and the user has to start over.
Nov 28, 2001 - after the umpteenth day of seeing huge delays and failures in accessing the Tcl'ers Wiki, I finally found some time to dive in - JCW
Aside from all this, there is also good news to report: the current system stores all changes in two different formats (and all of it is copied in several periodic backup cycles), meaning that even through the glitches, which may continue until I have the new stuff fully solid, all page contents and page history are safe.
In terms of scalability, the Tcl'ers Wiki can grow for a long long time (with real history tracking and access coming up). The basic caching mechanism provides full Apache web-server performance, but it depends critically on using the following format to access and refer to pages:
http://mini.net/tcl/N.html
With N being the page number. All other accesses are slower, because they must fall back on a CGI script (that includes searches, since these can't be cached as static pages).
Specifically, do not use http://mini.net/tcl/N to refer to page N, because it bypasses the cache...
Accesses via purl.org/tcl/ and other PURLs (see [L2 ]) should also work fine. Accesses via mini.net/cgi-bin/wikit/ are "deprecated", please adjust such links to the other preferred formats. They still work (bypassing the cache), but may cease to at some future date.
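The fall-through logic described above amounts to something like this; an illustrative sketch only, not the actual mini.net server setup, and the proc name and return values are made up:

```tcl
# Hypothetical dispatcher showing why /tcl/N.html is fast and the other
# URL forms are not: a request for N.html can be answered straight from a
# pre-generated file, while everything else (edits, searches, /tcl/N,
# cgi-bin/wikit/ links) must launch the TclKit CGI executable.
proc dispatch {htmlDir url} {
    if {[regexp {^/tcl/(\d+)\.html$} $url -> pageNum]
            && [file exists [file join $htmlDir $pageNum.html]]} {
        return "static"   ;# serve the cached copy directly
    }
    return "cgi"          ;# fall back to the slow CGI path
}
```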
LV Great news! However, I've a question. I've only gotten wiki search URLs to work via the cgi-bin/wikit type URL. What is the recommended method for providing a wikit URL that includes a search for a page, if cgi-bin/wikit is deprecated? P.S. you should update the front page of the wiki - since that recommends using the cgi-bin/wikit style URL...