This page is being obsoleted.
LV 2007 Oct 15
Okay, besides Lynx not being able to handle Unicode correctly, now I discover that Firefox 2.0 on Windows XP, set to Unicode character encoding, apparently doesn't work right either. I tried to make a change on the XOTcl page, and Wubwiki reported that the resulting page had an invalid character.
AlbrechtMucha There were strange character combinations around "Publisher-Subscriber" in the middle of the document. The same message came with Firefox on Linux.
So, I don't know if this is a problem or just a strange behavior. Maybe someone in the know can comment.
I see, on a regular basis, entries on the Recent Changes page without a user field specified. There is always an IP address. But I thought that in this latest incarnation of the wiki code, page changes required a user "name".
Is there perhaps a bug that allows that field to be empty or just contain a space or something?
CMcC A bug in closing a connection resulted in an infinite loop, leading to memory exhaustion, which caused the process to exit. Side effect: the logs had grown to 2 GB, and that size prevented the daemon from restarting. 2Jul07
CMcC The problem with page saving seems to be fixed. The intent is to redirect to the newly edited page, but getting Create to redirect is subtle; there's really no good way to do this. Currently using the 303 See Other HTTP response, because 201 isn't implemented by any browsers. Added a meta-redirect as well. Erk.
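The redirect-after-save idea CMcC describes is the POST/Redirect/GET pattern. A minimal sketch in Python (purely illustrative; WubWikit itself is written in Tcl, and this response-builder is a made-up helper):

```python
# Sketch of POST/Redirect/GET using 303 See Other: after a
# successful edit POST, the server answers with a redirect so
# the browser re-fetches the saved page with a plain GET.

def see_other(location):
    """Build a minimal HTTP/1.1 303 response pointing at the edited page."""
    return (
        "HTTP/1.1 303 See Other\r\n"
        f"Location: {location}\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )

print(see_other("/15501"))
```

303 (unlike 302) explicitly tells the browser to follow the redirect with GET, which is why it suits form submissions.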
KBK Implementing history required several pages to be renumbered. This was necessary because the database corruption of 17 March 2006, and the subsequent restoration on a new machine, caused multiple copies of some pages to appear; these then went their separate ways and caused conflicts when trying to merge the change logs from the old and new machines. A list of Pages renumbered on 12 June 2007 is available.
KBK Owing to various problems that happened over the years, the Wiki is known to have a number of Pages containing invalid UTF-8 sequences. People who are interested in improving the Wiki are invited to attempt to repair the text of these pages.
LV Wiki gnomes - I am not a good candidate to attempt to fix the invalid UTF-8, given the limited fonts I have here and, even worse, my limited knowledge of other languages. Would some of you multilingual types take a crack at it? Or, at the very least, see if you can determine who entered the data originally and email them requesting they update their contribution?
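For gnomes who do want to hunt down the bad bytes, a small script can at least locate them without any knowledge of the language involved. A sketch in Python (illustrative only; the real store is wikit.db, accessed from Tcl, and the sample bytes here are made up):

```python
# Locate invalid UTF-8 sequences in a page's raw bytes by
# repeatedly attempting a strict decode and recording each
# offset where decoding fails.

def find_invalid_utf8(data: bytes):
    """Return (offset, byte_value) pairs where strict UTF-8 decoding fails."""
    bad = []
    pos = 0
    while pos < len(data):
        try:
            data[pos:].decode("utf-8", errors="strict")
            break                      # the rest decodes cleanly
        except UnicodeDecodeError as e:
            off = pos + e.start
            bad.append((off, data[off]))
            pos = off + 1              # skip the bad byte, keep scanning
    return bad

page = b"caf\xe9 au lait"             # 0xE9 is Latin-1, not valid UTF-8
print(find_invalid_utf8(page))        # [(3, 233)]
```

Typical corruption of this era was Latin-1 bytes embedded in what should be UTF-8, which this kind of scan flags immediately.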
Please add new observations at the issue tracking link here: [L1 ].
instead of 
in lynx, and even the GUI browsers don't like the rendered html. Lars H: This looks like the same problem as in the old wiki. The problem then was that the links-in-free-text parser was also used for links-within-brackets, which meant every kind of link ended at the first thing that looked like punctuation. The extra garbage you see is because there is some pre- and postprocessing of bracketed material which conversely assumes everything between the brackets is turned into a link. When these assumptions collide, markup internal to the parser survives into the output.
in "Larry Virden" and get that page just fine.
9th June: still happening. Eventually, after the first click, I get "This page contains no data" messages. CMcC Steve, can you give me a nod when you're on the chat and able to help me with some testing?
The current diagnostic hypothesis is that part of the server is getting an error which is not being protected by a catch, and hence is not providing feedback. Working to extend catch coverage to be total, to get a detailed diagnosis.
CMcC the latest version isn't online yet. It will be soon. I wanted to wait until I could get the server to reliably go idle, so we know there are no half-completed changes in the wikit.db.
In Summary (ordered by importance)
LV So, what are the issues outstanding for returning access to revision history?
Palliative: Tried setting the system encoding to utf-8; this should mitigate the lack of active support for charset in content (it shouldn't be a problem, as the stored data ought to be utf-8 encoded). Mitigated, not solved. Tested on LV's example /15501, and it seems to permit me to save.
Lars H: An old suggestion I made for detecting corruption in the old wikit was to include a "hidden" form item with some Unicode characters chosen by the server. The idea is that if the browser somehow corrupts the page contents in editing (if memory serves, it was usually that utf-8 encoded data was interpreted as being iso8859-1 encoded somewhere in the chain), it should mangle the hidden item as well, and since this item shouldn't change, the server could detect the corruption. Maybe the new encoding issues are different, but raising the idea shouldn't hurt.
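Lars H's canary idea can be sketched as follows (a Python illustration with hypothetical helpers, not actual wikit code):

```python
# Embed a hidden Unicode "canary" in the edit form; if the
# browser round-trips it changed, the submitted page text was
# probably mangled the same way, so the save should be rejected.

CANARY = "\u00e9\u2013\u4e2d"   # é, en dash, and a CJK character

def edit_form(page_text):
    """Render a (simplified) edit form carrying the hidden canary."""
    return (f'<input type="hidden" name="canary" value="{CANARY}">'
            f'<textarea name="text">{page_text}</textarea>')

def save_ok(submitted_canary):
    """True only if the canary survived the browser round trip."""
    return submitted_canary == CANARY

# A browser that re-interpreted utf-8 bytes as iso8859-1 would
# submit something like this instead of the original canary:
mangled = CANARY.encode("utf-8").decode("iso8859-1")
print(save_ok(CANARY))    # True
print(save_ok(mangled))   # False
```

Because the server knows exactly what the canary should be, any utf-8/iso8859-1 confusion in the browser chain is detectable before the corrupted text ever reaches the database.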
CMcC I've just discovered that (it appears) some clients can sit idle for 20s while sending a header. I'm surprised by that, though perhaps I shouldn't be. The response to this situation is going to change, and now the connection will be dropped when all traffic bound for the client has been sent. See if that helps.
LV It doesn't appear to be helping. I am still seeing 20-30 second delays before completing the display of pages. But I am also seeing a few seconds delays before a page begins displaying. The weirdness here is that it isn't consistent - sometimes, a page starts displaying right away, and other times there is a long delay before the page begins to display. In both cases, the 20+ second delay between completion of page display and closing of connection continues.
CMcC is this in Lynx? Which browser?
LV Yes, this is lynx. I sent you an email from the primary lynx developer. I'm using lynx 2.8.7dev.4 right now.
LV Wow - just a little while later, and what a difference! I emailed Colin, thanking him and telling him that it is like heaven now!
LV Access by the lynx text web browser seems to work strangely - for example, when going to page 4, the page is displayed, but the browser then seems to sit waiting on something from the server for a minute or so before returning control to the user for input.
CMcC I thought this was related to the x-system problem, but it definitely is not that - that problem's been fixed, but the lynx unfriendliness persists.
LV More info on this problem. What I see, if I check the status, is a message that 18K of a 22K page has been sent back to lynx. However, if I tell lynx to stop waiting for info from the server, the entire wiki page is present. I don't know why the other 4K of data is being held up, but if this is load related, then the load on the wiki must be horrendous, because about 65% of the pages I visit result in this behavior.
CMcC It sounds like Lynx is expecting more input than it's getting. I wonder if this is related to the Unicode problem? If the encoding was bollocksed (which it is) and graphical browsers were more savvy to the brokenness than Lynx is, that might account for it.
CMcC 3May07 - it is reported that lynx v2.8.5 can access the wiki with no problems. That raises the question: what changed between 2.8.5 and 2.8.7? Secondly, I can't get lynx to access the wiki from here at all; no connection arrives at the wiki machine. Something screwy going on with the 2.8.7 dev versions?
LV 2007 May 17 - for several weeks, lynx users were enjoying access to the wikit. However, today I notice the problem is back. I don't know if it returned today, or late yesterday...
NEM: I created the page invoke yesterday, but it vanished by today. I've recreated it, but my user page (Neil Madden) still shows the link as yet to be created. Likewise the title of the invoke page was not a link, indicating that no pages refer to it (that will probably change now that I've linked to it from here).
LV Even in the previous incarnation of the wikit, I noticed cases where I would create a link to a page, then use that link to go to the new page, come back and find the link still marked as yet to be created. And the weird thing was that it wasn't consistent. Editing the page with the original link on it seems to resolve the situation. What concerns me more is that this is not the first time I've noticed someone saying that the page they created disappeared. However, the last point in this section says that there was a database problem yesterday - perhaps those types of events are causing the loss?
CMcC I expect that the problem might be caused by the fact that I'm restarting the wikit several times per day, adding enhancements, fixing problems and such. If so, the problem is inherently self-limiting, to the extent that the number of bugs, and hence the frequency of restarts, moves monotonically and asymptotically toward an average of 0 per day.
There is a facility for the console to send the wiki server idle, such that it will defer new connections and wait until current connections go away. However, until I have the connection handling properly sorted out (which is what /_activity is supposed to facilitate), go_idle is relatively ineffective (because some connections get wedged open).
Now that the connection wedging seems to be ameliorated, if not completely cured, it's likely that (if the problem is actually due to uncoordinated restarts) the problem NEM saw will disappear.
IF, on the other hand, as LV suggests, there's a deeper problem preexisting in the Wikit, we'll burn that bridge when we come to it.
LV Colin, I'm not certain I understand why restarting the server would cause pages to be renumbered or possibly to disappear. Surely as new pages are created, the contents are still being cached for future source code incorporation...
CMcC fixed 2May07
Since it isn't listed on the page yet, the revisions link doesn't yet lead to the ability to see revisions. What about the wiki history server - is that available? If not, the wikignomes need access so we can repair things.
CMcC the wikit backend is still storing all the mods, but there's nothing checking them into CVS, nor is there any interface to CVS. It's a temporary situation that will have to be dealt with eventually, because the revision-history intermediate storage mechanism stores whole pages, and it's going to get out of control.
I think, personally, that a metakit (or other) solution based on the tcllib diff/merge list primitives would be better than running off a CVS, although CVS has the advantage of keeping the data in a more tangible form.
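The diff-based alternative CMcC mentions can be sketched briefly. This Python version (using the standard difflib, not the tcllib diff/merge primitives the actual suggestion refers to) just illustrates the idea of storing a per-revision delta instead of a whole page:

```python
import difflib

def make_delta(old, new):
    """Store a revision as an ndiff delta between the two line lists."""
    return list(difflib.ndiff(old.splitlines(keepends=True),
                              new.splitlines(keepends=True)))

def apply_delta(delta, which):
    """Reconstruct one side of the delta: which=1 -> old, which=2 -> new."""
    return "".join(difflib.restore(delta, which))

old = "line one\nline two\nline three\n"
new = "line one\nline 2\nline three\n"
delta = make_delta(old, new)
print(apply_delta(delta, 2) == new)   # True
print(apply_delta(delta, 1) == old)   # True
```

Storing deltas keeps the history store proportional to the size of the changes rather than the size of the pages, which is precisely the whole-page-storage growth problem described above.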
CMcC would be happy to talk to anyone about the prospects for implementation. LV Feel free to use the TclersWiki mailing list to discuss any issues you wish. If you'd rather discuss it in private email, then perhaps you could post a "call for discussion" on TclersWiki.
alove ... it throws you to another search page, where you must type your search term again.
CMcC when I try it it takes me to the search results page, and permits me to type in a new search ... is this not how it works for you?
alove No, it does not provide any search results. I'm not sure why you're getting the expected result but I'm not. I have refreshed cache, but no change.
LV I was getting the same behavior that alove describes - access the front page, replace the text "Search" with the word "test", press return, and I get the following:
and the Search page appears, with nothing in the search phrase box (and an unlabeled search box at the bottom of the page). Then I did as CMcC suggested, went back to the front page, did a shift-refresh (in Firefox), and now search works as expected.
alove Also, when I do a search, go to a page link and then click the "back" button in my browser, I get the error "Page expired". I'm using IE6 on Windows 2000. I imagine this is due to caching, which you are working on.
CMcC Caching appears to be completely working now.
LV What I'm seeing, at least in Firefox, is the message "The page you are trying to view contains POSTDATA that has expired from cache. If you resend the data, any action the form carried out (such as a search or online purchase) will be repeated. To resend the data, click OK. Otherwise, click Cancel."
CMcC I (fairly arbitrarily) chose to use POST instead of GET for searches. It doesn't make much difference, serverside, in this case. If there's consensus that it be changed back to GET, that can be done.
AM This is a copy of the message text:
The page you are trying to view contains POSTDATA that has expired from cache. If you resend data, any action the form carried out (such as a search or online purchase) will be repeated. To resend the data, click OK. Otherwise, click Cancel.
NEM This is because the new wikit search form uses HTTP POST rather than GET to submit queries. Firefox and other browsers will generate this warning when you try and revisit a POST result. Using GET would be better, as it would also allow you to cut 'n' paste or bookmark search results.
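NEM's point about GET being bookmarkable comes down to the query living in the URL itself. A small Python sketch (the `S` parameter name and URL shape are illustrative assumptions, not necessarily the wiki's actual search endpoint):

```python
from urllib.parse import urlencode

# With GET, the search term is encoded into the URL, so the
# result page can be bookmarked, shared, or revisited with the
# back button - and no POSTDATA warning is ever triggered.

def search_url(base, query):
    """Build a bookmarkable GET search URL for the given query."""
    return f"{base}?{urlencode({'S': query})}"

print(search_url("https://wiki.tcl-lang.org/2", "graph"))
# https://wiki.tcl-lang.org/2?S=graph
```

A POSTed search, by contrast, carries the query in the request body, so the browser must re-submit the form (with a warning) just to re-display results.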
In the Search field, typing wikit redirects to https://wiki.tcl-lang.org/1 instead of giving search results.
CMcC isn't that correct behavior? /1 is a page entitled 'wikit', isn't that the defined behavior of wikit, or am I missing something? Would you like searches via /2 to return something different?
Jeff Smith With wikit run under cgi, searches via /2 would display, in the results, all the pages with the search string in the page title. It seems that with WubWikit if the search string matches a specific page title a redirection to that page is happening.
Please reconsider this decision. Right now, search is useless because of this behavior. Before, if I wanted to see which pages contained the word graph, for instance, I could type graph in the search entry widget and get the list. Now it appears to be impossible to do such a thing. Not only are exact matches doing the mapping, but if I type grap, because there is a page called graph, that page is displayed rather than the search results page.
CMcC A search via /2 for 'kw' is currently, I believe, precisely the same as a search using /kw. As far as I can see, and by design the behaviour you refer to only occurs when a search term yields precisely one match or when a page title is exactly matched by the search term. I understand you are saying that you would prefer a search via page /2 to always return a search results page, and not to try for exact title match. Is this correct? I seem to recall that this was a feature of the original wikit, and I can see its usefulness. In the meantime, searching for 'kw*' will return a larger set of values.
JSI Searching and bookmarking URLs on the Tcl'ers Wiki says ...The following URL is an instruction to look for a page titled "hawaii": http://purl.org/tcl/wiki/hawaii . Assuming there is a page titled "hawaii" (case is ignored), the above URL will lead directly to that page... So if a page named hawaii already exists, then https://wiki.tcl-lang.org/hawaii should jump to this page and https://wiki.tcl-lang.org/2?hawaii should find all pages with hawaii in their name.
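The two lookup rules JSI quotes can be sketched side by side (a Python illustration with made-up page ids and helper names, not WubWikit internals): a direct `/title` URL jumps to an exact, case-insensitive title match, while a search via `/2` always returns the list of matching titles.

```python
PAGES = {"hawaii": 3142, "graph": 17, "graph theory": 204}   # made-up ids

def direct_lookup(title):
    """Direct URL access: return the page id for an exact title match, else None."""
    return PAGES.get(title.lower())

def search(term):
    """Search via /2: return every title containing the term - never a redirect."""
    term = term.lower()
    return sorted(t for t in PAGES if term in t)

print(direct_lookup("Hawaii"))   # 3142
print(search("graph"))           # ['graph', 'graph theory']
```

Keeping the two behaviors on separate paths is what resolves the complaint above: exact-match redirection stays useful for direct URLs without swallowing the search results page.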
CMcC You're right - thanks for pointing it out and sticking to your guns. Fixed - please test.
Jeff Smith It seems to be working correctly now :) However, another input text box is appearing at the end of each page. Ah! Just realised it is a search entry at the end of each page. Very nice!!
Please use http://code.google.com/p/wikitcl/issues/list to report problems and issues, this will allow us to track them more carefully.