RS was asked the following, and herewith passes the questions on to the community:
The story goes like this: I have a set of 8-bit BDF (or PCF) fonts that are almost full, in the sense that nearly 250 code positions are occupied by glyphs. (The unoccupied ones are "forbidden" positions like \x00 (NUL), \x09 (TAB), \x0a (LF), \xa0 (NBSP), etc.) Older versions of Tcl/Tk, in tandem with older X distributions, displayed all characters in Tk widgets perfectly. Newer versions of Tcl/Tk (and newer X distributions) do not display certain characters correctly; for example, I always get \x01 displayed as the escape sequence \x01, and not as the glyph defined in the BDF files at encoding 1.
If one has to use non-system encodings (such as Japanese, Korean, or Hebrew), one may use the "encoding convertfrom" or "encoding convertto" commands. "fconfigure -encoding" requires a channel, and makes that channel read/write strings in the specified encoding. "encoding system", on the other hand, may change the behavior of the whole application.
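For reference, these commands are used roughly as follows (the file name greek.txt and the choice of iso8859-7 are only illustrative):

```tcl
# Read raw bytes and convert them explicitly:
set f [open greek.txt r]
fconfigure $f -translation binary
set bytes [read $f]
close $f
set text [encoding convertfrom iso8859-7 $bytes]

# Alternatively, let the channel layer convert while reading:
set f [open greek.txt r]
fconfigure $f -encoding iso8859-7
set text [read $f]
close $f

# [encoding system] reports (or, with an argument, changes) the
# encoding Tcl assumes when talking to the operating system:
puts [encoding system]
```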
But my problem seems to be fundamentally different. I have a string, containing \x01 in a straightforward 8-bit encoding, to insert into a text widget, label, or button, and I want to see the corresponding glyph defined in the font files, not the escape sequence \x01. Is there a way to do this?
RS: My first idea is that one of the MS-DOS codepage encodings delivered with Tcl since 8.1 can help. Here is a little code page browser that displays the characters 0x00..0xFF of the selected encoding (double-click in the listbox).
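A minimal Tk sketch of such a code page browser (an illustrative reconstruction, not necessarily the original code; widget names are arbitrary):

```tcl
package require Tk

# Listbox of available encodings, plus a text widget for the output:
listbox .lb -yscrollcommand {.sb set}
scrollbar .sb -command {.lb yview}
text .t -width 40 -height 16 -font {Courier 12}
pack .lb .sb -side left -fill y
pack .t -side right -fill both -expand 1

foreach enc [lsort [encoding names]] {.lb insert end $enc}

# Double-click an encoding to render bytes 0x00..0xFF through it:
bind .lb <Double-1> {
    set enc [.lb get active]
    .t delete 1.0 end
    for {set i 0} {$i < 256} {incr i} {
        .t insert end [encoding convertfrom $enc [format %c $i]]
        if {$i % 16 == 15} {.t insert end \n}
    }
}
```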
But from what I see, character positions 0x00..0x1F have no glyph in any encoding.
DKF: So Tcl doesn't know the encoding that the font is using. Luckily, writing a new encoding for 8-bit fonts is not too hard, and once you've done that, all you need to do is to drop the *.enc file into the right directory in your Tcl distribution and you should be able to use the font sensibly.
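Once the .enc file is in place, usage might look like the following sketch (the encoding name myfont and the variable rawbytes are assumptions; `encoding dirs` requires Tcl 8.5 or later):

```tcl
# Where Tcl looks for *.enc files (Tcl 8.5+):
puts [encoding dirs]

# After dropping myfont.enc into one of those directories,
# the new encoding shows up here:
puts [encoding names]

# Convert the font's 8-bit data so the widget receives the Unicode
# characters that the glyphs were mapped to (assuming a label .l):
set text [encoding convertfrom myfont $rawbytes]
.l configure -text $text
```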
But the first thing to do is to work out the correspondence between the glyphs in your font(s) and the glyphs in UNICODE, which you have to do yourself. (You can discover what the unicode characters actually are by downloading the right PDF from the Unicode Consortium website http://www.unicode.org/charts/ )
Once you've done that, you can write your encoding in much the same style as Tcl's existing .enc files; if you have the CVS distribution of Tcl, you can also create the source for those files (in the tools/encoding directory) and use one of the programs in the tools directory to build the .enc file from it.
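For orientation, a single-byte .enc file looks roughly like this (layout recalled from Tcl's shipped files; compare against a real one such as iso8859-1.enc before relying on the details). After a comment line, "S" marks a single-byte encoding, the next line gives the fallback character (003F, i.e. "?"), a symbol flag, and the page count, followed by the page number and 16 rows of 16 four-digit hexadecimal Unicode values, one per font position:

```
# Encoding file: myfont, single-byte
S
003F 0 1
00
0000000100020003000400050006000700080009000A000B000C000D000E000F
0010001100120013001400150016001700180019001A001B001C001D001E001F
(...14 more rows of 16 values...)
```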
LV: I'm seeing a problem that initially seemed like the one above. However, we've tried all sorts of things with encodings, and we still have problems. On top of that, even non-Tcl/Tk programs, like Windows Emacs, have problems displaying things with these fonts. I'm uncertain what tools are available to analyze font file problems. Anyone have pointers to useful tools? The initial fonts were bitmap fonts which a user converted to Windows using Exceed tools. However, today I got a report about a Windows TrueType font causing Tk 8.5 problems. Anyone aware of issues in this area?
LV: Anyone have a pointer to a tutorial for creating your own encoding files?
It's easy for CL to imagine more power and capability in the font interface than is currently available. Along with TIPs 145 and 213, a number of introspection commands would help. As tclguy posted to comp.lang.tcl, "Discovery of specific chars in a font, mapping of char ranges to a font (in the creation of a named font), exposing of font fallbacks, and specifically disallowing font fallbacks in certain cases are among the items that could be exposed", along with the given-this-character-Tk-will-render-with-this-glyph computation.
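One small piece of this exists in Tk 8.5: [font actual] accepts an optional trailing character and reports the attributes of the font actually chosen to render it, which addresses part of the fallback introspection wished for above. A sketch:

```tcl
package require Tk

# Attributes the font description resolves to in general:
puts [font actual {Helvetica 12}]

# Attributes of the font actually used to render one character
# (here Cyrillic Zhe, which may come from a fallback font):
puts [font actual {Helvetica 12} -- \u0416]
```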