is a predicate for applications which consist only of Tcl code (no C extensions or packages).

Pros:

  • portable: will run on most or all platforms
  • easily edited, even in deployed form

Cons:

  • may run slower, especially in number-crunching
  • may expose implementation details that the end user is not supposed to know

In general, it is sensible to start coding an app in pure Tcl, and to replace with C code only those parts (if any) that are just too slow. (But CPUs are picking up speed considerably faster than C coders ;-) (RS)

LV writes: So does pure-tcl mean that the application does not use Tk, given that Tk is a C extension? Seems technically that would be the case.

There are several subtleties and even polysemies to this term. Sometimes we say, "pure Tcl" in contrast to Tk; thus the 2.0 version of sockspy is useful for certain contexts because it can be run as "pure Tcl" without the requirement for a Tk interpreter.

Sometimes one uses "pure Tcl" to exclude not only extensions, but also exec. This partially invalidates the more general conclusion that pure Tcl implementations are more transparent; for example, in a pedagogic situation with Unix adepts, I might choose to write

    set content [exec cat $file]

rather than

    set fp [open $file]
    set content [read $fp]
    close $fp

Is an example "pure Tcl" if it uses tcllib (a bundle of extensions, each of which is itself pure Tcl)? Our community seems to answer both "yes" and "no", at different times, simply by the evidence of the way it speaks.

While preferring a Tcl extension over a compiled extension is admirable - particularly if the reason is to make the code easier for the user to modify, or easier to deploy (no need to worry about what compiler, if any, is on the other side) - at times people go too far along this line.

The question sometimes is whether it really is better to write some code from scratch in Tcl to avoid those issues, rather than use the established work of someone else who happened to choose a language that requires compilation. Beginning with untested code does mean, at times, starting the debugging steps over again to prove the reliability of the code.

It seems that pure Tcl is a necessary but not sufficient condition for portability (that the code will run on most or all platforms).

In sticky areas like file paths and exec, where there are system-dependent behaviors, additional constraints are necessary to ensure portability. -- escargo

File paths are only a problem if you insist on using string commands on them. file join and friends can take care of the system-dependencies of file names. -- Lars H
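For example (a small sketch - the path components here are invented for illustration), the file command family builds and decomposes paths with the correct separator for each platform, so no string manipulation is needed:

    # Build a path portably instead of concatenating with "/" or "\\"
    set config [file join $::env(HOME) .myapp config.tcl]

    # Decompose it without string commands
    set dir  [file dirname $config]
    set name [file tail $config]

    # Resolve to an absolute, platform-normalized form
    set full [file normalize $config]
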

JMN 2003-09-13 What about automatic fallback to a pure Tcl version of an extension when a binary for the system is not present? Somewhere I saw mention of this behaviour - perhaps it was to do with base64 encoding(?)

Although the extra maintenance hassle of keeping a compiled version in sync with a Tcl version is probably not something that most extension authors would care for - does anyone know of any guidelines or a framework for such extensions?

Is it trivial to write a package to behave in such a manner, or are there issues such as a slight performance hit to consider? If a package is largely implemented in Tcl, but has a subset of commands available either as pure Tcl or compiled, should it check a flag each time it calls one of these commands, or should it optimize this decision away at initial package-load time - and does it matter? My assumption is that on package require the package could install the appropriate version without any need to clutter the calling code with tests.

Presumably the only differences apparent to the user of the package would be performance and introspection. It would seem a shaky idea in general to write code that relies on introspection of commands provided by packages, but perhaps at least things such as code profilers and debuggers legitimately do this. Can and should the author of a dual-implementation package take steps to make any slight differences visible at the Tcl script level less apparent? E.g. can they somehow disable info body?

What about version numbers - should package require report something different for each implementation? And for someone with both installed who wants to test with one or the other, should there be a standard way of selecting an implementation?

For all I know, these issues may simply not be important, but perhaps someone who has gone through the process may be able to provide some notes that give people considering such a system an overview of just how straightforward it is or isn't.

DKF: While it is possible to take steps to disable info body, there's no good reason to do so. WRT versions, it might be easier to use a wrapper package which loads the appropriate (separately versioned) implementation package.

PT 15-Sep-2003: There are now a number of packages in tcllib which implement this duality. The md4, md5, uuencode, yencode and crc packages will all attempt to use an external compiled extension, then a critcl-compiled version, and finally default to a pure-Tcl implementation. Having written a few of these, I find the simplest technique is to define a set of commands that implement the lowest level of the functionality you need. So the crc package doesn't implement a full crc32 command in C but just the algorithm core. Then we also implement the core in pure Tcl and use [interp alias] to select the correct implementation at run-time. This makes testing a lot simpler, as we can easily ensure that both implementations produce identical results. It also makes the user-level interface simpler to maintain, as this remains Tcl script.
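A minimal sketch of that technique (all names here are invented for illustration, not taken from the actual tcllib packages): the package defines the algorithm core twice and points a single internal command at whichever implementation is available, so the public interface never changes:

    namespace eval mypkg {
        # Pure-Tcl core: always available as the fallback
        proc CoreTcl {data} {
            # ... the algorithm, in Tcl ...
            return $data
        }
    }

    # Try the compiled core first (which would define mypkg::CoreC);
    # on failure, alias the core name to the pure-Tcl version.
    if {[catch {load libmypkgcore[info sharedlibextension]}]} {
        interp alias {} mypkg::Core {} mypkg::CoreTcl
    } else {
        interp alias {} mypkg::Core {} mypkg::CoreC
    }

    # The user-level interface stays in Tcl script either way.
    proc mypkg::process {data} {
        return [mypkg::Core $data]
    }

Because both cores sit behind the same alias, a test suite can call each implementation directly and compare results, as PT describes.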