Profiling Tcl presents various techniques and resources for profiling the operational characteristics of a Tcl-based system. A good understanding of the timing, control flow, resource utilization, and code coverage of a process can help with debugging and optimization.
The simplest form of profiling is to exploit Tcl's ability to time individual commands, or to print timestamps from within your application and perform the analysis manually. Procedures for this technique are outlined in How to Measure Performance.
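For example, the built-in time command reports how long a script takes, and clock microseconds (Tcl 8.5 or later) can be used to print raw timestamps around a section of interest. The procedure name compute and the variable data below are just placeholders:

   # Run the script 1000 times and report microseconds per iteration
   puts [time {compute $data} 1000]

   # Or print raw timestamps around a single section of interest
   set start [clock microseconds]
   compute $data
   puts "compute took [expr {[clock microseconds] - $start}] microseconds"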
You can redefine the proc command to insert timing code before and after each proc call, collect the data later from a global variable, and figure out which procedures are taking the most time (a sketch of this approach follows the catalog entry below). This is how the sage extension works; it is listed in the FAQ Extension Catalog [L1 ] (DEAD LINK) as:
   What:        Sage
   Where:       https://web.archive.org/web/20010430041015/http://www.geocities.com/SiliconValley/Ridge/2549/sage/
   Description: A Tcl/Tk runtime code analyzer profiling tool for Tcl/Tk applications.
                Requires Tcl 8.0, and was tested on Linux i386 and SunOS/Solaris.
                Currently at version 1.1.
   Updated:     11/2001
   Contact:     mailto:[email protected] (John Stump)
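Here is a minimal sketch of the proc-redefinition technique. The array names ::proftime and ::profcalls are illustrative, and it assumes Tcl 8.5 or later (so that incr auto-initializes array elements); a production version would also catch errors in the wrapped call and handle namespace-relative names more carefully:

   # Redefine [proc] so each procedure is created under a hidden name
   # and the visible name becomes a timing wrapper around it.
   rename proc _proc
   _proc proc {name arglist body} {
       # Create the real procedure under a hidden name ...
       _proc ${name}__real $arglist $body
       # ... and install a wrapper that accumulates elapsed time per name
       _proc $name args [string map [list %REAL% ${name}__real %NAME% $name] {
           set start [clock microseconds]
           set result [uplevel 1 [linsert $args 0 %REAL%]]
           incr ::proftime(%NAME%) [expr {[clock microseconds] - $start}]
           incr ::profcalls(%NAME%)
           return $result
       }]
   }

After the run, ::proftime and ::profcalls can be dumped and sorted to see where the time went.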
This adds overhead to each procedure call, so you will have to interpret the results with that in mind. A short procedure called millions of times will suffer more from the overhead than a long procedure called only a few times.
If you want more detail, you could profile each and every Tcl command instead of just the procedures. This would add much more overhead than procedure profiling, but could give you lots more information.
If you combine both the procedure and command profiling, you should be able to get a sorted list of time-consuming procedures, followed by the most time-consuming commands within those procedures. You would be able to find out, for example, that the regexp command in your parse procedure was taking up the most time. Figuring out which regexp command is responsible (if there are several) is a little more challenging; one approach is to put individual suspicious commands into their own procedures, as in the sketch below.
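For instance, a suspect regexp can be moved into its own procedure so that procedure-level profiling attributes its cost to a distinct name (the procedure name MatchHeader and the pattern here are made up for illustration):

   # The cost of this regexp now shows up under "MatchHeader"
   # in the profile report instead of being buried in the caller.
   proc MatchHeader {line} {
       regexp {^(\w+):\s*(.*)$} $line -> key value
       return [list $key $value]
   }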
This combined procedure and command profiling is provided by the TclX extension, which is part of TclPro. This implementation is written in C instead of pure Tcl, so it is relatively fast. You can even start the profiling, run interactive commands, and save the results to a report file, like this:
   > protclsh83
   % package require Tclx
   8.3
   protclsh83.bin>profile -commands on
   protclsh83.bin>source units.tcl
   protclsh83.bin>source units.test
   protclsh83.bin>profile off profarray
   protclsh83.bin>profrep profarray cpu profdata.txt
The report would look something like this:
   --------------------------------------------------------------------------------
   Procedure Call Stack                                  Calls   Real Time   CPU Time
   --------------------------------------------------------------------------------
   <global>                                                  1       34125       2289
   source                                                    3        2405       2258
   units::convert                                           64        1826       1783
       source
   units::reduce                                           126        1815       1752
       units::convert source
   units::ReduceList                                        90        1627       1582
       units::reduce units::convert source
   namespace                                                 5         346        230
       source
   units::reduce                                            29         243        209
       units::ReduceList units::reduce units::convert source
   units::ReduceList                                        29         191        188
       units::reduce
Instrumenting each line of code gives you the most detailed picture of program performance. You can easily tell which line within which procedure is creating the bottleneck. This technique is available in profilers for some compiled languages, but has not been implemented for Tcl (yet) (AFAIK -- RWT).
LV: Are there any techniques for profiling Tcl that don't involve modifying a Tcl application? Something that perhaps could be turned on in the interpreter itself?
RS 2008-07-16: Maybe Profiling with execution traces?
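Here is a minimal sketch of that idea, using execution traces so the application code itself does not have to be modified; only a few trace commands are added at startup. The command name units::convert and the arrays ::proftime and ::profstack are illustrative, and Tcl 8.5 or later is assumed:

   proc ProfEnter {cmdstring op} {
       # Push a start timestamp for this invocation
       lappend ::profstack [clock microseconds]
   }
   proc ProfLeave {cmdstring code result op} {
       # Pop the matching start timestamp and accumulate elapsed time
       set start [lindex $::profstack end]
       set ::profstack [lrange $::profstack 0 end-1]
       set name [lindex $cmdstring 0]
       incr ::proftime($name) [expr {[clock microseconds] - $start}]
   }
   trace add execution units::convert enter ProfEnter
   trace add execution units::convert leave ProfLeave

The enterstep and leavestep trace operations can be used the same way to time every command executed inside a procedure, at correspondingly higher overhead.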
bll 2016-10-10 The quote in the reference section above "First rule of program optimization: Wait for a faster machine." is insanely stupid.
Or "Program runs slow? Throw more hardware at it."
This kind of attitude is one reason why websites are so slow nowadays. Faster hardware is no substitute for good program design.