Profiling an application can help you find out where the code is spending its time. There are several approaches to profiling Tcl code, some easy and some hard. This page outlines several of the methods.

For an overview of profiling and performance tuning, see the white papers at [http://www.rational.com/products/quantify_unix/whitepapers.jsp]. The paper titled ''Performance Engineering: A Practical Approach to Performance Improvement'' is a nice overview. These resources are focused on particular performance-tuning products, but the overview applies to virtually all code tuning. (''Other resource pointers would be most welcome! -- RWT'')

----

'''Time It!'''

The simplest form of profiling is to exploit Tcl's ability to time individual commands, or to print timestamps from within your application and perform the analysis manually. Procedures for this technique are outlined in [How to Measure Performance].

----

'''Proc Profiling'''

You can redefine the ''proc'' command to insert timing code before and after each proc call. You can then collect the data from a global variable and figure out which procedures are taking the most time. This is how the '''sage''' extension works, which you can find in the FAQ Extension Catalog [http://www.purl.org/NET/Tcl-FAQ/] as:

   What:        Sage
   Where:       http://www.geocities.com/SiliconValley/Ridge/2549/sage/
   Description: A Tcl/Tk runtime code analyzer profiling tool for Tcl/Tk applications.
                Requires Tcl 8.0, and was tested on Linux i386 and SunOS/Solaris.
                Currently at version 1.1.
   Updated:     11/2001
   Contact:     mailto:johnstump@iname.com (John Stump)

[above link is failing 26 Sept 2003]

This approach adds overhead to each procedure call, so you will have to interpret the results with that in mind: a short procedure called millions of times suffers more from the overhead than a long procedure called only a few times.
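Here is a minimal sketch of that wrapper idea in pure Tcl (this is ''not'' sage's implementation). The names ''_real_proc'', ''__prof_'', ''::profdata'' and ''profreport'' are made up for this example; it assumes Tcl 8.3 or later for ''clock clicks -milliseconds'', and for simplicity it ignores namespace-qualified procedure names, redefined procedures, and procedures that raise errors:

   # Keep the real [proc] command under another name, then replace it with a
   # version that wraps every newly defined procedure in timing code.
   rename proc _real_proc

   _real_proc proc {name arglist body} {
       # Define the real procedure under a hidden name.
       uplevel 1 [list _real_proc __prof_$name $arglist $body]

       # Define a same-named wrapper that times each call and accumulates
       # call counts and milliseconds in the global array ::profdata.
       uplevel 1 [list _real_proc $name args [string map [list %NAME% $name] {
           set __start [clock clicks -milliseconds]
           # Run the real procedure from the caller's level so that any
           # upvar/uplevel inside it still sees the expected stack.
           set __result [uplevel 1 [linsert $args 0 __prof_%NAME%]]
           set __elapsed [expr {[clock clicks -milliseconds] - $__start}]
           if {![info exists ::profdata(%NAME%,ms)]} {
               set ::profdata(%NAME%,ms)    0
               set ::profdata(%NAME%,calls) 0
           }
           incr ::profdata(%NAME%,calls)
           incr ::profdata(%NAME%,ms) $__elapsed
           return $__result
       }]]
   }

   # Crude report, most expensive procedures first.  It is defined with
   # _real_proc so that the reporter itself is not profiled.
   _real_proc profreport {} {
       set rows {}
       foreach key [array names ::profdata *,ms] {
           set name [string range $key 0 end-3]
           lappend rows [list $name $::profdata($name,calls) $::profdata($name,ms)]
       }
       foreach row [lsort -integer -decreasing -index 2 $rows] {
           puts [format "%-30s %8d calls %10d ms" \
               [lindex $row 0] [lindex $row 1] [lindex $row 2]]
       }
   }

Source something like this before the rest of your application, exercise the code, then call ''profreport''. Remember that the reported times include the wrapper's own overhead.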
----

'''Command Profiling'''

If you want more detail, you can profile each and every Tcl command instead of just the procedures. This adds much more overhead than procedure profiling, but gives you much more information. If you combine procedure and command profiling, you can get a sorted list of the most time-consuming procedures, followed by the most time-consuming commands within those procedures. You might find, for example, that the '''regexp''' command in your '''parse''' procedure is taking up the most time. Figuring out which '''regexp''' command (if there are several) is a little more challenging; put individual suspicious commands into their own procedures.

This combined procedure and command profiling is provided by the '''TclX''' extension, which is part of TclPro. The implementation is written in C instead of pure Tcl, so it is relatively fast. You can even start the profiling, run interactive commands, and save the results to a report file, like this:

   > protclsh83
   % package require Tclx
   8.3
   protclsh83.bin>profile -commands on
   protclsh83.bin>source units.tcl
   protclsh83.bin>source units.test
   protclsh83.bin>profile off profarray
   protclsh83.bin>profrep profarray cpu profdata.txt

The report would look something like this:

   --------------------------------------------------------------------------------
   Procedure Call Stack                        Calls    Real Time     CPU Time
   --------------------------------------------------------------------------------
                                                   1        34125         2289
   source                                          3         2405         2258
   units::convert                                 64         1826         1783
       source
   units::reduce                                 126         1815         1752
       units::convert
       source
   units::ReduceList                              90         1627         1582
       units::reduce
       units::convert
       source
   namespace                                       5          346          230
       source
   units::reduce                                  29          243          209
       units::ReduceList
       units::reduce
       units::convert
       source
   units::ReduceList                              29          191          188
       units::reduce

----

'''Line-by-Line Profiling'''

Instrumenting each line of code gives you the most detailed picture of program performance: you can tell exactly which line within which procedure is creating the bottleneck. This technique is available in some profilers for compiled languages, but has not been implemented for Tcl (yet) (AFAIK -- RWT).

----

[Category Performance]

----

See also "[profile]" and "[profiler]".