[RHS] [Tcltest] is an amazing little [package] for writing simple ''unit'' type tests. However, there are a lot of things that tcltest makes very hard to do. There are other things that aren't hard to do, but that make the tests doing them hard to read. I'd like to put together a list of desired functionality for a new test package, in hopes of eventually writing such a beast ''(or, someone else coming along and writing it)''. The things I'm looking for are:

   * Simple tests should be simple to write, as in tcltest. If I just want to call a [proc] and test the [return] value, it should take a very small bit of code to do so.
   * Things that aren't part of the actual testing framework should live in a separate package. For example, helpers like makeFile and makeDirectory should be in a separate package that comes with the framework.
   * Testing the errorCode should be as easy as testing the return code.
   * Complex test cases should still be easy to read and understand.
   * Assertions should be available, and a test case should know that it failed if any of its assertions failed.
   * It should be easy to say ''If this assertion fails, the test is finished, don't bother with the rest''. For example:

   test myComplexTest-1.1 { A sample of a complex test, with comments } -setup {
       set data {a 1 b 2 c 3}
       set filename [extrapackage::makeFile $data myComplexTestFile-1.1]
       catch {unset myArray}
       catch {unset expectArray} ; array set expectArray $data
   } -body {
       set code [readFileToArray $filename myArray]
       assertEquals -nofail 1 $code "The read failed, don't bother with other assertions"
       assertEquals -nofail 1 [info exists myArray] "The array did not get created"
       assertArrayEquals expectArray myArray "The array results were incorrect"
   }

   * Tests without a ''-result'' flag are assumed to return with a code of 0 or 2, and their actual result doesn't matter. The success or failure of such a test case (if it returns with code 0 or 2) depends on the assertions in the test.
   * It should be possible to create ''"Test Suites"'' that group a bunch of tests/test files together.
   * It should be possible to programmatically run a single test, a whole file of tests, a test suite, or a whole directory of tests.
   * It should be possible to retrieve the results of programmatically run tests.

I'm sure I can come up with other requirements. More importantly, though... what would other folks want in a testing package?

----

'''[RHS]''' ''20August2004'' It occurs to me that it would be useful to have a ''-description'' element as well. This would be different from the short description that is the second argument to the test, in that it would be a clear explanation of what functionality/requirement is being ''proven'' by the test. The idea is that one could ask the test suite for a summary of all the tests, and it would print out the test names along with their longer descriptions... which could be used as a way to document what the current requirements for the project are.

On the other hand, perhaps I should just put the provided description argument to better use. I tend to keep that argument as short and direct as possible; perhaps I should be adding more detail there. I do, however, like the idea of being able to do something like:

   test myproc-2.1 { Throw a typed error if class is out of range } -description {
       The 'class' parameter can have a value from 0 to 6. If the provided
       value is outside that range, throw a typed error.
   } -body {
       foreach class {-1 7} {
           set code [catch {myproc $class} result]
           assertEquals -nofail 1 $code "Proc call did not throw an error"
           assertEquals {CALLER {INVALID PARAMETER VALUE}} $::errorCode
           assertEquals \
               "Invalid value '$class' for input class. Must be between 0 and 6, inclusive" \
               $result "Error message was incorrect"
       }
   }

and then be able to automatically get a summary like:

   * '''myproc-2.1''': The 'class' parameter can have a value from 0 to 6. If the provided value is outside that range, throw a typed error.
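A minimal sketch of how such a summary might be produced, assuming tests end up as object commands that expose their name and ''-description'' (the ''tests'', ''name'', and ''cget'' subcommands used here are purely hypothetical):

   # Hypothetical sketch only: walk a suite's tests and print
   # "name: long description" for each one.  The [$suite tests],
   # [$t name], and [$t cget -description] subcommands don't exist
   # yet; they just illustrate the idea.
   namespace eval testsuite {}

   proc testsuite::summary {suite} {
       foreach t [$suite tests] {
           puts "[$t name]: [string trim [$t cget -description]]"
       }
   }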
When one gets the summary for the entire test suite(s) for a project, it should be a complete summary of all the requirements for the project.

----

'''[RHS]''' ''24August2004'' Having had time to put some thought into the mechanism I'd like to use to define tests and test suites, I've run into some "issues". I'd very much like to be able to define a ''Test Suite'' and specify what tests go into that Suite, much like one does when running tcltest now. In addition, I'd like to be able to build Suites containing other Suites, and so on. Let's say we have a directory structure of:

   test
   test/module1
   test/module2
   test/module3

I'd like to be able to have the following files:

   # File: test/suite.test
   set suite [testsuite::suite]
   # Add each of the module directories.
   # This adds the suites defined in those directories, if there are any.
   $suite add directory ./module1
   $suite add directory ./module2
   $suite add directory ./module3

   # File: test/moduleX/suite.test
   # Add all the .test files in the directory.
   # suite.test is automatically excluded unless otherwise stated.
   set suite [testsuite::suite]
   $suite add files ./*.test

   # File: test/module1/myprocs.test
   testsuite::test aproc-1.1 { A test for aproc } -body {
       ...
   } -result {
       ...
   }

Anyways, that's the basic layout I'd like to have. I'd very much like ''suites'' and ''tests'' to act like object commands, returning a command name that can be used to access information about them. It would then be possible to load a test suite, get a list of its tests, and iterate over those tests to get information about each one.

My problem is that I'm not sure how to have newly defined tests automatically added to the test suite that is loading the test file. My thought is to have a test suite ''register'' itself as the current suite before it loads any files (or runs any tests, or does anything interesting, etc.). Then, the ''test'' proc will add its command name to the currently registered ''suite'' when it is defined. However, this seems like a bit of a hack. I was hoping someone else might have run into a similar situation and could comment on what they did, and whether they think it works better. Any helpful thoughts would be much appreciated.
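To make that ''register'' idea concrete, here is a rough sketch of what I have in mind; every command name in it is hypothetical, and error handling is kept minimal:

   namespace eval testsuite {
       # The suite currently loading test files, or "" if none.
       variable currentSuite ""
   }

   # A suite calls this for each file it wants to pull in.  It registers
   # itself as the current suite, sources the file, then restores the
   # previous suite so nested suites keep working.
   proc testsuite::loadFile {suite filename} {
       variable currentSuite
       set previous $currentSuite
       set currentSuite $suite
       set code [catch {uplevel #0 [list source $filename]} result]
       set currentSuite $previous
       return -code $code $result
   }

   # The test proc creates its object command (details elided here) and
   # attaches it to whichever suite happens to be loading the file.
   proc testsuite::test {name description args} {
       variable currentSuite
       set cmd ::testsuite::tests::$name   ;# stand-in for the real constructor
       if {$currentSuite ne ""} {
           $currentSuite add test $cmd
       }
       return $cmd
   }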