This page is for outlining and discussing improvements of the handling of integers in Tcl, hopefully something that could make it into 8.5, but at the very least something that should be in Tcl 9.0. A related page is 32-bit integer overflow.

(Everything here that isn't explicitly signed otherwise is most likely written by me, i.e., Lars H.)

Update: There is now a TIP (#237, [L1 ]) which goes ahead with a more ambitious plan than what is sketched below, and since that TIP was put forth by a TCT member, there's a real good chance that it will be ready for 8.5. Yummy!

This also means that the following may now be mostly of historical interest.

First, an explanation of the title might be in order. It is a sad fact that what passes for "integers" in current Tcl documentation isn't really integers, but rather a (sometimes crude) imitation of the true thing. They quite often fail to satisfy important properties of *proper* integers. Take for example the expressions

   $a * $a >= 0
   $a * $b % $b == 0
   ($a > $b) == ($a+$c > $b+$c)

All of these should return 1 (or an error) when *a*, *b*, and *c* are proper integers, but none of them do in general in current Tcl. Behold:

   % info patchlevel
   8.4.5
   % set tcl_platform(wordSize)
   4
   % set a 54321; expr {$a * $a}
   -1344196255
   % set a 54321; set b 45678; expr {$a * $b % $b}
   43688
   % set a 1234567890; set b 0; set c $a; expr {($a > $b) == ($a+$c > $b+$c)}
   0

Problems caused by "integers" not being proper integers are generally difficult to spot in testing, because test cases tend to be small, whereas the problems show up mostly for large numbers. It is also something that is expensive to avoid at the development stage, because analyzing an algorithm to see if "N-bit integers" are sufficient is hard work. In a language such as C (with its cost ratio between development time and running time) this may be time well spent, but in Tcl (that has quite a different ratio between the two) it should be better to simply have the internal representation grow to whatever size is necessary. In effect this means that the integers will have to be *proper integers*.

Implementing proper integers is much easier in Tcl than it is in many other languages. It is not necessary to devise any new data type and an input/output library for that data type, because in Tcl *everything is a string*. The strings that are canonical forms of integers are those matched by the regular expression

0|-?[1-9][0-9]*

and that is all we need. There are other strings that should perhaps also be valid integers (hexadecimal and octal numbers, perhaps non-canonical forms with extra leading zeros, signs, or surrounding whitespace, and maybe even non-ASCII numerals), but restricting attention to these for the moment will simplify the discussion.
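As an illustration, canonical forms in this sense can be recognized without any regular expression machinery at all. The following C sketch (the function name is invented here for illustration) accepts exactly the strings matched by the expression above:

```c
#include <string.h>

/* Returns 1 if s matches 0|-?[1-9][0-9]*, i.e. is a canonical
 * decimal integer: no leading zeros, no sign on zero, no blanks,
 * no hexadecimal or octal forms. */
static int is_canonical_int(const char *s)
{
    if (strcmp(s, "0") == 0) return 1;   /* the single digit 0 */
    if (*s == '-') s++;                  /* optional minus sign */
    if (*s < '1' || *s > '9') return 0;  /* first digit must be 1-9 */
    for (s++; *s; s++)                   /* remaining digits: 0-9 */
        if (*s < '0' || *s > '9') return 0;
    return 1;
}
```

Note that the scanner has no length limit anywhere, which is the whole point: canonicality is a purely textual property, independent of any machine word size.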

The specification of operations is no problem either, because the arithmetic operations are quite well-defined and well-known (most of them are part of any all-round education). The bit-wise operations are usually not taught in school, but these too can be defined for integers of any size. The relevant defining identities are, apart from commutativity, that

   0 & 0 == 0     -1 & 0 == 0      -1 & -1 == -1
   0 | 0 == 0     -1 | 0 == -1     -1 | -1 == -1
   0 ^ 0 == 0     -1 ^ 0 == -1     -1 ^ -1 == 0

   (2*$a) & (2*$b)         == 2*($a & $b)
   (2*$a + 1) & (2*$b + 1) == 2*($a & $b) + 1
   (2*$a + 1) & (2*$b)     == 2*($a & $b)
   (2*$a) | (2*$b)         == 2*($a | $b)
   (2*$a + 1) | (2*$b + 1) == 2*($a | $b) + 1
   (2*$a + 1) | (2*$b)     == 2*($a | $b) + 1
   (2*$a) ^ (2*$b)         == 2*($a ^ $b)
   (2*$a + 1) ^ (2*$b + 1) == 2*($a ^ $b)
   (2*$a + 1) ^ (2*$b)     == 2*($a ^ $b) + 1
   ~$a == -$a - 1

for all integers *a* and *b*. These definitions are equivalent to the usual definitions in terms of bits in memory words if the word length is large enough that all operands can be stored as *signed* quantities in two's complement representation.
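To see that these identities really do pin the operations down, here is a C sketch (function names invented here) that computes & purely from the base table and the halving recurrences, never looking at bits in a memory word. It agrees with the machine instruction wherever both are defined:

```c
/* floor(a/2), and a mod 2 (always 0 or 1) via *rem.  Plain C
 * division truncates toward zero, so negative odd numbers need
 * the remainder adjusted to stay in {0,1}. */
static long half(long a, long *rem)
{
    long r = a % 2;
    if (r < 0) r += 2;
    *rem = r;
    return (a - r) / 2;
}

/* Bitwise AND computed only from the defining identities: the
 * base table on {0,-1} plus the doubling recurrences.  Repeated
 * floor-halving drives any integer to 0 or -1, so this terminates
 * for every input, whatever the word size. */
static long and_by_identity(long a, long b)
{
    if ((a == 0 || a == -1) && (b == 0 || b == -1))
        return (a == -1 && b == -1) ? -1 : 0;       /* base table */
    long ra, rb;
    long ha = half(a, &ra), hb = half(b, &rb);
    /* (2*ha + ra) & (2*hb + rb) == 2*(ha & hb) + (ra & rb) */
    return 2 * and_by_identity(ha, hb) + (ra && rb);
}
```

The | and ^ cases work the same way, only with a different base table and low-bit rule.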

In other words: the specs are strikingly easy. The implementation is a bit more work, but still fairly straightforward. In particular, it is possible to defer tricky optimization issues until such a time that one really needs them.

One way of discovering where integers are used in Tcl is to imagine what Tcl would be like if all core (sub)commands that use integers were removed. It is however necessary to refine this idea slightly, because Tcl uses 0 and 1 for boolean true and false virtually everywhere. Thus we should first imagine that the string representations of boolean values were changed (or that we're using something like Success-fail control structures to cope without boolean values), before looking for places where integers are used. What is left roughly falls into three categories:

- Internal counting
- Integers used to enumerate things that exist inside the Tcl system. Examples include lindex, string length, and regsub (return value). In all cases these are integers that can be stored in single machine words, because there cannot be more things around than it is possible to address.
- External counting
- Integers used to enumerate things that exist outside the Tcl system, but for the manipulation of which there exists Tcl core commands. Examples include seek, clock, and various file subcommands.
- User-defined data
- Integers in some sort of data that are neither defined in nor directly supported by the Tcl core. Here the main workhorse is expr, with occasional help from binary, format, scan, and lsort, but that is pretty much it.

As a contrast, it may be remarked that floating-point values are exclusively user-defined data in core Tcl (although some core Tk commands employ them).

One recent (8.4) development in Tcl evolution has been that the normal integers are no longer sufficient for some external counting tasks, and that "wide" integers have been introduced to overcome this. This change has however been accompanied by several bugs, and IMHO a reason for this is that the development has lacked a clear vision of what should be accomplished (beyond overcoming the immediate problem) and been too focused on the C implementation. Symptomatic of this is that the discussion has very much revolved around "32-bit vs. 64-bit integers".

Almost **nobody needs 64-bit integers**, or integers modulo 2**64 as would be a more precise term, because there are very few algorithms that use them. That they can be perceived as more useful than "32-bit integers" is simply because they are slightly better at imitating proper integers than integers modulo 2**32 are. The rare exceptions occur mainly in various hashing, checksum, and cryptographic algorithms, and in these areas even 64 bits is becoming a tad passé. It is furthermore always possible to emulate N-bit integers using proper integers, but not vice versa.

It would be better if the future evolution of numbers in Tcl was guided by the needs for user-defined data, as that is more likely to get the whole picture.

There are strong reasons for making Tcl integers semantically uniform (e.g. supporting expr operating on arbitrarily large integers, rather than inventing some new command that one would have to use for all operations on large integers), such as the aforementioned cost ratio. There are equally strong reasons for seeking an implementation that is as fast as the current one for "small" integers (those that can be accurately represented in machine words). Finally, it would probably be nice to avoid burdening the core with a full-fledged multiple precision library (it could require special expertise to maintain and might present some porting problems). Are these goals possible to combine? I certainly believe they are.

The first thing to observe is that there doesn't have to be a uniform implementation just because there are uniform semantics; even for highly optimized arithmetic libraries there are often several implementations, each of which outperforms the others for some range of input data sizes. For integers that fit into machine words (i.e., most of the integers occurring in Tcl programs) the raw machine operations are indeed the most efficient, so it makes sense to handle e.g. addition of small integers (internal representation of type tclIntType) through a single C addition (in the INST_ADD branch of TclExecuteByteCode). Since this takes good (and fast) care of almost all integers that occur, we can then afford to do something more complicated for the few cases that remain.
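A minimal sketch of this dispatch, assuming int plays the role of the machine word and long long that of a wider type (the names checked_add and slow_add are invented here, not taken from Tcl's sources). The common case costs one cheap range test plus the raw addition:

```c
#include <limits.h>

/* Stand-in for the rare slow path; a real one would build a big
 * integer or invoke a script-level callback.  Here it just widens.
 * (Assumes long long is wider than int, true on all mainstream
 * platforms, where int is 32 bits and long long at least 64.) */
static long long slow_add(int a, int b)
{
    return (long long)a + (long long)b;
}

/* Fast path: test for overflow BEFORE adding (so no undefined
 * behaviour), then do the single machine addition. */
static long long checked_add(int a, int b)
{
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return slow_add(a, b);          /* rare case: punt */
    return a + b;                       /* common case: raw add */
}
```

The design point is that the slow path can be arbitrarily elaborate (or arbitrarily slow) without affecting the cost of the common case.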

One approach that I find nicely flexible is to handle everything else via a callback to the script level, say to some **overflow** procedure. (This would then become a sort of relative of the unknown procedure.) The syntax could be e.g.

   overflow op arg ?arg ...?

and the return value will be used as the result of applying the *op* to the given *arg*s. For example

overflow + 1234567890 9876543210

could return 11111111100. Actually implementing such an overflow procedure is of course quite some work, but a lot of the necessary code already exists on this Wiki. For those operations that are missing, I think it should in principle be OK to have an **overflow** procedure that simply works as

   proc overflow {op args} {
       if {[llength $args] > 1} then {
           set expr [join [linsert $args 1 $op]]
       } else {
           set expr "$op [join $args]"
       }
       error "Arithmetic overflow in $expr" "" [list ARITH IOVERFLOW $expr]
   }

An error saying "I can't do this right." is generally preferable to silently returning an incorrect result.

Alternatively, it would be possible to install an overflow procedure that emulates the current improper behaviour, through something like

   proc overflow {op args} {
       set E [list]
       foreach arg $args {lappend E "wide($arg)"}
       # Assuming all operations are unary prefix or binary infix.
       # Assuming there won't be overflows with wides.
       return [expr "int([join [linsert $E end-1 $op]])"]
   }

This would probably be the way to go for an init.tcl definition of **overflow** in Tcl 8.5, as it provides some backwards compatibility. DKF has been observed expressing concerns about that matter.

One issue here is however that overflows have to be detected not only when interpreting strings as integers, but also when an operation on two integers that fit in machine words produces a result that is too big to fit. It seems processor manufacturers are well aware of the need to detect this -- in all processors I know the details of, there is some carry or overflow flag that can be inspected after each operation -- but the C language does not provide anything in particular for this need. Still, there are ways of detecting overflow. For addition and subtraction one can do

   overflow = ((sum  = a+b) >> 1) != ((a>>1) + (b>>1) + (a&b&1));
   overflow = ((diff = a-b) >> 1) != ((a>>1) - (b>>1) - (~a&b&1));

whereas for products it is probably necessary to make use of a result twice as wide as the operands; assuming long is 32 bits and long long 64 bits, one can do:

   long a, b;
   long long prod;
   overflow = 0 != (((prod = (long long)a * b) + 0x80000000LL) >> 32);

(but I'm no C programmer, so I don't *know* whether compilers will in general make from the above what I think they will). The only other Tcl 8.5 operations that may cause an overflow are << and ** (unless that produces floats), both of which probably need some special attention anyway.

Here are some ideas, in more practical detail, for how the above could be implemented. The overall idea is however to as much as possible avoid making choices, so that the core will not get stuck with a poor design. Hence this "implementation" consists mostly of a sketch of an interface, to which an extension might attach a working implementation...

Suppose a new object type *tclBigIntType* was created for those integers that are too big to have a *tclIntType* representation. Then it would make sense for library procedures (such as Tcl_GetIntFromObj) that interpret TclObjs as integers to, in the case that the given string is a syntactically correct integer but too big to have a *tclIntType* representation, instead set the type of the TclObj to *tclBigIntType* (discarding any old internal representation and not creating a new one). This would have the effect that the core only has to try this conversion once (for each value); as soon as something is a *tclBigIntType* TclObj, it is known to be big enough to not correspond to anything in the core. Thus in particular, a command like lindex can safely return an empty string whenever it is given a *tclBigIntType* index, since that *has* to be out of bounds of the list. Quite a lot of the things the core does with integers are like this, although the arithmetic is not.

Suppose further that the internal representation of a *tclBigIntType* TclObj is a twoPtrValue, where the first pointer points to a "subtype struct" and the second points to the actual data. This can abstract away all details of a native representation for big integers from the core while at the same time allow the core to take advantage of the existence of such a representation. If the subtype pointer is NULL then there is no internal representation (the value is only stored in the string representation); this can be considered "the null subtype" of *tclBigIntType* and it is the only one that the core needs to handle by itself.

Otherwise the subtype struct would be a collection of function pointers that the core could rely on for carrying out any operations on the internal representation. In particular the core wouldn't know how to free, duplicate, or generate a string from the internal representation, so the freeIntRepProc, dupIntRepProc, and updateStringProc in the *tclBigIntType* Tcl_ObjType would simply (do nothing if the subtype pointer is NULL, but otherwise) pass the call on to some counterpart function in the subtype struct. Besides these (and this is the nice thing), the subtype struct could also contain pointers to functions performing each of the standard arithmetic operations. Rather than relying on the **overflow** callback when adding two big integers of the same subtype, the core could call on some addUsingIntRepProc of the common subtype struct for this. Then the **overflow** callback would only be used for arithmetic operations on big integers of null or different subtypes (typically one of which would be null). Presumably any extension that defines a *tclBigIntType* subtype would also provide a compiled **overflow** procedure that returns big integers with that subtype, so the **overflow** callback would never be used more than once for each value given as a string.
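A sketch of what such a subtype struct might look like in C (all names here are invented for illustration; none of this is from any Tcl release), together with the dispatch rule just described and a toy subtype whose private representation is simply a long long:

```c
#include <stddef.h>

/* The "subtype struct": a table of function pointers.  The core
 * sees only this table and never the representation itself. */
typedef struct BigIntSubtype {
    const char *name;
    void  (*freeIntRepProc)(void *rep);
    void *(*dupIntRepProc)(void *rep);
    char *(*updateStringProc)(void *rep, int *lengthPtr);
    /* arithmetic on two reps of THIS subtype; mixed or null
     * subtypes fall back to the script-level [overflow] callback */
    void *(*addProc)(void *a, void *b);
    void *(*mulProc)(void *a, void *b);
} BigIntSubtype;

/* Modelled on a twoPtrValue internal rep:
 * ptr1 = subtype (NULL means "string rep only"), ptr2 = data. */
typedef struct BigIntRep {
    const BigIntSubtype *subtype;
    void *data;
} BigIntRep;

/* Toy subtype: the "big" representation is just a long long.
 * Returning a pointer to a static is fine for a demo only. */
static void *ll_add(void *a, void *b)
{
    static long long result;
    result = *(long long *)a + *(long long *)b;
    return &result;
}

static const BigIntSubtype llSubtype = {
    "demo-longlong", NULL, NULL, NULL, ll_add, NULL
};

/* Core-side dispatch: same non-null subtype means a direct call;
 * anything else would go through the [overflow] callback, which
 * this sketch represents by returning NULL. */
static void *big_add(const BigIntRep *x, const BigIntRep *y)
{
    if (x->subtype && x->subtype == y->subtype && x->subtype->addProc)
        return x->subtype->addProc(x->data, y->data);
    return NULL;   /* stand-in for the script-level fallback */
}
```

The point of the indirection is exactly the one made above: the core can free, duplicate, stringify, and compute with big integers without ever knowing their layout, so an extension can swap in any implementation it likes.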

There is however nothing in this that prevents having more than one non-null *tclBigIntType* subtype. Typically TclObjs would tend to have the subtype returned by the **overflow** procedure and having something that produces TclObjs of another subtype would lead to shimmering whenever the two meet, but there is no principal problem.

What place do wides have in the above scheme, then? They could of course be some intermediate between *tclIntType* and *tclBigIntType*, but there is a more interesting approach. There is a certain (user-defined data) need for an integer-modulo-some-power-of-2 data type, as checksum algorithms and the like are often designed with such a thing in mind. Since the characteristic property of a wide currently is that "it is at least 64-bits wide", the step to "it is an integer modulo 2**N for some N>=64" is rather small. This would however necessitate a change of string representation for wides, as they currently cannot be distinguished from integers (despite behaving differently). One string representation that is fairly backwards compatible is as **wide(**int**)**, where "int" is anything that is a valid integer.

*escargo 19 Feb 2004* - I have heard (and can only claim that this might be rumor, rather than fact) that today's highly pipelined processors can encounter mathematical overflow/underflow without being able to pinpoint where in the multistage arithmetic pipeline the error occurred. This prevents the underlying implementation from being able to specifically indicate where the error happened. Whatever you are specifying here should not assume that detection of an error can also inform you where the error occurred. (Perhaps you have no choice; if that is correct, then you might have introduced some platform dependence, on the processor, compiler, or math libraries, that you might not have expected.)

Lars H: Even if that was true it would be irrelevant, simply because C *doesn't* give us access to the overflow detection in the processor and we therefore have to explicitly compute it, as shown above. The rumor furthermore doesn't ring true. It might be the case that checking for overflow at a particular instruction will prevent pipelining of that instruction, but in view of how seldom there would be a need to do such a check, that can hardly be a problem.

Seriously though, any *on-topic* comments, anyone?

GPS: Great ideas! I say we start with a few small apps that do addition, multiplication and so forth with large numbers. I've wanted to play with the carry flag in assembly language anyway, so this interests me. http://www.snippets.org has some BigNum code in the C section that might be useful. Perhaps we could have some inline assembly for systems that support it, and the slower BigNum code for portability. We already have Win32 assembly code in Tcl for some DLL crap, although it's fairly simple. I'll post to the Wiki when I have something that works with addition.

Lars H: What do you mean by "start with a few small apps"? Separate applications for addition and so on? I don't quite see the point of that, because the functionality should be made available inside Tcl. But if you by "apps" mean extensions or some such then I agree.

A possible step-by-step implementation could be something like this:

- Create an extension that defines the tclBigIntType (only the null variety) and some basic operations on it (in particular: convert string to canonical proper integer form). No arithmetic at this stage.
- Start writing separate commands that do arithmetic on ints and bigInts, calling **overflow** when necessary.
- Write an expr-clone (making use of the existing parser) that does int and bigInt arithmetic.

Somewhere along the line, one should also start implementing some non-null tclBigIntType subtype with binary representation and fast implementation.

Vince would like to add that this seems like a great proposal which neatly fits the "Tcl way" of doing things -- the basic, important cases are fast, but there's a seamless extension to the extended functionality of arbitrary precision, and with the possibility of using some assembly language to catch the overflows, the performance hit on the basic, important cases might be non-existent.

GPS: I suggested a "few small apps" so that we could write some prototypes and learn. There is no point including the complexity of Tcl's expr for a prototype until we have made several apps that get the core algorithms down to the point that we want them to be. My steps would go like this:

- Write a prototype program that deals with +, -, *, and / of large numbers. It should support an accelerated version in asm, and a portable C version.
- Add support for large floating point numbers to the previous program.
- Verify that everything works w/ some tests.
- Test the program on another architecture.
- Write extension commands for +, -, *, and / which use the code from our working prototype. At this point we can write our new Tcl_Obj type.
- Integrate the code into expr, and then into the scheme of TclExecuteByteCode for faster operation.

I propose we write commands like [+ 1 2] rather than [bigexpr {1 + 2}], because it should be easier, and it has been proposed before to create such global commands for 9.0, to ease the burden of using expr.

Why so many steps you may ask? Well, Tcl suffers from enough problems with uncooked code, and I'm not certain how to do a lot of it yet. If you know this stuff like the back of your hand then by all means code up a storm and submit the patch. :)

DKF: In theory I agree with having optimized assembler for this sort of thing. In practice, I really don't want to have to maintain that sort of thing! :^/ Portable C to do it (especially if it is pretty fast portable C) would be really cool. (Also: testing arithmetic? Aaarggh!)

NEM Seems like quite a few languages have support for this already - can we just nick the implementation from one of those?

Lars H: I think quite a lot of what GPS lists (say, 1-4.5) already exists. Just looking around the Wiki, there are pages:

- mpexpr (seems to be the Tcl grand old man in this area)
- bignum
- tclsbignum [L2 ]
- BigFloat for Tcl (MPA is obsolete at this time)
- Arbitrary Precision Math Procedures
- arbint

The challenge is rather the integration into the Tcl core. One of the points in the above scheme is that it might let DKF have the cake (optimized assembler implementation of arithmetic operations) and eat it too (not having to maintain this as part of the Tcl core, as the optimized assembler implementation could be part of an extension that merely defines a bigInt subtype). I suspect that for a lot of the more mundane uses of big integers (such as positions in >2GB files), even all-Tcl string implementations of arithmetical operations could be acceptable (since the file operations are anyway much slower than RAM).

Sarnold 11May05 - At this time BigFloat for Tcl relies on math::bignum from tcllib to handle all big integers, forming the internal representation of a big-floating-point number. So could we, for a while, keep using a pure-Tcl library (through rewriting the calls to the bignum library, though) to achieve point **2.**, and furthermore reduce the "time to market" of such functions?

IMHO there are not many people who really need these bigfloat functions to perform any better.

GPS: I was aware of those pages.

bignum is out, because of the GPL licensing of libgmp. As the README of mpexpr states:

"The trade-off for 'mpepxr' is in execution time. Since 'mpepxr' doesn't use native integer or floating point machine instructions, execution times can be much greater."

mpexpr (1.0) also doesn't use Tcl_Objs so some work would be needed to update it.

SS: Note that tclsbignum works using Tcl_Obj and internally stores big numbers in base 2^32-1. Performance is quite good, though the algorithms are not the fastest; it's slower than GMP, so maybe Karatsuba multiplication should be implemented, along with faster algorithms for division, sqrt (already supported but not very fast), pow, pow modulo N, and so on. Still, for any common usage of bignums in Tcl it is fast enough; it was actually comparable to the implementations of Ruby and Python when I last tested it. The license inside the tar.gz is GPL, but this is virtual: I can switch to the Tcl license if there is some interest in the code. The library is capable of reading/writing bignums from base 2 to 32, and is API-compatible with GMP for the subset of functions exported.

Lars H: You seem to aim for a slightly different target than I do, GPS. Getting *fast* proper integer arithmetic (machine data types, optimized assembler, and whatnot) is all well and good, but right now I'd be content with *some* proper integer arithmetic (regardless of how slow). The only restriction is that the existing integer arithmetic mustn't (in the cases where it produces correct results) be noticeably slowed down.

Neither mpexpr nor bignum would be out in the above scheme, because either can provide the necessary implementation of the **overflow** command. bignum being GPL only means it cannot be bundled with the core, but that doesn't mean some user couldn't call upon it to provide faster service than would otherwise be possible. mpexpr not using Tcl_Objs is likewise not a principal problem, but merely a drag on performance, since the *tclBigIntType* does not require a non-null internal representation.

DKF: Speaking as the last person to do this, putting a new numeric type into the core is a lot more work than it appears to be at first glance. A good indication of the number of spots is to get the Tcl 8.4.6 source tree and grep it for *tclWideIntType*...

Lars H: I haven't checked 8.4.6, but in 8.4.5 there are 54 occurrences of that string, and a whole 41 of these are in generic/tclExecute.c. In view of the number of bytecode instructions that operate on numbers and the fact that many of these take two arguments, it seems very hard to get by with less than three dozen occurrences anyway.

(It could however be the case that a number of occurrences of e.g. Tcl_GetIntFromObj and similar library procedures would also be better changed to use some new counterpart that can do something more useful than simply fail when faced with an integer too big to fit in a C int. This would call for a larger number of changes.)

LV The issue *really* being addressed here is not proper integers but large integers.

Change the above values of a to something like 107 or 1376 and you will see the results that you indicate that you expect.

Then, try the same thing in a language like C (or many other scripting languages) and see quite similar results.

What is really being asked for is not even 64 bit integers, but infinite sized integers.

I'm not saying that asking for that is somehow unworthy - just wanting to be certain that someone reading this understands that over the years, Tcl maintainers have always indicated that Tcl's integers are limited by C's representation of integers by variables of size **int**.

Lars H: The page is mainly aimed at people interested in the internals of Tcl, and its title was chosen with this in mind, to forestall precisely that kind of reply.

Proper integers *are* sometimes very large. They have to be, because for any given integer there must be a successor which is larger. "Integers" that are limited in size will therefore be improper, at least in some respect. Avoiding some of the usual lingo makes the problems easier to spot, because the effect of that lingo is partly to turn bugs into features by documenting them: it doesn't really make anything better, but it does make the whole thing Somebody Else's Problem (cf. SEP Field).

A large exposure to C, which is hard to avoid for people interested in the internals of Tcl, does tend to distort one's perceptions on these matters. (Not a sarcasm, but a serious observation.) Every schoolchild knows that if you've got 1500 millions and add another 1000 millions to that, then you've got 2500 millions. Tcl currently asks C straight off, however, and C says that the sum of 1500 millions and 1000 millions is -1794967296, which from most points of view is a preposterous answer. We're only so used to it that we don't react anymore.

DKF: For a long time, Tcl's been defined to use finite field arithmetic with a field size of X bits where X is determined by the size of the C type *long* (not *int*; the difference matters on some platforms.) Changing to use arbitrarily long integers has two consequences:

- Reduced speed for *everyone*. Not everyone cares about being able to add pairs of numbers of 200 digits (or compute the cube of 2**30+7, to give something that is more representable) and there are lots of people who want more speed. Is the performance cost of detecting the overflow cases worth it? I do not know the answer to that, and it isn't something that is easy to answer.
- An end to finite-field arithmetic. Some people (often hardware engineers IME) write programs that depend on the fact that the field is finite. Nobody's really sure just how much code there is out there that would be semantically affected like this.

The other big issue is how to do it *quickly* and *portably* without getting tangled up in some license like the GPL, which would happen if we made Tcl dependent on libgmp for its arithmetic support. (The last point does not apply to all languages equally.)

Mathematical correctness is all very well, but it isn't the only factor I'm balancing here.

SS: In response to the possible problems you exposed:

1) Speed. Detecting the overflow takes an amount of time that should not make any difference for a language like Tcl. When Tcl adds two numbers, the percentage of time spent in the actual addition is minimal, and the overflow detection also takes little time: this is why many languages several times faster than Tcl have this semantics without problems. (Python, Common Lisp, Scheme, just to mention a few. Ruby also has this support and its speed is comparable to Tcl's.)

2) Whoever writes programs in Tcl that exploit the finite field is writing programs that are not guaranteed to be portable. Tcl needs some kind of specification where I can go and look to see whether the math is defined to be modulo <something>, but I guess this is not a feature of the language that the user can rely on in portable code, just a detail of the implementation. Btw, all that is needed to fix such code is to add "% something" in the expression.

3) There is no need to go hyper-fast with bignums. When the math is in the fixnum range it's as fast as usual, but when numbers are out of range the user should expect good semantics, not a hyper-fast implementation like GMP: the speed should be acceptable, and the result correct. This is why general-purpose languages usually don't use GMP, which is too specific, optimized, and big. Python and Ruby are GPL, but still they don't use GMP, for example. They both use simple code that is easy to maintain but still works at decent speed.

4) It's hard to think that almost all the other mainstream scripting languages are wrong about this.

Lars H: Regarding DKF's speed concerns, I am confident that the effect of adding checks for overflow is no larger than was the effect of adding wide integer support. It is probably even possible to make the main case (arithmetic on two objects of type *tclIntType* not resulting in overflow) slightly faster than it is today, even with a check for overflow, by reorganising the code in TclExecuteByteCode to reduce the number of *branches* (if statements and the like) that are encountered in this main case. As you know, branching is the main disruptor of instruction pipelining.

Regarding the term "finite-field arithmetic": That isn't what Tcl does! What Tcl does is "modulo 2**(8*$tcl_platform(wordSize)) arithmetic", which is something quite different. A *field* is an algebraic structure characterized by the fact that every non-zero element has a multiplicative inverse; this is not the case with e.g. 2 in Tcl "integer" arithmetic. (There *is*, for every N, such a thing as **the finite field with 2**$N elements**, but not even programmers would try to pretend that its arithmetic is any kind of "integer arithmetic". For one thing, it has 1+1=0.)

davidw - I wouldn't put 4) in quite those terms, but if everyone has managed to implement something, it's worth a serious, in-depth look, and it's probably worthwhile to write down why it's a bad idea if that is indeed the conclusion reached.

DKF: Some comments on the comments.

1) Speed. I'd love to see some performance studies on the effect of bignums, speaking as one of the people who has to field complaints when things aren't fast enough. It's too easy to say "I think things will be OK". Let's see the evidence.

2) I know those programs are non-portable. But non-portability doesn't stop people from doing it. Especially people with an EE background, and Tcl's (apparently) heavily used in the CAD field where those people are common.

3) The speed is needed when working in the 32- or 64-bit range. Slowing down when you go out of that is not such a big problem for me. OTOH, we *really* don't want to get tangled up in license issues. If we can do this for ourselves in a way that satisfies point 1, this is a non-issue.

4) There's "mainstream" and there's *mainstream*. Who do we compare ourselves with? :^) More seriously, my objections are all really to do with how to implement the scheme in a way that doesn't hurt people who wouldn't take advantage of it. If we can manage that trick, I'd love to see bigints.

**Someone anonymous:** Okay, correct me if I am wrong!

As far as I know Tcl has many Object Oriented extensions. How hard would it be to introduce infinite sized integers as new "Objects", "Classes" or "Types" and leave the literal integers (integers that don't require special initialization) alone to ensure backward compatibility.

Lars H: One of the beautiful things about the everything is a string principle is that no "special initialization" for large integers would ever be required. If it has been set as a string, then it has been as properly set as the script level requires. What the above is mainly about is changing the arithmetic operations so that they can do the right thing. There are also some parts about internally caching binary representations of numbers to speed up the processing, but that is just the way things are done. (Tcl internally caches binary representations of all sorts of things: procedure bodies (bytecode), command meanings, subcommand meanings, lists, etc.)

There is no need to drag OOP into the picture. Unlike many other languages, Tcl can handle the required amount of polymorphism without resorting to decidedly OO extensions.

**Someone anonymous** continues: Is it that awkward to introduce/use new types in tcl? Is this not the obvious or satisfying solution?

Lars H: On the script level, Tcl has no types; all values are strings. On the C library level, there are types, I don't think they're hard to introduce, and the *tclBigIntType* suggested above is precisely the new type for integers too large to store as *tclIntType* Tcl_Objs.

DKF: Speaking as the only person to have done this recently, fitting a new numeric type to Tcl is a lot of work, most of which is just boring coding.

**Someone else anonymous:** I think your suggestion to support large integer math sounds great - will you be trying to implement one or more alternatives for people to try out and give feedback? Or at least writing a TIP with specific implementation details?

Lars H: I'm not enough of a C programmer to implement all of it, but I have coded some bits and pieces. Since there is some interest, I've put what I have on Proper integers implementation.

CMCc: you know, there's an earlier libgmp version which is LGPL. It's no longer actively developed, but it'd easily suffice for bignums of some kind. If there's a need, I'll happily redo the bignum package to use it.

The reason I want bignums is to do some crypto work.

DKF: KBK is working on this using a library called libtommath.

LV See Big Floating-Point Numbers for a Tcl package to provide larger floating point - sort of a parallel situation.

Lars H: No, Big Floating-Point Numbers just provides for large exponents, but keeps the mantissa size fixed. That is of no help for handling large integers.

aspenlogic: Just for the sake of a data point I'll offer that I need 64-bit address (unsigned) arithmetic today (for hardware testing using TCL). 128-bit address calculations tomorrow (or the next day!). I was brought here searching for a solution to this problem that didn't require me to write the solution! Hope this data point is useful. Thanks for the discussion.