[TV] CUDA stands for Compute Unified Device Architecture (see [http://www.nvidia.com/Cuda] or [http://en.wikipedia.org/wiki/CUDA]) and is in essence a small supercomputer on a modern graphics card (NVidia only, but they are pretty widely available). There is also Tesla, built from similar computing nodes, which, preferably in version 8 (with hardware double-precision floating-point units), can make supercomputing workstations with over 4 TFlops of power for under $10k. CUDA already has a Perl interface IIRC, handy to play around with. [TV] himself has not done anything with Tcl/CUDA yet, but wants both scripting with CUDA programs and a parallel CUDA implementation of Tcl (at least in some academic form to play with).

[Lars H]: A "parallel Tcl" in the sense of having each processor run a separate Tcl script probably isn't going to happen: there isn't much memory available per processor, and even less per thread in the CUDA model, so an [EIAS] data model simply doesn't fit! What could work is some kind of "parallel [CriTcl]", where a Tcl program is boosted by in-line code in some other language whose data model is closer to that of the hardware (in this case, single-precision floats).

[TV] Did you check it out? There are 8 kilobytes of registers, and, for instance, my humble (cheap) 9500GT gets something like 10 gigabytes/sec of memory access speed, more than all three memory interfaces of the new i7 in normal use. I'm thinking of [small formula rendering tests] first, I guess. Also, for non-hashable associative functions, the parallel approach could be a huge improvement over current Tcl. And maybe Tk is fun when connected without the graphics bandwidth bottleneck.
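To make [Lars H]'s "parallel [CriTcl]" idea a bit more concrete: keep Tcl on the host and hand only tight single-precision loops to the card. Below is a minimal, self-contained CUDA C sketch of such a kernel (a hypothetical illustration, not an existing Tcl or [CriTcl] binding); it evaluates y = a*x + b over a float array, one element per thread.

 // Hypothetical sketch: the kind of single-precision inline kernel a
 // "parallel CriTcl" wrapper might compile and call; not an existing binding.
 #include <cuda_runtime.h>
 #include <stdio.h>
 #include <stdlib.h>

 __global__ void saxpb(const float *x, float *y, float a, float b, int n) {
     int i = blockIdx.x * blockDim.x + threadIdx.x;
     if (i < n) {
         y[i] = a * x[i] + b;    /* one array element per GPU thread */
     }
 }

 int main(void) {
     const int n = 1 << 20;                  /* about a million floats */
     size_t bytes = n * sizeof(float);
     float *hx = (float *)malloc(bytes);
     float *hy = (float *)malloc(bytes);
     for (int i = 0; i < n; i++) hx[i] = (float)i;

     float *dx, *dy;
     cudaMalloc(&dx, bytes);
     cudaMalloc(&dy, bytes);
     cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);

     int threads = 256;
     int blocks = (n + threads - 1) / threads;
     saxpb<<<blocks, threads>>>(dx, dy, 2.0f, 1.0f, n);
     cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

     printf("y[10] = %f\n", hy[10]);         /* expect 21.0 */
     cudaFree(dx); cudaFree(dy); free(hx); free(hy);
     return 0;
 }

A real binding would additionally have to shuffle data between Tcl's value representation and the flat float arrays the device wants, which is exactly where the [EIAS] mismatch that [Lars H] mentions shows up.

----
!!!!!!
%| [Category Uncategorized] |%
!!!!!!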