Version 0 of process and stream based bwise blocks

Updated 2003-11-03 12:48:35

by Theo Verelst

The idea of this page is to make a start applying bwise to represent blocks in a stream setup, where the blocks are not activated as in the 'run' (flood-like) or 'net_funprop(c)' (circular) regimes, which fire each block once as if all blocks were functions requiring all inputs to be present. Those regimes basically simulate semi-parallel execution, but always activate blocks in a certain, computable order.

Like processes in a unix pipeline, here we intend to make the processes run in parallel, either semi-parallel (multitasking) or actually parallel, by distributed execution. This implies a less trivial resulting dataflow, which may also be unpredictable, contrary to the regimes available by right-clicking on a block, whose procedures can be analysed and whose coarse activation schemes one can keep in mind.

Of course, making networks can be a hell of a lot more complicated than pipelines alone, and of course the overall ordering of what happens in a pipeline is easy to oversee: all data comes in at one end, and eventually all blocks will have processed their input, probably being finished at some point.

First, let's look at the mechanisms Tcl offers for using actual streams, based on process (standard) IO pipes/sockets. Their availability varies; in particular, I know for sure that 16-bit Windows processes never deal with input and output as streams: they basically soak up all input, process it, and generate the output when the input reaches end of file. There is nothing against using this scheme in bwise: we can simply make a block take its input pin(s), feed the data to an exec-ed process, and put the result on the output pins.
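A minimal sketch of such a non-streaming block function, assuming the bwise convention of global pin variables named <block>.<pin>; the proc name, pin names and the sort example are illustrative, not part of bwise itself:

```tcl
# Hypothetical helper: feed a block's input pin to an exec-ed
# external command and put the command's output on the output pin.
# Pin variables follow the (assumed) bwise convention <block>.<pin>.
proc exec_block {blockname command} {
    global ${blockname}.in ${blockname}.out
    # exec soaks up all input via << and returns the full output,
    # which matches the "no streaming" scheme described above
    set ${blockname}.out [exec $command << [set ${blockname}.in]]
}

# Example: a block wrapping the unix sort command
set sort1.in "pear\napple\nbanana"
exec_block sort1 sort
puts ${sort1.out}
```

Note that exec strips a single trailing newline from the command's output, which is usually what one wants on a pin.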

Streaming can be more exciting, eat less intermediate buffer space, and be more parallel, when processes can remain active (alleviating startup time) and process, for instance, a line at a time. That pays off for large amounts of data, for process chains requiring relatively large intermediate data amounts, or simply because it is conceptually pleasing, handy, or because we like to simply feed data to a unix grep command because that's what we're used to.
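As a sketch of that idea, the pipe below stays open across lines, so the external process is started once and handles a line at a time. cat stands in for a real filter; be aware that many programs block-buffer their output when writing to a pipe, which would stall this naive read-back:

```tcl
# Keep one external process alive and stream lines through it.
set f [open "|cat" RDWR]
fconfigure $f -buffering line   ;# flush our side after every line
foreach word {one two three} {
    puts $f $word               ;# send a line downstream
    puts "got back: [gets $f]"  ;# read the processed line back
}
close $f
```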

To start, I assume a Linux/Unix setup, where processes and pipes work as described on the Tcl manual pages for open and exec. To get the idea:

 set f [open "|cat" RDWR]
 puts $f "Line 1\nLine 2"
 flush $f
 fileevent $f readable {append tt [gets $f]}
 # enter the event loop briefly so the callback can fire
 # (fileevent scripts run at global scope, so no global is needed)
 after 500 {set done 1}
 vwait done
 puts $tt

Which prints

 Line 1Line 2

because gets strips the trailing newline from each line it reads (at least it did in my case), and the callback appends the two lines without a separator.
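If you want to keep the line boundaries, put the newline back yourself and watch for end of file. A sketch; the half-close (close $f write) needs Tcl 8.6 or later:

```tcl
set f [open "|cat" RDWR]
puts $f "Line 1\nLine 2"
flush $f
close $f write              ;# half-close: cat sees EOF on its stdin (Tcl 8.6+)
set tt ""
fileevent $f readable {
    if {[gets $f line] >= 0} {
        append tt $line\n   ;# restore the newline gets stripped
    } elseif {[eof $f]} {
        set done 1
    }
}
vwait done
close $f
puts -nonewline $tt
```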