Various methods and tools exist for performing [IPC%|%inter-process communication].

** See Also **

   [Distributing a series of tasks]:   
   [AM]:   another shot at interacting processes
   [Concepts of Architectural Design for Tcl Applications]:   
   [How can Tcl programs on two different machines communicate]:   
   [Tcl implementations of publish-subscribe mechanisms]:   
   [How do I manage lock files in a cross platform manner in Tcl]:   
   [TCL interpreter through socket]:   

** Description **

Inter-process communication is the foundation upon which features and systems such as [RPC], [distributed computing], program execution, and [remote execution] are built.

The [selection] and [clipboard] are technically also IPC mechanisms, at least on [X] platforms. [DDE] and [COM] are Windows-only, as is [sh8].

** Reference **

   [BOOK Effective Tcl - Writing Better Programs in Tcl and Tk], by [Mark Harrison] and [Michael McLennan]:   
   [http://beej.us/guide/bgipc/%|%Beej's Guide to Unix Interprocess Communication]:   

** Communication through channels **

The most common forms of IPC in Tcl work through a [channel]. The mechanisms differ considerably in how that channel is created, and sometimes in what properties it has, but once the channel is established, the same commands ([read], [gets], [puts], [fconfigure], [fileevent], etc.) work for all of them.

   pipes (unnamed [pipeline]s):   `[exec]`, `[open |...]`, `[chan pipe]`, `[bgexec]` from ''[BLT]'', ... See also [Scripted Wrappers for Legacy Applications].
   TCP sockets:   The "usual" kind of [internet] socket, opened using `[socket]`. Most IPC packages use these in some way.
   [UDP] sockets:   See [UDP for Tcl]; requires extensions.
   [Unix Domain Sockets]:   [ceptcl]. Like TCP sockets, but exist as objects in the file system rather than as [IP] address:port combinations on the internet.
   files:   Open with [open]. Not particularly efficient, but sometimes sufficient.
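As a concrete example of the pipe mechanism above, here is a minimal sketch of parent/child IPC over `open |...`. The child script, the file name `child.tcl`, and the uppercase "protocol" are invented for the demo, not part of any package:

```tcl
# Minimal sketch of pipe-based IPC: the parent starts a child tclsh over an
# unnamed pipeline and exchanges one line with it.
set f [open child.tcl w]
puts $f {
    fconfigure stdout -buffering line
    while {[gets stdin line] >= 0} {
        puts [string toupper $line]   ;# toy "protocol": shout back
    }
}
close $f

set child [open |[list [info nameofexecutable] child.tcl] r+]
fconfigure $child -buffering line
puts $child "hello, ipc"
set reply [gets $child]
puts "child replied: $reply"          ;# child replied: HELLO, IPC
close $child
file delete child.tcl
```

Line buffering on both ends matters here: without it, each side's output sits in a buffer and both processes deadlock waiting for the other.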
Files can also be used between different machines, even ones running different operating systems, as long as they share a file system. ([RS])

   [FIFO]s:   Also known as [named pipe]s on Unix. (The '''fifo''' of [memchan] is, however, something different.) See [http://sun.systemnews.com/articles/53/5/opt-dev/7227%|%Introduction to Interprocess Communication Using Named Pipes], by Faisal Faruqui, for a technical article about this method. [dislocate] is an [Expect] program which uses FIFOs. The [WNPComm] extension implements a reliable method of communication based on this. On Windows, [TWAPI] provides commands for communicating over named pipes.

** Shared memory **

Quite a few extensions have been written for [shared-memory] IPC.

   semaphores:   http://www.equi4.com/pub/pp/sorted/net/svipc-2.2.0/Index.html (or perhaps ftp://ftp.tcl.tk/pub/tcl/mirror/ftp.procplace.com/alcatel/extensions/svipc-2.1.1.tar.gz ), which covers semaphores, shared memory, and message queues. [lexfiend] ''2007-12-30'': It's definitely not building with modern Tcls & *nixes, but I'm working on a patch for that. More when it's done.
   [tcl-mq]:   a Tcl interface to POSIX message queues
   [tcl-mmap]:   a Tcl interface to POSIX mmap(2)

** Other mechanisms **

   pseudoterminals (ptys):   [Expect] (Q: Should this be with the channels?)
   [Database]:   A database server provides state that is potentially shared between several clients, and can thus be used for IPC. (This is often a wrapper around a lower-level mechanism, however.)
   [MPI]:   Originated in parallelisation, for use when "the same program" is distributed over several machines.
   [Spread]:   [http://www.spread.org] [[Provide info. State of Tcl binding unclear. [Perl], [Ruby], [PHP], ... all connect to it ... 'Seems like the kind of thing [davidw] would have encountered ...]]
   [YAMI]:   Yet Another Messaging Infrastructure
   [signal]s:   A very low-level mechanism, primarily available in Unixy environments.
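FIFOs need no extension on Unix, since a named pipe can be opened with plain [open] once it exists. A minimal Unix-only sketch (it assumes `mkfifo(1)` is on the PATH; the path `/tmp/demo.fifo` and the message are invented for the demo):

```tcl
# Minimal sketch of FIFO (named pipe) IPC on Unix.
set fifo /tmp/demo.fifo
catch {file delete $fifo}
exec mkfifo $fifo

# Start a background writer first: opening a FIFO for reading blocks until
# some process opens it for writing (and vice versa).
exec [info nameofexecutable] << [string map [list %F $fifo] {
    set out [open %F w]
    puts $out "ping over fifo"
    close $out
}] &

set in [open $fifo r]
set msg [gets $in]
puts "received: $msg"        ;# received: ping over fifo
close $in
file delete $fifo
```

The blocking-open behaviour is the essential FIFO gotcha: a single process that opens the same FIFO for reading before any writer exists will simply hang.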
A signal is similar to an interrupt in assembly-language programming: it can be used to make a process do something, but provides no information (not even the source of the signal) other than the signal number.

   [m2]:   network message bus Tcl package
   [ZeroMQ]:   also has a Tcl interface

** IPC packages **

[[List probably incomplete. Other systems may also provide high-level interfaces comparable to those of these packages.]]

*** Service-oriented ***

A process declares a set of "services" that other processes may call or send messages to. This could also be viewed as '''object-oriented''' IPC: a process exposes one or several objects, letting other processes send them messages. Service-oriented interfaces tend to be closer to the user than remote procedure call interfaces (e.g. only doing things for which there is also a UI), but there is no clear boundary.

''The packages listed under "Application-level message passing" above usually provide this kind of interface.''

   [apptalk]:   builds on Tk's `[send]`, and provides for starting the target process if it isn't already running.
   [TooCL]:   a Tooltalk interface to Tcl/Tk.

*** Message-passing ***

   [tcl-mq]:   an implementation of a [POSIX] message queues interface for Tcl
   [XPA]:   a messaging system for communication between processes.
   [tmpi]:   provides bindings to the MPI library.

*** Non-Tcl Tools ***

   [http://zeroc.com/%|%ZeroC Ice]:   short for '''I'''nternet '''C'''ommunications '''E'''ngine; distinct from X11's Inter-Client Exchange (ICE) protocol, which is used for IPC by Tk's `[send]`. Appears to be fairly high-level and very object-oriented, perhaps as much an OO framework as an IPC framework. There does not appear to be a Tcl binding.
   [pvm]:   short for '''P'''arallel '''V'''irtual '''M'''achine.
   [https://en.wikipedia.org/wiki/ToolTalk%|%Tooltalk]:   an IPC '''bus''' by [COMPANY: Sun Microsystems, Inc.%|%Sun]. Over on the [Extending Tcl] page there is a reference to ''Toocl'', which is one developer's binding between Tooltalk and Tcl.
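The service-oriented pattern described above can be sketched with nothing but core Tcl sockets. This is a minimal, deliberately unsafe sketch (the server evaluates whatever it is sent, so never expose such a thing beyond localhost; the file name `service.tcl` is invented for the demo; real packages such as [comm] add framing and safety):

```tcl
# A one-shot "service" process: it reports its ephemeral port on stdout,
# evaluates one command per connection, and exits.
set f [open service.tcl w]
puts $f {
    proc accept {chan addr port} {
        fconfigure $chan -buffering line
        fileevent $chan readable [list handle $chan]
    }
    proc handle {chan} {
        if {[gets $chan request] >= 0} {
            puts $chan [eval $request]   ;# no sandboxing whatsoever!
        }
        close $chan
        set ::done 1                     ;# serve one request, then exit
    }
    set srv [socket -server accept -myaddr 127.0.0.1 0]
    puts [lindex [fconfigure $srv -sockname] 2]  ;# report the ephemeral port
    flush stdout
    vwait ::done
}
close $f

# Start the server as a child process; its first output line is the port.
set pipe [open |[list [info nameofexecutable] service.tcl] r]
set port [gets $pipe]

# Act as a client: send one command, read one reply.
set c [socket 127.0.0.1 $port]
fconfigure $c -buffering line
puts $c {expr {6 * 7}}
set reply [gets $c]
puts "service answered: $reply"          ;# service answered: 42
close $c
close $pipe
file delete service.tcl
```

Reading the port from the server's stdout doubles as synchronisation: the client cannot try to connect before the listening socket exists.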
I don't know whether that Toocl is the same thing I am semi-recalling. On the [tcl bibliography] page there is a reference to a paper by Michael Jipping, Hope College (1993), ''Using Tcl as a Tool Talk Encapsulation'', in the Sun User Group Eleventh Annual Conference and Exhibition Proceedings. That seems quite likely to be what I am remembering.

** Discussion **

[TV]: Bear in mind, for those who don't already, that there are a few basic mechanisms on the OSes and machines I'm aware of, of which [signal%|%signals], [socket%|%sockets] (of the local and inet kind), and [shared memory] are the main ones. Most of the others, including many packages, add little or nothing to the fundamental capacities of these mechanisms, so many limitations and shortcomings of a lot of the parallel programming aids or simulations simply run into the problems and limitations of these facilities, which exist on [unix], [linux], [Microsoft Windows%|%windows], and probably (though there I didn't program them myself) on some of the less well-known OSes, too.

To begin with, there is hardly any formatting involved in the basics, except essential flow control; there is always overhead for copying data around, except in a few extreme cases, already in the basic library use; there is little or no support for actual parallel machine concepts, except that ethernet and maybe some others can be made to broadcast over a standard-enough socket interface; and generally there is a bad or absent exact definition of the operation of the basic library functions, for instance with the important aspect of flow control. That is the direct and only reason a lot of things break or don't work right across various versions, brands, and programmers on the [Internet]. In [Java], for instance. In many printer spoolers, for instance.
Not to mention what this does to performance, a concept a modern university informaticist could hardly spell out, let alone specify, measure correctly, and interpret with some engineering sense, let alone incorporate into a design, let alone into an important language definition. A lot of interfaces and languages serve no purpose one can optimize much, or say something positively discriminating about, because it's just a style someone likes, or something their mothers don't recognize, or maybe a certain concept applied consistently. Some things in Tcl, such as the copying of data over sockets, are quite optimizable and well designed, and for a scripting language highly optimal in a certain sense.

2003-10-14: I did [pcom] years ago, which can be used for, for instance, remote command execution; see the examples down the page, and [remote execution using tcl and Pcom].

----

[TV] 2007-01-07: In [Tcl on Cuda] there is another mechanism at stake: inter-process communication between the parallel processing elements via shared registers or (fast) memory, and PCI-Express-based passing of data (through pointers and mallocs on both sides) with CopyCputoGpu and CopyGputoCpu functions. Advantages: very high speed (in my case 1000 megabytes/sec measured) with low latency! The OpenGL processor can also be considered as taking part in inter-process communication, as when it receives data from the CPU or even from the [Cuda] processors/thread engines (for recent mainstream NVidia cards); see for instance the gaussianblur example from the SDK. This can be linked with Tcl, and via tcl3d for instance (see e.g. [bwise 3D graphics viewer block]) with Tk.

[Lars H]: That sounds more like inter-''processor''-communication than inter-''process''-communication.

[TV]: Right. The processors all run at least one process (or a kernel and a process), the host runs processes, and the OpenGL is probably viewable as a process per pipeline plus a global processor, and shader processes.
What a grammar!

[Lars H]: Linguistic contortions aside, it's still not inter-process communication (but rather some form of [threads]), and therefore not relevant on this page.

[TV]: Sorry lad, the thing on the host is a process, period. Apparently you aren't aware of what I'm doing, but I intend to port things in Tcl to Cuda, which runs processes on threads on processors. If you find that linguistically a problem, fine.

<<categories>> Survey | Concept | Interprocess Communication