During the early development of C, one of the design decisions made was to move file input and output out of the language and into a series of standard library function calls. This application programming interface (aka API) is known as stdio.
Unfortunately, in the beginning there was no formal specification for stdio. Instead, there was the API itself, a brief description, and, for those fortunate enough to have access to the original Bell Labs source, an implementation in C.
The basic concept, however, is this: every process starts with three streams already open, standard input (stdin) for reading, standard output (stdout) for normal output, and standard error (stderr) for diagnostics, and the stdio functions read and write them.
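For illustration, a minimal standard-C sketch (not from the original page) that touches all three:

    #include <stdio.h>

    int main(void)
    {
        char line[256];

        fprintf(stderr, "enter your name: ");  /* diagnostics go to stderr */
        if (fgets(line, sizeof line, stdin))   /* read a line from stdin   */
            printf("hello, %s", line);         /* normal output to stdout  */
        return 0;
    }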
Unix borrowed the standard application files from Multics, which had longer names for them (standard_input, standard_output, and error_output). - escargo
TV: The essential property of the channels is that they are stream-oriented (I think in all eventual Unixes), so that one accesses files, and also local or Internet sockets, as streams of incoming and/or outgoing data, with associated buffering and flow control. Stdin and stdout are, of course, like socket streams by nature.
One can usually change some pointer of a stream descriptor, and I guess (I don't remember, I programmed that sort of thing long ago) that's how a redirection can be implemented efficiently. Since the descriptors are stored as a list in the application, and the standard streams are always opened to begin with, the first three entries of that list refer to them automatically at process startup time.
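That is indeed how POSIX systems do it: the descriptor table is indexed by small integers, with 0, 1, and 2 reserved for the standard streams, and dup2() repoints an entry in place. A minimal sketch of shell-style redirection (the file name out.log is just an example):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open a log file and splice it into slot 1 (standard output). */
        int fd = open("out.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;
        dup2(fd, STDOUT_FILENO); /* descriptor 1 now refers to out.log */
        close(fd);

        printf("this line lands in out.log, not on the terminal\n");
        return 0;
    }

A shell does essentially this between fork() and exec() when it sees "> out.log" on a command line.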
At university I wrote a server that could reconnect the open end of a process-to-process (local) Unix socket connection without the application having to close and reopen the socket. It also let processes with different parents connect transparently, by name reference.
The stream stuff isn't stupid, but as far as I know hardly any manufacturer defines the exact flow and buffering behavior, so one still has to guess at the efficiency of newline-separated versus unseparated data flowing through sockets: whether they flush or block when a buffer fills, and whether a buffer filling up exactly at a boundary newline or carriage return is problematic or not, for instance the receiver getting two reads of which one is empty, or getting the actual newline stripped off.
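On the stdio side, at least, ANSI C lets one pin the buffering policy down with setvbuf(); what the kernel and the network stack do underneath remains implementation-defined, which is the complaint above. A sketch of the three modes:

    #include <stdio.h>

    int main(void)
    {
        static char buf[BUFSIZ];

        /* Must be called before any other operation on the stream. */
        setvbuf(stdout, buf, _IOLBF, sizeof buf); /* flush on every newline */
        /* setvbuf(stdout, NULL, _IOFBF, 8192);      flush only when full   */
        /* setvbuf(stdout, NULL, _IONBF, 0);         no buffering at all    */

        printf("flushed now, because of the newline\n");
        printf("still sitting in the buffer...");
        fflush(stdout);                           /* explicit flush         */
        return 0;
    }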
A stream usually costs extra machine code to transfer data to and from buffers, sometimes several times over when one is unlucky. There are implementations that let one read and write the streams in the buffer space directly, for instance for network drivers, so that no extra copying is required; of course that comes at the expense of error sensitivity, since pointing directly into the stream buffer is not so great compared to reading and writing the stream cleanly through nothing but the access functions (or macros).
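One widely available way to skip the intermediate copy on POSIX systems is to map a file and read it in place with mmap(); a sketch (data.bin is a placeholder name):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY); /* "data.bin" is a placeholder */
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
            return 1;

        /* The file's pages are mapped straight into our address space;
           reading p[] touches the page cache directly, with no read()-style
           copy into a private stdio buffer. */
        const unsigned char *p = mmap(NULL, st.st_size, PROT_READ,
                                      MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        unsigned long sum = 0;
        for (off_t i = 0; i < st.st_size; i++)
            sum += p[i];
        printf("checksum: %lu\n", sum);

        munmap((void *)p, st.st_size);
        close(fd);
        return 0;
    }

The error sensitivity mentioned above applies here too: a stray pointer faults against the mapping itself rather than a private buffer.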
One could argue that regardless of the method of data transfer, whether as function parameters, as messages in an OO language, as stream data between threads, or as part of a stack or FIFO procedure, such methods always copy data around, and often more than needed. A shared-memory approach, or simply shared data within a process, doesn't incur that strictly-speaking-unnecessary overhead. The overhead is in pointing to the data, but usually one groups more than a few bytes of data together, so a little pointer overhead is probably not a large percentage. Processors that don't incur the processor-to-memory overhead for stream access, or that can operate on internal lists, would be good. A memcpy that uses DMA hardware resources is good, too. For mobile devices: transferring data around doesn't just cost instruction cycles, it also costs energy to toggle the bus lines between one and zero.
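For the between-processes case, POSIX shared memory gives exactly that: two processes map the same region, and the data itself is never copied, only the mapping is shared. A sketch, with /demo_shm as an arbitrary example name (some systems need linking with -lrt):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* "/demo_shm" is an arbitrary example name. */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, 4096) < 0)
            return 1;

        char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (region == MAP_FAILED)
            return 1;

        /* Anything written here is visible to every other process that
           maps "/demo_shm"; no bytes are moved, only a pointer is shared. */
        strcpy(region, "no copy needed");

        munmap(region, 4096);
        close(fd);
        shm_unlink("/demo_shm"); /* a real producer/consumer would keep it */
        return 0;
    }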
A nice input/output method would be streams of adjustable granularity with watermark reading, regardless of the language.
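Something close to watermark reading already exists for sockets: the SO_RCVLOWAT option sets a receive low-water mark, so a blocking recv() does not return until at least that many bytes have arrived. Portability is uneven (on Linux it historically affects recv() but not select()/poll() readiness). A sketch, with read_at_least() as a hypothetical helper around an already-connected socket:

    #include <sys/socket.h>
    #include <sys/types.h>

    /* Hypothetical helper: ask the kernel not to wake a blocking recv()
       until at least `watermark` bytes are queued on the already
       connected socket `sock`. */
    int read_at_least(int sock, char *buf, size_t len, int watermark)
    {
        if (setsockopt(sock, SOL_SOCKET, SO_RCVLOWAT,
                       &watermark, sizeof watermark) < 0)
            return -1;
        return (int)recv(sock, buf, len, 0); /* blocks until >= watermark bytes */
    }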
Oh, Windows and streams are trouble. Linux should be fine enough, but somehow the application scheduler, or God knows what, in Windows never seems to do everything one wants; then again, maybe there are always workarounds.