Tequila

"Tequila" [L1 ] is a little Tcl server JCW started in 1999, which implements persistent shared arrays. With Tequila, you need no longer think in terms of communication: a Tcl array gets "attached" to the server, and from then on all clients doing so can read/write/unset items in it. This approach works quite well in combination with traces (and it's also built on traces and file events). When properly set up, you can builds apps as monolithic ones and later split them up with minimal changes. [Elaborate this point.]

Tequila has its own page [L2 ] as part of the MetaKit Wiki [L3 ].

CMcC Is there some good reason for not using the comm package in tcllib for communication? Is it too slow or too general? It calls eval in the receiver - that's probably slower than the special-purpose interpreter.

I suspect that the author didn't see a reason for using comm. He was comfortable writing communications layers, and probably had some code that he found worked sufficiently well. Is there a benefit you see in using comm over what is already present in Tequila?

CMcC Well, yes. comm provides for asynchronous sending and receiving of commands, whereas the current communication layer is synchronous with respect to client-outbound change notification, with pipelining of client-inbound change notification.

It seems to me that as long as the TCP connection between client and server remains up, and as long as the server is shut down in an orderly manner, the client can send requests to the Tequila server asynchronously, rather than waiting for confirmation from the server (that confirmation, currently, amounts to the server having received and effected the changes, and all notifications of that change having been sent).

Actually, now I think about it, my question was as much about comm as about Tequila.
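
For reference, here is a minimal sketch of the two modes in tcllib's comm; the remote id in $otherId (a port, or a {port host} pair) is hypothetical:

  package require comm

  # synchronous: blocks until the remote interpreter has run the script and returned
  set result [comm::comm send $otherId {array get arrayX}]

  # asynchronous: returns immediately; any result from the remote side is discarded
  comm::comm send -async $otherId {set arrayX(foo) bar}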

10jun03 jcw - Not sure when the above was written, but one reason for using sync mode was as indicated above - the clients get an ack when changes have actually been made and sent around. This solved some important sequence-of-event issues for the app where this was being used. I agree that more sophisticated comms (including more transport-layer independence) would be useful. In the two main projects where Tequila was used as foundation, it worked well. Race issues were darn hairy at one point, so I never went back to make more changes unless truly needed. Another reason was that the central server was *very* long-running, and that all the other parts were evolving rapidly during development. Keeping the server as is (and using it to distribute client software changes) turned out to work well.

In hindsight, Tequila worked (and still works) really well, but it could be made still more generic and performant. Separating notification, persistence, and communication could be a good idea. Keeping in mind the motto don't over-generalize, and don't generalize prematurely....


escargo 6 Nov 2003 - Looking at Tequila a bit, I was wondering whether it was a potential victim of single point of failure problems. Can the Tequila server be replicated or distributed such that the clients have automatic failover in case their primary Tequila server goes down?

Dunno about "victim", but yes Tequila relies on a central server. Automatic fallover was never a design requirement. Good hardware, RAID, backups, etc. would go a long way. Replication would be cool (and non-trivial). The long running setup handled over a billion requests for over a year with no downtime and that was good enough in this particular case. But I agree that a life-support system would need a bit more than this few-hundred line script.. :o) -jcw

AK: A Tuplespace with notification should be easily replicable, just several tuple servers notifying each other about all the changes. Well, modulo code to prevent races when clients take a tuple out. So not too trivial either.

escargo: Any idea how clients that lose connection to one server would switch to an alternative server? (Any idea how other systems might do it?)
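
One common answer (a sketch only, nothing Tequila actually provides) is for the client to hold a list of candidate servers and try them in order whenever its connection drops:

  # Hypothetical client-side failover: try each {host port} pair in turn.
  proc connectAny {servers} {
    foreach {host port} $servers {
      if {![catch {socket $host $port} sock]} {
        return $sock
      }
    }
    error "no server reachable: $servers"
  }

  # usage (hostnames and port are made up):
  set sock [connectAny {tequila1.example.com 9999 tequila2.example.com 9999}]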


escargo 6 Apr 2005 - I downloaded the source so I could play with tequila, but I realized that there's a gap in the functions available, at least as I understand them. Once you attach an array to the server, how can you detach it? I was looking at a client that might attach to multiple arrays, and then need to detach from some arrays and attach to others. Also, is it possible to attach to arrays on different servers in the same client?

There's a new implementation of Tequila in progress, with a somewhat different model but considerably more flexible (including the ability to attach to multiple servers) - see the starkit at [L4 ] for more info. -jcw

escargo 8 Jun 2007 - It's now a couple of years after that note was written. Is there more progress on Tequila 2 (the starkit dates from June, 2005)?

escargo Apr 2005 - I found the new tequila.kit much better behaved on a Linux box than on my Windows XP Pro box. I don't know if you have tested it in both environments, but Linux is definitely better at this time.


2005-08-04-01:04 Zarutian asked on the chat whether, if a client iterates through a Tequila'd array and changes every item, the whole array would be transferred to the server and then on to all attached clients as the client iterates through it. (The array is transferred item by item as the client iterates through it and changes the items.) GPS said it probably would.

2005-08-04-01:11 Zarutian: so I ask: is there a less bandwidth-intensive way?

I suspect that there is. The solution would be to send the computation/process to the data rather than vice versa.

2005-08-25-01:37 Zarutian: miguel half asked me to clarify what I meant

Say we have a shared array called arrayX, and we execute this on a client (several clients are connected to the server, all sharing arrayX):

  foreach item [array names arrayX] {
    if {[string is digit -strict $arrayX($item)]} {
      incr arrayX($item) 2
    } else {
      append arrayX($item) "."
    }
  }

What probably happens is that each changed array element is sent to the server, and from the server to all the clients except the one where the change originated.
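
Server side, that fan-out might look like the following sketch (the proc and variable names are hypothetical, not Tequila's actual code):

  # Forward one change notification to every attached client except its origin.
  proc broadcast {clients origin msg} {
    foreach sock $clients {
      if {$sock ne $origin} {
        puts $sock $msg
      }
    }
  }

  # e.g.: broadcast $attachedSocks $fromSock [list set arrayX(foo) bar]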

But of course some will say: "Aha! If the variable trace on the originating client checks how the variable was updated (append, set, or incr), then the originating client only has to send how the variable was updated (the update operation plus delta info)."

The above wouldn't work in a case like this:

  set names [array names arrayX]
  set last  [lindex $names end]
  foreach item [lrange $names 0 end-1] {
    set tmp $arrayX($item)
    set arrayX($item) $arrayX($last)
    set arrayX($last) $tmp
    set last $item
  }

So, as I said earlier: sending the computation to the data, rather than vice versa, is better in terms of bandwidth usage.
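
For illustration (the run-on-server message and the serverSock variable are made up, not Tequila's actual protocol), the client could send the script itself instead of N element updates:

  # One message carrying the whole computation, instead of one per changed element.
  set script {
    set names [array names arrayX]
    set last  [lindex $names end]
    foreach item [lrange $names 0 end-1] {
      set tmp $arrayX($item)
      set arrayX($item) $arrayX($last)
      set arrayX($last) $tmp
      set last $item
    }
  }
  puts $serverSock [list run-on-server $script]
  flush $serverSock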

But sending code, then receiving and evaling it in a remote Tcl interp, always raises the question of denial-of-service attacks like:

  set a 2
  while 1 { expr {$a * $a * $a} }
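
One way to contain that (a sketch, assuming Tcl 8.5+ and a server that evaluates client scripts in a slave interpreter) is to combine a safe interp with resource limits:

  # Evaluate client scripts in a safe slave whose run time and command count are capped.
  set slave [interp create -safe]
  interp limit $slave time -seconds [expr {[clock seconds] + 5}]
  interp limit $slave commands -value 1000000

  # The runaway loop above is now cut off with an error instead of hogging the server.
  catch {interp eval $slave {
    set a 2
    while 1 { expr {$a * $a * $a} }
  }} msg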

Zarutian looks at the clock and will continue at a later time.


EE - 2012-Jun-25

Does anyone know the current status of Tequila? The last update I can see is from 2005, which appears to be an alpha or beta of the Tequila v.2 complete rewrite, saying that it's "under active development".


Category Application