TCL Web Server (HTTPS) Extension
Contact: neophytos (at) gmail (dot) com
It is NOT an application server. It is a loadable module. It is the absolute minimum. It supports multiple certificates for different hosts and multiple servers. It has NOT been released yet (still an experimental version).
Many thanks to Holger Ewert for the constructive feedback. As of 2023-09-04 15:50 EST, (a) listen_server no longer blocks the event loop, and (b) errors no longer kill the application.
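For example, with that change a wrong certificate path should now surface as a normal Tcl error that can be caught (a minimal sketch, assuming an existing $server_handle; the exact error message is not reproduced here):

# Assumed behaviour after the 2023-09-04 fix: a bad certificate path raises
# a catchable Tcl error instead of terminating tclsh.
if {[catch {
    ::twebserver::add_context $server_handle localhost "no/such/key.pem" "no/such/cert.pem"
} errmsg]} {
    puts "add_context failed: $errmsg"
}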
Here is an example without threads in 20 lines of code:
package require twebserver

proc process_request {request_dict} {
    return "HTTP/1.1 200 OK\n\ntest message request_dict=$request_dict\n"
}

proc process_conn {conn addr port} {
    if { [catch {
        set request [::twebserver::read_conn $conn]
        set reply [process_request [::twebserver::parse_request $request]]
        ::twebserver::write_conn $conn $reply
    } errmsg] } {
        puts "error: $errmsg"
    }
    ::twebserver::close_conn $conn
}

set config_dict [dict create]
set server_handle [::twebserver::create_server $config_dict process_conn]
::twebserver::add_context $server_handle localhost "../certs/host1/key.pem" "../certs/host1/cert.pem"
::twebserver::add_context $server_handle www.example.com "../certs/host2/key.pem" "../certs/host2/cert.pem"
::twebserver::listen_server $server_handle 4433
vwait forever
::twebserver::destroy_server $server_handle
And here is an example with threads in 30 lines of code (requires the Thread extension):
package require twebserver
package require Thread

set thread_script {
    package require twebserver

    proc thread_process_request {request_dict} {
        return "HTTP/1.1 200 OK\n\ntest message request_dict=$request_dict\n"
    }

    proc thread_process_conn {conn addr port} {
        after 1000 [list ::twebserver::close_conn $conn]
        if { [catch {
            set request [::twebserver::read_conn $conn]
            set reply [thread_process_request [::twebserver::parse_request $request]]
            ::twebserver::write_conn $conn $reply
        } errmsg] } {
            puts "error: $errmsg"
        }
        ::twebserver::close_conn $conn
    }
}

set pool [::tpool::create -minworkers 5 -maxworkers 20 -idletime 40 -initcmd $thread_script]

proc process_conn {conn addr port} {
    global pool
    ::tpool::post -detached -nowait $pool [list thread_process_conn $conn $addr $port]
}

set max_request_read_bytes [expr { 10 * 1024 * 1024 }]
set max_read_buffer_size [expr { 1024 * 1024 }]
set config_dict [dict create max_request_read_bytes $max_request_read_bytes max_read_buffer_size $max_read_buffer_size]
set server_handle [::twebserver::create_server $config_dict process_conn]
::twebserver::add_context $server_handle localhost "../certs/host1/key.pem" "../certs/host1/cert.pem"
::twebserver::add_context $server_handle www.example.com "../certs/host2/key.pem" "../certs/host2/cert.pem"
::twebserver::listen_server $server_handle 4433
vwait forever
::twebserver::destroy_server $server_handle
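With either example running, a quick smoke test from another shell could look like the following; -k is needed because the examples use self-signed certificates (host name and port taken from the examples above):

curl -k https://localhost:4433/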
HE 2023-09-04: I like the basic idea of this extension. It looks like it takes away all the hassle of writing the same thing in plain Tcl with the TLS extension. That is why I compiled and tried it on Linux right away.
Here are my experiences, based on the source files downloaded on 2023-08-31 via the main Code -> Download ZIP button. It claims to be version 1.0.0.
Now to the part that made me unhappy, because as it stands the extension is not usable for me. Possibly I have overlooked something; I would be happy to learn how to work around the following.
From my point of view, all three of these are no-gos for a loadable module/package.
(solved) About "creating a certificate":
I should mention that I had no experience creating certificates. Also, my private computer sits behind a public network access device from the company AVM; these devices use the domain fritz.box internally.
That means I wanted one certificate covering localhost, the host name, and the host name within the fritz.box domain.
I found that the following single command successfully replaces the three openssl commands from the documentation:
# First go into the directory where the certificate should be stored.
# In our case ./certs/host1
openssl req -x509 \
    -newkey rsa:4096 \
    -keyout key.pem \
    -out cert.pem \
    -sha256 \
    -days 3650 \
    -nodes \
    -subj "/C=DE/ST=Germany/L=Home/O=none/OU=CompanySectionName/CN=localhost/CN=foo1/CN=foo1.fritz.box"
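As an untested variant (an assumption on my part, requiring OpenSSL 1.1.1 or newer), the extra names could also be placed in a subjectAltName extension, which most modern clients check instead of the CN:

openssl req -x509 \
    -newkey rsa:4096 \
    -keyout key.pem \
    -out cert.pem \
    -sha256 \
    -days 3650 \
    -nodes \
    -subj "/C=DE/ST=Germany/L=Home/O=none/CN=localhost" \
    -addext "subjectAltName=DNS:localhost,DNS:foo1,DNS:foo1.fritz.box"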
To use the certificate, I replaced all ::twebserver::add_context lines with:
::twebserver::add_context $server_handle localhost "./certs/host1/key.pem" "./certs/host1/cert.pem"
::twebserver::add_context $server_handle holger9 "./certs/host1/key.pem" "./certs/host1/cert.pem"
::twebserver::add_context $server_handle holger9.fritz.box "./certs/host1/key.pem" "./certs/host1/cert.pem"
That worked with both the single-threaded and the multi-threaded version.
(solved) About "::twebserver::listen_server never comes back. Everything behind will never be executed.":
That is a bit strange for a loadable module, which is part of something bigger.
This is easy to test: open a tclsh and paste everything from the example code before ::twebserver::listen_server into it. You still get a prompt back.
Then execute the line with ::twebserver::listen_server and you will not get a prompt back.
Tests with curl show that the server itself is running.
(solved) About "::twebserver::listen_server blocks the event loop.":
I tried this with both the single-threaded and the multi-threaded version.
That means that even if the event loop is entered successfully (otherwise the server would not start at all), the blocking behaviour of ::twebserver::listen_server stops it. Therefore the whole application is blocked; only the HTTP server itself keeps running.
I consider that a real error because I cannot work around it.
How to check it:
Replace the following line:
::twebserver::listen_server $server_handle 4433
with:
after 1 [list ::twebserver::listen_server $server_handle 4433]
after 100 {puts {A test output}}
vwait forever
This starts the server from the event loop and puts another event on the loop which prints a message. The line "vwait forever" then starts the event loop.
We can verify that the server is started and working (curl tests succeed), but we never see the test message. That is understandable: if a procedure does not return, the event loop is never entered again.
And that is why the blocking of ::twebserver::listen_server is a critical issue from my point of view.
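The same effect can be demonstrated with plain Tcl, independent of twebserver: an event handler that never returns starves the event loop, so later events never fire.

# Plain-Tcl illustration: the first handler never returns,
# so the second scheduled event is never processed.
after 1   {puts "first event"; while {1} { }}
after 100 {puts "second event (never printed)"}
vwait forever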
About "Errors are killing the application instead to be possible to be caught.": It looks like most (all?) commands of twebserver calls directly exit the program in case of an error instead of throwing an Tcl error which can be processed in the caller. For example the following code:
if {[catch {
    ::twebserver::add_context $server_handle localhost "../certs/host1/key.pem" "./certs/host1/cert.pem"
} err]} {
    puts $err
}
runs into an error because of the wrong path to cert.pem. The result is the following output:
404010CD8B7F0000:error:80000002:system library:file_ctrl:No such file or directory:crypto/bio/bss_file.c:297:calling fopen(./certs/host1/cert.pem, r)
404010CD8B7F0000:error:10080002:BIO routines:file_ctrl:system lib:crypto/bio/bss_file.c:300:
404010CD8B7F0000:error:0A080002:SSL routines:SSL_CTX_use_certificate_file:system lib:ssl/ssl_rsa.c:291:
And then tclsh is closed. That is not what I expect as correct behaviour of a loadable module.
HE 2023-09-17: I marked solved items from my last post with "(solved)". In addition, I found the following issues in the version downloaded on 2023-09-10:
About "Memory leak?": I used example.tcl simply with changed ::twebserver::add_context lines to match my certificate.
And I used another tclsh and copied the following into it:
package require http
package require tls

::http::register https 4433 [list ::tls::socket -autoservername true]

proc testKeepalive1 {} {
    foreach el [list /probe/startup /probe/readiness /probe/liveness /metrics /de/da] {
        set url https://foo.fritz.box[set el]
        set token [::http::geturl $url -keepalive 1]
        ::http::cleanup $token
    }
    return
}

proc testKeepalive0 {} {
    foreach el [list /probe/startup /probe/readiness /probe/liveness /metrics /de/da] {
        set url https://foo.fritz.box[set el]
        set token [::http::geturl $url -keepalive 0]
        ::http::cleanup $token
    }
    return
}
Then I can use the following lines to put load on the server:
time testKeepalive0 1000
time testKeepalive1 1000
A third console running the command top showed:
Tasks: 276 total,   2 running, 274 sleeping,   0 stopped,   0 zombie
%CPU(s): 17,0 us,  2,3 sy,  0,0 ni, 79,7 id,  0,0 wa,  0,4 hi,  0,6 si,  0,0 st
MiB Spch:  15781,0 total,    239,5 free,  15388,4 used,    704,1 buff/cache
MiB Swap:  17376,9 total,   5445,9 free,  11931,0 used.    392,6 avail Spch

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     ZEIT+ BEFEHL
 2843 holger    20   0  537,0g  11,5g   6720 R  81,3  74,6  22:16.81 tclsh
 2354 holger    20   0   27392  12916   7396 S  20,0   0,1   4:20.40 tclsh
The first tclsh line is the server. The VIRT and RES columns increase with every test run and never shrink. After a couple of dozen test runs, this led to a server crash.
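To watch the growth without top, a small helper along the following lines can be used on Linux; this is just a sketch, and the pid 2843 in the usage comment is the server pid from the top output above:

# Read the resident set size (in kB) of a process from /proc (Linux only).
proc rss_kb {pid} {
    set f [open /proc/$pid/status r]
    set data [read $f]
    close $f
    regexp {VmRSS:\s+(\d+) kB} $data -> kb
    return $kb
}

# Example usage from the client tclsh:
# puts "RSS before: [rss_kb 2843] kB"
# time testKeepalive0 1000
# puts "RSS after:  [rss_kb 2843] kB"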
About "Error messages:" Some errors which possibly should be handled:
::twebserver::add_context does not catch a non-existing handle. Where is the context added in that case?
package require twebserver
set server_handle {}
::twebserver::add_context $server_handle localhost "../certs/host1/key.pem" "../certs/host1/cert.pem"
On the other hand, ::twebserver::listen_server handles it correctly:
::twebserver::listen_server $server_handle 4433
#=> server handle not found
And ::twebserver::destroy_server also handles it correctly, but uses a different error text:
::twebserver::destroy_server {}
#=> handle not found
By the way, an empty host name also does not lead to an error. I am not sure whether this could be an issue.
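For reference, this is the call I mean; the empty host name is accepted without complaint (a minimal sketch, reusing $server_handle and the certificate paths from above):

# An empty host name is accepted silently; it is unclear which requests
# such a context would ever match.
::twebserver::add_context $server_handle {} "../certs/host1/key.pem" "../certs/host1/cert.pem"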
About "::twebserver::return_conn raise segmentation fault": It looks like ::twebserver::return_conn doesn't catch errors with the response_dict correctly. Instead I got a segmentation error.
The easiest way to reproduce it: change the line "::twebserver::return_conn $conn $response_dict" to "::twebserver::return_conn $conn {}" in example-with-req-resp.tcl. Then start the example server.
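Roughly, the changed handler then looks like this (a hypothetical sketch; the actual code in example-with-req-resp.tcl may differ in detail):

proc process_conn {conn addr port} {
    # presumably reads and parses the request directly from the connection
    set request_dict [::twebserver::parse_conn $conn]
    # passing an empty dict instead of a proper response_dict is what
    # triggers the "statusCode not found" error and then the crash
    ::twebserver::return_conn $conn {}
    ::twebserver::close_conn $conn
}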
With the first request the server prints:
error: statusCode not found
Speicherzugriffsfehler (Speicherabzug geschrieben)
and stops.
"Speicherzugriffsfehler (Speicherabzug geschrieben)" means "Segmentation Fault (Dump written)".
About "And the version I downloaded today is much slower than the version before": Same condition as in 'About "Memory leak?"'.
I got the following result:
% time testKeepalive0 1000
267731.312 microseconds per iteration
% time testKeepalive1 1000
137091.769 microseconds per iteration
Before I got:
% time testKeepalive0 1000
37643.129 microseconds per iteration
% time testKeepalive1 1000
23941.839 microseconds per iteration
That is between 5 and 7 times slower than before. Is there an explanation for that?
About "How really to use HTTP keep alive?": The above test use -keepalive 1 and -keepalive 0. In the header received by the server and after reply by the client showed Connection: keep-alive.
But the next request from the same client arrives on a different socket.
I don't believe the issue is on the client side, because the server needs to be in control of that.
The example server calls ::twebserver::close_conn, which means it will not keep the connection alive.
But if we remove that call, or control it with a timeout, the question arises how to get the next request from the connection. For example, if we use ::twebserver::parse_conn, how do we know that a new request has been fully received? Or, if we use ::twebserver::read_conn (which would mean we have to determine ourselves when a full request has arrived), how do we know that there is new data to read? Without that, we cannot go into an asynchronous mode and receive more than one request on the same connection. Possibly I missed something.
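To make the question concrete, this is the kind of loop I would want to write. It is only a sketch and depends entirely on the assumption that ::twebserver::read_conn blocks until a complete request is available and raises an error when the client closes the connection, neither of which is documented:

proc process_conn {conn addr port} {
    # serve several requests on the same connection (assumed semantics of
    # read_conn: blocks per request, errors out when the peer disconnects)
    while {1} {
        if {[catch {
            set request [::twebserver::read_conn $conn]
            set reply [process_request [::twebserver::parse_request $request]]
            ::twebserver::write_conn $conn $reply
        } errmsg]} {
            break
        }
    }
    ::twebserver::close_conn $conn
}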
And as a last item on the list: I would really like to have some documentation about the request dict and the response dict.
The twebserver documentation doesn't describe how these dicts are structured.
neophytosd 2023-09-18: Some quick replies until I get a chance to review all of the feedback: