Download file via HTTP

  package require http
  proc getPage { url } {
      # returns the page body; no cleanup, so we leak some memory on every call
      return [::http::data [::http::geturl $url]]
  }

  set file [getPage $url]

The above is a basic example of the http package. It does not, however, account for proxies, URL redirects, etc. See grabchat for code that tries to handle some of these additional issues.
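As a rough illustration of the redirect issue, here is a sketch that follows 3xx responses by reading the Location header via ::http::ncode and ::http::meta. The proc name getPageFollow and the maxHops limit are made up for this example; it does not handle proxies or relative Location values.

 package require http
 proc getPageFollow { url {maxHops 5} } {
     for {set hop 0} {$hop < $maxHops} {incr hop} {
         set token [::http::geturl $url]
         set code [::http::ncode $token]
         if {$code >= 300 && $code < 400} {
             # look for the Location header, case-insensitively
             set next ""
             foreach {name value} [::http::meta $token] {
                 if {[string equal -nocase $name Location]} {
                     set next $value
                 }
             }
             ::http::cleanup $token
             if {$next eq ""} { error "redirect without Location header" }
             set url $next
             continue
         }
         set data [::http::data $token]
         ::http::cleanup $token
         return $data
     }
     error "too many redirects"
 }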

Some further examples can be found at the http page.

The example above omits cleanup, so memory consumption grows when you use it to fetch many pages. The example below performs the cleanup.

  package require http
  proc getPage { url } {
      set token [::http::geturl $url]
      set data [::http::data $token]
      ::http::cleanup $token
      return $data
  }

KPV If you need to download many files, check out Parallel Geturl, which lets you run many getPage calls in parallel.
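For a small number of URLs you can also overlap downloads with just the http package itself, using the asynchronous -command form of ::http::geturl. The proc names fetchAll and onDone and the pending/results globals below are invented for this sketch; it waits in the event loop until every callback has fired.

 package require http
 proc fetchAll { urls } {
     global pending results
     set pending [llength $urls]
     set results [dict create]
     foreach url $urls {
         # returns immediately; onDone is called with the token when done
         ::http::geturl $url -command [list onDone $url]
     }
     while {$pending > 0} { vwait pending }
     return $results
 }
 proc onDone { url token } {
     global pending results
     dict set results $url [::http::data $token]
     ::http::cleanup $token
     incr pending -1
 }

Then [fetchAll $urls] returns a dict mapping each URL to its page body. Note this needs a running event loop, which vwait provides here.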