 package require http

 proc getPage { url } {
     # no cleanup, so we lose some memory on every call
     return [ ::http::data [ ::http::geturl $url ] ]
 }

 set file [ getPage $url ]
----
The above is a basic example of the [http] package. However, it doesn't account for proxies, URL redirects, etc. See [grabchat] for code which tries to handle some of these additional issues. Some further examples can be found on the [http] page.

The example above omits cleanup, so memory consumption rises when you use it to fetch many pages. The example below performs the cleanup:
----
 package require http

 proc getPage { url } {
     set token [::http::geturl $token]
     set data [::http::data $token]
     ::http::cleanup $token
     return $data
 }
----
[Category Internet]
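----
As a rough sketch of the redirect and proxy handling mentioned above: the http package can be pointed at a proxy with ::http::config -proxyhost/-proxyport, and redirects can be followed by checking the numeric status with ::http::ncode and re-fetching the Location header found in ::http::meta. The proc name getPageFollow, the redirect limit, and the proxy host/port below are made up for illustration, and relative Location values are not resolved:

 package require http

 # Optional: route requests through an HTTP proxy
 # (proxy.example.com / 8080 are placeholder values).
 # ::http::config -proxyhost proxy.example.com -proxyport 8080

 proc getPageFollow { url {maxRedirects 5} } {
     for {set i 0} {$i <= $maxRedirects} {incr i} {
         set token [::http::geturl $url]
         set code  [::http::ncode $token]
         if {$code >= 300 && $code < 400} {
             # a redirect: look for the Location header in the response meta data
             set location ""
             foreach {name value} [::http::meta $token] {
                 if {[string equal -nocase $name location]} {
                     set location $value
                 }
             }
             ::http::cleanup $token
             if {$location eq ""} {
                 error "redirect ($code) without a Location header"
             }
             set url $location
             continue
         }
         # not a redirect: return the body and clean up the token
         set data [::http::data $token]
         ::http::cleanup $token
         return $data
     }
     error "too many redirects for $url"
 }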