HJG Someone has uploaded a lot of pictures to Flickr, and I want to show them someplace where no internet is available.
The pages at Flickr have a lot of links, icons etc., so a simple recursive download with e.g. wget would fetch lots of unwanted stuff. Of course, I could tweak the parameters for calling wget (--accept, --reject, etc.), but doing roughly the same thing in Tcl looks like more fun :-)
So the first step is to download the html-pages from that person, extract the links to the photos from them, then download the photo-pages (containing titles and descriptions), and finally the pictures in the selected size.
Then we can make our Flickr Offline Photoalbum.
First draft for the download:
 package require http

 proc getPage { url } {
     set token [::http::geturl $url]
     set data  [::http::data $token]
     ::http::cleanup $token
     return $data
 }

 catch {console show} ;##

 # Select which overview-page to fetch (the second pair overrides the first):
 set url      http://www.flickr.com/photos/siegfrieden
 set filename "s01.html"

 set url      http://www.flickr.com/photos/siegfrieden/page2
 set filename "s02.html"

 set data [getPage $url]
 #puts "$data" ;##

 # Save the page for offline inspection:
 set fileId [open $filename "w"]
 puts -nonewline $fileId $data
 close $fileId

 # Show the lines of interest (title, photo- and description-blocks):
 set n 0
 foreach line [split $data \n] {
     if {[regexp -- "<title>"       $line]} { puts "1: $line\n"; incr n }
     if {[regexp -- "<h4>"          $line]} { puts "2: $line\n"; incr n }
     if {[regexp -- {class="Photo"} $line]} { puts "3: $line\n"; incr n }
     if {[regexp -- {class="Desc"}  $line]} { puts "4: $line\n"; incr n }
     if {[regexp -- {class="end"}   $line]} { puts "5: $line\n"; incr n; break }
 }
This will only get the first html-page, so the next step is to also get the other pages, extract all the information we need, and then fetch the pictures.
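Here is a rough sketch of that next step, just to outline the idea: loop over the overview-pages, save each one, collect the links to the individual photo-pages, and fetch files in binary mode. The page count, URL patterns and the href-regexp are only assumptions about Flickr's markup (to be checked against the saved pages), so they will likely need adjusting:

 package require http

 # getPage from the draft above is reused for html;
 # getFile saves a url verbatim (binary-safe, e.g. for .jpg files):
 proc getFile { url filename } {
     set chan [open $filename "w"]
     fconfigure $chan -translation binary
     set token [::http::geturl $url -binary 1 -channel $chan]
     close $chan
     ::http::cleanup $token
 }

 set base    http://www.flickr.com/photos/siegfrieden
 set maxPage 3                      ;# assumed number of overview-pages
 set photoPages {}

 for {set p 1} {$p <= $maxPage} {incr p} {
     if {$p == 1} { set url $base } else { set url $base/page$p }
     set data [getPage $url]

     # save the overview-page for later inspection:
     set fileId [open [format "s%02d.html" $p] "w"]
     puts -nonewline $fileId $data
     close $fileId

     # collect links to the photo-pages, e.g. href="/photos/xxx/123456/"
     # (assumed link format):
     foreach {match link} [regexp -all -inline -- {href="(/photos/[^"]+/\d+/)"} $data] {
         lappend photoPages http://www.flickr.com$link
     }
 }

 # Later: visit each photo-page, extract title/description and the
 # img-src of the wanted size, then e.g.:
 #   getFile $imgUrl [file tail $imgUrl]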
Strings to look for:
...
See also: