Web scraping

[Based on comp.lang.tcl postings from jooky, David Welton, and Larry Virden.]

Web scraping is the practice of getting information from a web page and reformatting it.

Some reasons one may do this are to send updates to a pager or WAP phone, to email one's account status to oneself, or to move data from a website into a local database. See projects like http://sitescooper.org/ , http://www.plkr.org/ , or the Python-based mechanize [L1 ] (itself a descendant of Andy Lester's WWW::Mechanize for Perl) for non-Tcl tools for scraping the web.
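
As a minimal illustration of the idea (a sketch only; the URL and proc name are placeholders, and for anything beyond trivial pages an HTML parser such as htmlparse, used further down this page, is the better tool), one can fetch a page with the http package and pull a field out with a regular expression:

 package require http

 # Fetch a page and extract its <title> - the simplest form of scraping.
 proc page_title url {
     set tok [http::geturl $url]
     set html [http::data $tok]
     http::cleanup $tok
     if {[regexp -nocase {<title>(.*?)</title>} $html -> title]} {
         return [string trim $title]
     }
     return ""
 }

 puts [page_title http://www.example.com/]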

"Web Scraping ..." [L2 ]

"Web scraping is easy" [L3 ]



  • Apt comments on the technical and business difficulty of Web scraping, along with mentions of WebL and NQL, appear here [L6 ].
  • Some people go in a record-and-playback direction with such tools as AutoIt.
  • Perl probably has the most current Web-scraping activity (even more than tclwebtest?), especially with WWW::Mechanize [L7 ], although Curl [L8 ] also has its place.
  • As of 2004, WWWGrab [L9 ] looks interesting for those in a position to work under Windows.

[It seems many of us roll our own "home-grown" solutions for these needs.]


An alternative to web scraping is to work with the web host on the details of a Web Service that provides the useful information programmatically.

Also, some web hosts provide XML versions (RSS feeds), or specially formatted versions for use with AvantGo or the plucker command, aimed at people who need some specialized format for small devices and the like.
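
Where such a feed exists, scraping can often be avoided entirely. Here is a minimal sketch (assuming an RSS 2.0 feed at a made-up URL) that lists item titles and links using http and tdom:

 package require http
 package require tdom

 # List the title and link of each item in an RSS 2.0 feed.
 proc rss_items url {
     set tok [http::geturl $url]
     set xml [http::data $tok]
     http::cleanup $tok
     set doc [dom parse $xml]
     set items [list]
     foreach item [[$doc documentElement] selectNodes //item] {
         lappend items [$item selectNodes {string(title)}] \
                       [$item selectNodes {string(link)}]
     }
     $doc delete
     return $items
 }

 foreach {title link} [rss_items http://www.example.com/news.rss] {
     puts "$title -> $link"
 }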

It would be great if someone making legitimate use of some of these sources would share some of their code to do this sort of thing.

RS: A little RSS reaper loads an RSS page, renders it to HTML, and also compacts the referenced pages into the same document with local links, while trying to avoid ads and noise. Not perfect, but I use it daily to reap news sites onto my iPaq :)


LV 2007 Nov 1: In a comp.lang.tcl thread [L10 ] of Oct 31, 2007, about extracting data from an HTML page, a user by the name of Ian posted the following snippet of code as an example of how they pull the data of interest out of such a page:

 package require htmlparse
 package require struct

 proc html2data s {
    # Parse the HTML into a struct::tree and drop purely visual markup.
    ::struct::tree x
    ::htmlparse::2tree $s x
    ::htmlparse::removeVisualFluff x

    set data [list]

    # Walk the tree looking for the text node containing "Række/pulje";
    # the enclosing table sits three parent levels above it, and its
    # children after the first are the data rows.
    x walk root q {
        if {([x get $q type] eq "PCDATA") &&
            [string match R\u00e6kke/pulje [x get $q data]]} {

            set p $q
            for {set i 3} {$i} {incr i -1} {set p [x parent $p]}
            foreach row [lrange [x children $p] 1 end] {
                # ...... (handling of each row elided in the original post)
            }
            break
        }
    }
    # Release the tree so html2data can be called more than once.
    x destroy
    return $data
 }
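
For completeness, a typical way to feed html2data (assuming the page of interest is reachable over plain http; the URL is a placeholder) would be:

 package require http

 set tok [http::geturl http://www.example.com/results.html]
 set data [html2data [http::data $tok]]
 http::cleanup $tok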

LemonTree branch uses this technique.


See also screenscrape, download files via http, parallel geturl.


LV This isn't technically web scraping, but I'm uncertain where else to reference it - it makes use of a web site's CGI functionality from a Tk application, from what I can tell: Amazon.de PreOrder. Or even googling with SOAP.


[Mention Beautiful Soup here, or perhaps in the vicinity of htmlparse.]


NEM notes that to be a good web citizen, any web robot should follow some guidelines:

  • set the user-agent to some unique identifier for your program and include some contact details (email, website) so that site admins can contact you if they have any issues (you can do this with http::config -useragent);
  • fetch the /robots.txt file from the server you are scraping and check every URL accessed against it (see [L11 ]).

(There are probably more; feel free to expand this list.) Checking robots.txt is relatively simple but still requires a bit of effort. As I'm currently doing some web scraping, I may package up the code I'm using for this into a webrobot package.
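
A minimal sketch of both points (this is not the webrobot package mentioned above; the bot name, contact details, and URLs are made up, and a real robot would also honour per-agent sections, Allow lines, crawl delays, and cache robots.txt rather than refetch it):

 package require http

 # Identify the robot and give site admins a way to reach its operator.
 http::config -useragent "mybot/1.0 (http://example.com/mybot; mybot@example.com)"

 # Check a path against the Disallow lines of the "User-agent: *"
 # section of the server's robots.txt.
 proc robots_allowed {host path} {
     set tok [http::geturl http://$host/robots.txt]
     set txt [http::data $tok]
     http::cleanup $tok
     set active 0
     foreach line [split $txt \n] {
         set line [string trim $line]
         if {[regexp -nocase {^user-agent:\s*(\S+)} $line -> agent]} {
             set active [expr {$agent eq "*"}]
         } elseif {$active &&
                   [regexp -nocase {^disallow:\s*(\S+)} $line -> prefix]} {
             if {[string first $prefix $path] == 0} {return 0}
         }
     }
     return 1
 }

 if {[robots_allowed www.example.com /some/page.html]} {
     # ... safe to fetch the page ...
 }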


While "An almost perfect real-world hack" [L12 ] rests on Python rather than Tcl, the mention of iMacro is equally apt for us, and Python is essentially equivalent to Tcl for these purposes, anyway.