Web scraping

Updated 2013-04-28 01:15:39 by LVwikignome

Summary

Web scraping is the practice of getting information from a web page and reformatting it.

Description

[clt postings from jooky, David Welton, and Larry Virden.]

When a service API is not available, sometimes the only recourse is to scrape the web interface instead. An alternative is to work with the web host to arrange a Web Service that provides the useful information programmatically.

Web scraping is often employed for small tasks where no API is available, such as sending updates to a pager or WAP phone, emailing one's account status to oneself, or moving data from a website into a local database.
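
At its simplest this means fetching a page with the http package and pulling the interesting bits out of the raw HTML. A minimal sketch of the idea - the URL, the pattern, and the "balance" field are placeholders for whatever the page of interest actually contains:

package require http

# Fetch the page and keep the raw HTML.
set tok  [http::geturl http://example.com/status]
set html [http::data $tok]
http::cleanup $tok

# Pull one value out of the markup with a regular expression.
if {[regexp {<span id="balance">([^<]+)</span>} $html -> balance]} {
    puts "Current balance: $balance"
}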

See projects like http://sitescooper.org/ , http://www.plkr.org/ , or Python-based mechanize [L1 ] (itself a descendant of Andy Lester's WWW::Mechanize for Perl) for non-Tcl tools for scraping the web.

"Web Scraping ..." [L2 ] (URL not 404 - instead links to a generic landing page on Aug 28, 2011)

Etiquette

NEM notes that to be a good web citizen, any web robot should follow some guidelines:

  • set the user-agent to some unique identifier for your program and include some contact details (email, website) so that site admins can contact you if they have any issues (you can do this with http::config -useragent);
  • fetch the /robots.txt file from the server you are scraping and check every URL accessed against it (see [L3 ]).

(probably more - feel free to expand this list). Checking robots.txt is relatively simple but still requires a bit of effort. As I'm currently doing some web scraping, I may package up the code I'm using for this into a webrobot package.
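
A minimal sketch of both points using the http package. The contact details are placeholders, and the robots.txt handling here is deliberately naive (it honours every Disallow line regardless of which user agent it applies to); a real robot should use a proper robots.txt parser:

package require http

# Identify the robot and give site admins a way to reach you.
http::config -useragent "mybot/1.0 (+mailto:me@example.com)"

# Naive check: returns 1 if any Disallow line in robots.txt covers $path.
proc robots_disallows {host path} {
    set tok [http::geturl http://$host/robots.txt]
    set rules [http::data $tok]
    http::cleanup $tok
    foreach line [split $rules \n] {
        if {[regexp {^Disallow:\s*(\S+)} $line -> prefix]
                && [string match ${prefix}* $path]} {
            return 1
        }
    }
    return 0
}

if {![robots_disallows example.com /some/page.html]} {
    set tok [http::geturl http://example.com/some/page.html]
    # ... process [http::data $tok] here ...
    http::cleanup $tok
}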

Examples

Downloading your utility usage from Pacific Gas and Electric using TCL
Scraping timeentry.kforce.com
an example of scraping a password-protected website over ssl
Downloading pictures from Flickr
A little rain forecaster
An HTTP Robot in Tcl
Web2Desktop
tcllib/examples/oscon: uses the htmlparse module (among others) to parse the schedule pages for OSCON 2001 and convert them into CSV files usable by Excel and other applications. LemonTree branch uses the same technique.
tDOM's HTML parser + XPath expressions (see the sketch after this list)
a web scraping example for eBay, presented at the First European Tcl/Tk Users Meeting ( http://sdf.lonestar.org/~loewerj/tdom2001.ppt and http://www.tu-harburg.de/skf/tcltk/tclum2001.pdf.gz )
TcLeo
allows querying the English <=> German web dictionary at http://dict.leo.org from the command line.
ucnetgrab
Getting stock quotes over the internet
TaeglicherStrahlungsBericht
A daily updated radiation map of the German government's (BfS) sensor installations
wiki-reaper (and wish-reaper)
tinyurl
Amazon.de PreOrder
LV This isn't technically web scraping, but I'm uncertain where else to reference it - it makes use of a web site's CGI functionality from a Tk application, from what I can tell.
Synchronizing System Time
   [googling with SOAP]
Daily Dilbert
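
Several of the examples above rely on tDOM's HTML parser combined with XPath. A minimal sketch of the pattern - the URL and the XPath expression are placeholders to be adapted to the page at hand:

package require http
package require tdom

set tok  [http::geturl http://example.com/schedule.html]
set html [http::data $tok]
http::cleanup $tok

# tDOM's -html mode is tolerant of the tag soup found on real pages.
set doc  [dom parse -html $html]
set root [$doc documentElement]

# Pull the text of every table cell; adjust the XPath for the target page.
foreach node [$root selectNodes {//table//td}] {
    puts [$node text]
}
$doc delete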

Tools

htmlparse
tclwebtest
a tool to write automated tests for web applications
TWiG
a tool for extracting blocks of data from pages retrieved from the web
Getleft
pop3line
fetches e-mails from the web-mailer of T-Online.
A little RSS reaper
RS: loads an RSS page, renders it to HTML, and compacts the referenced pages into the same document with local links, while trying to avoid ads and noise.

Not perfect, but I use it daily to reap news sites onto my iPaq :)

LemonTree branch
a GUI to browse an HTML document
grabchat
a tool to grab yesterday's Tcler's Wiki chat room log
AutoIt
a simple tool to simulate key presses, mouse movements, and window commands

Non-Tcl Tools

Beautiful Soup
A web scraping library for Python
iMacros for Firefox
spidermonkey
which Kaitzschu has suggested deserves Tcl bindings.

See Also

screenscrape
websearch
TkChat
is one Tk/Tcl web scraper for this web site's chat room!
lapecheronza
monitors your favorite websites for updates and changes.
Tutorial for Web scraping with regexp and tdom
web crawler

Resources

  • Web scraping is easy (URL was 404 as of Aug 28, 2011)
  • Apt comments on the technical and business difficulty of Web scraping, along with mentions of WebL and NQL, appear here [L4 ].
  • Perl probably has the most current web-scraping activity (even more than tclwebtest?), especially with WWW::Mechanize, although curl also has its place.
  • As of 2004, WWWGrab [L5 ] looked interesting for those in a position to work under Windows.
An almost perfect real-world hack
in which Louis Brandy describes using Python and iMacros to do some web scraping

Example by Ian, 2007, comp.lang.tcl

LV 2007-11-01: On comp.lang.tcl, during Oct 31, 2007, in a thread [L6 ] about extracting data from an HTML page, a user by the name of Ian posted the following snippet of code as an example of how they deal with a page of HTML that has some data of interest in it:

package require htmlparse
package require struct

proc html2data s {
   # Parse the HTML into a struct::tree and strip purely visual markup.
   ::struct::tree x
   ::htmlparse::2tree $s x
   ::htmlparse::removeVisualFluff x

   set data [list]

   # Walk the tree looking for the text node that marks the table of interest.
   x walk root q {
       if {([x get $q type] eq "PCDATA") &&
           [string match R\u00e6kke/pulje [x get $q data]]} {

           # Climb three levels up to the node enclosing the whole table.
           set p $q
           for {set i 3} {$i} {incr i -1} {set p [x parent $p]}
           # Process each row in turn, skipping the header row.
           foreach {row} [lrange [x children $p] 1 end] {

           ......
           }
           break
       }
   }
   return $data
}

LemonTree branch uses this technique.

Misc

[It seems many of us write our own "home-grown" solutions for these needs.]

Also, some web hosts provide XML versions (RSS), or versions specially formatted for use with AvantGo or the plucker command, to aid people who need a specialized format for small devices, etc.
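
Where a site does offer an RSS feed, parsing it is usually easier and more robust than scraping the HTML. A minimal sketch using tDOM, with a placeholder feed URL:

package require http
package require tdom

set tok [http::geturl http://example.com/news.rss]
set xml [http::data $tok]
http::cleanup $tok

set doc [dom parse $xml]
# RSS 2.0 keeps each story in an <item> element with <title> and <link> children.
foreach item [$doc selectNodes //item] {
    set title [lindex [$item selectNodes title] 0]
    set link  [lindex [$item selectNodes link]  0]
    puts "[$title text] - [$link text]"
}
$doc delete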

It would be great if someone making legitimate use of some of these sources would share some of their code to do this sort of thing.