How do you Screen Scrape? [closed]
When there is no web-service API available, your only option might be to screen scrape, but how do you do it in C#? How would you go about it?
Matt and Paul's answers are correct. "Screen scraping" by parsing the HTML from a website is usually a bad idea because:
Parsing HTML can be difficult, especially if it's malformed. If you're scraping a very, very simple page then regular expressions might work. Otherwise, use a parsing framework like the HTML Agility Pack.
Websites are a moving target. You'll need to update your code each time the source website changes their markup structure.
Screen scraping doesn't play well with Javascript. If the target website is using any sort of dynamic script to manipulate the webpage you're going to have a very hard time scraping it. It's easy to grab the HTTP response, it's a lot harder to scrape what the browser displays in response to client-side script contained in that response.
If screen scraping is the only option, here are some keys to success:
Make it as easy as possible to change the patterns you look for. If possible, store the patterns as text files or in a resource file somewhere. Make it very easy for other developers (or yourself in 3 months) to understand what markup you expect to find.
Validate input and throw meaningful exceptions. In your parsing code, take care to make your exceptions very helpful. The target site will change on you, and when that happens you want your error messages to tell you not only what part of the code failed, but why it failed. Mention both the pattern you're looking for AND the text you're comparing against.
Write lots of automated tests. You want it to be very easy to run your scraper in a non-destructive fashion because you will be doing a lot of iterative development to get the patterns right. Automate as much testing as you can, it will pay off in the long run.
Consider a browser automation tool like Watin. If you require complex interactions with the target website it might be easier to write your scraper from the point of view of the browser itself, rather than mucking with the HTTP requests and responses by hand.
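To make the first two points above concrete, here is a minimal sketch. Everything in it is invented for illustration — the `ExtractPrice` method, the pattern file name, and the markup — but it shows the shape: the pattern lives in an external text file so it can be changed without recompiling, and a failed match throws an exception that reports both the pattern and the text it was compared against.

```csharp
using System;
using System.IO;
using System.Text.RegularExpressions;

public static class Scraper
{
    // Hypothetical helper: the regex pattern is loaded from a text file so it
    // can be updated when the target site changes its markup, without a rebuild.
    public static string ExtractPrice(string html, string patternFile)
    {
        string pattern = File.ReadAllText(patternFile).Trim();
        Match match = Regex.Match(html, pattern);
        if (!match.Success)
        {
            // Report BOTH the pattern and the input it was compared against,
            // so that when the site changes you can see why the match failed.
            throw new FormatException(
                "Pattern not found. Pattern: '" + pattern +
                "'. Input began with: '" +
                html.Substring(0, Math.Min(100, html.Length)) + "'");
        }
        return match.Groups[1].Value;
    }
}
```

Because the happy path and the failure path are both explicit, this kind of helper is also easy to cover with the automated tests recommended above.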
As for how to screen scrape in C#, you can either use Watin (see above) and scrape the resulting document using its DOM, or you can use the WebClient class [see MSDN or Google] to get the raw HTTP response, including the HTML content, and then use some sort of text-based analysis to extract the data you want.
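A sketch of the WebClient route, with the caveat that the URL is a placeholder and the regex works only because extracting a <title> is about the simplest possible case — see the warnings about regular expressions above:

```csharp
using System;
using System.Net;
using System.Text.RegularExpressions;

class WebClientScrape
{
    static void Main()
    {
        using (WebClient client = new WebClient())
        {
            // Download the raw HTML of the page (URL is a placeholder).
            string html = client.DownloadString("http://example.com/");

            // Text-based analysis: pull the page title out of the markup.
            Match m = Regex.Match(html, @"<title>\s*(.*?)\s*</title>",
                RegexOptions.IgnoreCase | RegexOptions.Singleline);
            if (m.Success)
                Console.WriteLine(m.Groups[1].Value);
        }
    }
}
```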
Use the Html Agility Pack. It handles poorly formed and malformed HTML. It lets you query with XPath, making it very easy to find the data you're looking for. DON'T write a parser by hand and DON'T use regular expressions, it's just too clumsy.
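A minimal sketch of that approach (it assumes the HtmlAgilityPack NuGet package is installed; the markup and XPath are invented for illustration). Note the deliberately sloppy input — an unquoted attribute and a missing closing tag — which the pack tolerates:

```csharp
using System;
using HtmlAgilityPack;

class AgilityPackDemo
{
    static void Main()
    {
        // Deliberately sloppy HTML: unquoted attribute, no closing </ul>.
        string html = "<ul id=products><li>Widget</li><li>Gadget</li>";

        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(html);

        // Query with XPath instead of hand-rolled parsing or regexes.
        foreach (HtmlNode node in
                 doc.DocumentNode.SelectNodes("//ul[@id='products']/li"))
        {
            Console.WriteLine(node.InnerText);
        }
    }
}
```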
The term you're looking for is actually called Screen Scraping.
One thing you have to consider about scraping web sites is that they are beyond your control and can change frequently and significantly. If you do go with scraping, the fact of change ought to be part of your overall strategy; e.g., you will need to update your code sooner or later to deal with a "moving target."
Here are a couple of C# links to get you started:
http://www.cambiaresearch.com/c4/3ee4f5fc-0545-4360-9bc7-5824f840a28c/How-to-scrape-or-download-a-webpage-using-csharp.aspx
Here is some sample C# code that may help you:
Uri url = new Uri("http://msdn.microsoft.com/en-US/");
if (url.Scheme == Uri.UriSchemeHttp)
{
    // Create the request object
    HttpWebRequest objRequest = (HttpWebRequest)WebRequest.Create(url);

    // Set the request method
    objRequest.Method = WebRequestMethods.Http.Get;

    // Get the response from the requested URL and read it with a stream
    // reader; the using blocks ensure both are disposed afterwards
    using (HttpWebResponse objResponse = (HttpWebResponse)objRequest.GetResponse())
    using (StreamReader reader = new StreamReader(objResponse.GetResponseStream()))
    {
        string tmp = reader.ReadToEnd();

        // Set the response data to the container
        this.pnlScreen.GroupingText = tmp;
    }
}