Python Headless Web Browser Scraping on Amazon Linux (fruchterco.com)
102 points by steven5158 on June 17, 2013 | 39 comments



PhantomJS is brilliant, but Selenium is a questionable choice for this task. For some reason, the creators of Selenium have decided that passing HTTP status codes back through the API is, and always will be, outside the scope of their project. So if you request a page and it returns a 404, you have no way to find out (other than using crude heuristics). This makes Selenium completely unusable for anything I would have used it for.

Fortunately you can do it by using phantomjs directly instead of going through the Selenium WebDriver API. Maybe one day the phantomjs WebDriver API implementation (ghostdriver) will extend the API to pass HTTP status information back to the caller. Until then, this API is unusable (at least for me).


Well, I think the matter is a bit more complicated than that. When dealing with a full browser, you fetch a lot of resources. The status code for the first page fetch may be easily obtained, but your API gets very wonky as soon as you want to get status codes for all linked resources. Even if you managed that, any Ajax requests would complicate things, especially if they have deferred loading. And then you have WebSockets.

There are tools, such as BrowserMob Proxy, far better suited for monitoring HTTP traffic. And they'll get you all the headers. You can even capture to HAR so you can measure performance.
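
A minimal sketch of that setup with the browsermob-proxy Python bindings (the launcher path is a placeholder for wherever your BrowserMob install lives):

  from browsermobproxy import Server
  from selenium import webdriver

  # Start BrowserMob and get a proxy port to route the browser through.
  server = Server("/path/to/browsermob-proxy")  # placeholder path
  server.start()
  proxy = server.create_proxy()

  # Point phantomjs at the proxy so every request flows through it.
  driver = webdriver.PhantomJS(service_args=["--proxy=" + proxy.proxy])

  proxy.new_har("example")
  driver.get("http://example.com/")

  # The HAR has a status code (and headers) for every resource fetched.
  for entry in proxy.har["log"]["entries"]:
      print(entry["response"]["status"], entry["request"]["url"])

  driver.quit()
  server.stop()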


Difficult edge cases are never a good reason not to support the 99.9% case.

Also, phantomjs has access to all the information you want and the WebDriver API already has a capabilities negotiation facility.

[Edit] Don't forget that the original URL is the only one supplied by the client of the API. It may be incorrect for very different reasons than all the other resources included by the page itself. That's why it is justified to treat it as a special case.


These aren't edge cases. They're asked about constantly. Most people are using Selenium because they care about everything on the page. Otherwise, your stdlib HTTP client would be sufficient.

That aside, if PhantomJS already has the info, you can always fetch it with executeScript.

If you do feel that strongly about the status code part though, I'd urge you to comment on the public draft of the W3C spec: http://www.w3.org/TR/webdriver/


From the point of view of simulating actual users, the fact that some random third-party resource on the page failed to load is not particularly relevant. That happens all the time as I browse around the web, and I never have to care about it as long as the site continues to function. So it very much is an edge case compared to the page itself failing to load.


A JavaScript file failing to load will bork most pages. A CSS file or a key image failing to load will cause most people to quit. And an Ajax request failing in a single-page app will render it useless.

But my point of view is from actual Selenium users. This is framed by providing support on the IRC channel, on the mailing lists, triaging the issue tracker, and by interacting with people at SeleniumConf and the local Boston meetup. It's not some fringe use case, and I'm not arguing the point for the sake of arguing it. The original supposition that it's an edge case is not accurate. And sure, the web breaks. That's why people using Selenium would like a way to catch that. And that's a big part of why the BrowserMob Proxy project exists.


"A JavaScript file failing to load will bork most pages. A CSS file failing to load or a key image will cause most people to quit."

Wha?

Sure, if, say, "app.js" fails to load, you have a problem.

But an analytics script?

A 3rd party ad script (which is what the GP gave as an example)?

These things can and do fail all the time.


I believe you can't use execute because any JavaScript you supply runs inside the page. You don't have access to the phantomjs-specific callbacks you need to intercept HTTP traffic.


That's unfortunate. I don't work on PhantomJS, but I can try to track down someone on the team and see if there's a way to attach a handle to window or something.


To follow up to your edit, that may be true in one case. But it's perfectly reasonable to navigate via clicking, anything in the navigate API, JS actions, meta refreshes, and so on. Even in that one case, most people would expect redirects to be followed and basic auth protected pages to submit. Again, all tractable problems, but ones that are likely better handled by an interstitial layer where you can see the entire chain of requests & responses.


Browsers do see the entire chain of requests and responses. All of it. Some browsers make that information available externally. I just don't see why a browser remote control solution like Selenium shouldn't pass on as much of this information as possible.


Phantomjs handles everything you mention (status codes on large numbers of resources, Ajax, deferred loading monitoring and HAR output) with the possible exception of WebSockets - I have not tried it, and there is very little documentation, but it should work. The big limitation is that it is WebKit-only right now.

For example: here's the wiki on network monitoring including HAR: https://github.com/ariya/phantomjs/wiki/Network-Monitoring

The API seems pretty clean to me but I guess that is a matter of opinion.
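
If you'd rather stay in Python, one low-tech option is to shell out to the netsniff.js example that ships with PhantomJS, which prints a HAR document to stdout (the path to the script is a placeholder):

  import json
  import subprocess

  # netsniff.js loads the URL in phantomjs and emits a HAR on stdout.
  raw = subprocess.check_output(
      ["phantomjs", "/path/to/examples/netsniff.js", "http://example.com/"])
  har = json.loads(raw)

  for entry in har["log"]["entries"]:
      print(entry["response"]["status"], entry["request"]["url"])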


You could always write a simple proxy in Python and route all of your traffic through that.

See: http://voorloopnul.com/blog/a-python-proxy-in-less-than-100-...
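
In that spirit, a toy blocking version (HTTP only, no CONNECT/HTTPS, and it naively assumes the whole request arrives in one recv):

  import socket
  import threading

  def handle(client):
      request = client.recv(65536)  # naive: assumes one recv gets it all
      first_line = request.split(b"\r\n")[0]
      print(first_line.decode("ascii", "replace"))  # log the request line
      # Proxy-style request line: GET http://host:port/path HTTP/1.1
      url = first_line.split(b" ")[1]
      host, _, port = url.split(b"/")[2].decode().partition(":")
      upstream = socket.create_connection((host, int(port or 80)))
      upstream.sendall(request)
      while True:  # relay the response back to the browser
          chunk = upstream.recv(65536)
          if not chunk:
              break
          client.sendall(chunk)
      upstream.close()
      client.close()

  listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  listener.bind(("127.0.0.1", 8888))
  listener.listen(5)
  while True:
      conn, _ = listener.accept()
      threading.Thread(target=handle, args=(conn,)).start()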


BrowserMob Proxy is the go-to tool for use with Selenium:

http://bmp.lightbody.net/


That would add quite a lot of complexity to achieve something rather trivial.


Aren't you stuck with JavaScript then? Sure, PhantomJS is awesome, but Python is even in the title, so it's not just a side note.


Yes - unless you are parsing static HTML, you will need the rest of the browser's functionality, which includes a JavaScript engine. You will also need the original content from the website, which will be generated by JavaScript.

In theory you could recreate this in another language such as Python, but you would have to both parse the JavaScript from the website and implement a full browser.


No, phantomjs includes a webserver module. That's what ghostdriver uses to implement the WebDriver API and you can use it to implement a custom API that you call from Python. So you have to use JavaScript to implement the API, but you can use Python to implement your tests or web data extraction or whatever your actual task is.
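
On the Python side it might look like this (the /render endpoint and port are hypothetical - whatever your phantomjs webserver script exposes):

  import requests

  # Assumes a phantomjs script is running that uses the `webserver`
  # module to listen on port 8080 and expose a hypothetical /render
  # endpoint that returns the fully rendered HTML.
  resp = requests.get("http://localhost:8080/render",
                      params={"url": "http://example.com/"})
  resp.raise_for_status()
  html = resp.text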


For anyone using PhantomJS I'd recommend checking out CasperJS (http://casperjs.org/). It adds some nice features to PhantomJS and takes out a lot of the pain points.


I find it preferable to determine the requests that jQuery is making and perform them myself to extract the necessary data, rather than load up a whole browser just to do the same thing.

Selenium is terrible, performance-wise, and requires a significant investment in environment in order to work reliably. I try to avoid it except when I absolutely cannot.
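
For the replay approach, a minimal sketch (the endpoint and parameters are placeholders for whatever you find in the browser's network tab):

  import requests

  # Call the JSON endpoint the page's jQuery code would have hit,
  # instead of driving a browser to do it for us.
  resp = requests.get(
      "http://example.com/api/listings",               # placeholder URL
      params={"page": 1},                              # placeholder params
      headers={"X-Requested-With": "XMLHttpRequest"},  # what $.ajax sends
  )
  resp.raise_for_status()
  for item in resp.json():
      print(item)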


I wound up doing this myself, after spending an undue amount of time struggling with a morass of insanely written JavaScript. Fiddler proved indispensable for observing the actual interaction with the web server.


If you're writing Python and need to do something like this, you could try using Phantompy, a Python port of PhantomJS: https://github.com/niwibe/phantompy

It's still "in an early stage of development" but it's on my list of libraries to keep an eye on for when I have time to tackle the JS-heavy sites of the world.


For scraping, phantomjs or casperjs is the best way to go, but you will have to use some JavaScript [1]. Both give you access to everything a WebKit browser user does, with either a Node-style callback syntax (phantomjs) or a procedural/promises-style syntax (casperjs). Easy to set up, simple to use and fast enough for scraping, but WebKit only (for now).

For testing on browsers other than WebKit (or vendor-specific WebKit edge cases) use Selenium. Harder to set up, more complex, probably faster (still slow for testing) but not limited to WebKit.

[1] Sorry folks, but some JavaScript is required to programmatically interact with the web - you'll also need some HTML and CSS.


One more thing: has anyone used BeautifulSoup in forever? Is the project still active? I mean, the website is cute and all, but I find pyquery (also based on lxml) so much easier for parsing the scraped data.


I'd consider it still active, since it was updated on 2013-06-07: https://pypi.python.org/pypi/beautifulsoup4

I prefer using lxml myself, since I like using XPath queries, but bs4 sometimes parses broken HTML better than any of the provided lxml parsers do.
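
A quick illustration of both on the same malformed snippet:

  from bs4 import BeautifulSoup
  import lxml.html

  broken = "<ul><li>one<li>two</p></ul>"  # unclosed li, stray </p>

  # lxml: fast, and XPath is available.
  tree = lxml.html.fromstring(broken)
  print(tree.xpath("//li/text()"))  # ['one', 'two']

  # bs4: pick a parser; html5lib recovers the way a browser would.
  soup = BeautifulSoup(broken, "html5lib")
  print([li.get_text() for li in soup.find_all("li")])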


Something to consider is that the trend over the past year has been to use headless browsers over BeautifulSoup, cURL, etc., because headless browsers are harder to detect by anti-scraping systems and can interpret JavaScript.


That's what the OP is about ;-). But BeautifulSoup isn't a way to retrieve a web page, it's a way to parse HTML. You can get the page with a headless browser, and then transfer the DOM into a BeautifulSoup tree to do your scraping.
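
Something like this, assuming selenium's PhantomJS driver and bs4 are installed:

  from bs4 import BeautifulSoup
  from selenium import webdriver

  driver = webdriver.PhantomJS()  # headless fetch, JavaScript executed
  driver.get("http://example.com/")

  # Hand the rendered DOM to BeautifulSoup for the actual scraping.
  soup = BeautifulSoup(driver.page_source)
  driver.quit()

  print(soup.title.string)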


BS4, which is still actively developed, got out of the parser game - it can now use lxml (fast) or html5lib (highly tolerant) to parse the HTML. It's kept the convenient interface to dig into the DOM, and it's kept the UnicodeDammit encoding detection system.


I recently tried to get back into Selenium for a work-related project and, despite its frustrations, it is one of my favorite open source gems I've found in the last several years. When I showed it to uninitiated web devs, their heads almost exploded from joy and amazement. Your setup with Selenium intrigued me, since the pain point for me has become how difficult it is to maneuver some browsers with Selenium IDE to throw together ideas, if that is even encouraged anymore.


You are installing some devel packages, but I don't see anything compiling? Does the selenium installation build native extensions? Then the commands should probably be the other way round. Or is phantomjs compiling something on the first run?

Minor nitpick: I don't think it is a good idea to copy a binary directly to /usr/bin, without a package manager. You could just put it into /opt and symlink to /usr/(local/)bin.


The file that he is fetching (phantomjs-1.9.1-linux-x86_64.tar.bz2) contains the executables for his platform, with some examples on usage and a readme.


That doesn't seem like a very safe thing to do... don't they have source code for PhantomJS one can checksum and run ./configure > make > sudo make install?
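
Failing that, at least verify the tarball against a published checksum before unpacking it - a minimal sketch, with a placeholder digest:

  import hashlib

  EXPECTED = "0123...abcd"  # placeholder - use the project's published sha256

  h = hashlib.sha256()
  with open("phantomjs-1.9.1-linux-x86_64.tar.bz2", "rb") as f:
      for chunk in iter(lambda: f.read(1 << 20), b""):
          h.update(chunk)

  assert h.hexdigest() == EXPECTED, "checksum mismatch - do not install"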


PhantomJS is pretty big. IIRC, building it takes quite some time. I think they bundle WebKit and the necessary parts of Qt, and you'd have to be out of your mind to build that from source if you can avoid it.

Using official distribution packages would be a better idea, but their freshness can vary, especially on RHEL.


Fair enough if you want to get straight to doing what you were planning to do.


Off topic: it is perfectly fine to install things like PyQt / PySide on a headless server. I suppose the problem is that the distro doesn't provide these packages?

Also, PhantomJS works fine in this case because the binary in the tarball is statically compiled. You can find a whole lot of Qt stuff inside the PhantomJS source repository. There ain't no such thing as "truly headless".


Wow, I was searching for something similar. I was trying to build an app which scrapes data from movie ticket booking sites and tells the user via SMS whether tickets are still available, because not everyone has access to the internet in India yet.

@Steven5158 thanks for the share.

If anyone here wants help in building SMS apps do contact me.


We do quite a bit of web scraping / parsing on headless servers with Selenium. What we did was just install some X packages and run a VNC server on the headless clients with Firefox. The cool thing about that is you can then go watch the scripts executing if you connect to the VNC session, and take a screenshot on failure, etc.


Brilliant! I've been using Xvfb for headless operation, didn't even consider using VNC.
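
If you want the Xvfb route without shell plumbing, the pyvirtualdisplay package (assuming you have it and Xvfb installed) wraps it neatly:

  from pyvirtualdisplay import Display
  from selenium import webdriver

  display = Display(visible=0, size=(1280, 800))  # backed by Xvfb
  display.start()

  driver = webdriver.Firefox()  # a real Firefox, inside the virtual display
  driver.get("http://example.com/")
  print(driver.title)

  driver.quit()
  display.stop()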


I am under the assumption that python-requests would have the same issue - it does not render the page, it only retrieves the original page response.

Very, very good to know when diving into scraping.
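
To make the distinction concrete:

  import requests

  # Only the HTML the server sent comes back; anything the page's
  # JavaScript would add afterwards (Ajax results, templated DOM
  # nodes) is absent from this string.
  html = requests.get("http://example.com/").text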



