Question

$html = file_get_html('http://www.livelifedrive.com/');  
echo $html->plaintext;

I've had no problem scraping other websites, but this particular one returns gibberish.
Is it encrypted or something?


Solution

Actually, the gibberish you see is GZIP-compressed content.

When I fetch the content with hurl.it, for instance, here are the headers returned by the server:

GET http://www.livelifedrive.com/malaysia/ (the URL http://www.livelifedrive.com/ resolves to http://www.livelifedrive.com/malaysia/)

Connection: keep-alive
Content-Encoding: gzip  <--- The content is gzipped
Content-Length: 18202
Content-Type: text/html; charset=UTF-8
Date: Tue, 31 Dec 2013 10:35:42 GMT
P3p: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
Server: nginx/1.4.2
Vary: Accept-Encoding,User-Agent
X-Powered-By: PHP/5.2.17

So once you have fetched the content, decompress it. Here is some sample code:

if ( ! function_exists('gzdecode'))
{
    /**
     * Decode gz coded data
     * 
     * http://php.net/manual/en/function.gzdecode.php
     * 
     * Alternative: http://digitalpbk.com/php/file_get_contents-garbled-gzip-encoding-website-scraping
     * 
     * @param string $data gzencoded data
     * @return string inflated data
     */
    function gzdecode($data)
    {
        // Strip the 10-byte gzip header and 8-byte footer, then inflate.
        // Note: this assumes the header has no optional fields (FEXTRA, FNAME, etc.).
        return gzinflate(substr($data, 10, -8));
    }
}
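As a quick sanity check, the fallback can be exercised locally by round-tripping a string through gzencode() — a minimal sketch, assuming the zlib extension is available; no network access needed:

```php
<?php
// Round-trip sanity check for the gzdecode() fallback above.
// Assumes the zlib extension (gzencode/gzinflate) is available.
if ( ! function_exists('gzdecode'))
{
    function gzdecode($data)
    {
        return gzinflate(substr($data, 10, -8));
    }
}

$original = '<html><body>Hello</body></html>';
$gzipped  = gzencode($original);   // simulate a gzipped response body
$restored = gzdecode($gzipped);

var_dump($restored === $original); // bool(true)
```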

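Alternatively, libcurl can negotiate and decompress the encoding for you via CURLOPT_ENCODING. This is a sketch under the assumption that the cURL extension is compiled in; fetch_decoded() is a hypothetical helper name, not part of simple_html_dom:

```php
<?php
// Sketch: let libcurl request and transparently decode gzip/deflate.
// fetch_decoded() is a hypothetical helper, not part of simple_html_dom.
function fetch_decoded($url)
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_ENCODING       => '',   // '' = accept any encoding curl supports; body is decoded for you
        CURLOPT_FOLLOWLOCATION => true, // follow the redirect to /malaysia/
    ]);
    $body = curl_exec($ch);
    curl_close($ch);
    return $body;
}

// Usage (network required):
// $html = str_get_html(fetch_decoded('http://www.livelifedrive.com/'));
```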

OTHER TIPS

There's nothing really like site encryption: if the content can reach your browser as HTML, it can be scraped.

It's probably because the site uses a lot of JavaScript and Flash, which cannot be scraped by an HTML parser. Even Google is only beginning to make inroads into accurately scraping Flash and JavaScript.

To scrape a site in all its browser glory, try Selenium.

Links:

https://code.google.com/p/php-webdriver-bindings/

https://groups.google.com/forum/#!topic/selenium-users/Rj6BYEkz9Q0

A neat tip: to find out what you can scrape with an HTML scraper, try disabling JavaScript and Flash in your browser and loading the website. The content you can still see is easily scrapable; for the rest you have to be a little more clever in your methods.

Maybe the files on their servers aren't saved as UTF-8? I've tried your function on several sites, and sometimes it works (on servers that I know actually save their files as UTF-8, rather than just declaring UTF-8 encoding) and other times it returns gibberish.

Try testing it yourself on your local machine, parsing files saved as UTF-8 and other encodings, and see what comes up...
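To see how a page that isn't stored as UTF-8 can be normalized before parsing, here is a small sketch, assuming the mbstring extension is available:

```php
<?php
// Sketch: detect and normalize content that is not stored as UTF-8.
// Assumes the mbstring extension is available.
$raw = "Caf\xE9"; // "Café" encoded as ISO-8859-1

// Strict detection: "\xE9" alone is not valid UTF-8, so ISO-8859-1 is chosen.
$enc  = mb_detect_encoding($raw, ['UTF-8', 'ISO-8859-1'], true);
$utf8 = mb_convert_encoding($raw, 'UTF-8', $enc);

echo $utf8; // Café
```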

$html->plaintext;

This will give you only the text, but if you need to fetch the HTML then you need to use

$html->innertext;

For more information you can refer to http://simplehtmldom.sourceforge.net/manual.htm
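The same text-versus-markup distinction can be illustrated with PHP's built-in DOMDocument, so the sketch below runs without simple_html_dom itself (textContent plays the role of ->plaintext, and saveHTML() the role of the element's markup):

```php
<?php
// Illustrating text-only vs. markup extraction with the built-in DOM extension.
$doc = new DOMDocument();
$doc->loadHTML('<div id="x"><b>Hello</b> world</div>');
$div = $doc->getElementById('x');

echo $div->textContent;    // Hello world   (text only, like ->plaintext)
echo $doc->saveHTML($div); // the <div>'s full markup, tags included
```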

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow