Question

After two days of struggling, my last hope is you. I'm trying to download a large (roughly 160 MB) XML file from the Zanox servers. The download link is dynamic and does not point directly to the file itself. I'm trying to download the file to my own server so I can parse it, but it isn't working for me. I've been using cURL with CURLOPT_HEADER set to 0. Can you help me out?

Regards.

One of the snippets I tried:

$fp = fopen("productfeed1.xml", 'w+');
$c = curl_init($url);
curl_setopt($c, CURLOPT_FILE, $fp);  
curl_setopt($c, CURLOPT_HEADER, 0);
curl_setopt($c, CURLOPT_FOLLOWLOCATION, 1);  
$contents = curl_exec($c);
$info = curl_getinfo($c);
fwrite($fp, $contents);
curl_close($c); 
fclose($fp);

Solution

Give this a try (it works for me):

$fp = fopen('productfeed1.xml', 'w+');            // open the target file first
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1");
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_FILE, $fp);              // stream the body straight to the file, not into memory
curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);   // no-op since PHP 5.1.3, kept for older versions
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);   // follow the dynamic redirect to the real feed URL
curl_setopt($ch, CURLOPT_MAXREDIRS, 10);
curl_setopt($ch, CURLOPT_TIMEOUT, 36000);         // allow plenty of time for a ~160 MB download
curl_exec($ch);
curl_close($ch);
fclose($fp);
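For completeness, here is a self-contained sketch of the same streaming pattern with basic error checking. The file:// URL and temp-file names are placeholders for illustration only; in practice you would use the real feed URL:

```php
<?php
// Sketch: stream a cURL response straight to disk with CURLOPT_FILE,
// then verify the transfer succeeded before using the file.
// The file:// source stands in for the real (redirecting) feed URL.
$src = tempnam(sys_get_temp_dir(), 'feed');
file_put_contents($src, "<products><item>1</item></products>");

$dest = tempnam(sys_get_temp_dir(), 'out');
$fp = fopen($dest, 'w');               // CURLOPT_FILE needs an open write handle
$ch = curl_init('file://' . $src);     // placeholder URL for this demo
curl_setopt($ch, CURLOPT_FILE, $fp);   // body goes directly to $fp, not into memory
curl_setopt($ch, CURLOPT_HEADER, 0);

$ok = curl_exec($ch);                  // returns true/false when CURLOPT_FILE is set
if (!$ok) {
    fwrite(STDERR, 'cURL error: ' . curl_error($ch) . "\n");
}
curl_close($ch);
fclose($fp);                           // close before reading so buffers are flushed

echo file_get_contents($dest);
```

Note that when CURLOPT_FILE is set, curl_exec() returns a boolean, not the body, so there is nothing to fwrite() afterwards; the data is already on disk.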
License: CC BY-SA with attribution