Question

I'm trying to get the response of a curl call into a variable in perl.

my $foo = `curl yadd yadda`;

print $foo;

does not work. When I run this at the command line the curl call prints all its output correctly in the terminal, but the variable is not filled with that data.

Is there a way to do this without installing and calling the Perl curl lib?

Solution

curl probably sends its progress output to stderr, which backticks do not capture. Try

my $foo = `curl yadd yadda 2>&1`;

OTHER TIPS

You also might consider looking at LWP::UserAgent or even LWP::Simple.

What do you really want to do? Use curl at all costs, or grab the contents of a web page?

A more perlish way of doing this (which relies on no external programs that may or may not be installed on the next machine where you need to do this) would be:

use LWP::Simple;

my $content = get("http://stackoverflow.com/questions/1015438/")
   or die "no such luck\n";

If you want to see why the GET failed, or grab multiple pages from the same site, you'll need to use a bit more machinery. perldoc lwpcook will get you started.
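As a sketch of that extra machinery, here is one way to do it with LWP::UserAgent; the timeout value is arbitrary, and this assumes the LWP modules are installed:

```perl
use strict;
use warnings;
use LWP::UserAgent;

# Build a user agent; the timeout is just an illustrative value.
my $ua = LWP::UserAgent->new( timeout => 10 );

my $response = $ua->get('http://stackoverflow.com/questions/1015438/');

if ( $response->is_success ) {
    my $content = $response->decoded_content;    # page body as a string
    print length($content), " bytes fetched\n";
}
else {
    # status_line tells you why the GET failed, e.g. "404 Not Found"
    die "GET failed: " . $response->status_line . "\n";
}
```

Unlike LWP::Simple's get(), which returns undef on any failure, the response object lets you distinguish a 404 from a timeout from a redirect loop.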

In the shell, 2> means redirect fileno 2. Fileno 2 is always what a program sees as stderr. Similarly, fileno 0 is stdin and fileno 1 is stdout. So when you say 2>&1 you are telling the shell to redirect stderr (fileno 2) into stdout (fileno 1). Since the backticks operator uses the shell to run the command you specify, you can use shell redirection, so

my $foo = `curl yadda yadda 2>&1`;

tells the shell to redirect curl's stderr into stdout, and since the backtick operator captures stdout, you get what you were looking for.
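A small, self-contained sketch of that capture (it runs $^X, the current perl binary, in place of curl so it works without network access); after backticks return, $? holds the child's status, and $? >> 8 recovers its exit code:

```perl
use strict;
use warnings;

# Run a child that writes to both stdout and stderr, with stderr
# redirected into stdout so the backticks capture both streams.
my $cmd    = qq{$^X -e "print STDOUT 'out'; print STDERR 'err'" 2>&1};
my $output = `$cmd`;

my $exit_code = $? >> 8;    # high byte of $? is the child's exit code
print "captured: $output (exit $exit_code)\n";
```

Without the 2>&1, $output would contain only 'out'; the 'err' text would go straight to the terminal.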

Try this:

$var = `curl "http://localhost" 2>/dev/null`;
print length($var);

curl displays progress information on stderr; redirecting that to /dev/null makes it easier to see what's going on.

This works on my system:

#!/usr/bin/perl

use strict;
use warnings;

my $output = `curl www.unur.com`;

print $output;

__END__

C:\> z1

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd"><html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

etc.

You can open a pipe as if it were a file.

$url = "\"http://download.finance.yahoo.com/d/quotes.csv?s=" . 
"$symbol&f=sl1d1t1c1ohgvper&e=.csv\"";

open CURL, "curl -s $url |" or die "single_stock_quote: Can't open curl $!\n";
$line = <CURL>;
close CURL;
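On systems that support it, the same pipe can be opened with the three-argument, list form of open, which uses a lexical filehandle and passes the arguments straight to curl without involving the shell, so the escaped quotes go away. The ticker symbol here is just an illustrative value; the URL is the same hypothetical quote URL as above:

```perl
use strict;
use warnings;

my $symbol = 'IBM';    # hypothetical ticker for illustration
my $url    = "http://download.finance.yahoo.com/d/quotes.csv?s="
           . "$symbol&f=sl1d1t1c1ohgvper&e=.csv";

# List form: each argument goes to curl as-is, no shell quoting needed.
open my $curl, '-|', 'curl', '-s', $url
    or die "single_stock_quote: Can't open curl: $!\n";
my $line = <$curl>;
close $curl;
```

The list form also sidesteps any shell metacharacters that might sneak into $url.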

It might be that some of the output you want to capture is on standard error, not standard output. Note that system won't help here: it returns the child's exit status, not its output. Use backticks with redirection instead:

my $foo = `curl http://www.stackoverflow.com 2>&1`;
print $foo;
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow