This will do what you want if your cURL result is an HTML page and you only want links (and not images or other clickable elements).
$xml = new DOMDocument();
// suppress parse warnings caused by imperfect real-world HTML
libxml_use_internal_errors(true);
// $html should be your cURL result
$xml->loadHTML($html);
// or you can load the requested page directly by providing its URL to loadHTMLFile
// $xml->loadHTMLFile("http://...");
// this array will contain all links
$links = array();
// loop through all "a" elements
foreach ($xml->getElementsByTagName("a") as $link) {
    // URL-encode the link's URL and add it to the $links array
    $links[] = urlencode($link->getAttribute("href"));
}
// now do whatever you want with that array
// now do whatever you want with that array
The $links array will contain all the links found in the page in URL-encoded format.
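If you want to try the extraction without a live cURL request, here is a minimal self-contained sketch; the inline markup below is made up for illustration and simply stands in for your cURL result:

```php
<?php
// hypothetical sample markup standing in for the cURL result
$html = '<html><body>'
      . '<a href="http://example.com/a b">first</a>'
      . '<a href="/page?x=1&amp;y=2">second</a>'
      . '</body></html>';

$xml = new DOMDocument();
// suppress parse warnings caused by imperfect real-world HTML
libxml_use_internal_errors(true);
$xml->loadHTML($html);

$links = array();
foreach ($xml->getElementsByTagName("a") as $link) {
    $links[] = urlencode($link->getAttribute("href"));
}

// each href comes back fully URL-encoded (slashes, spaces, query characters)
print_r($links);
```

Note that urlencode() encodes the entire string, including the scheme and slashes, and turns spaces into "+" signs.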
Edit: if you instead want to replace every link in the page while keeping everything else intact, it's better to use DOMDocument than regular expressions (related: why you shouldn't use regex to handle HTML). Here's an edited version of my code that replaces every link with its URL-encoded equivalent and then saves the page into a variable:
$xml = new DOMDocument();
// suppress parse warnings caused by imperfect real-world HTML
libxml_use_internal_errors(true);
// $html should be your cURL result
$xml->loadHTML($html);
// loop through all "a" elements
foreach ($xml->getElementsByTagName("a") as $link) {
    // get the original (non-URL-encoded) link
    $original = $link->getAttribute("href");
    // set the new link to its URL-encoded form
    $link->setAttribute("href", urlencode($original));
}
// save modified page to a variable
$page = $xml->saveHTML();
// now do whatever you want with that modified page, for example you can "echo" it
echo $page;
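As with the first snippet, you can test the replacement version on an inline string (again, the markup below is made up for illustration). One thing to be aware of: loadHTML()/saveHTML() will wrap a bare fragment in a full document, so the output may gain a doctype and html/body tags:

```php
<?php
// hypothetical fragment standing in for the cURL result
$html = '<p><a href="/page?x=1">link</a></p>';

$xml = new DOMDocument();
// suppress parse warnings caused by imperfect real-world HTML
libxml_use_internal_errors(true);
$xml->loadHTML($html);

foreach ($xml->getElementsByTagName("a") as $link) {
    $original = $link->getAttribute("href");
    $link->setAttribute("href", urlencode($original));
}

// the saved page now contains the URL-encoded href
$page = $xml->saveHTML();
echo $page;
```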
Code based on this.