Question

I hope this question is not an RTFM one. I am trying to write a Python script that extracts links from a standard HTML webpage (the <link href... tags). I have searched the web for matching regexes and found many different patterns. Is there any agreed, standard regex to match links?

Adam

UPDATE: I am actually looking for two different answers:

  1. What's the library solution for parsing HTML links? Beautiful Soup seems to be a good one (thanks, Igal Serban and cletus!)
  2. Can a link be defined using a regex?

Solution

As others have suggested, if real-time-like performance isn't necessary, BeautifulSoup is a good solution:

import urllib2                            # Python 2 standard library
from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3 import path

html = urllib2.urlopen("http://www.google.com").read()
soup = BeautifulSoup(html)
all_links = soup.findAll("a")   # every <a> tag in the document
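Note that the snippet above is Python 2 / BeautifulSoup 3 syntax. A rough modern equivalent (my sketch, assuming Python 3 and the `beautifulsoup4` package; it parses a literal document here so it stays self-contained and offline):

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = """<html><body>
<a href="http://example.com">example</a>
<link rel="stylesheet" href="style.css">
</body></html>"""

soup = BeautifulSoup(html, "html.parser")  # stdlib parser backend
all_links = soup.find_all("a")             # findAll() was renamed find_all()
hrefs = [tag.get("href") for tag in soup.find_all(["a", "link"])]
print(hrefs)
```

To fetch a live page instead of a literal string, `urllib.request.urlopen(...).read()` replaces the old `urllib2` call.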

As for the second question: yes, HTML links ought to be well-defined, but the HTML you actually encounter is very unlikely to be standard-conformant. The beauty of BeautifulSoup is that it uses browser-like heuristics to parse the non-standard, malformed HTML you are likely to come across in practice.

If you are certain you will only be working with standard XHTML, you can use (much) faster XML parsers like expat.
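As a sketch of that approach, the stdlib `xml.etree.ElementTree` (whose default parser is built on expat) handles a toy well-formed document like this; note that real XHTML usually declares a namespace that would prefix every tag name, which I've omitted here for brevity:

```python
import xml.etree.ElementTree as ET

xhtml = ('<html><head><link rel="stylesheet" href="style.css"/></head>'
         '<body><a href="http://example.com">example</a></body></html>')

root = ET.fromstring(xhtml)  # raises ParseError on malformed input
# Collect the href attribute from every element that carries one.
hrefs = [el.get("href") for el in root.iter() if el.get("href") is not None]
print(hrefs)
```

The flip side is the strictness: feed this parser tag soup and it raises an exception rather than guessing, which is exactly why it can be so much faster than BeautifulSoup.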

For the reasons above (a parser must maintain state, and a regex cannot), regex will never be a general solution.

OTHER TIPS

Regexes with HTML get messy. Just use a DOM parser like Beautiful Soup.

No there isn't.

You can consider using Beautiful Soup. You could call it the de facto standard for parsing HTML files.

Shouldn't a link be a well-defined regex?

No, [X]HTML is not in the general case parseable with regex. Consider examples like:

<link title='hello">world' href="x">link</link>
<!-- <link href="x">not a link</link> -->
<![CDATA[ ><link href="x">not a link</link> ]]>
<script>document.write('<link href="x">not a link</link>')</script>

and that's just a few random valid examples; if you have to cope with real-world tag-soup HTML there are a million malformed possibilities.
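To see one of these failure modes concretely, here is a naive pattern (my own strawman, not one from the answers) happily "finding" a link inside a comment:

```python
import re

html = '''<link href="real.css">
<!-- <link href="commented-out.css"> -->'''

naive = re.compile(r'<link[^>]*href="([^"]*)"')
# The regex has no notion of comment context, so the second
# (commented-out) link is matched just like the real one.
print(naive.findall(html))
```

Handling comments, CDATA, and scripts correctly would require tracking context across the document, which is precisely what a parser does and a lone regex does not.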

If you know and can rely on the exact output format of the target page you can get away with regex. Otherwise it is completely the wrong choice for scraping web pages.

Shouldn't a link be a well-defined regex? This is a rather theoretical question.

I second PEZ's answer:

I don't think HTML lends itself to "well defined" regular expressions since it's not a regular language.

As far as I know, any HTML tag may contain any number of nested tags. For example:

<a href="http://stackoverflow.com">stackoverflow</a>
<a href="http://stackoverflow.com"><i>stackoverflow</i></a>
<a href="http://stackoverflow.com"><b><i>stackoverflow</i></b></a>
...

Thus, in principle, to match a tag properly you must at least be able to match strings of the form:

BE
BBEE
BBBEEE
...
BBBBBBBBBBEEEEEEEEEE
...

where B means the beginning of a tag and E means the end. That is, you must be able to match strings formed by any number of B's followed by the same number of E's. To do that, your matcher must be able to "count", and regular expressions (i.e. finite state automata) simply cannot do that (in order to count, an automaton needs at least a stack). Referring to PEZ's answer, HTML is (at least) context-free, not a regular language.
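A quick way to convince yourself (a sketch of the argument, not a formal proof, which would use the pumping lemma): the best a regex can do for this language is something like `B+E+`, and that accepts unbalanced strings too.

```python
import re

some_bs_then_es = re.compile(r'^B+E+$')  # any B's followed by any E's

# Balanced strings match...
assert some_bs_then_es.match('BBBEEE') is not None
# ...but so do unbalanced ones: the pattern cannot require equal counts.
assert some_bs_then_es.match('BBBBBE') is not None
# It only rules out the wrong *order*, not the wrong *count*.
assert some_bs_then_es.match('EB') is None
```

Any regex you write will either reject some balanced strings or accept some unbalanced ones, because equality of counts is not expressible with finite state.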

It depends a bit on how the HTML is produced. If it's somewhat controlled you can get away with:

re.findall(r'''<link\s+.*?href=['"](.*?)['"].*?(?:</link|/)>''', html, re.I)
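For instance, on a controlled document that pattern behaves reasonably (this demo input is my own, covering both the self-closing and the explicitly closed form):

```python
import re

html = ('<LINK REL="stylesheet" HREF="main.css"/>\n'
        "<link href='extra.css'></link>")

# re.I makes the match case-insensitive; the group captures the URL.
links = re.findall(r'''<link\s+.*?href=['"](.*?)['"].*?(?:</link|/)>''',
                   html, re.I)
print(links)
```

The moment the input drifts from that controlled format (unquoted attributes, comments, links split across lines), the pattern starts missing or mis-capturing, which is the point of the answers above.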

Answering your two subquestions:

  1. I've sometimes subclassed SGMLParser (included in the core Python distribution) and must say it's straightforward.
  2. I don't think HTML lends itself to "well defined" regular expressions since it's not a regular language.
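SGMLParser shipped with Python 2 and was removed in Python 3; the closest stdlib equivalent today is `html.parser.HTMLParser`. A minimal subclass (my sketch) that collects hrefs:

```python
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collect href attributes from <a> and <link> start tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for this start tag.
        if tag in ("a", "link"):
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)


parser = LinkCollector()
parser.feed('<a href="http://example.com">x</a><link href="style.css">')
print(parser.links)
```

Like SGMLParser, it is event-driven: you override handlers and the parser calls them as it walks the document, so no regex is involved.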

In response to question #2 (shouldn't a link be a well-defined regular expression), the answer is... no.

An HTML link structure is recursive, much like parens and braces in programming languages: there must be an equal number of start and end constructs, and the "link" expression can be nested within itself.

To properly match a "link" expression, a regex would have to count the start and end tags. Regular expressions are a class of finite automata, and by definition a finite automaton cannot "count" constructs within a pattern. A grammar is required to describe a recursive data structure such as this. The inability of a regex to "count" is why you see programming languages described with grammars as opposed to regular expressions.
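By contrast, a few lines of ordinary code can count. This sketch checks balanced B/E strings (the matched-parens analogue of nested tags) using a single counter, which is exactly the unbounded state a finite automaton lacks:

```python
def balanced(s: str) -> bool:
    """True if every E closes a previously opened B and all B's get closed."""
    depth = 0
    for ch in s:
        if ch == 'B':
            depth += 1
        elif ch == 'E':
            depth -= 1
            if depth < 0:         # an E with no open B
                return False
        else:
            return False          # unexpected character
    return depth == 0             # every B was closed


print(balanced('BBEE'), balanced('BBE'))
```

The counter can grow without bound, so this recognizer sits strictly above regular languages, which is the informal content of the claim that you need a grammar (or a stack) rather than a regex.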

So it is not possible to create a regex that will correctly match 100% of all "link" expressions. There are certainly regexes that will match a good many links with a high degree of accuracy, but they won't ever be perfect.

I wrote a blog article about this problem recently: Regular Expression Limitations.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow