Question

I'm using feedparser to print the top 5 Google news titles. I get all the information from the URL the same way as always.

import feedparser as fp

x = 'https://news.google.com/news/feeds?pz=1&cf=all&ned=us&hl=en&topic=t&output=rss'
feed = fp.parse(x)

My problem is that I run this script every time I start a shell, so the ~2-second lag gets quite annoying. Is this delay primarily from network communication, or from parsing the file?

If it's from parsing the file, is there a way to parse only what I need (which is very little in this case)?

If it's the network, is there any way to speed that up?

Was it helpful?

Solution

I suspect several delays are adding up:

  • The Python interpreter needs a while to start and import the module
  • Network communication takes a bit
  • Parsing probably takes only a little time, but it still contributes
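To see how these delays break down, you can time the download and the parse separately instead of letting `fp.parse(url)` do both in one call. A minimal sketch with a small timing helper (the feed-specific lines are shown as comments since they need the network):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, results):
    """Record how long the enclosed block takes, in seconds."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results[label] = time.perf_counter() - start

# Example usage against the feed, measuring each phase on its own:
#
# results = {}
# with timed("download", results):
#     raw = urllib.request.urlopen(x).read()
# with timed("parse", results):
#     feed = fp.parse(raw)   # feedparser also accepts a raw string/bytes
# print(results)
```

Whichever number dominates tells you where to optimize; note that interpreter startup and the `import feedparser` itself happen before any of this and won't show up in these timings.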

I don't think there is a straightforward way to speed things up, especially not the first point. My suggestion is to download your feeds on a regular basis (you could set up a cron job or write a Python daemon) and store them somewhere on disk (e.g., a plain text file), so your shell startup only needs to display them (cat or echo would probably be the easiest and fastest).
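A minimal sketch of that approach, using only the standard library so the shell-startup side stays instant. The cache path and helper names here are illustrative choices, not anything prescribed by feedparser:

```python
import urllib.request
import xml.etree.ElementTree as ET
from pathlib import Path

FEED_URL = 'https://news.google.com/news/feeds?pz=1&cf=all&ned=us&hl=en&topic=t&output=rss'
CACHE = Path.home() / '.news_headlines'  # hypothetical cache file

def top_titles(rss_text, n=5):
    """Extract the first n <item><title> values from an RSS document."""
    root = ET.fromstring(rss_text)
    titles = [item.findtext('title') for item in root.iter('item')]
    return titles[:n]

def refresh_cache():
    """Download the feed and write the top titles to the cache file.
    Run this from cron or a daemon, never from shell startup."""
    with urllib.request.urlopen(FEED_URL) as resp:
        rss = resp.read()
    CACHE.write_text('\n'.join(top_titles(rss)) + '\n')

if __name__ == '__main__':
    refresh_cache()
```

Then a crontab entry like `*/30 * * * * python3 /path/to/this_script.py` keeps the file fresh, and your shell rc file only needs `cat ~/.news_headlines`, which costs essentially nothing at startup.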

I have personally had good experiences with feedparser; I use it to download ~100 feeds every half hour with a Python daemon.

Other tips

Parsing in real time is not the best approach if you want fast results.

You can try doing it asynchronously with Celery or a similar solution. I like Celery; it offers many features, such as running tasks on a cron-like schedule or asynchronously.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow