Question

I just finished transferring as much of the English Wikipedia's link-structure data as I could. Basically, I downloaded a bunch of SQL dumps from Wikipedia's latest dump repository. Since I am using PostgreSQL instead of MySQL, I loaded all these dumps into my db with pipelined shell commands.

Anyway, one of these tables has 295 million rows: the pagelinks table; it contains all intra-wiki hyperlinks. From my laptop, using pgAdmin III, I sent the following command to my database server (another computer):

SELECT pl_namespace, COUNT(*) FROM pagelinks GROUP BY (pl_namespace);

It's been at it for an hour or so now. The thing is that the postmaster seems to be eating up more and more of my very limited HD space; I think it has eaten about 20 GB so far. I had previously played around with the postgresql.conf file to give PostgreSQL more performance headroom (i.e. let it use more resources), since the server has 12 GB of RAM. I basically quadrupled most of the byte-sized settings in that file, thinking it would use more RAM to do its thing.
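For reference, the values actually in effect can be checked from any SQL session (a sketch; shared_buffers and work_mem are the standard PostgreSQL parameters most relevant to caching and sorting):

-- Show individual settings:
SHOW shared_buffers;
SHOW work_mem;

-- Or list the memory-related parameters in one query:
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem');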

However, the db does not seem to use much RAM. Using the Linux system monitor, I can see that the postmaster is using 1.6 GB of shared memory (RAM). Anyway, I was wondering if you guys could help me better understand what it is doing, because it seems I really do not understand how PostgreSQL uses HD resources.

Concerning the metastructure of the Wikipedia databases, they provide a good schema that may be of use, or even of interest, to you.

Feel free to ask me for more details, thx.


Solution

It's probably the GROUP BY that's causing the problem. In order to do grouping, the database has to sort the rows to put duplicate items together. An index probably won't help. A back-of-the-envelope calculation:

Assuming each row takes 100 bytes of space, that's 29,500,000,000 bytes, or about 30 GB of storage. It can't fit all that in memory, so your system is thrashing, which slows operations down by a factor of 1000 or more. Your HD space may be disappearing into swap space, if it's using swap files.
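One way to check whether the sort really is spilling to disk (a sketch; plain EXPLAIN only shows the plan, while EXPLAIN ANALYZE executes the query, so the latter is something to run only if you're prepared to wait):

-- EXPLAIN shows the chosen plan without running the query.
-- EXPLAIN ANALYZE would run it and, on PostgreSQL 8.3 and later, report
-- "Sort Method: external merge  Disk: ..." when the sort exceeds work_mem.
EXPLAIN
SELECT pl_namespace, COUNT(*)
FROM pagelinks
GROUP BY pl_namespace;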

If you only need to do this calculation once, try breaking it apart into smaller subsets of the data. Assuming pl_namespace is numeric and ranges from 1 to 295 million, try something like this:

SELECT pl_namespace, COUNT(*)
FROM pagelinks
WHERE pl_namespace BETWEEN 1 AND 50000000
GROUP BY (pl_namespace);

Then do the same for 50000001-100000000 and so forth. Combine your answers together using UNION or simply tabulate the results with an external program. Forget what I wrote about an index not helping GROUP BY; here, an index will help the WHERE clause.
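A sketch of the combining step for the first two hypothetical ranges:

-- Each branch scans only its slice of the table; UNION ALL simply
-- concatenates the partial results without trying to deduplicate them.
SELECT pl_namespace, COUNT(*) AS cnt
FROM pagelinks
WHERE pl_namespace BETWEEN 1 AND 50000000
GROUP BY pl_namespace
UNION ALL
SELECT pl_namespace, COUNT(*) AS cnt
FROM pagelinks
WHERE pl_namespace BETWEEN 50000001 AND 100000000
GROUP BY pl_namespace;

UNION ALL is the better choice over plain UNION here: the ranges are disjoint, so there are no duplicates to remove, and plain UNION would add yet another sort just to deduplicate.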

OTHER TIPS

What exactly is claiming that the database isn't using much RAM? The 1.6 GB of shared memory almost certainly is RAM which is being shared between different Postgres processes. (From what I remember, each client connection ends up as a separate backend process, although it's been a while, so I could be very wrong.)

Do you have an index on the pl_namespace column? If there are an awful lot of distinct values, I could imagine that query being pretty heavy on a 295 million row table with no index. Having said that, 20 GB is an awful lot to swallow. Do you know which files it's writing to?
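If there is no index yet, creating one is a one-liner (a sketch; the index name is made up, and building it over 295 million rows will itself take a while):

-- A plain b-tree index on the grouping column; it lets the planner
-- satisfy range predicates like the BETWEEN filters above from the
-- index instead of a full table scan.
CREATE INDEX pagelinks_pl_namespace_idx ON pagelinks (pl_namespace);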

Ok so here is the gist of it:

The GROUP BY clause made the index unusable, so the postmaster (the PostgreSQL server process) decided to create a bunch of temporary files (23 GB of them) in the directory $PGDATA/base/16384/pgsql_tmp.
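This spill can also be observed from inside the database (a sketch; the temp_files and temp_bytes columns of pg_stat_database exist in PostgreSQL 9.2 and later):

-- Cumulative count and total size of temporary files written under
-- pgsql_tmp for the current database since the last statistics reset.
SELECT temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();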

When modifying the postgresql.conf file, I had allowed PostgreSQL to use 1.6 GB of RAM (which I will now double, since the machine has 11.7 GB available); the postmaster process was indeed using the full 1.6 GB, but that wasn't enough, hence the pgsql_tmp directory.
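For what it's worth, the setting that bounds how much memory a single sort or hash step may use before spilling is work_mem, and it can be raised for one session without editing postgresql.conf (a sketch; the 1 GB value is illustrative):

-- Per-session override: only this connection's sorts and hashes get
-- the larger budget, which is safer than raising the global default,
-- since work_mem applies per operation and per connection.
SET work_mem = '1GB';

SELECT pl_namespace, COUNT(*)
FROM pagelinks
GROUP BY pl_namespace;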

As was pointed out by Barry Brown, since I was only executing this SQL command to get some statistical information about the distribution of links among the pagelinks namespaces, I could have queried a subset of the 295 million pagelinks rows (this is what they do for surveys).
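On modern PostgreSQL (9.5 and later), that kind of sampled survey can even be written directly (a sketch; the 1 percent sample rate is arbitrary):

-- SYSTEM sampling reads roughly 1% of the table's pages, which is fast
-- but block-correlated; BERNOULLI is more uniform but scans every page.
SELECT pl_namespace, COUNT(*) AS sampled_count
FROM pagelinks TABLESAMPLE SYSTEM (1)
GROUP BY pl_namespace;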

When the command finally returned its result set, all the temporary files were automatically deleted, as if nothing had happened.

Thx for your help guys!

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow