Question

I'm constantly caught by surprise at how certain operations generate huge amounts of WAL files.

I want these WAL files for point-in-time recovery (in addition, I also perform a nightly full dump), so the basic functionality provided is wanted and I don't want to change that (i.e. I'm not looking for a way to turn WAL archiving off, etc.).

Using Postgres 9.5 with these settings:

wal_level = archive
checkpoint_timeout = 20min
max_wal_size = 1GB
min_wal_size = 80MB
archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'

I ran this statement today:

WITH table2_only_names AS (
    SELECT id , name FROM table2
)
UPDATE table1
SET table2_name = table2_only_names.name
FROM table2_only_names
WHERE table1.table2_id = table2_only_names.id;

Table1

CREATE TABLE public.table1 (
  id BIGINT PRIMARY KEY NOT NULL DEFAULT nextval('table1_id_seq'::regclass),
  table2_id BIGINT,
  table3_id BIGINT,
  positive_count INTEGER NOT NULL DEFAULT 0,
  neutral_count INTEGER NOT NULL DEFAULT 0,
  negative_count INTEGER NOT NULL DEFAULT 0,
  is_blocked BOOLEAN NOT NULL DEFAULT false,
  blocker_id BIGINT,
  "group" BIGINT,
  created TIMESTAMP WITH TIME ZONE,
  modified TIMESTAMP WITH TIME ZONE,
  table4_id BIGINT,
  name CHARACTER VARYING(255)
);
CREATE UNIQUE INDEX idx1 ON table1 USING BTREE (table3_id, table2_id);
CREATE INDEX idx2 ON table1 USING BTREE (table3_id);
CREATE INDEX idx3 ON table1 USING BTREE (table2_id);
CREATE INDEX idx4 ON table1 USING BTREE ("group");
CREATE INDEX idx5 ON table1 USING BTREE (blocker_id);
CREATE INDEX idx6 ON table1 USING BTREE (table4_id);

15 million rows, table size ~3.4 GB, index size 7 GB

Table2

CREATE TABLE public.table2 (
  id BIGINT PRIMARY KEY NOT NULL DEFAULT nextval('table2_id_seq'::regclass),
  name CHARACTER VARYING(255)
);

10 million rows, table size ~2 GB, index size 3.4 GB

The runtime was about 55 minutes (not complaining here).

The amount of generated WAL files was unexpectedly huge. The aforementioned WAL archives are on a dedicated partition which has 100GB; at the point the query started, around 30GB were free.

It was not enough: after 15-20 minutes less than 10GB of disk space was left and I started to delete "older" WAL archives. I had to do this constantly, up to the point where I was pretty sure I was already deleting WAL archive files generated by this very statement.

The tables in question were not used by any other process during that time, but "normal" operations on other tables continued.

disk usage graph

I've marked the time where I started the query and where it ended. You can clearly see where I deleted the WAL archive files :-)

To me, why so many are generated is a mystery, and it is currently a problem because it's hard to foresee how much space is needed and when an operation that needs it will strike.

What am I missing to better understand how much space is needed? Are these things avoidable? Am I doing something wrong?

Solution

When you update a row in PostgreSQL, it generally makes a copy of the entire row (not just the column that was updated) and marks the old row as deleted. The new copy is going to need to get WAL logged in its entirety. The old row is probably also going to be WAL logged in its entirety, on average, if you have full_page_writes turned on and you are checkpointing too closely together.

Almost all of the updated rows are probably going to need to update all of the indexes for it, as well. That is because the new version of the row won't fit on the same page as the old version, so the indexes have to know where to find the new version.

So you are logging the entire table twice (once for the old rows, once for the new ones) and all of its indexes as well. And WAL records have quite a bit of overhead. And if you have full_page_writes turned on and checkpoint frequently, that will make it even worse.
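
If you want to get a feel for this before running the real thing, you can measure how much WAL a statement produces by comparing WAL positions around a test run. The sketch below is only an illustration, using the 9.5 function names and psql's \gset; the statement in the middle stands in for your UPDATE (ideally against a small test subset):

-- record the current WAL position in a psql variable
SELECT pg_current_xlog_location() AS before_lsn \gset

-- ... run the UPDATE you want to measure here ...

-- report how much WAL was written in between
-- (this also includes WAL from any concurrent activity on the cluster)
SELECT pg_size_pretty(
         pg_xlog_location_diff(pg_current_xlog_location(), :'before_lsn')
       ) AS wal_generated;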

So what are your options to reduce the volume?

1) If many of your updates are degenerate (setting the column to the value it already has), you can suppress those updates with an additional WHERE clause:

WITH table2_only_names AS (
    SELECT id , name FROM table2
)
UPDATE table1
SET table2_name = table2_only_names.name
FROM table2_only_names
WHERE table1.table2_id = table2_only_names.id
AND table1.table2_name IS DISTINCT FROM table2_only_names.name;
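
(IS DISTINCT FROM is used instead of <> so that rows where one side is NULL and the other is not still get updated; with <> those comparisons would evaluate to NULL and the rows would be skipped.)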

2) Most WAL files are extremely compressible. You can include a compression command in your archive_command, something like

 archive_command = 'set -C -o pipefail; xz -2 -c %p > /backup/wal/%f.xz'

Of course you will have to make your restore_command do the reverse.
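
For example, the matching restore side in recovery.conf could be a sketch like this (assuming xz is available on the host doing the recovery):

 restore_command = 'xz -dc /backup/wal/%f.xz > %p'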

3) Since you are using 9.5, you can try turning wal_compression on.
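
That setting compresses the full-page images written to WAL, which is exactly the part that balloons here. In postgresql.conf:

wal_compression = on

As far as I know it can also be set per session by a superuser, so you could enable it only for this kind of bulk update.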

4) You could try turning off full_page_writes, although on most storage hardware this does put your data at risk of corruption in the case of a crash. Or, if checkpoints occur frequently during this operation, you could make them occur much less often, which lessens the impact of having full_page_writes turned on.
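
Given how much WAL this statement produced, the checkpoints were probably being triggered by max_wal_size (1GB) rather than the 20-minute timeout, so both settings matter. A sketch of the safer variant (the numbers are only illustrative; you trade pg_xlog disk space and crash-recovery time for fewer full-page images):

checkpoint_timeout = 1h
max_wal_size = 16GB

With checkpoints that far apart, each page only needs to be written to WAL in full once per checkpoint cycle instead of once per much shorter cycle.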

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange