
Efficient Update of hundreds of millions of rows

Hi all! Is there a way to speed up updating hundreds of millions of rows in a compressed hypertable? I tried the approach described in the article, but the update affected only a small part of the values.
I tested the update on a single compressed chunk only.

If I understand correctly, the values in the field were updated only for those records that had already been decompressed from the compressed child chunk into the parent chunk before I ran the update script. That is, the records still sitting in the compressed chunk were not updated (but I don't know how to verify this).
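(For what it's worth, one way to at least see which chunks are compressed is the informational view that TimescaleDB ships. A minimal sketch, assuming the hypertable is called my_hypertable; the table name is mine, not from the original post:

-- which chunks of the hypertable are currently compressed
SELECT chunk_schema, chunk_name, is_compressed
FROM timescaledb_information.chunks
WHERE hypertable_name = 'my_hypertable';

-- per-chunk sizes before/after compression
SELECT chunk_name, before_compression_total_bytes, after_compression_total_bytes
FROM chunk_compression_stats('my_hypertable');
)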

The following steps work correctly (suggested to me by my wife, who is not a developer :grin:): decompress the chunk, update the values, compress the chunk. In the absence of any other option, this one works. But it seems to me that there should be a way to update all the values in a field of a compressed chunk without decompressing the chunk first.
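For reference, the decompress / update / compress sequence described above can be expressed with the standard TimescaleDB chunk functions. A minimal sketch; the chunk name _hyper_1_1_chunk, the mapping table tmp, and the column names are placeholders standing in for the real objects:

-- 1. decompress the target chunk
SELECT decompress_chunk('_timescaledb_internal._hyper_1_1_chunk');

-- 2. apply the old-value -> new-value mapping
UPDATE _timescaledb_internal._hyper_1_1_chunk AS tgt
SET field_to_update = tmp.new_value
FROM tmp
WHERE tgt.field_to_update = tmp.old_value;

-- 3. recompress the chunk
SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk');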

My update script:

DO $$
DECLARE
	v_limit INTEGER := 3000;   -- batch size per iteration

	v_row_id_del_to INTEGER;   -- referenced only in the omitted code below
	v_exit BOOLEAN := FALSE;   -- referenced only in the omitted code below
BEGIN
	LOOP
		-- take the next batch of old/new value pairs from the mapping table
		WITH
			cte AS (
				SELECT old_value, new_value
				FROM tmp
				ORDER BY row_id
				LIMIT v_limit
			)
		-- rewrite matching values directly in the chunk's uncompressed (parent) relation
		UPDATE _timescaledb_internal.<uncompressed_chunk_name> AS tgt
		SET field_to_update = cte.new_value
		FROM cte
		WHERE tgt.field_to_update = cte.old_value
		;

		COMMIT;   -- release locks and keep transactions small between batches
		-- ..... other code
	END LOOP;
END $$;
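In case it helps anyone reproducing this: the internal chunk name used in the UPDATE above can be looked up rather than typed by hand. A small sketch, again assuming a hypertable named my_hypertable (my placeholder):

-- list the chunks backing the hypertable, e.g. _timescaledb_internal._hyper_1_2_chunk
SELECT show_chunks('my_hypertable');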
