Hi, I'm creating a continuous aggregate on a pretty big table, about 2 billion rows. The continuous aggregate seems to be taking up almost all of the remaining free space on the disk, and it has been running for half a day now. Besides aggregation operations over many columns, this continuous aggregate also parses JSON, so I'd call it a complex aggregate in terms of both what it does and the number of columns involved.

Will this keep eating my disk until it collapses? Does the disk space it takes get freed afterwards? Are there any general recommendations for dealing with large tables when building aggregates?

Also, what is a common strategy for modeling complex aggregates? Is there a way to test without running the aggregate over the entire table? For example, building the aggregate over just a one-year period, as in the sketch below?

Thanks
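To illustrate what I mean, here is the kind of thing I have in mind (hypothetical table and column names, and I haven't confirmed this is the recommended approach): create the continuous aggregate with `WITH NO DATA` so nothing is materialized up front, then refresh only a limited window.

```sql
-- Hypothetical schema: hypertable metrics(ts timestamptz, device_id int, payload jsonb).
-- Create the continuous aggregate without materializing anything yet.
CREATE MATERIALIZED VIEW metrics_daily
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 day', ts) AS bucket,
    device_id,
    avg((payload ->> 'value')::numeric) AS avg_value  -- JSON parsing inside the aggregate
FROM metrics
GROUP BY bucket, device_id
WITH NO DATA;

-- Materialize only one year as a test, instead of the full 2 billion rows.
CALL refresh_continuous_aggregate(
    'metrics_daily',
    '2023-01-01'::timestamptz,  -- example window, not my real date range
    '2024-01-01'::timestamptz
);
```

Would refreshing a narrow window like this give a realistic picture of the disk usage and runtime before I commit to the full refresh?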