
I wonder if there is a limit for partition table by list where each subpartition table contains only one element.

For example, I have this partition table:

CREATE TABLE whatever (
    city_id         int not null,
    country_id      int not null
) PARTITION BY LIST (country_id);

And I create millions of subpartition tables:

CREATE TABLE whatever_1 PARTITION OF whatever
    FOR VALUES IN (1);

CREATE TABLE whatever_2 PARTITION OF whatever
    FOR VALUES IN (2);

-- ... and so on, up to millions of partitions ...

CREATE TABLE whatever_10000000 PARTITION OF whatever
    FOR VALUES IN (10000000);
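Typing millions of such statements by hand is obviously impractical; they could be generated in a loop with dynamic SQL (a sketch, assuming the parent table `whatever` above already exists):

```sql
-- Sketch: generate one single-value list partition per ID.
-- Assumes the parent table "whatever" from above already exists.
DO $$
BEGIN
   FOR i IN 1..10000000 LOOP
      EXECUTE format(
         'CREATE TABLE whatever_%s PARTITION OF whatever FOR VALUES IN (%s)',
         i, i
      );
   END LOOP;
END
$$;
```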

Assuming an index on country_id, would that still work? Or will I hit the 65000 limit as described here?

  • "Millions of partitions" are hardly sensible to begin with. What are you trying to achieve with that partitioning scheme? And which universe are you modeling where you have millions of countries? The earth currently has about 200 countries.
    – user330315
    Commented Mar 11, 2021 at 16:00
  • This is just an example; please don't focus on that. Commented Mar 11, 2021 at 20:09
  • Reading your description, it seems you are using a sequence for country_id and then thinking you need a partition for each sequence value. In addition to being basically a bad idea, it is also incorrect. Instead of an arbitrarily assigned value, use the ISO 3166 standard. In this case, just load the entire list of countries. There are several candidate keys listed; perhaps the best here is the ISO 3166-1 numeric code. Then create your partitions based on that, giving you 249 partitions, not millions.
    – Belayer
    Commented Mar 11, 2021 at 20:11
  • Country is an example... I'm not using country in real life. Assume it can be just an ID. Commented Mar 12, 2021 at 0:39
  • Why partition at all? An index on country_id seems the better choice here.
    – user330315
    Commented Mar 15, 2021 at 10:10

1 Answer


Even with PostgreSQL v13, anything that goes beyond at most a few thousand partitions won't work well, and it's better to stay lower.

The reason is that when you use a partitioned table in an SQL statement, the optimizer has to consider each partition separately. It has to figure out which partitions it needs and which it can prune, and for every partition it uses it has to come up with an execution plan. Consequently, planning time goes up as the number of partitions increases. This may not matter for large analytical queries, where execution time dominates, but it will considerably slow down the execution of small statements.
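You can observe this overhead directly: EXPLAIN (ANALYZE) reports planning time and execution time separately (a sketch against the table from the question; the value 42 is just an example):

```sql
-- Compare the "Planning Time" line with "Execution Time" in the output.
-- With a huge number of partitions, planning time can dominate for
-- small, fast queries like this one.
EXPLAIN (ANALYZE)
SELECT * FROM whatever WHERE country_id = 42;
```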

Use longer lists or use range partitioning.
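For example, range partitioning can cover the same key space with far fewer partitions (a sketch using buckets of 100,000 IDs; the bucket size is an arbitrary assumption you would tune to your data):

```sql
-- Same table, partitioned by range instead of by single-value lists.
CREATE TABLE whatever (
    city_id    int not null,
    country_id int not null
) PARTITION BY RANGE (country_id);

-- One partition per 100,000 IDs instead of one per ID value.
-- Range bounds are inclusive lower, exclusive upper.
CREATE TABLE whatever_p1 PARTITION OF whatever
    FOR VALUES FROM (1) TO (100001);

CREATE TABLE whatever_p2 PARTITION OF whatever
    FOR VALUES FROM (100001) TO (200001);
```

This keeps the partition count in the hundreds rather than the millions, while an index on country_id within each partition still makes single-value lookups fast.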

  • Yes; I have extended the answer to explain that. Commented Mar 15, 2021 at 10:05
