# Large tables limitations

GitLab enforces some limitations on schema changes to large database tables to improve manageability for both GitLab and its customers. The list of tables subject to these limitations is defined in `rubocop/rubocop-migrations.yml`.
## Table size restrictions

The following limitations apply to table schema changes on GitLab.com:

| Limitation | Maximum size after the action (including indexes and column size) |
|---|---|
| Cannot add an index | 50 GB |
| Cannot add a column with a foreign key | 50 GB |
| Cannot add a new column | 100 GB |
These limitations align with our goal to maintain all tables under 100 GB for improved stability and performance.
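To see how close a table is to these thresholds, you can measure its total on-disk size, which includes indexes and TOAST data. A minimal sketch from a Rails console; the table name is only an example:

```ruby
# Total on-disk size of a table, including its indexes and TOAST data.
# 'ci_builds' is only an example table name.
size = ApplicationRecord.connection.select_value(
  "SELECT pg_size_pretty(pg_total_relation_size('ci_builds'))"
)

puts "ci_builds total size: #{size}"
```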
## Exceptions

Exceptions to these size limitations should only be granted for the following cases:

- Migrate a table's columns from `int4` to `int8` (see the sketch after this list)
- Add a sharding key to support cells
- Modify a table to assist in partitioning or data retention efforts
- Replace an existing index to provide better query performance
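For the `int4` to `int8` case, GitLab provides bigint conversion migration helpers. The sketch below assumes the `initialize_conversion_of_integer_to_bigint` helper and uses example table and column names; follow the documented bigint conversion process for the real steps.

```ruby
# frozen_string_literal: true

# Sketch only: first step of converting an int4 column to int8 (bigint).
# The table and column names are examples, and the helper names assume
# GitLab's bigint conversion migration helpers.
class InitializeConversionOfExampleRecordsIdToBigint < Gitlab::Database::Migration[2.2]
  TABLE = :example_records
  COLUMNS = %i[id]

  def up
    initialize_conversion_of_integer_to_bigint(TABLE, COLUMNS)
  end

  def down
    revert_initialize_conversion_of_integer_to_bigint(TABLE, COLUMNS)
  end
end
```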
### Requesting an exception
To request an exception to these limitations:
- Create a new issue using the Database Team Tasks template
- Select the `schema_change_exception` template
- Provide detailed justification for why your case requires an exception
- Wait for review and approval from the Database team before proceeding
- Link the approval issue when disabling the cop for your migration, as in the sketch below
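Once the exception is approved, the migration disables the relevant RuboCop rule inline and links the approval issue. A minimal sketch; the cop name, table, index, and issue URL are placeholders:

```ruby
# frozen_string_literal: true

# Sketch only: the cop name, table, index, and issue URL are placeholders.
class AddExampleIndexToLargeTable < Gitlab::Database::Migration[2.2]
  disable_ddl_transaction!

  INDEX_NAME = 'index_example_records_on_namespace_id'

  # rubocop:disable Migration/PreventIndexCreation -- exception approved in <link to approval issue>
  def up
    add_concurrent_index :example_records, :namespace_id, name: INDEX_NAME
  end
  # rubocop:enable Migration/PreventIndexCreation

  def down
    remove_concurrent_index_by_name :example_records, INDEX_NAME
  end
end
```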
## Techniques to reduce table size
Before requesting an exception, consider these approaches to manage table size:
### Archiving data
- Move old, infrequently accessed data to archive tables
- Implement archiving workers for automated data migration (sketched below)
- Consider using partitioning by date to facilitate archiving, see date range partitioning
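A minimal sketch of such an archiving worker, assuming hypothetical `Event` and `ArchivedEvent` models with matching columns and a two-year cutoff:

```ruby
# frozen_string_literal: true

# Sketch only: Event, ArchivedEvent, the cutoff, and the batch size are
# example assumptions. A cron schedule would re-enqueue this worker.
class ArchiveOldEventsWorker
  include ApplicationWorker

  feature_category :database
  idempotent!

  BATCH_SIZE = 1_000
  CUTOFF = 2.years

  def perform
    batch = Event.where('created_at < ?', CUTOFF.ago).limit(BATCH_SIZE).to_a
    return if batch.empty?

    Event.transaction do
      # Copy the old rows into the archive table, then remove them from the
      # hot table so it stays small.
      ArchivedEvent.insert_all(batch.map(&:attributes))
      Event.where(id: batch.map(&:id)).delete_all
    end
  end
end
```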
### Data retention
- Implement retention policies to remove old data (see the sketch after this list)
- Configure automated cleanup jobs for expired data, see deleting old pipelines
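A minimal sketch of a scheduled cleanup job that enforces such a retention policy, assuming a hypothetical `ExampleLog` model and a 90-day window:

```ruby
# frozen_string_literal: true

# Sketch only: the model, retention window, and batch size are examples.
# A cron schedule would run this worker regularly.
class PruneExpiredExampleLogsWorker
  include ApplicationWorker

  feature_category :database
  idempotent!

  RETENTION_PERIOD = 90.days
  BATCH_SIZE = 10_000

  def perform
    cutoff = RETENTION_PERIOD.ago

    # Delete in bounded batches so each statement stays cheap.
    loop do
      ids = ExampleLog.where('created_at < ?', cutoff).limit(BATCH_SIZE).pluck(:id)
      break if ids.empty?

      ExampleLog.where(id: ids).delete_all
    end
  end
end
```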
### Table partitioning
- Partition large tables by date, ID ranges, or other criteria (see the sketch after this list)
- Consider range or list partitioning based on access patterns
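A minimal sketch of declaring a date-range partitioned table with plain SQL in a migration. The table, columns, and partition boundaries are examples; in GitLab, prefer the documented partitioning helpers.

```ruby
# frozen_string_literal: true

# Sketch only: example table, columns, and partition boundaries.
class CreateExampleAuditEventsPartitioned < Gitlab::Database::Migration[2.2]
  def up
    execute(<<~SQL)
      CREATE TABLE example_audit_events (
        id bigint NOT NULL,
        created_at timestamptz NOT NULL,
        details text,
        -- The partition key must be part of the primary key.
        PRIMARY KEY (id, created_at)
      ) PARTITION BY RANGE (created_at);

      -- One partition per month; new partitions are created ahead of time.
      CREATE TABLE example_audit_events_202501
        PARTITION OF example_audit_events
        FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
    SQL
  end

  def down
    drop_table :example_audit_events
  end
end
```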
### Column optimization

- Use appropriate data types (for example, `smallint` instead of `integer` when possible, as in the sketch after this list)
- Remove unused or redundant indexes
- Consider using `NULL` instead of empty strings or zeros
- Use `text` instead of `varchar` to avoid storage overhead
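A minimal sketch of the data type point: a 2-byte integer for an enum-like value on a hypothetical table.

```ruby
# frozen_string_literal: true

# Sketch only: table and column names are examples.
class AddStatusToExampleRecords < Gitlab::Database::Migration[2.2]
  def change
    # limit: 2 maps to PostgreSQL smallint (2 bytes) rather than integer
    # (4 bytes), which is enough for an enum-like status value.
    add_column :example_records, :status, :integer, limit: 2, default: 0, null: false
  end
end
```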
### Normalization
- Split large tables into related smaller tables
- Move rarely used columns to separate tables
- Use junction tables for many-to-many relationships (sketched below)
- Consider vertical partitioning for wide tables
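A minimal sketch of the junction table point: a narrow table that models a many-to-many relationship between two hypothetical tables.

```ruby
# frozen_string_literal: true

# Sketch only: all table and column names are examples.
class CreateExampleIssueLabelLinks < Gitlab::Database::Migration[2.2]
  def change
    create_table :example_issue_label_links do |t|
      t.bigint :issue_id, null: false
      t.bigint :label_id, null: false

      # One row per (issue, label) pair; the composite index also serves
      # lookups by issue_id.
      t.index [:issue_id, :label_id], unique: true
      t.index :label_id
    end
  end
end
```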
### External storage
- Move large text or binary data to object storage
- Store only metadata in the database
- Use Elasticsearch for search-specific data
- Consider using Redis for temporary or cached data
## Alternatives to table modifications
Consider these alternatives when working with large tables:
- Create a separate table for new columns, especially if the column is not present in all rows. The new table references the original table through a foreign key.
- Work with the Global Search team to add your data to Elasticsearch for enhanced filter/search functionality.
- Simplify filtering/sorting options (for example, use `id` instead of `created_at` for sorting).
## Benefits of table size limitations
Table size limitations provide several advantages:
- Enable separate vacuum operations with different frequencies
- Generate less Write-Ahead Log (WAL) data for column updates
- Prevent unnecessary data copying during row updates
For more information about data model trade-offs, see the database documentation.
## Using `has_one` relationships

When a table becomes too large for new columns, create a new table with a `has_one` relation. For example, in merge request !170371, we track the total weight count of an issue in a separate table.
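A sketch of this pattern with hypothetical names (not the ones used in the linked merge request): the parent model gains a `has_one` association, and the new attribute lives in a narrow side table keyed by the parent's ID.

```ruby
# frozen_string_literal: true

# Sketch only: class, table, and column names are illustrative.
class Issue < ApplicationRecord
  has_one :weights_rollup, class_name: 'IssueWeightsRollup'
end

class IssueWeightsRollup < ApplicationRecord
  belongs_to :issue
end

# The side table stays narrow: a foreign key to the parent plus the new value.
class CreateIssueWeightsRollups < Gitlab::Database::Migration[2.2]
  def change
    create_table :issue_weights_rollups do |t|
      t.references :issue, null: false, index: { unique: true }, foreign_key: { on_delete: :cascade }
      t.integer :total_weight, null: false, default: 0
    end
  end
end
```

Rows only need to exist for records that actually have the new data, which keeps both tables small.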
Benefits of this approach:
- Keeps the main table narrower, reducing data load from PostgreSQL
- Creates an efficient narrow table for specific queries
- Allows selective population of the new table as needed
This approach is particularly effective when:
- The new column applies to a subset of the main table
- Only specific queries need the new data
### Disadvantages
- More tables may result in more joins, which complicates queries
- Queries with multiple joins may be harder to optimize