Large tables limitations

GitLab enforces limitations on schema changes to large database tables to improve manageability for both GitLab and its customers. The list of tables subject to these limitations is defined in rubocop/rubocop-migrations.yml.

Table size restrictions

The following limitations apply to table schema changes on GitLab.com:

Limitation                               Maximum size after the action (including indexes and column size)
Cannot add an index                      50 GB
Cannot add a column with a foreign key   50 GB
Cannot add a new column                  100 GB

These limitations align with our goal to maintain all tables under 100 GB for improved stability and performance.

Exceptions

Exceptions to these size limitations should be granted only for the following cases:

  • Migrate a table’s columns from int4 to int8
  • Add a sharding key to support Cells
  • Modify a table to assist in partitioning or data retention efforts
  • Replace an existing index to provide better query performance

Requesting an exception

To request an exception to these limitations:

  1. Create a new issue using the Database Team Tasks template
  2. Select the schema_change_exception template
  3. Provide detailed justification for why your case requires an exception
  4. Wait for review and approval from the Database team before proceeding
  5. Link the approval issue when disabling the cop for your migration, as shown in the sketch below
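
Once an exception is approved, the disabled cop and the approval issue can be referenced directly in the migration. A minimal sketch, assuming the table is guarded by the Migration/PreventIndexCreation cop and using placeholder table, column, index, and milestone values; check rubocop/rubocop-migrations.yml for the cop that actually applies to your change:

```ruby
# frozen_string_literal: true

class AddIndexToExampleTableOnSomeColumn < Gitlab::Database::Migration[2.2]
  disable_ddl_transaction!
  milestone '17.6'

  INDEX_NAME = 'index_example_table_on_some_column'

  def up
    # rubocop:disable Migration/PreventIndexCreation -- Exception approved in <link to approval issue>
    add_concurrent_index :example_table, :some_column, name: INDEX_NAME
    # rubocop:enable Migration/PreventIndexCreation
  end

  def down
    remove_concurrent_index_by_name :example_table, INDEX_NAME
  end
end
```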

Techniques to reduce table size

Before requesting an exception, consider these approaches to manage table size:

Archiving data

  • Move old, infrequently accessed data to archive tables
  • Implement archiving workers for automated data migration (see the sketch after this list)
  • Consider using partitioning by date to facilitate archiving (see date range partitioning)
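
A minimal sketch of an archiving worker: it moves rows older than one year from a hypothetical example_records table into an example_record_archives table with the same columns. The model names, cutoff, and worker settings are assumptions, not an existing GitLab worker:

```ruby
class ArchiveOldExampleRecordsWorker
  include ApplicationWorker

  data_consistency :sticky
  feature_category :database
  idempotent!

  BATCH_SIZE = 1_000

  def perform
    # Assumes ExampleRecord includes the EachBatch concern.
    ExampleRecord.where('created_at < ?', 1.year.ago).each_batch(of: BATCH_SIZE) do |batch|
      ExampleRecord.transaction do
        # Copy the batch into the archive table, then delete it from the hot table.
        ExampleRecordArchive.insert_all(batch.map { |record| record.attributes.except('id') })
        batch.delete_all
      end
    end
  end
end
```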

Data retention

  • Implement retention policies to remove old data
  • Configure automated cleanup jobs for expired data (see deleting old pipelines and the sketch after this list)
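
A minimal sketch of a scheduled cleanup job, assuming a hypothetical ExampleEvent model with a created_at column and a 90-day retention policy:

```ruby
class PruneExpiredExampleEventsWorker
  include ApplicationWorker

  data_consistency :sticky
  feature_category :database
  idempotent!

  RETENTION_PERIOD = 90.days
  BATCH_SIZE = 10_000

  def perform
    loop do
      # Delete in bounded batches so each statement stays short and lock-friendly.
      ids = ExampleEvent.where('created_at < ?', RETENTION_PERIOD.ago).limit(BATCH_SIZE).pluck(:id)
      break if ids.empty?

      ExampleEvent.where(id: ids).delete_all
    end
  end
end
```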

Table partitioning
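
Partitioning a large table by date keeps each partition small and makes archiving or dropping old data cheap. A minimal sketch in plain DDL, using a hypothetical example_events table; for the production-ready helpers, see date range partitioning:

```ruby
class CreatePartitionedExampleEvents < Gitlab::Database::Migration[2.2]
  milestone '17.6'

  def up
    execute(<<~SQL)
      CREATE TABLE example_events (
        id bigint NOT NULL,
        created_at timestamptz NOT NULL,
        payload jsonb,
        PRIMARY KEY (id, created_at)
      ) PARTITION BY RANGE (created_at);

      -- One partition per month; old partitions can be detached, archived, or dropped.
      CREATE TABLE example_events_202501 PARTITION OF example_events
        FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
    SQL
  end

  def down
    execute('DROP TABLE example_events')
  end
end
```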

Column optimization

  • Use appropriate data types (for example, smallint instead of integer when possible)
  • Remove unused or redundant indexes
  • Consider using NULL instead of empty strings or zeros
  • Use text with a limit constraint instead of varchar (see the sketch after this list)
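
A minimal sketch of column-level optimizations in a migration, using a hypothetical example_settings table; add_text_limit is GitLab's helper for adding a limit constraint to a text column:

```ruby
class AddOptimizedColumnsToExampleSettings < Gitlab::Database::Migration[2.2]
  disable_ddl_transaction!
  milestone '17.6'

  def up
    # smallint (2 bytes) is enough for an enum-like value; integer would take 4 bytes per row.
    add_column :example_settings, :visibility_level, :smallint, default: 0, null: false

    # text plus an explicit limit constraint, rather than varchar.
    add_column :example_settings, :description, :text
    add_text_limit :example_settings, :description, 1_024
  end

  def down
    remove_column :example_settings, :description
    remove_column :example_settings, :visibility_level
  end
end
```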

Normalization

  • Split large tables into related smaller tables
  • Move rarely used columns to separate tables
  • Use junction tables for many-to-many relationships (see the sketch after this list)
  • Consider vertical partitioning for wide tables
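
For example, a many-to-many relationship can go through a narrow junction table instead of widening either side. A minimal sketch with assumed model and table names:

```ruby
class ExampleLabelLink < ApplicationRecord
  belongs_to :example
  belongs_to :label
end

class Example < ApplicationRecord
  has_many :example_label_links
  has_many :labels, through: :example_label_links
end

# The wide examples table stays unchanged; only the narrow junction table grows.
Example.find(42).labels
```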

External storage

  • Move large text or binary data to object storage
  • Store only metadata in the database (see the sketch after this list)
  • Use Elasticsearch for search-specific data
  • Consider using Redis for temporary or cached data
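
A minimal sketch of the metadata-only pattern; the ExampleAttachment model, its columns, and ObjectStorageClient are hypothetical stand-ins for whatever uploader or client the feature actually uses:

```ruby
class ExampleAttachment < ApplicationRecord
  # Database columns: file_name, file_size, content_type, object_storage_key.
  # The file contents themselves live in object storage, never in PostgreSQL.

  def download_url
    ObjectStorageClient.presigned_url(object_storage_key)
  end
end
```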

Alternatives to table modifications

Consider these alternatives when working with large tables:

  1. Create a separate table for new columns, especially if the column is not present in all rows. The new table references the original table through a foreign key (see the sketch after this list).
  2. Work with the Global Search team to add your data to Elasticsearch for enhanced filter/search functionality.
  3. Simplify filtering/sorting options (for example, use id instead of created_at for sorting).
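
A minimal sketch of the first alternative, assuming a hypothetical large examples table and a new, narrow example_extra_data table that holds the additional column:

```ruby
class CreateExampleExtraData < Gitlab::Database::Migration[2.2]
  milestone '17.6'

  def change
    create_table :example_extra_data do |t|
      # One row per example that actually has the extra value; other rows need no storage.
      t.references :example, null: false, index: { unique: true }, foreign_key: { on_delete: :cascade }
      t.integer :extra_value, null: false, default: 0
    end
  end
end
```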

Benefits of table size limitations

Table size limitations provide several advantages:

  • Enable separate vacuum operations with different frequencies
  • Generate less Write-Ahead Log (WAL) data for column updates
  • Prevent unnecessary data copying during row updates

For more information about data model trade-offs, see the database documentation.

Using has_one relationships

When a table becomes too large for new columns, create a new table with a has_one relation. For example, in merge request !170371, we track the total weight count of an issue in a separate table.
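
A minimal sketch of the pattern; the model and column names below are illustrative rather than the exact ones introduced in !170371:

```ruby
class Issue < ApplicationRecord
  has_one :weights_source, class_name: 'IssueWeightsSource'
end

class IssueWeightsSource < ApplicationRecord
  belongs_to :issue
end

# Only the queries that need the aggregate touch the narrow table.
issue = Issue.includes(:weights_source).find(42)
issue.weights_source&.total_weight
```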

Benefits of this approach:

  1. Keeps the main table narrower, reducing the amount of data PostgreSQL loads for common queries
  2. Creates an efficient narrow table for specific queries
  3. Allows selective population of the new table as needed

This approach is particularly effective when:

  • The new column applies to a subset of the main table
  • Only specific queries need the new data

Disadvantages

  1. More tables can mean more joins, which complicates queries
  2. Queries with multiple joins can be hard to optimize