- Requirements for configuring a Gitaly Cluster
- Setup Instructions
- Distributed reads
- Strong consistency
- Replication factor
- Automatic failover and primary election strategies
- Primary Node Failure
- Data recovery
- Migrate to Gitaly Cluster
Configure Gitaly Cluster
Configure Gitaly Cluster using either:
- Gitaly Cluster configuration instructions available as part of the reference architectures for larger installations.
- The custom configuration instructions that follow on this page.
Smaller GitLab installations may need only Gitaly itself.
Requirements for configuring a Gitaly Cluster
The minimum recommended configuration for a Gitaly Cluster requires:
- 1 load balancer
- 1 PostgreSQL server (PostgreSQL 11 or newer)
- 3 Praefect nodes
- 3 Gitaly nodes (1 primary, 2 secondary)
See the design document for implementation details.
Setup Instructions
If you installed GitLab using the Omnibus package (highly recommended), follow the steps below:
- Preparation
- Configuring the Praefect database
- Configuring the Praefect proxy/router
- Configuring each Gitaly node (once for each Gitaly node)
- Configure the load balancer
- Updating the GitLab server configuration
- Configure Grafana
Preparation
Before beginning, you should already have a working GitLab instance. Learn how to install GitLab.
Provision a PostgreSQL server (PostgreSQL 11 or newer).
Prepare all your new nodes by installing GitLab.
- At least 1 Praefect node (minimal storage required)
- 3 Gitaly nodes (high CPU, high memory, fast storage)
- 1 GitLab server
You need the IP/host address for each node.
- LOAD_BALANCER_SERVER_ADDRESS: the IP/host address of the load balancer
- POSTGRESQL_SERVER_ADDRESS: the IP/host address of the PostgreSQL server
- PRAEFECT_HOST: the IP/host address of the Praefect server
- GITALY_HOST_*: the IP/host address of each Gitaly server
- GITLAB_HOST: the IP/host address of the GitLab server
If you are using a cloud provider, you can look up the addresses for each server through your cloud provider's management console.
If you are using Google Cloud Platform, SoftLayer, or any other vendor that provides a virtual private cloud (VPC), you can use the private addresses for each cloud instance (corresponding to the "internal address" for Google Cloud Platform) for PRAEFECT_HOST, GITALY_HOST_*, and GITLAB_HOST.
Secrets
The communication between components is secured with different secrets, which are described below. Before you begin, generate a unique secret for each, and make note of it. This enables you to replace these placeholder tokens with secure tokens as you complete the setup process.
- GITLAB_SHELL_SECRET_TOKEN: used by Git hooks to make callback HTTP API requests to GitLab when accepting a Git push. This secret is shared with GitLab Shell for legacy reasons.
- PRAEFECT_EXTERNAL_TOKEN: repositories hosted on your Praefect cluster can only be accessed by Gitaly clients that carry this token.
- PRAEFECT_INTERNAL_TOKEN: used for replication traffic inside your Praefect cluster. This is distinct from PRAEFECT_EXTERNAL_TOKEN, because Gitaly clients must not be able to access internal nodes of the Praefect cluster directly; that could lead to data loss.
- PRAEFECT_SQL_PASSWORD: used by Praefect to connect to PostgreSQL.
We note in the instructions below where these secrets are required.
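One way to generate these secrets is with OpenSSL; this is only a sketch, and any cryptographically secure random generator works just as well:
openssl rand -hex 32   # run once for each secret and record the output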
Omnibus GitLab installations can use gitlab-secrets.json for GITLAB_SHELL_SECRET_TOKEN.
PostgreSQL
These instructions help set up a single PostgreSQL database, which creates a single point of failure. The following options are available:
- For non-Geo installations, either:
- Use one of the documented PostgreSQL setups.
- Use your own third-party database setup, if fault tolerance is required.
- For Geo instances, either:
- Set up a separate PostgreSQL instance.
- Use a cloud-managed PostgreSQL service. AWS Relational Database Service is recommended.
To complete this section you need:
- 1 Praefect node
- 1 PostgreSQL server (PostgreSQL 11 or newer)
- An SQL user with permissions to create databases
In this section, we configure the PostgreSQL server from the Praefect node, using psql, which is installed by Omnibus GitLab.
- SSH into the Praefect node and login as root:
  sudo -i
- Connect to the PostgreSQL server with administrative access. This is likely the postgres user. The database template1 is used because it is created by default on all PostgreSQL servers.
  /opt/gitlab/embedded/bin/psql -U postgres -d template1 -h POSTGRESQL_SERVER_ADDRESS
- Create a new user praefect to be used by Praefect. Replace PRAEFECT_SQL_PASSWORD with the strong password you generated in the preparation step.
  CREATE ROLE praefect WITH LOGIN CREATEDB PASSWORD 'PRAEFECT_SQL_PASSWORD';
- Reconnect to the PostgreSQL server, this time as the praefect user:
  /opt/gitlab/embedded/bin/psql -U praefect -d template1 -h POSTGRESQL_SERVER_ADDRESS
- Create a new database praefect_production. By creating the database while connected as the praefect user, we are confident they have access.
  CREATE DATABASE praefect_production WITH ENCODING=UTF8;
The database used by Praefect is now configured.
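As an optional sanity check (a sketch, not part of the official steps), you can confirm from the Praefect node that the praefect user can reach the new database:
/opt/gitlab/embedded/bin/psql -U praefect -d praefect_production -h POSTGRESQL_SERVER_ADDRESS -c '\conninfo'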
If you see Praefect database errors after configuring PostgreSQL, see troubleshooting steps.
PgBouncer
To reduce PostgreSQL resource consumption, we recommend setting up and configuring PgBouncer in front of the PostgreSQL instance. To do this, set the corresponding IP or host address of the PgBouncer instance in /etc/gitlab/gitlab.rb by changing the following settings:
- praefect['database_host'], for the address.
- praefect['database_port'], for the port.
Although PgBouncer manages resources more efficiently, Praefect still requires a direct connection to the PostgreSQL database, because it relies on the LISTEN feature, which PgBouncer does not support with pool_mode = transaction. Set praefect['database_host_no_proxy'] and praefect['database_port_no_proxy'] to a direct connection, and not a PgBouncer connection.
Save the changes to /etc/gitlab/gitlab.rb and reconfigure Praefect.
This documentation doesn’t provide PgBouncer installation instructions, but you can:
- Find instructions on the official website.
- Use a Docker image.
In addition to the base PgBouncer configuration options, set the following values in your pgbouncer.ini file:
- The Praefect PostgreSQL database in the [databases] section:
  [databases]
  * = host=POSTGRESQL_SERVER_ADDRESS port=5432 auth_user=praefect
- pool_mode and ignore_startup_parameters in the [pgbouncer] section:
  [pgbouncer]
  pool_mode = transaction
  ignore_startup_parameters = extra_float_digits
The praefect user and its password should be included in the file (by default, userlist.txt) used by PgBouncer if the auth_file configuration option is set.
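To confirm that PgBouncer accepts the praefect credentials, you can run a quick connection test from a Praefect node. This is only a sketch; PGBOUNCER_HOST is a placeholder for your PgBouncer address, and 6432 assumes the default PgBouncer port:
/opt/gitlab/embedded/bin/psql -U praefect -d praefect_production -h PGBOUNCER_HOST -p 6432 -c 'SELECT 1;'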
By default, PgBouncer listens on port 6432 to accept incoming connections. You can change it by setting the listen_port configuration option. We recommend setting it to the default port value (5432) used by PostgreSQL instances. Otherwise, you should change the praefect['database_port'] configuration parameter for each Praefect instance to the correct value.
Praefect
Introduced in GitLab 13.4, Praefect nodes can no longer be designated as primary.
If there are multiple Praefect nodes:
- Complete the following steps for each node.
- Designate one node as the “deploy node”, and configure it first.
To complete this section you need a configured PostgreSQL server, including:
- IP/host address (POSTGRESQL_SERVER_ADDRESS)
- Password (PRAEFECT_SQL_PASSWORD)
Praefect should be run on a dedicated node. Do not run Praefect on the application server, or a Gitaly node.
- SSH into the Praefect node and login as root:
  sudo -i
- Disable all other services by editing /etc/gitlab/gitlab.rb:
  # Disable all other services on the Praefect node
  postgresql['enable'] = false
  redis['enable'] = false
  nginx['enable'] = false
  alertmanager['enable'] = false
  prometheus['enable'] = false
  grafana['enable'] = false
  puma['enable'] = false
  sidekiq['enable'] = false
  gitlab_workhorse['enable'] = false
  gitaly['enable'] = false
  # Enable only the Praefect service
  praefect['enable'] = true
  # Prevent database connections during 'gitlab-ctl reconfigure'
  gitlab_rails['auto_migrate'] = false
  praefect['auto_migrate'] = false
- Configure Praefect to listen on network interfaces by editing /etc/gitlab/gitlab.rb:
  praefect['listen_addr'] = '0.0.0.0:2305'
  # Enable Prometheus metrics access to Praefect. You must use firewalls
  # to restrict access to this address/port.
  praefect['prometheus_listen_addr'] = '0.0.0.0:9652'
- Configure a strong auth_token for Praefect by editing /etc/gitlab/gitlab.rb. This is needed by clients outside the cluster (like GitLab Shell) to communicate with the Praefect cluster:
  praefect['auth_token'] = 'PRAEFECT_EXTERNAL_TOKEN'
- Configure Praefect to connect to the PostgreSQL database by editing /etc/gitlab/gitlab.rb.
  You need to replace POSTGRESQL_SERVER_ADDRESS with the IP/host address of the database, and PRAEFECT_SQL_PASSWORD with the strong password set above.
  praefect['database_host'] = 'POSTGRESQL_SERVER_ADDRESS'
  praefect['database_port'] = 5432
  praefect['database_user'] = 'praefect'
  praefect['database_password'] = 'PRAEFECT_SQL_PASSWORD'
  praefect['database_dbname'] = 'praefect_production'
  praefect['database_host_no_proxy'] = 'POSTGRESQL_SERVER_ADDRESS'
  praefect['database_port_no_proxy'] = 5432
  If you want to use a TLS client certificate, the following options can be used:
  # Connect to PostgreSQL using a TLS client certificate
  # praefect['database_sslcert'] = '/path/to/client-cert'
  # praefect['database_sslkey'] = '/path/to/client-key'
  # Trust a custom certificate authority
  # praefect['database_sslrootcert'] = '/path/to/rootcert'
  By default, Praefect refuses to make an unencrypted connection to PostgreSQL. You can override this by uncommenting the following line:
  # praefect['database_sslmode'] = 'disable'
- Configure the Praefect cluster to connect to each Gitaly node in the cluster by editing /etc/gitlab/gitlab.rb.
  The virtual storage's name must match the configured storage name in the GitLab configuration. In a later step, we configure the storage name as default, so we use default here as well. This cluster has three Gitaly nodes, gitaly-1, gitaly-2, and gitaly-3, which are intended to be replicas of each other.
  If you have data on an already existing storage called default, you should configure the virtual storage with another name and migrate the data to the Gitaly Cluster storage afterwards.
  Replace PRAEFECT_INTERNAL_TOKEN with a strong secret, which is used by Praefect when communicating with Gitaly nodes in the cluster. This token is distinct from the PRAEFECT_EXTERNAL_TOKEN.
  Replace GITALY_HOST_* with the IP or host address of each Gitaly node.
  More Gitaly nodes can be added to the cluster to increase the number of replicas. More clusters can also be added for very large GitLab instances.
  When adding additional Gitaly nodes to a virtual storage, all storage names within that virtual storage must be unique. Additionally, all Gitaly node addresses referenced in the Praefect configuration must be unique.
  # Name of storage hash must match storage name in git_data_dirs on GitLab
  # server ('default') and in git_data_dirs on Gitaly nodes ('gitaly-1')
  praefect['virtual_storages'] = {
    'default' => {
      'nodes' => {
        'gitaly-1' => {
          'address' => 'tcp://GITALY_HOST_1:8075',
          'token'   => 'PRAEFECT_INTERNAL_TOKEN',
        },
        'gitaly-2' => {
          'address' => 'tcp://GITALY_HOST_2:8075',
          'token'   => 'PRAEFECT_INTERNAL_TOKEN'
        },
        'gitaly-3' => {
          'address' => 'tcp://GITALY_HOST_3:8075',
          'token'   => 'PRAEFECT_INTERNAL_TOKEN'
        }
      }
    }
  }
  In GitLab 13.8 and earlier, Gitaly nodes were configured directly under the virtual storage, and not under the nodes key.
- In GitLab 13.1 and later, enable distribution of reads.
- Save the changes to /etc/gitlab/gitlab.rb and reconfigure Praefect:
  gitlab-ctl reconfigure
- For:
  - The "deploy node":
    - Enable Praefect auto-migration again by setting praefect['auto_migrate'] = true in /etc/gitlab/gitlab.rb.
    - To ensure database migrations are only run during reconfigure and not automatically on upgrade, run:
      sudo touch /etc/gitlab/skip-auto-reconfigure
  - The other nodes: you can leave the settings as they are. Though /etc/gitlab/skip-auto-reconfigure isn't required, you may want to set it to prevent GitLab from running reconfigure automatically when running commands such as apt-get update. This way, any additional configuration changes can be made and reconfigure can then be run manually.
- Save the changes to /etc/gitlab/gitlab.rb and reconfigure Praefect:
  gitlab-ctl reconfigure
- To ensure that Praefect has updated its Prometheus listen address, restart Praefect:
  gitlab-ctl restart praefect
- Verify that Praefect can reach PostgreSQL:
  sudo -u git /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml sql-ping
  If the check fails, make sure you have followed the steps correctly. If you edit /etc/gitlab/gitlab.rb, remember to run sudo gitlab-ctl reconfigure again before trying the sql-ping command.
The steps above must be completed for each Praefect node!
Enabling TLS support
Introduced in GitLab 13.2.
Praefect supports TLS encryption. To communicate with a Praefect instance that listens for secure connections, you must:
- Use a tls:// URL scheme in the gitaly_address of the corresponding storage entry in the GitLab configuration.
- Bring your own certificates because this isn't provided automatically. The certificate corresponding to each Praefect server must be installed on that Praefect server.
Additionally the certificate, or its certificate authority, must be installed on all Gitaly servers and on all Praefect clients that communicate with it following the procedure described in GitLab custom certificate configuration (and repeated below).
Note the following:
- The certificate must specify the address you use to access the Praefect server. If addressing the Praefect server by:
  - Hostname, you can either use the Common Name field for this, or add it as a Subject Alternative Name.
  - IP address, you must add it as a Subject Alternative Name to the certificate.
- You can configure Praefect servers with both an unencrypted listening address listen_addr and an encrypted listening address tls_listen_addr at the same time. This allows you to do a gradual transition from unencrypted to encrypted traffic, if necessary.
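As an optional check (a sketch, assuming the system openssl binary is available), you can inspect which names a Praefect certificate presents and confirm they cover the address you use:
echo | openssl s_client -connect PRAEFECT_HOST:3305 2>/dev/null | openssl x509 -noout -text | grep -E -A1 'Subject:|Subject Alternative Name'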
To configure Praefect with TLS:
For Omnibus GitLab
- Create certificates for Praefect servers.
- On the Praefect servers, create the /etc/gitlab/ssl directory and copy your key and certificate there:
  sudo mkdir -p /etc/gitlab/ssl
  sudo chmod 755 /etc/gitlab/ssl
  sudo cp key.pem cert.pem /etc/gitlab/ssl/
  sudo chmod 644 key.pem cert.pem
- Edit /etc/gitlab/gitlab.rb and add:
  praefect['tls_listen_addr'] = "0.0.0.0:3305"
  praefect['certificate_path'] = "/etc/gitlab/ssl/cert.pem"
  praefect['key_path'] = "/etc/gitlab/ssl/key.pem"
- Save the file and reconfigure.
- On the Praefect clients (including each Gitaly server), copy the certificates, or their certificate authority, into /etc/gitlab/trusted-certs:
  sudo cp cert.pem /etc/gitlab/trusted-certs/
- On the Praefect clients (except Gitaly servers), edit git_data_dirs in /etc/gitlab/gitlab.rb as follows:
  git_data_dirs({
    "default" => {
      "gitaly_address" => 'tls://LOAD_BALANCER_SERVER_ADDRESS:2305',
      "gitaly_token" => 'PRAEFECT_EXTERNAL_TOKEN'
    }
  })
-
Save the file and reconfigure GitLab.
For installations from source
- Create certificates for Praefect servers.
- On the Praefect servers, create the /etc/gitlab/ssl directory and copy your key and certificate there:
  sudo mkdir -p /etc/gitlab/ssl
  sudo chmod 755 /etc/gitlab/ssl
  sudo cp key.pem cert.pem /etc/gitlab/ssl/
  sudo chmod 644 key.pem cert.pem
- On the Praefect clients (including each Gitaly server), copy the certificates, or their certificate authority, into the system trusted certificates:
  sudo cp cert.pem /usr/local/share/ca-certificates/praefect.crt
  sudo update-ca-certificates
- On the Praefect clients (except Gitaly servers), edit storages in /home/git/gitlab/config/gitlab.yml as follows:
  gitlab:
    repositories:
      storages:
        default:
          gitaly_address: tls://LOAD_BALANCER_SERVER_ADDRESS:3305
          path: /some/local/path
  /some/local/path should be set to a local folder that exists; however, no data is stored in this folder. This requirement is scheduled to be removed when this issue is resolved.
- Save the file and restart GitLab.
- Copy all Praefect server certificates, or their certificate authority, to the system trusted certificates on each Gitaly server so the Praefect server trusts the certificate when called by Gitaly servers:
  sudo cp cert.pem /usr/local/share/ca-certificates/praefect.crt
  sudo update-ca-certificates
- Edit /home/git/praefect/config.toml and add:
  tls_listen_addr = '0.0.0.0:3305'
  [tls]
  certificate_path = '/etc/gitlab/ssl/cert.pem'
  key_path = '/etc/gitlab/ssl/key.pem'
- Save the file and restart GitLab.
Gitaly
To complete this section you need:
- Configured Praefect node
- 3 (or more) servers, with GitLab installed, to be configured as Gitaly nodes. These should be dedicated nodes; do not run other services on them.
Every Gitaly server assigned to the Praefect cluster needs to be configured. The configuration is the same as a normal standalone Gitaly server, except:
- The storage names are exposed to Praefect, not GitLab
- The secret token is shared with Praefect, not GitLab
The configuration of all Gitaly nodes in the Praefect cluster can be identical, because we rely on Praefect to route operations correctly.
Pay particular attention to the following:
- The gitaly['auth_token'] configured in this section must match the token value under praefect['virtual_storages']['nodes'] on the Praefect node. This was set in the previous section. This document uses the placeholder PRAEFECT_INTERNAL_TOKEN throughout.
- The storage names in git_data_dirs configured in this section must match the storage names under praefect['virtual_storages'] on the Praefect node. This was set in the previous section. This document uses gitaly-1, gitaly-2, and gitaly-3 as Gitaly storage names.
For more information on Gitaly server configuration, see our Gitaly documentation.
- SSH into the Gitaly node and login as root:
  sudo -i
- Disable all other services by editing /etc/gitlab/gitlab.rb:
  # Disable all other services on the Gitaly node
  postgresql['enable'] = false
  redis['enable'] = false
  nginx['enable'] = false
  grafana['enable'] = false
  puma['enable'] = false
  sidekiq['enable'] = false
  gitlab_workhorse['enable'] = false
  prometheus_monitoring['enable'] = false
  # Enable only the Gitaly service
  gitaly['enable'] = true
  # Enable Prometheus if needed
  prometheus['enable'] = true
  # Prevent database connections during 'gitlab-ctl reconfigure'
  gitlab_rails['auto_migrate'] = false
- Configure Gitaly to listen on network interfaces by editing /etc/gitlab/gitlab.rb:
  # Make Gitaly accept connections on all network interfaces.
  # Use firewalls to restrict access to this address/port.
  gitaly['listen_addr'] = '0.0.0.0:8075'
  # Enable Prometheus metrics access to Gitaly. You must use firewalls
  # to restrict access to this address/port.
  gitaly['prometheus_listen_addr'] = '0.0.0.0:9236'
- Configure a strong auth_token for Gitaly by editing /etc/gitlab/gitlab.rb. This is needed by clients to communicate with this Gitaly node. Typically, this token is the same for all Gitaly nodes.
  gitaly['auth_token'] = 'PRAEFECT_INTERNAL_TOKEN'
- Configure the GitLab Shell secret token, which is needed for git push operations. Either:
  - Method 1:
    - Copy /etc/gitlab/gitlab-secrets.json from the Gitaly client to the same path on the Gitaly servers and any other Gitaly clients.
    - Reconfigure GitLab on the Gitaly servers.
  - Method 2:
    - Edit /etc/gitlab/gitlab.rb.
    - Replace GITLAB_SHELL_SECRET_TOKEN with the real secret.
      gitlab_shell['secret_token'] = 'GITLAB_SHELL_SECRET_TOKEN'
- Configure the internal_api_url, which is also needed for git push operations:
  # Configure the gitlab-shell API callback URL. Without this, `git push` will
  # fail. This can be your front door GitLab URL or an internal load balancer.
  # Examples: 'https://gitlab.example.com', 'http://1.2.3.4'
  gitlab_rails['internal_api_url'] = 'http://GITLAB_HOST'
- Configure the storage location for Git data by setting git_data_dirs in /etc/gitlab/gitlab.rb. Each Gitaly node should have a unique storage name (such as gitaly-1).
  Instead of configuring git_data_dirs uniquely for each Gitaly node, it is often easier to include the configuration for all Gitaly nodes on every Gitaly node. This is supported because the Praefect virtual_storages configuration maps each storage name (such as gitaly-1) to a specific node, and requests are routed accordingly. This means every Gitaly node in your fleet can share the same configuration.
  # You can include the data dirs for all nodes in the same config, because
  # Praefect will only route requests according to the addresses provided in the
  # prior step.
  git_data_dirs({
    "gitaly-1" => { "path" => "/var/opt/gitlab/git-data" },
    "gitaly-2" => { "path" => "/var/opt/gitlab/git-data" },
    "gitaly-3" => { "path" => "/var/opt/gitlab/git-data" }
  })
- Save the changes to /etc/gitlab/gitlab.rb and reconfigure Gitaly:
  gitlab-ctl reconfigure
- To ensure that Gitaly has updated its Prometheus listen address, restart Gitaly:
  gitlab-ctl restart gitaly
The steps above must be completed for each Gitaly node!
After all Gitaly nodes are configured, run the Praefect connection checker to verify Praefect can connect to all Gitaly servers in the Praefect configuration.
- SSH into each Praefect node and run the Praefect connection checker:
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dial-nodes
Load Balancer
In a fault-tolerant Gitaly configuration, a load balancer is needed to route internal traffic from the GitLab application to the Praefect nodes. The specifics on which load balancer to use or the exact configuration is beyond the scope of the GitLab documentation.
The gitaly-ruby sidecar processes call into the main Gitaly process. gitaly-ruby uses the Gitaly address set in the GitLab server's git_data_dirs setting to make this connection.
We hope that if you're managing fault-tolerant systems like GitLab, you have a load balancer of choice already. Some examples include HAProxy (open source), Google Internal Load Balancer, AWS Elastic Load Balancer, F5 BIG-IP LTM, and Citrix NetScaler. This documentation outlines what ports and protocols you need to configure.
| LB Port | Backend Port | Protocol |
|---------|--------------|----------|
| 2305    | 2305         | TCP      |
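As a quick way to confirm that the load balancer forwards this port before updating the GitLab configuration, you can run a simple TCP check from the GitLab node. This is only a sketch and assumes netcat (nc) is installed:
nc -zv LOAD_BALANCER_SERVER_ADDRESS 2305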
GitLab
To complete this section, you need a configured Praefect cluster.
The Praefect cluster needs to be exposed as a storage location to the GitLab application. This is done by updating the git_data_dirs setting.
Pay particular attention to the following:
- The storage name added to git_data_dirs in this section must match the storage name under praefect['virtual_storages'] on the Praefect node(s). This was set in the Praefect section of this guide. This document uses default as the Praefect storage name.
- SSH into the GitLab node and login as root:
  sudo -i
- Configure the external_url so that files can be served by GitLab at the proper endpoint by editing /etc/gitlab/gitlab.rb.
  You need to replace GITLAB_SERVER_URL with the real external-facing URL on which the current GitLab instance is serving:
  external_url 'GITLAB_SERVER_URL'
- Disable the default Gitaly service running on the GitLab host. It isn't needed because GitLab connects to the configured cluster.
  If you have existing data stored on the default Gitaly storage, you should migrate the data to your Gitaly Cluster storage first.
  gitaly['enable'] = false
- Add the Praefect cluster as a storage location by editing /etc/gitlab/gitlab.rb.
  You need to replace:
  - LOAD_BALANCER_SERVER_ADDRESS with the IP address or hostname of the load balancer.
  - PRAEFECT_EXTERNAL_TOKEN with the real secret.
  If you are using TLS, the gitaly_address should begin with tls://.
  git_data_dirs({
    "default" => {
      "gitaly_address" => "tcp://LOAD_BALANCER_SERVER_ADDRESS:2305",
      "gitaly_token" => 'PRAEFECT_EXTERNAL_TOKEN'
    }
  })
- Configure the GitLab Shell secret token so that callbacks from Gitaly nodes during a git push are properly authenticated. Either:
  - Method 1:
    - Copy /etc/gitlab/gitlab-secrets.json from the Gitaly client to the same path on the Gitaly servers and any other Gitaly clients.
    - Reconfigure GitLab on the Gitaly servers.
  - Method 2:
    - Edit /etc/gitlab/gitlab.rb.
    - Replace GITLAB_SHELL_SECRET_TOKEN with the real secret.
      gitlab_shell['secret_token'] = 'GITLAB_SHELL_SECRET_TOKEN'
- Add Prometheus monitoring settings by editing /etc/gitlab/gitlab.rb. If Prometheus is enabled on a different node, make edits on that node instead.
  You need to replace:
  - PRAEFECT_HOST with the IP address or hostname of the Praefect node.
  - GITALY_HOST_* with the IP address or hostname of each Gitaly node.
  prometheus['scrape_configs'] = [
    {
      'job_name' => 'praefect',
      'static_configs' => [
        'targets' => [
          'PRAEFECT_HOST:9652', # praefect-1
          'PRAEFECT_HOST:9652', # praefect-2
          'PRAEFECT_HOST:9652', # praefect-3
        ]
      ]
    },
    {
      'job_name' => 'praefect-gitaly',
      'static_configs' => [
        'targets' => [
          'GITALY_HOST_1:9236', # gitaly-1
          'GITALY_HOST_2:9236', # gitaly-2
          'GITALY_HOST_3:9236', # gitaly-3
        ]
      ]
    }
  ]
- Save the changes to /etc/gitlab/gitlab.rb and reconfigure GitLab:
  gitlab-ctl reconfigure
- Verify that Git hooks can reach GitLab. On each Gitaly node, run:
  /opt/gitlab/embedded/bin/gitaly-hooks check /var/opt/gitlab/gitaly/config.toml
- Verify that GitLab can reach Praefect:
  gitlab-rake gitlab:gitaly:check
- Check that the Praefect storage is configured to store new repositories:
  - On the top bar, select Menu > Admin.
  - On the left sidebar, select Settings > Repository.
  - Expand the Repository storage section.
  Following this guide, the default storage should have weight 100 to store all new repositories.
- Verify everything is working by creating a new project. Check the "Initialize repository with a README" box so that there is content in the repository that can be viewed. If the project is created and you can see the README file, it works! Alternatively, you can create the test project through the API, as shown below.
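For example, the test project can also be created through the API. This is only a sketch, assuming a personal access token with API scope and the gitlab.example.com placeholder for your instance:
curl --request POST --header "Private-Token: <your_access_token>" \
     --data "name=gitaly-cluster-test&initialize_with_readme=true" \
     "https://gitlab.example.com/api/v4/projects"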
Use TCP for existing GitLab instances
When adding Gitaly Cluster to an existing Gitaly instance, the existing Gitaly storage must use a TCP address. If gitaly_address is not specified, a Unix socket is used, which prevents communication with the cluster.
For example:
git_data_dirs({
'default' => { 'gitaly_address' => 'tcp://old-gitaly.internal:8075' },
'cluster' => {
'gitaly_address' => 'tcp://<load_balancer_server_address>:2305',
'gitaly_token' => '<praefect_external_token>'
}
})
See Mixed Configuration for further information on running multiple Gitaly storages.
Grafana
Grafana is included with GitLab, and can be used to monitor your Praefect cluster. See Grafana Dashboard Service for detailed documentation.
To get started quickly:
- SSH into the GitLab node (or whichever node has Grafana enabled) and login as root:
  sudo -i
- Enable the Grafana login form by editing /etc/gitlab/gitlab.rb:
  grafana['disable_login_form'] = false
- Save the changes to /etc/gitlab/gitlab.rb and reconfigure GitLab:
  gitlab-ctl reconfigure
- Set the Grafana administrator password. This command prompts you to enter a new password:
  gitlab-ctl set-grafana-password
- In your web browser, open /-/grafana (such as https://gitlab.example.com/-/grafana) on your GitLab server.
  Login using the password you set, and the username admin.
- Go to Explore and query gitlab_build_info to verify that you are getting metrics from all your machines.
Congratulations! You’ve configured an observable fault-tolerant Praefect cluster.
Distributed reads
- Introduced in GitLab 13.1 in beta with feature flag gitaly_distributed_reads set to disabled.
- Made generally available and enabled by default in GitLab 13.3.
- Disabled by default in GitLab 13.5.
- Enabled by default in GitLab 13.8.
- Feature flag removed in GitLab 13.11.
Praefect supports distribution of read operations across Gitaly nodes that are configured for the virtual storage.
All RPCs marked with the ACCESSOR option, such as GetBlob, are redirected to an up-to-date and healthy Gitaly node.
Up to date in this context means that:
- There are no replication operations scheduled for this node.
- The last replication operation is in the completed state.
If there are no such nodes, or if any other error occurs during node selection, the primary node is chosen to serve the request.
To track the distribution of read operations, you can use the gitaly_praefect_read_distribution Prometheus counter metric. It has two labels:
- virtual_storage
- storage
They reflect the configuration defined for this instance of Praefect.
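For example, you can query the per-storage read rate from the Prometheus HTTP API. This is only a sketch; PROMETHEUS_HOST and port 9090 are placeholders for your Prometheus instance:
curl -sG 'http://PROMETHEUS_HOST:9090/api/v1/query' \
     --data-urlencode 'query=sum by (virtual_storage, storage) (rate(gitaly_praefect_read_distribution[5m]))'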
Strong consistency
- Introduced in GitLab 13.1 in alpha, disabled by default.
- Entered beta in GitLab 13.2, disabled by default.
- In GitLab 13.3, disabled unless primary-wins voting strategy is disabled.
- From GitLab 13.4, enabled by default.
- From GitLab 13.5, you must use Git v2.28.0 or higher on Gitaly nodes to enable strong consistency.
- From GitLab 13.6, the primary-wins voting strategy and the gitaly_reference_transactions_primary_wins feature flag were removed from the source code.
Praefect guarantees eventual consistency by replicating all writes to secondary nodes after the write to the primary Gitaly node has happened.
Praefect can instead provide strong consistency by creating a transaction and writing changes to all Gitaly nodes at once. If enabled, transactions are only available for a subset of RPCs. For more information, see the strong consistency epic.
To enable strong consistency:
- In GitLab 13.5, you must use Git v2.28.0 or higher on Gitaly nodes to enable strong consistency.
- In GitLab 13.4 and later, the strong consistency voting strategy has been improved and enabled by default. Instead of requiring all nodes to agree, only the primary and half of the secondaries need to agree.
- In GitLab 13.3, reference transactions are enabled by default with a primary-wins strategy.
This strategy causes all transactions to succeed for the primary and thus does not ensure strong consistency.
To enable strong consistency, disable the :gitaly_reference_transactions_primary_wins feature flag.
- In GitLab 13.2, enable the :gitaly_reference_transactions feature flag.
- In GitLab 13.1, enable the :gitaly_reference_transactions and :gitaly_hooks_rpc feature flags.
Changing feature flags requires access to the Rails console. In the Rails console, enable or disable the flags as required. For example:
Feature.enable(:gitaly_reference_transactions)
Feature.disable(:gitaly_reference_transactions_primary_wins)
To monitor strong consistency, you can use the following Prometheus metrics:
- gitaly_praefect_transactions_total: Number of transactions created and voted on.
- gitaly_praefect_subtransactions_per_transaction_total: Number of times nodes cast a vote for a single transaction. This can happen multiple times if multiple references are getting updated in a single transaction.
- gitaly_praefect_voters_per_transaction_total: Number of Gitaly nodes taking part in a transaction.
- gitaly_praefect_transactions_delay_seconds: Server-side delay introduced by waiting for the transaction to be committed.
- gitaly_hook_transaction_voting_delay_seconds: Client-side delay introduced by waiting for the transaction to be committed.
Replication factor
Replication factor is the number of copies Praefect maintains of a given repository. A higher replication factor offers better redundancy and distribution of read workload, but also results in a higher storage cost. By default, Praefect replicates repositories to every storage in a virtual storage.
Configure replication factor
Praefect supports configuring a replication factor on a per-repository basis, by assigning specific storage nodes to host a repository.
Praefect does not store the actual replication factor, but assigns enough storages to host the repository so the desired replication factor is met. If a storage node is later removed from the virtual storage, the replication factor of repositories assigned to the storage is decreased accordingly.
You can configure:
- A default replication factor for each virtual storage that is applied to newly created repositories. The configuration is added to the /etc/gitlab/gitlab.rb file:
  praefect['virtual_storages'] = {
    'default' => {
      'default_replication_factor' => 1,
      # ...
    }
  }
- A replication factor for an existing repository using the set-replication-factor sub-command. set-replication-factor automatically assigns or unassigns random storage nodes as necessary to reach the desired replication factor. The repository's primary node is always assigned first and is never unassigned.
  sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml set-replication-factor -virtual-storage <virtual-storage> -repository <relative-path> -replication-factor <replication-factor>
  - -virtual-storage is the virtual storage the repository is located in.
  - -repository is the repository's relative path in the storage.
  - -replication-factor is the desired replication factor of the repository. The minimum value is 1, as the primary needs a copy of the repository. The maximum replication factor is the number of storages in the virtual storage.
  On success, the assigned host storages are printed. For example:
  $ sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml set-replication-factor -virtual-storage default -repository @hashed/3f/db/3fdba35f04dc8c462986c992bcf875546257113072a909c162f7e470e581e278.git -replication-factor 2
  current assignments: gitaly-1, gitaly-2
Automatic failover and primary election strategies
Praefect regularly checks the health of each Gitaly node. This is used to automatically fail over to a newly-elected primary Gitaly node if the current primary node is found to be unhealthy.
We recommend using repository-specific primary nodes. This is planned to be the only available election strategy from GitLab 14.0.
Repository-specific primary nodes
Introduced in GitLab 13.12.
Gitaly Cluster supports electing repository-specific primary Gitaly nodes. Repository-specific Gitaly primary nodes are enabled in /etc/gitlab/gitlab.rb by setting praefect['failover_election_strategy'] = 'per_repository'.
Praefect’s deprecated election strategies:
- Elected a primary Gitaly node for each virtual storage, which was used as the primary node for each repository in the virtual storage.
- Prevented horizontal scaling of a virtual storage. The primary Gitaly node needed a replica of each repository and thus became the bottleneck.
The per_repository
election strategy solves this problem by electing a primary Gitaly node separately for each
repository. Combined with configurable replication factors, you can
horizontally scale storage capacity and distribute write load across Gitaly nodes.
Primary elections are run when:
- Praefect starts up.
- The cluster’s consensus of a Gitaly node’s health changes.
A Gitaly node is considered:
- Healthy, if >=50% of Praefect nodes have successfully health checked the Gitaly node in the previous ten seconds.
- Unhealthy otherwise.
During an election run, Praefect elects a new primary Gitaly node for each repository that has an unhealthy primary Gitaly node. The election is made:
- Randomly from healthy secondary Gitaly nodes that are the most up to date.
- Only from Gitaly nodes assigned to host the repository.
If there are no healthy secondary nodes for a repository:
- The unhealthy primary node is demoted and the repository is left without a primary node.
- Operations that require a primary node fail until a primary is successfully elected.
Migrate to repository-specific primary Gitaly nodes
New Gitaly Clusters can start using the per_repository
election strategy immediately.
To migrate existing clusters:
- Praefect nodes didn't historically keep database records of every repository stored on the cluster. When the per_repository election strategy is configured, Praefect expects to have database records of each repository. A background migration is included in GitLab 13.6 and later to create any missing database records for repositories. Before migrating, you should verify the migration has run by checking Praefect's logs:
  Check Praefect's logs for the repository importer finished message. The virtual_storages field contains the names of virtual storages and whether they've had any missing database records created.
  For example, the default virtual storage has been successfully migrated:
  {"level":"info","msg":"repository importer finished","pid":19752,"time":"2021-04-28T11:41:36.743Z","virtual_storages":{"default":true}}
  If a virtual storage has not been successfully migrated, it would have false next to it:
  {"level":"info","msg":"repository importer finished","pid":19752,"time":"2021-04-28T11:41:36.743Z","virtual_storages":{"default":false}}
  The migration is run when Praefect starts up. If the migration is unsuccessful, you can restart a Praefect node to reattempt it. The migration only runs with the sql election strategy configured.
- Running two different election strategies side by side can cause a split brain, where different Praefect nodes consider repositories to have different primaries. This can be avoided in one of two ways:
  - If a short downtime is acceptable:
    - Shut down all Praefect nodes before changing the election strategy. Do this by running gitlab-ctl stop praefect on the Praefect nodes.
    - On the Praefect nodes, configure the election strategy in /etc/gitlab/gitlab.rb with praefect['failover_election_strategy'] = 'per_repository'.
    - Run gitlab-ctl reconfigure && gitlab-ctl start to reconfigure and start the Praefects.
  - If downtime is unacceptable:
    - Determine which Gitaly node is the current primary.
    - Comment out the secondary Gitaly nodes from the virtual storage's configuration in /etc/gitlab/gitlab.rb on all Praefect nodes. This ensures there's only one Gitaly node configured, causing both election strategies to elect the same Gitaly node as the primary.
    - Run gitlab-ctl reconfigure on all Praefect nodes. Wait until all Praefect processes have restarted and the old processes have exited. This can take up to one minute.
    - On all Praefect nodes, configure the election strategy in /etc/gitlab/gitlab.rb with praefect['failover_election_strategy'] = 'per_repository'.
    - Run gitlab-ctl reconfigure on all Praefect nodes. Wait until all of the Praefect processes have restarted and the old processes have exited. This can take up to one minute.
    - Uncomment the secondary Gitaly node configuration commented out in the earlier step on all Praefect nodes.
    - Run gitlab-ctl reconfigure on all Praefect nodes to reconfigure and restart the Praefect processes.
Deprecated election strategies
- PostgreSQL: Enabled by default until GitLab 14.0, and equivalent to praefect['failover_election_strategy'] = 'sql'.
  This configuration option:
  - Allows multiple Praefect nodes to coordinate via the PostgreSQL database to elect a primary Gitaly node.
  - Causes Praefect nodes to elect a new primary Gitaly node, monitor its health, and elect a new primary Gitaly node if the current one is not reached within 10 seconds by a majority of the Praefect nodes.
- Memory: Enabled by setting praefect['failover_election_strategy'] = 'local' in /etc/gitlab/gitlab.rb on the Praefect node.
  If a sufficient number of health checks fail for the current primary Gitaly node, a new primary is elected. Do not use with multiple Praefect nodes! Using it with multiple Praefect nodes is likely to result in a split brain.
Primary Node Failure
Gitaly Cluster recovers from a failing primary Gitaly node by promoting a healthy secondary as the new primary.
To minimize data loss, Gitaly Cluster:
- Switches repositories that are outdated on the new primary to read-only mode.
- Elects the secondary with the least unreplicated writes from the primary to be the new primary. Because there can still be some unreplicated writes, data loss can occur.
Read-only mode
- Introduced in GitLab 13.0 as generally available.
- Between GitLab 13.0 and GitLab 13.2, read-only mode applied to the whole virtual storage and occurred whenever failover occurred.
- In GitLab 13.3 and later, read-only mode applies on a per-repository basis and only occurs if a new primary is out of date.
When Gitaly Cluster switches to a new primary, repositories enter read-only mode if they are out of date. This can happen after failing over to an outdated secondary. Read-only mode eases data recovery efforts by preventing writes that may conflict with the unreplicated writes on other nodes.
To enable writes again, an administrator can:
- Check for data loss.
- Attempt to recover missing data.
- Either enable writes in the virtual storage or accept data loss if necessary, depending on the version of GitLab.
Check for data loss
The Praefect dataloss
sub-command identifies replicas that are likely to be outdated. This can help
identify potential data loss after a failover. The following parameters are
available:
- -virtual-storage specifies which virtual storage to check. The default behavior is to display outdated replicas of read-only repositories, as they might require administrator action.
- In GitLab 13.3 and later, -partially-replicated specifies whether to display a list of outdated replicas of writable repositories.
The dataloss sub-command is still in beta and the output format is subject to change.
To check for repositories with outdated primaries, run:
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dataloss [-virtual-storage <virtual-storage>]
Every configured virtual storage is checked if none is specified:
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dataloss
Repositories which have assigned storage nodes that contain an outdated copy of the repository are listed in the output. This information is printed for each repository:
- A repository’s relative path to the storage directory identifies each repository and groups the related information.
- The repository’s current status is printed in parentheses next to the disk path. If the repository’s primary
is outdated, the repository is in
read-only
mode and can’t accept writes. Otherwise, the mode iswritable
. - The primary field lists the repository’s current primary. If the repository has no primary, the field shows
No Primary
. - The In-Sync Storages lists replicas which have replicated the latest successful write and all writes preceding it.
- The Outdated Storages lists replicas which contain an outdated copy of the repository. Replicas which have no copy of the repository but should contain it are also listed here. The maximum number of changes the replica is missing is listed next to replica. It’s important to notice that the outdated replicas may be fully up to date or contain later changes but Praefect can’t guarantee it.
Whether a replica is assigned to host the repository is listed with each replica’s status. assigned host
is printed
next to replicas which are assigned to store the repository. The text is omitted if the replica contains a copy of
the repository but is not assigned to store the repository. Such replicas aren’t kept in-sync by Praefect, but may
act as replication sources to bring assigned replicas up to date.
Example output:
Virtual storage: default
Outdated repositories:
@hashed/3f/db/3fdba35f04dc8c462986c992bcf875546257113072a909c162f7e470e581e278.git (read-only):
Primary: gitaly-1
In-Sync Storages:
gitaly-2, assigned host
Outdated Storages:
gitaly-1 is behind by 3 changes or less, assigned host
gitaly-3 is behind by 3 changes or less
A confirmation is printed out when every repository is writable. For example:
Virtual storage: default
All repositories are writable!
Outdated replicas of writable repositories
Introduced in GitLab 13.3.
To also list information about repositories whose primary is up to date but one or more assigned replicas are outdated, use the -partially-replicated flag.
A repository is writable if the primary has the latest changes. Secondaries might be temporarily outdated while they are waiting to replicate the latest changes.
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dataloss [-virtual-storage <virtual-storage>] [-partially-replicated]
Example output:
Virtual storage: default
Outdated repositories:
@hashed/3f/db/3fdba35f04dc8c462986c992bcf875546257113072a909c162f7e470e581e278.git (writable):
Primary: gitaly-1
In-Sync Storages:
gitaly-1, assigned host
Outdated Storages:
gitaly-2 is behind by 3 changes or less, assigned host
gitaly-3 is behind by 3 changes or less
With the -partially-replicated flag set, a confirmation is printed out if every assigned replica is fully up to date.
For example:
Virtual storage: default
All repositories are up to date!
Check repository checksums
To check a project’s repository checksums across on all Gitaly nodes, run the replicas Rake task on the main GitLab node.
Enable writes or accept data loss
Praefect provides the following sub-commands to re-enable writes:
- In GitLab 13.2 and earlier, use enable-writes to re-enable the virtual storage for writes after data recovery attempts:
  sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml enable-writes -virtual-storage <virtual-storage>
- In GitLab 13.3 and later, use accept-dataloss to accept data loss and re-enable writes for repositories after data recovery attempts have failed. Accepting data loss causes the current version of the repository on the authoritative storage to be considered the latest. Other storages are brought up to date with the authoritative storage by scheduling replication jobs:
  sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml accept-dataloss -virtual-storage <virtual-storage> -repository <relative-path> -authoritative-storage <storage-name>
accept-dataloss causes permanent data loss by overwriting other versions of the repository. Data recovery efforts must be performed before using it.
Data recovery
If a Gitaly node fails replication jobs for any reason, it ends up hosting outdated versions of the affected repositories. Praefect provides tools for:
- Automatic reconciliation, for GitLab 13.4 and later.
- Manual reconciliation, for:
  - GitLab 13.3 and earlier.
  - Repositories upgraded to GitLab 13.4 and later without entries in the repositories table. In GitLab 13.6 and later, a migration is run when Praefect starts for these repositories.
These tools reconcile the outdated repositories to bring them fully up to date again.
Automatic reconciliation
Introduced in GitLab 13.4.
Praefect automatically reconciles repositories that are not up to date. By default, this is done every five minutes. For each outdated repository on a healthy Gitaly node, Praefect picks a random, fully up-to-date replica of the repository on another healthy Gitaly node to replicate from. A replication job is scheduled only if there are no other replication jobs pending for the target repository.
The reconciliation frequency can be changed via the configuration. The value can be any valid Go duration value. Values below 0 disable the feature.
Examples:
praefect['reconciliation_scheduling_interval'] = '5m' # the default value
praefect['reconciliation_scheduling_interval'] = '30s' # reconcile every 30 seconds
praefect['reconciliation_scheduling_interval'] = '0' # disable the feature
Manual reconciliation
The reconcile sub-command is deprecated and scheduled for removal in GitLab 14.0. Use automatic reconciliation instead. Manual reconciliation may produce excess replication jobs and is limited in functionality. Manual reconciliation does not work when repository-specific primary nodes are enabled.
The Praefect reconcile sub-command allows for manual reconciliation between two Gitaly nodes. The command replicates every repository that is on a later version on the reference storage to the target storage.
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml reconcile -virtual <virtual-storage> -reference <up-to-date-storage> -target <outdated-storage> -f
- Replace the placeholder <virtual-storage> with the virtual storage containing the Gitaly node storage to be checked.
- Replace the placeholder <up-to-date-storage> with the Gitaly storage name containing up-to-date repositories.
- Replace the placeholder <outdated-storage> with the Gitaly storage name containing outdated repositories.
Migrate to Gitaly Cluster
Whether you are migrating to Gitaly Cluster because of NFS support deprecation or moving away from single Gitaly nodes, the basic process involves:
- Creating the required storage.
- Creating and configuring Gitaly Cluster.
- Moving the repositories.
When creating the storage, see some repository storage recommendations.
Move Repositories
To migrate to Gitaly Cluster, existing repositories stored outside Gitaly Cluster must be moved. There is no automatic migration but the moves can be scheduled with the GitLab API.
GitLab repositories can be associated with projects, groups, and snippets. Each of these types has a separate API to schedule the respective repositories to move. To move all repositories on a GitLab instance, each of these types must be scheduled to move for each storage.
Each repository is made read-only for the duration of the move. The repository is not writable until the move has completed.
After creating and configuring Gitaly Cluster:
- Ensure all storages are accessible to the GitLab instance. In this example, these are <original_storage_name> and <cluster_storage_name>.
- Configure repository storage weights so that the Gitaly Cluster receives all new projects. This stops new projects from being created on existing Gitaly nodes while the migration is in progress.
- Schedule repository moves for:
  - Projects.
  - Snippets.
  - Groups.
Bulk schedule project moves
- Schedule repository storage moves for all projects on a storage shard using the API. For example:
  curl --request POST --header "Private-Token: <your_access_token>" \
       --header "Content-Type: application/json" \
       --data '{"source_storage_name":"<original_storage_name>","destination_storage_name":"<cluster_storage_name>"}' \
       "https://gitlab.example.com/api/v4/project_repository_storage_moves"
- Query the most recent repository moves using the API (see the example request after this list). The query indicates either:
  - The moves have completed successfully. The state field is finished.
  - The moves are in progress. Re-query the repository move until it completes successfully.
  - The moves have failed. Most failures are temporary and are solved by rescheduling the move.
- After the moves are complete, query projects using the API to confirm that all projects have moved. No projects should be returned with the repository_storage field set to the old storage.
  curl --header "Private-Token: <your_access_token>" --header "Content-Type: application/json" \
       "https://gitlab.example.com/api/v4/projects?repository_storage=<original_storage_name>"
  Alternatively, use the Rails console to confirm that all projects have moved. Run the following in the Rails console:
  ProjectRepository.for_repository_storage('<original_storage_name>')
- Repeat for each storage as required.
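A minimal example of the query described in step 2 above; this is only a sketch, assuming a personal access token with API scope. Inspect the state field of each returned move:
curl --header "Private-Token: <your_access_token>" \
     "https://gitlab.example.com/api/v4/project_repository_storage_moves"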
Bulk schedule snippet moves
- Schedule repository storage moves for all snippets on a storage shard using the API. For example:
  curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
       --header "Content-Type: application/json" \
       --data '{"source_storage_name":"<original_storage_name>","destination_storage_name":"<cluster_storage_name>"}' \
       "https://gitlab.example.com/api/v4/snippet_repository_storage_moves"
- Query the most recent repository moves using the API. The query indicates either:
  - The moves have completed successfully. The state field is finished.
  - The moves are in progress. Re-query the repository move until it completes successfully.
  - The moves have failed. Most failures are temporary and are solved by rescheduling the move.
- After the moves are complete, use the Rails console to confirm that all snippets have moved. No snippets should be returned for the original storage. Run the following in the Rails console:
  SnippetRepository.for_repository_storage('<original_storage_name>')
- Repeat for each storage as required.
Bulk schedule group moves
- Schedule repository storage moves for all groups on a storage shard using the API. For example:
  curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
       --header "Content-Type: application/json" \
       --data '{"source_storage_name":"<original_storage_name>","destination_storage_name":"<cluster_storage_name>"}' \
       "https://gitlab.example.com/api/v4/group_repository_storage_moves"
- Query the most recent repository moves using the API. The query indicates either:
  - The moves have completed successfully. The state field is finished.
  - The moves are in progress. Re-query the repository move until it completes successfully.
  - The moves have failed. Most failures are temporary and are solved by rescheduling the move.
- After the moves are complete, use the Rails console to confirm that all groups have moved. No groups should be returned for the original storage. Run the following in the Rails console:
  GroupWikiRepository.for_repository_storage('<original_storage_name>')
- Repeat for each storage as required.