- Background migrations
- Version-specific changes
- Mandatory upgrade paths for version upgrades
- Updating methods
- Update Community Edition to Enterprise Edition
- Zero downtime updates
- Upgrade Gitaly servers
- Downgrade
- Troubleshooting
- GitLab 13.7 and later unavailable on Amazon Linux 2
- Get the status of a GitLab installation
- RPM ‘package is already installed’ error
- Package obsoleted by installed package
- 500 error when accessing Project > Settings > Repository on Omnibus installs
- Error `Failed to connect to the internal GitLab API` on a separate GitLab Pages server
Update GitLab installed with the Omnibus GitLab package
Before following these instructions, note the following:
- The supported upgrade paths documentation has suggestions on when to upgrade.
- If you are upgrading from a non-Omnibus installation to an Omnibus installation, see Upgrading from a non-Omnibus installation to an Omnibus installation.
Background migrations
To see the current size of the `background_migration` queue, check for background migrations before upgrading.
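As an illustration, the check can be scripted. The `gitlab-rails runner` invocation below is a sketch based on the standard Sidekiq queue API (the queue name comes from the text above), and `queue_is_empty` is a made-up helper that simply treats a zero count as safe to proceed:

```shell
# Sketch: read the background_migration queue size on the GitLab node.
# The Sidekiq::Queue call is the standard Sidekiq API; adjust if your
# version differs.
background_migration_queue_size() {
  sudo gitlab-rails runner 'puts Sidekiq::Queue.new("background_migration").size'
}

# Pure helper: decide whether a given queue size means it is safe to upgrade.
queue_is_empty() {
  [ "${1:-0}" -eq 0 ]
}
```

If `queue_is_empty "$(background_migration_queue_size)"` fails, wait for the background migrations to finish before upgrading.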
Version-specific changes
We recommend performing upgrades between major and minor releases no more than once per week, to allow time for background migrations to finish. Decrease the time required to complete these migrations by increasing the number of Sidekiq workers that can process jobs in the `background_migration` queue.
Updating to major versions might need some manual intervention. For more information, check the version you are updating to:
Mandatory upgrade paths for version upgrades
From GitLab 10.8, upgrade paths are enforced for version upgrades by default. This prevents direct upgrades that skip major versions (for example, 10.3 to 12.7 in one jump), which can break GitLab installations for several reasons, such as deprecated or removed configuration settings and upgrades of internal tools and libraries. Users must follow the official upgrade paths when upgrading their GitLab instances.
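The skip-a-major-version rule can be expressed as a small pre-flight guard. This is a hypothetical helper, not an official tool; it only compares the leading major component of the two version strings:

```shell
# Hypothetical guard: refuse upgrades that skip a major version
# (for example 10.3 -> 12.7). Only the leading "major" component is compared.
check_upgrade_path() {
  from_major=${1%%.*}
  to_major=${2%%.*}
  if [ $((to_major - from_major)) -gt 1 ]; then
    echo "refusing: $1 -> $2 skips major version $((from_major + 1))" >&2
    return 1
  fi
  echo "ok: $1 -> $2"
}
```

For example, `check_upgrade_path 12.10.14 13.0.14` succeeds, while `check_upgrade_path 10.3.0 12.7.0` fails because the jump skips version 11.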
Updating methods
There are two ways to update Omnibus GitLab:
Both will automatically back up the GitLab database before installing a newer GitLab version. You may skip this automatic backup by creating an empty file at `/etc/gitlab/skip-auto-backup`:

sudo touch /etc/gitlab/skip-auto-backup

For safety reasons, you should maintain an up-to-date backup on your own if you plan to use this flag.
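One way to enforce that safety rule is to only create the flag when a recent backup archive exists. This is a sketch: the `*_gitlab_backup.tar` file pattern and the default `/var/opt/gitlab/backups` directory are assumptions you should adjust to your own backup configuration:

```shell
# Sketch: check whether a backup archive newer than N minutes exists.
# Directory and file pattern are assumptions; adjust to your gitlab.rb
# backup settings.
backup_is_recent() {
  dir=$1
  max_age_minutes=$2
  [ -n "$(find "$dir" -name '*_gitlab_backup.tar' -mmin "-$max_age_minutes" 2>/dev/null)" ]
}

# Example policy: create the skip flag only if a backup newer than 24 hours exists.
maybe_skip_auto_backup() {
  if backup_is_recent "${1:-/var/opt/gitlab/backups}" 1440; then
    sudo touch /etc/gitlab/skip-auto-backup
  else
    echo "no recent backup found; keeping automatic backup enabled" >&2
    return 1
  fi
}
```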
When upgrading to a new major version, remember to first check for background migrations.
Unless you are following the steps in Zero downtime updates, your GitLab application will not be available to users while an update is in progress. They will either see a “Deploy in progress” message or a “502” error in their web browser.
Update using the official repositories
If you have installed Omnibus GitLab Community Edition or Enterprise Edition, then the official GitLab repository should have already been set up for you.
To update to the newest GitLab version, run:
-
For GitLab Enterprise Edition:

  # Debian/Ubuntu
  sudo apt-get update
  sudo apt-get install gitlab-ee

  # CentOS/RHEL
  sudo yum install gitlab-ee

-
For GitLab Community Edition:

  # Debian/Ubuntu
  sudo apt-get update
  sudo apt-get install gitlab-ce

  # CentOS/RHEL
  sudo yum install gitlab-ce
Multi-step upgrade using the official repositories
Linux package managers default to installing the latest available version of a package during installation and upgrades. Upgrading directly to the latest major version can be problematic for older GitLab versions that require a multi-stage upgrade path.
When following an upgrade path spanning multiple versions, for each upgrade, specify the intended GitLab version number in your package manager’s install or upgrade command:
-
First, identify the GitLab version number in your package manager:

  # Ubuntu/Debian
  sudo apt-cache madison gitlab-ee

  # RHEL/CentOS 6 and 7
  yum --showduplicates list gitlab-ee

  # RHEL/CentOS 8
  dnf search gitlab-ee*

-
Then install the specific GitLab package:

  # Ubuntu/Debian
  sudo apt upgrade gitlab-ee=12.0.12-ee.0

  # RHEL/CentOS 6 and 7
  yum install gitlab-ee-12.0.12-ee.0.el7

  # RHEL/CentOS 8
  dnf install gitlab-ee-12.0.12-ee.0.el8

  # SUSE
  zypper install gitlab-ee=12.0.12-ee.0
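The pinned-version syntax differs per package manager, which is easy to get wrong mid-upgrade. As an illustration, a helper can build the pinned package argument; `pin_spec` is a made-up name and simply mirrors the formats shown in the commands above:

```shell
# Illustrative helper: build the pinned package argument for each
# package manager family, mirroring the formats used above.
pin_spec() {
  family=$1     # debian | el7 | el8 | suse
  version=$2    # e.g. 12.0.12
  case $family in
    debian|suse) echo "gitlab-ee=${version}-ee.0" ;;
    el7)         echo "gitlab-ee-${version}-ee.0.el7" ;;
    el8)         echo "gitlab-ee-${version}-ee.0.el8" ;;
    *)           echo "unknown family: $family" >&2; return 1 ;;
  esac
}
```

For example, `sudo apt upgrade "$(pin_spec debian 12.0.12)"` expands to the Debian command shown above.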
Update using a manually-downloaded package
If for some reason you don’t use the official repositories, you can download the package and install it manually.
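Once the package file is downloaded, its extension determines the low-level install command. A minimal sketch (the dispatch function is hypothetical; `dpkg -i` and `rpm -Uvh` are the standard low-level install commands):

```shell
# Sketch: pick the low-level install command from the package extension.
pkg_install_cmd() {
  case $1 in
    *.deb) echo "dpkg -i $1" ;;
    *.rpm) echo "rpm -Uvh $1" ;;
    *)     echo "unsupported package: $1" >&2; return 1 ;;
  esac
}
```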
Update Community Edition to Enterprise Edition
To upgrade an existing GitLab Community Edition (CE) server installed using the Omnibus GitLab packages to GitLab Enterprise Edition (EE), you install the EE package on top of CE.
Upgrading from the same version of CE to EE is not explicitly necessary, and any standard upgrade (for example, CE 12.0 to EE 12.1) should work. However, in the following steps we assume that you are upgrading the same version (for example, CE 12.1 to EE 12.1), which is recommended.
The steps can be summarized as follows:
-
Find the currently installed GitLab version:

  For Debian/Ubuntu:

  sudo apt-cache policy gitlab-ce | grep Installed

  The output should be similar to `Installed: 13.0.4-ce.0`. In that case, the equivalent Enterprise Edition version will be `13.0.4-ee.0`. Write this value down.

  For CentOS/RHEL:

  sudo rpm -q gitlab-ce

  The output should be similar to `gitlab-ce-13.0.4-ce.0.el8.x86_64`. In that case, the equivalent Enterprise Edition version will be `gitlab-ee-13.0.4-ee.0.el8.x86_64`. Write this value down.
-
Add the `gitlab-ee` Apt or Yum repository:

  For Debian/Ubuntu:

  curl -s "https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh" | sudo bash

  For CentOS/RHEL:

  curl -s "https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.rpm.sh" | sudo bash

  The above command will find your OS version and automatically set up the repository. If you are not comfortable installing the repository through a piped script, you can first check its contents.
-
Next, install the `gitlab-ee` package. Note that this will automatically uninstall the `gitlab-ce` package on your GitLab server. Reconfigure Omnibus right after the `gitlab-ee` package is installed. Make sure that you install the exact same GitLab version:

  For Debian/Ubuntu:

  ## Make sure the repositories are up-to-date
  sudo apt-get update

  ## Install the package using the version you wrote down from step 1
  sudo apt-get install gitlab-ee=13.0.4-ee.0

  ## Reconfigure GitLab
  sudo gitlab-ctl reconfigure

  For CentOS/RHEL:

  ## Install the package using the version you wrote down from step 1
  sudo yum install gitlab-ee-13.0.4-ee.0.el8.x86_64

  ## Reconfigure GitLab
  sudo gitlab-ctl reconfigure
-
Now go to the GitLab admin panel of your server (`/admin/license/new`) and upload your license file.

-
After you confirm that GitLab is working as expected, you may remove the old Community Edition repository:

  For Debian/Ubuntu:

  sudo rm /etc/apt/sources.list.d/gitlab_gitlab-ce.list

  For CentOS/RHEL:

  sudo rm /etc/yum.repos.d/gitlab_gitlab-ce.repo
That’s it! You can now use GitLab Enterprise Edition! To update to a newer version, follow Update using the official repositories.
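The CE-to-EE version mapping done by hand in step 1 is mechanical, so it can be scripted. `ce_to_ee` is a hypothetical helper that simply rewrites the `ce` markers in a version or package string:

```shell
# Hypothetical helper: derive the EE package/version string from the
# installed CE one, as done manually in step 1.
ce_to_ee() {
  echo "$1" | sed -e 's/gitlab-ce/gitlab-ee/' -e 's/-ce\./-ee./'
}
```

For example, `ce_to_ee 13.0.4-ce.0` prints `13.0.4-ee.0`, and `ce_to_ee gitlab-ce-13.0.4-ce.0.el8.x86_64` prints `gitlab-ee-13.0.4-ee.0.el8.x86_64`.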
If you want to use `dpkg`/`rpm` instead of `apt-get`/`yum`, go through the first step to find the current GitLab version and then follow Update using a manually-downloaded package.

Zero downtime updates
It’s possible to upgrade to a newer version of GitLab without having to take your GitLab instance offline.
Verify that you can upgrade with no downtime by checking the Upgrading without downtime section of the update document.
If you meet all the requirements above, follow these instructions in order. There are three sets of steps, depending on your deployment type:
| Deployment type | Description |
|---|---|
| Single-node | GitLab CE/EE on a single node |
| Gitaly Cluster | GitLab CE/EE using HA architecture for Gitaly Cluster |
| Multi-node / PostgreSQL HA | GitLab CE/EE using HA architecture for PostgreSQL |
| Multi-node / Redis HA | GitLab CE/EE using HA architecture for Redis |
| Geo | GitLab EE with Geo enabled |
| Multi-node / HA with Geo | GitLab CE/EE on multiple nodes |
Each type of deployment will require that you hot reload the `puma` and `sidekiq` processes on all nodes running these services after you've upgraded. The reason for this is that those processes each load the GitLab Rails application, which reads and loads the database schema into memory when starting up. Each of these processes will need to be reloaded (or restarted in the case of `sidekiq`) to re-read any database changes that have been made by post-deployment migrations.
Single-node deployment
Before following these instructions, note the following important information:
- You can only upgrade one minor release at a time (for example, from 13.6 to 13.7, not from 13.6 to 13.8). If you attempt more than one minor release, the upgrade may fail.
- On single-node Omnibus deployments, updates with no downtime are not possible when using Puma because Puma always requires a complete restart. This is because the phased restart feature of Puma does not work with the way it is configured in GitLab all-in-one packages (cluster-mode with app preloading).
- While it is possible to minimize downtime on a single-node instance by following these instructions, it is not possible to always achieve true zero downtime updates. Users may see some connections timeout or be refused for a few minutes, depending on which services need to restart.
-
Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. During software installation only, this will prevent the upgrade from running `gitlab-ctl reconfigure` and automatically running database migrations:

  sudo touch /etc/gitlab/skip-auto-reconfigure
-
Update the GitLab package:

  -
  For GitLab Community Edition:

    # Debian/Ubuntu
    sudo apt-get update
    sudo apt-get install gitlab-ce

    # CentOS/RHEL
    sudo yum install gitlab-ce

  -
  For GitLab Enterprise Edition:

    # Debian/Ubuntu
    sudo apt-get update
    sudo apt-get install gitlab-ee

    # CentOS/RHEL
    sudo yum install gitlab-ee
-
To get the regular migrations and latest code in place, run:

  sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-ctl reconfigure
-
Once the node is updated and `reconfigure` has finished successfully, complete the migrations with:

  sudo gitlab-rake db:migrate
-
Hot reload the `puma` and `sidekiq` services:

  sudo gitlab-ctl hup puma
  sudo gitlab-ctl restart sidekiq
If you do not want to run zero downtime upgrades in the future, make sure you remove `/etc/gitlab/skip-auto-reconfigure` after you've completed these steps.
Multi-node / HA deployment
You can only upgrade one minor release at a time (for example, from 13.6 to 13.7, not from 13.6 to 13.8). If you attempt more than one minor release, the upgrade may fail.
Use a load balancer in front of web (Puma) nodes
With Puma, single-node zero-downtime updates are no longer possible. To achieve HA with zero-downtime updates, at least two nodes are required, used with a load balancer that distributes the connections properly across both nodes.
The load balancer in front of the application nodes must be configured to use proper health check endpoints to determine whether the service is accepting traffic. For Puma, the `/-/readiness` endpoint should be used, while the `/readiness` endpoint can be used for Sidekiq and other services.
Upgrades on web (Puma) nodes must be done in a rolling manner, one after another, ensuring at least one node is always up to serve traffic. This is required to ensure zero-downtime.
Puma processes will enter a blackout period as part of the upgrade, during which they continue to accept connections but mark their respective health check endpoints as unhealthy. On seeing this, the load balancer should disconnect them gracefully.
Puma will restart only after completing all in-flight requests. This ensures data and service integrity. Once the processes have restarted, their health check endpoints will be marked healthy again.
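A load balancer drain check can be sketched around the `/-/readiness` endpoint described above. The function names and the host:port value are placeholders, not part of any official tooling:

```shell
# Sketch: probe a web node's readiness endpoint (host:port is a placeholder).
readiness_code() {
  curl -s -o /dev/null -w '%{http_code}' "http://$1/-/readiness"
}

# A node should only receive traffic while the endpoint answers HTTP 200;
# during the Puma blackout period it reports an unhealthy status instead.
status_is_healthy() {
  [ "$1" = "200" ]
}
```

For example, a drain loop could run `status_is_healthy "$(readiness_code gitlab-web-1:8080)"` (hypothetical host) and remove the node from rotation as soon as it fails.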
To update an HA instance behind a load balancer to the latest GitLab version, the nodes must be updated in the following order.
-
Select one application node as a deploy node and complete the following steps on it:

  -
  Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. This will prevent the upgrade from running `gitlab-ctl reconfigure` and automatically running database migrations:

    sudo touch /etc/gitlab/skip-auto-reconfigure

  -
  Update the GitLab package:

    # Debian/Ubuntu
    sudo apt-get update && sudo apt-get install gitlab-ce

    # CentOS/RHEL
    sudo yum install gitlab-ce

    If you are an Enterprise Edition user, replace `gitlab-ce` with `gitlab-ee` in the above command.

  -
  Get the regular migrations and latest code in place:

    sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-ctl reconfigure

  -
  Ensure services use the latest code:

    sudo gitlab-ctl hup puma
    sudo gitlab-ctl restart sidekiq
-
Complete the following steps on the other Puma/Sidekiq nodes, one after another. Always ensure at least one such node is up, running, and connected to the load balancer before proceeding to the next node.

  -
  Update the GitLab package and ensure a `reconfigure` is run as part of it. If not (due to the `/etc/gitlab/skip-auto-reconfigure` file being present), run `sudo gitlab-ctl reconfigure` manually.

  -
  Ensure services use the latest code:

    sudo gitlab-ctl hup puma
    sudo gitlab-ctl restart sidekiq
-
On the deploy node, run the post-deployment migrations:

  sudo gitlab-rake db:migrate
Gitaly Cluster
Gitaly Cluster is built using Gitaly and the Praefect component. It has its own PostgreSQL database, independent of the rest of the application.
Before you update the main application you need to update Praefect. Out of your Praefect nodes, pick one to be your Praefect deploy node. This is where you will install the new Omnibus package first and run database migrations.
Praefect deploy node
-
Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. During software installation only, this will prevent the upgrade from running `gitlab-ctl reconfigure` and restarting GitLab before database migrations have been applied:

  sudo touch /etc/gitlab/skip-auto-reconfigure

-
Ensure that `praefect['auto_migrate'] = true` is set in `/etc/gitlab/gitlab.rb`.
All Praefect nodes excluding the Praefect deploy node
- Ensure that `praefect['auto_migrate'] = false` is set in `/etc/gitlab/gitlab.rb`.
Praefect deploy node
-
Update the GitLab package:

  # Debian/Ubuntu
  sudo apt-get update && sudo apt-get install gitlab-ce

  # CentOS/RHEL
  sudo yum install gitlab-ce

  If you are an Enterprise Edition user, replace `gitlab-ce` with `gitlab-ee` in the above command.

-
To apply the Praefect database migrations and restart Praefect, run:

  sudo gitlab-ctl reconfigure
All Praefect nodes excluding the Praefect deploy node
-
Update the GitLab package:

  sudo apt-get update && sudo apt-get install gitlab-ce

  If you are an Enterprise Edition user, replace `gitlab-ce` with `gitlab-ee` in the above command.

-
Ensure nodes are running the latest code:

  sudo gitlab-ctl reconfigure
Use PostgreSQL HA
Pick a node to be the Deploy Node. It can be any application node, but it must be the same node throughout the process.
Deploy node
-
Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. During software installation only, this will prevent the upgrade from running `gitlab-ctl reconfigure` and restarting GitLab before database migrations have been applied:

  sudo touch /etc/gitlab/skip-auto-reconfigure
All nodes including the Deploy node
- Ensure that `gitlab_rails['auto_migrate'] = false` is set in `/etc/gitlab/gitlab.rb`.
Gitaly only nodes
-
Update the GitLab package:

  # Debian/Ubuntu
  sudo apt-get update && sudo apt-get install gitlab-ce

  # CentOS/RHEL
  sudo yum install gitlab-ce

  If you are an Enterprise Edition user, replace `gitlab-ce` with `gitlab-ee` in the above command.

-
Ensure nodes are running the latest code:

  sudo gitlab-ctl reconfigure
Deploy node
-
Update the GitLab package:

  # Debian/Ubuntu
  sudo apt-get update && sudo apt-get install gitlab-ce

  # CentOS/RHEL
  sudo yum install gitlab-ce

  If you are an Enterprise Edition user, replace `gitlab-ce` with `gitlab-ee` in the above command.

-
If you're using PgBouncer:

  You'll need to bypass PgBouncer and connect directly to the database master before running migrations.

  Rails uses an advisory lock when attempting to run a migration to prevent concurrent migrations from running on the same database. These locks are not shared across transactions, resulting in `ActiveRecord::ConcurrentMigrationError` and other issues when running database migrations using PgBouncer in transaction pooling mode.

  To find the master node, run the following on a database node:

  sudo gitlab-ctl repmgr cluster show

  Then, in your `gitlab.rb` file on the deploy node, update `gitlab_rails['db_host']` and `gitlab_rails['db_port']` with the database master's host and port.

-
To get the regular database migrations and latest code in place, run:

  sudo gitlab-ctl reconfigure
  sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-rake db:migrate
All nodes excluding the Deploy node
-
Update the GitLab package:

  sudo apt-get update && sudo apt-get install gitlab-ce

  If you are an Enterprise Edition user, replace `gitlab-ce` with `gitlab-ee` in the above command.

-
Ensure nodes are running the latest code:

  sudo gitlab-ctl reconfigure
Deploy node
-
Run post-deployment database migrations on the deploy node to complete the migrations:

  sudo gitlab-rake db:migrate
For nodes that run Puma or Sidekiq
-
Hot reload the `puma` and `sidekiq` services:

  sudo gitlab-ctl hup puma
  sudo gitlab-ctl restart sidekiq

-
If you're using PgBouncer:

  Change your `gitlab.rb` to point back to PgBouncer and run:

  sudo gitlab-ctl reconfigure
If you do not want to run zero downtime upgrades in the future, make sure you remove `/etc/gitlab/skip-auto-reconfigure` and revert setting `gitlab_rails['auto_migrate'] = false` in `/etc/gitlab/gitlab.rb` after you've completed these steps.
Use Redis HA (using Sentinel)
Package upgrades may involve version updates to the bundled Redis service. On instances using Redis for scaling, upgrades must follow the order below to ensure minimum downtime. This document assumes the official guides are being followed to set up Redis HA.
In the application node
According to official Redis docs, the easiest way to update an HA instance using Sentinel is to upgrade the secondaries one after the other, perform a manual failover from current primary (running old version) to a recently upgraded secondary (running a new version), and then upgrade the original primary. For this, we need to know the address of the current Redis primary.
-
If your application node is running GitLab 12.7.0 or later, you can use the following command to get the address of the current Redis primary:

  sudo gitlab-ctl get-redis-master

-
If your application node is running a version older than GitLab 12.7.0, you will have to run the underlying `redis-cli` command (which the `get-redis-master` command uses) to fetch information about the primary:

  -
  Get the address of one of the Sentinel nodes specified as `gitlab_rails['redis_sentinels']` in `/etc/gitlab/gitlab.rb`.

  -
  Get the Redis master name specified as `redis['master_name']` in `/etc/gitlab/gitlab.rb`.

  -
  Run the following command:

    sudo /opt/gitlab/embedded/bin/redis-cli -h <sentinel host> -p <sentinel port> SENTINEL get-master-addr-by-name <redis master name>
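The `gitlab.rb` lookup above can be scripted with a crude `sed` extraction. `redis_master_name` is a hypothetical helper and assumes the setting is written on a single quoted line, which is the common form:

```shell
# Sketch: pull redis['master_name'] out of a gitlab.rb file. Assumes the
# common single-line, quoted form; adjust if your configuration differs.
redis_master_name() {
  sed -n "s/^redis\['master_name'\] *= *['\"]\([^'\"]*\)['\"].*/\1/p" "$1" | head -n 1
}
```

The result can then be passed as the `<redis master name>` argument to the `redis-cli ... SENTINEL get-master-addr-by-name` command shown above.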
In the Redis secondary nodes
-
Install the package for the new version.

-
Run `sudo gitlab-ctl reconfigure`, if a reconfigure is not run as part of installation (due to the `/etc/gitlab/skip-auto-reconfigure` file being present).

-
If reconfigure warns about a pending Redis/Sentinel restart, restart the corresponding service:

  sudo gitlab-ctl restart redis
  sudo gitlab-ctl restart sentinel
In the Redis primary node
Before upgrading the Redis primary node, we need to perform a failover so that one of the recently upgraded secondary nodes becomes the new primary. Once the failover is complete, we can go ahead and upgrade the original primary node.
-
Stop the Redis service on the Redis primary node so that it fails over to a secondary node:

  sudo gitlab-ctl stop redis

-
Wait for the failover to complete. You can verify it by periodically checking the details of the current Redis primary node (as mentioned above). If it starts reporting a new IP, the failover is complete.

-
Start Redis again on that node, so that it starts following the current primary node:

  sudo gitlab-ctl start redis

-
Install the package corresponding to the new version.

-
Run `sudo gitlab-ctl reconfigure`, if a reconfigure is not run as part of installation (due to the `/etc/gitlab/skip-auto-reconfigure` file being present).

-
If reconfigure warns about a pending Redis/Sentinel restart, restart the corresponding service:

  sudo gitlab-ctl restart redis
  sudo gitlab-ctl restart sentinel
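The "wait for failover" step can be automated by polling the reported primary address. This is a sketch with a made-up function name; the command that reports the current primary (for example the `gitlab-ctl get-redis-master` invocation from the section above) is passed in as a parameter:

```shell
# Sketch: poll until the reported Redis primary no longer matches the old
# address. $2 is any command printing the current primary address
# (for example the gitlab-ctl get-redis-master invocation shown earlier).
wait_for_failover() {
  old=$1
  getter=$2
  timeout=${3:-300}
  waited=0
  while [ "$($getter)" = "$old" ]; do
    if [ "$waited" -ge "$timeout" ]; then
      echo "failover did not complete within ${timeout}s" >&2
      return 1
    fi
    sleep 5
    waited=$((waited + 5))
  done
}
```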
Update the application node
Install the package for the new version and follow the regular package upgrade procedure.
Geo deployment
The order of steps is important. Make sure you perform them in the right order, on the correct node.
Log in to your primary node and execute the following:
-
Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. During software installation only, this will prevent the upgrade from running `gitlab-ctl reconfigure` and automatically running database migrations:

  sudo touch /etc/gitlab/skip-auto-reconfigure
-
Update the GitLab package:

  # Debian/Ubuntu
  sudo apt-get update && sudo apt-get install gitlab-ee

  # CentOS/RHEL
  sudo yum install gitlab-ee
-
To get the database migrations and latest code in place, run:

  sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-ctl reconfigure
-
Hot reload the `puma` and `sidekiq` services:

  sudo gitlab-ctl hup puma
  sudo gitlab-ctl restart sidekiq
On each secondary node, execute the following:
-
Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. During software installation only, this will prevent the upgrade from running `gitlab-ctl reconfigure` and automatically running database migrations:

  sudo touch /etc/gitlab/skip-auto-reconfigure
-
Update the GitLab package:

  # Debian/Ubuntu
  sudo apt-get update && sudo apt-get install gitlab-ee

  # CentOS/RHEL
  sudo yum install gitlab-ee
-
To get the database migrations and latest code in place, run:

  sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-ctl reconfigure
-
Hot reload the `puma` and `sidekiq` services, and restart the `geo-logcursor` service:

  sudo gitlab-ctl hup puma
  sudo gitlab-ctl restart sidekiq
  sudo gitlab-ctl restart geo-logcursor
-
Run post-deployment database migrations, specific to the Geo database:

  sudo gitlab-rake geo:db:migrate
After all secondary nodes are updated, finalize the update on the primary node:
-
Run post-deployment database migrations:

  sudo gitlab-rake db:migrate
After updating all nodes (both primary and all secondaries), check their status:
-
Verify Geo configuration and dependencies:

  sudo gitlab-rake gitlab:geo:check
If you do not want to run zero downtime upgrades in the future, make sure you remove `/etc/gitlab/skip-auto-reconfigure` and revert setting `gitlab_rails['auto_migrate'] = false` in `/etc/gitlab/gitlab.rb` after you've completed these steps.
Multi-node / HA deployment with Geo
This section describes the steps required to upgrade a multi-node / HA deployment with Geo. Some steps must be performed on a particular node. This node will be known as the "deploy node" and is noted throughout the following instructions.
Updates must be performed in the following order:
- Update Geo primary multi-node deployment.
- Update Geo secondary multi-node deployments.
- Post-deployment migrations and checks.
Step 1: Choose a “deploy node” for each deployment
You now need to choose:
- One instance for use as the primary “deploy node” on the Geo primary multi-node deployment.
- One instance for use as the secondary “deploy node” on each Geo secondary multi-node deployment.
Deploy nodes must be running Puma, Sidekiq, or the `geo-logcursor` daemon. To avoid any downtime, they must not be in use during the update:
- If running Puma, remove the deploy node from the load balancer.
-
If running Sidekiq, ensure the deploy node is not processing jobs:

  sudo gitlab-ctl stop sidekiq
-
If running the `geo-logcursor` daemon, ensure the deploy node is not processing events:

  sudo gitlab-ctl stop geo-logcursor
For zero-downtime, Puma, Sidekiq, and `geo-logcursor` must be running on other nodes during the update.
Step 2: Update the Geo primary multi-node deployment
On all primary nodes including the primary “deploy node”
-
Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. During software installation only, this will prevent the upgrade from running `gitlab-ctl reconfigure` and automatically running database migrations:

  sudo touch /etc/gitlab/skip-auto-reconfigure
-
Ensure that `gitlab_rails['auto_migrate'] = false` is set in `/etc/gitlab/gitlab.rb`.

-
Ensure nodes are running the latest code:

  sudo gitlab-ctl reconfigure
On primary Gitaly only nodes
-
Update the GitLab package:

  # Debian/Ubuntu
  sudo apt-get update && sudo apt-get install gitlab-ee

  # CentOS/RHEL
  sudo yum install gitlab-ee

-
Ensure nodes are running the latest code:

  sudo gitlab-ctl reconfigure
On the primary “deploy node”
-
Update the GitLab package:

  # Debian/Ubuntu
  sudo apt-get update && sudo apt-get install gitlab-ee

  # CentOS/RHEL
  sudo yum install gitlab-ee

-
If you're using PgBouncer:

  You'll need to bypass PgBouncer and connect directly to the database master before running migrations.

  Rails uses an advisory lock when attempting to run a migration to prevent concurrent migrations from running on the same database. These locks are not shared across transactions, resulting in `ActiveRecord::ConcurrentMigrationError` and other issues when running database migrations using PgBouncer in transaction pooling mode.

  To find the master node, run the following on a database node:

  sudo gitlab-ctl repmgr cluster show

  Then, in your `gitlab.rb` file on the deploy node, update `gitlab_rails['db_host']` and `gitlab_rails['db_port']` with the database master's host and port.

-
To get the regular database migrations and latest code in place, run:

  sudo gitlab-ctl reconfigure
  sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-rake db:migrate

-
If this deploy node is normally used to serve requests or process jobs, then you may return it to service at this point.

  - To serve requests, add the deploy node to the load balancer.

  -
  To process Sidekiq jobs again, start Sidekiq:

    sudo gitlab-ctl start sidekiq
On all primary nodes excluding the primary “deploy node”
-
Update the GitLab package:

  # Debian/Ubuntu
  sudo apt-get update && sudo apt-get install gitlab-ee

  # CentOS/RHEL
  sudo yum install gitlab-ee

-
Ensure nodes are running the latest code:

  sudo gitlab-ctl reconfigure
For all primary nodes that run Puma or Sidekiq including the primary “deploy node”
Hot reload the `puma` and `sidekiq` services:
sudo gitlab-ctl hup puma
sudo gitlab-ctl restart sidekiq
Step 3: Update each Geo secondary multi-node deployment
Only proceed if you have successfully completed all steps on the Geo primary multi-node deployment.
On all secondary nodes including the secondary “deploy node”
-
Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. During software installation only, this will prevent the upgrade from running `gitlab-ctl reconfigure` and automatically running database migrations:

  sudo touch /etc/gitlab/skip-auto-reconfigure
-
Ensure that `geo_secondary['auto_migrate'] = false` is set in `/etc/gitlab/gitlab.rb`.

-
Ensure nodes are running the latest code:

  sudo gitlab-ctl reconfigure
On secondary Gitaly only nodes
-
Update the GitLab package:

  # Debian/Ubuntu
  sudo apt-get update && sudo apt-get install gitlab-ee

  # CentOS/RHEL
  sudo yum install gitlab-ee

-
Ensure nodes are running the latest code:

  sudo gitlab-ctl reconfigure
On the secondary “deploy node”
-
Update the GitLab package:

  # Debian/Ubuntu
  sudo apt-get update && sudo apt-get install gitlab-ee

  # CentOS/RHEL
  sudo yum install gitlab-ee

-
To get the regular database migrations and latest code in place, run:

  sudo gitlab-ctl reconfigure
  sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-rake geo:db:migrate

-
If this deploy node is normally used to serve requests or perform background processing, then you may return it to service at this point.

  - To serve requests, add the deploy node to the load balancer.

  -
  To process Sidekiq jobs again, start Sidekiq:

    sudo gitlab-ctl start sidekiq

  -
  To process Geo events again, start the `geo-logcursor` daemon:

    sudo gitlab-ctl start geo-logcursor
On all secondary nodes excluding the secondary “deploy node”
-
Update the GitLab package:

  # Debian/Ubuntu
  sudo apt-get update && sudo apt-get install gitlab-ee

  # CentOS/RHEL
  sudo yum install gitlab-ee

-
Ensure nodes are running the latest code:

  sudo gitlab-ctl reconfigure
For all secondary nodes that run Puma, Sidekiq, or the geo-logcursor daemon, including the secondary “deploy node”

Hot reload the `puma`, `sidekiq`, and `geo-logcursor` services:
sudo gitlab-ctl hup puma
sudo gitlab-ctl restart sidekiq
sudo gitlab-ctl restart geo-logcursor
Step 4: Run post-deployment migrations and checks
On the primary “deploy node”
-
Run post-deployment database migrations:

  sudo gitlab-rake db:migrate

-
Verify Geo configuration and dependencies:

  sudo gitlab-rake gitlab:geo:check

-
If you're using PgBouncer:

  Change your `gitlab.rb` to point back to PgBouncer and run:

  sudo gitlab-ctl reconfigure
On all secondary “deploy nodes”
-
Run post-deployment database migrations, specific to the Geo database:

  sudo gitlab-rake geo:db:migrate

-
Verify Geo configuration and dependencies:

  sudo gitlab-rake gitlab:geo:check

-
Verify Geo status:

  sudo gitlab-rake geo:status
Upgrade Gitaly servers
Gitaly servers must be upgraded to the newer version prior to upgrading the application server. This prevents the gRPC client on the application server from sending RPCs that the old Gitaly version does not support.
Downgrade
This section contains general information on how to revert to an earlier version of a package.
The example below demonstrates the downgrade procedure when downgrading between minor and patch versions (for example, from 13.0.6 to 13.0.5).
When downgrading between major versions, take into account the specific version changes that occurred when you upgraded to the major version you are downgrading from.
These steps consist of:
- Stopping GitLab
- Removing the current package
- Installing the old package
- Reconfiguring GitLab
- Restoring the backup
- Starting GitLab
Steps:
-
Stop GitLab and remove the current package:

  # If running Puma
  sudo gitlab-ctl stop puma

  # Stop sidekiq
  sudo gitlab-ctl stop sidekiq

  # If on Ubuntu: remove the current package
  sudo dpkg -r gitlab-ee

  # If on CentOS: remove the current package
  sudo yum remove gitlab-ee

-
Identify the GitLab version you want to downgrade to:

  # (Replace with gitlab-ce if you have GitLab FOSS installed)
  # Ubuntu
  sudo apt-cache madison gitlab-ee

  # CentOS:
  sudo yum --showduplicates list gitlab-ee

-
Downgrade GitLab to the desired version (for example, to GitLab 13.0.5):

  # (Replace with gitlab-ce if you have GitLab FOSS installed)
  # Ubuntu
  sudo apt install gitlab-ee=13.0.5-ee.0

  # CentOS:
  sudo yum install gitlab-ee-13.0.5-ee.0.el8

-
Reconfigure GitLab:

  sudo gitlab-ctl reconfigure
-
Follow the instructions in the Restore for Omnibus GitLab installations page to complete the downgrade.
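Since the procedure above only covers downgrades between minor and patch versions, a guard like the following (a hypothetical helper) can catch accidental cross-series downgrades before you remove the package:

```shell
# Sketch: allow a downgrade only within the same major.minor series
# (for example 13.0.6 -> 13.0.5, but not 13.1.0 -> 13.0.5).
same_minor_series() {
  # ${x%.*} strips the trailing patch component: 13.0.6 -> 13.0
  [ "${1%.*}" = "${2%.*}" ]
}
```

A downgrade across series additionally requires reviewing the version-specific changes, as noted above.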
Troubleshooting
GitLab 13.7 and later unavailable on Amazon Linux 2
Amazon Linux 2 is not an officially supported operating system. However, in the past the official package installation script installed the `el/6` package repository if run on Amazon Linux. From GitLab 13.7, we no longer provide `el/6` packages, so administrators must run the installation script again to update the repository to `el/7`:
curl -s "https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.rpm.sh" | sudo bash
See the epic on support for GitLab on Amazon Linux 2 for the latest details on official Amazon Linux 2 support.
Get the status of a GitLab installation
sudo gitlab-ctl status
sudo gitlab-rake gitlab:check SANITIZE=true
- Information on using `gitlab-ctl` to perform maintenance tasks.
- Information on using `gitlab-rake` to check the configuration.
RPM ‘package is already installed’ error
If you are using RPM and you are upgrading from GitLab Community Edition to GitLab Enterprise Edition you may get an error like this:
package gitlab-7.5.2_omnibus.5.2.1.ci-1.el7.x86_64 (which is newer than gitlab-7.5.2_ee.omnibus.5.2.1.ci-1.el7.x86_64) is already installed
You can override this version check with the `--oldpackage` option:
sudo rpm -Uvh --oldpackage gitlab-7.5.2_ee.omnibus.5.2.1.ci-1.el7.x86_64.rpm
Package obsoleted by installed package
CE and EE packages are marked as obsoleting and replacing each other so that both aren’t installed and running at the same time.
If you are using local RPM files to switch from CE to EE or vice versa, use `rpm` for installing the package rather than `yum`. If you try to use yum, then you may get an error like this:
Cannot install package gitlab-ee-11.8.3-ee.0.el6.x86_64. It is obsoleted by installed package gitlab-ce-11.8.3-ce.0.el6.x86_64
To avoid this issue, either:
- Use the same instructions provided in the Update using a manually-downloaded package section.
- Temporarily disable obsoletion checking in yum by adding `--setopt=obsoletes=0` to the options given to the command.
500 error when accessing Project > Settings > Repository on Omnibus installs
In situations where a GitLab instance has been migrated from CE > EE > CE and then back to EE, some Omnibus installations get the below error when viewing a project's repository settings.
Processing by Projects::Settings::RepositoryController#show as HTML
Parameters: {"namespace_id"=>"<namespace_id>", "project_id"=>"<project_id>"}
Completed 500 Internal Server Error in 62ms (ActiveRecord: 4.7ms | Elasticsearch: 0.0ms | Allocations: 14583)
NoMethodError (undefined method `commit_message_negative_regex' for #<PushRule:0x00007fbddf4229b8>
Did you mean? commit_message_regex_change):
This error is caused by an EE feature being added to a CE instance on the initial move to EE.
Once the instance is moved back to CE and then upgraded to EE again, the `push_rules` table already exists in the database, so a migration is unable to add the `commit_message_regex_change` column.
This results in the backport migration of EE tables not working correctly. The backport migration assumes that certain tables in the database do not exist when running CE.
To fix this issue, manually add the missing `commit_message_negative_regex` column and restart GitLab:
# Access psql
sudo gitlab-rails dbconsole
# Add the missing column
ALTER TABLE push_rules ADD COLUMN commit_message_negative_regex VARCHAR;
# Exit psql
\q
# Restart GitLab
sudo gitlab-ctl restart
Error `Failed to connect to the internal GitLab API` on a separate GitLab Pages server
Please see GitLab Pages troubleshooting.