Gitaly and Gitaly Cluster
Gitaly provides high-level RPC access to Git repositories. It is used by GitLab to read and write Git data.
Gitaly implements a client-server architecture:
- A Gitaly server is any node that runs Gitaly itself.
- A Gitaly client is any node that runs a process that makes requests of the Gitaly server. These include, but are not limited to, the GitLab Rails application, GitLab Shell, and GitLab Workhorse.
End users do not have direct access to Gitaly. Gitaly manages only Git repository access for GitLab. Other types of GitLab data aren’t accessed using Gitaly.
Configure Gitaly
Gitaly comes pre-configured with Omnibus GitLab, which is a configuration suitable for up to 1000 users. For:
- Omnibus GitLab installations for up to 2000 users, see specific Gitaly configuration instructions.
- Source installations or custom Gitaly installations, see Configure Gitaly.
GitLab installations for more than 2000 users should use Gitaly Cluster.
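As a rough illustration of what this configuration involves, the following `/etc/gitlab/gitlab.rb` sketch points a Gitaly client at a standalone Gitaly node. The hostname, port, and token are placeholders; the configuration instructions linked above remain the authoritative reference.

```ruby
# Sketch only: on the GitLab (Gitaly client) node, tell GitLab Rails where to
# reach Gitaly. The hostname, port, and token below are placeholders.
git_data_dirs({
  'default' => {
    'gitaly_address' => 'tcp://gitaly.internal:8075',
  },
})

# Shared secret used by Gitaly clients to authenticate against Gitaly.
gitlab_rails['gitaly_token'] = 'GITALY_AUTH_TOKEN'
```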
Gitaly Cluster
Gitaly, the service that provides storage for Git repositories, can be run in a clustered configuration to scale the Gitaly service and increase fault tolerance. In this configuration, every Git repository is stored on every Gitaly node in the cluster.
Using a Gitaly Cluster increases fault tolerance by:
- Replicating write operations to warm standby Gitaly nodes.
- Detecting Gitaly node failures.
- Automatically routing Git requests to an available Gitaly node.
The availability objectives for Gitaly clusters are:
- Recovery Point Objective (RPO): Less than 1 minute.

  Writes are replicated asynchronously. Any writes that have not been replicated to the newly promoted primary are lost.

  Strong consistency can be used to avoid loss in some circumstances.

- Recovery Time Objective (RTO): Less than 10 seconds. Outages are detected by a health check run by each Praefect node every second. Failover requires ten consecutive failed health checks on each Praefect node.

  Faster outage detection is planned to improve this to less than 1 second.
Gitaly Cluster supports:
- Strong consistency of the secondary replicas.
- Automatic failover from the primary to the secondary.
- Reporting of possible data loss if the replication queue is non-empty.
- Marking repositories as read-only if data loss is detected to prevent data inconsistencies.
Follow the Gitaly Cluster epic for improvements including horizontally distributing reads.
Overview
Git storage is provided through the Gitaly service in GitLab, and is essential to the operation of the GitLab application. When the number of users, repositories, and activity grows, it is important to scale Gitaly appropriately by:
- Increasing the CPU and memory resources available to Git before resource exhaustion degrades Git, Gitaly, and GitLab application performance.
- Increasing available storage before storage limits are reached, causing write operations to fail.
- Improving fault tolerance by removing single points of failure. Git should be considered mission critical if a service degradation would prevent you from deploying changes to production.
Moving beyond NFS
Network File System (NFS) is not well suited to Git workloads, which are CPU and IOPS sensitive. Specifically:
- Git is sensitive to file system latency. Even simple operations require many read operations. Operations that are fast on block storage can become an order of magnitude slower. This significantly impacts GitLab application performance.
- NFS performance optimizations that prevent the performance gap between block storage and NFS from being even wider are vulnerable to race conditions. We have observed data inconsistencies in production environments caused by simultaneous writes to different NFS clients. Data corruption is not an acceptable risk.
Gitaly Cluster is purpose built to provide reliable, high performance, fault tolerant Git storage.
Further reading:
- Blog post: The road to Gitaly v1.0 (aka, why GitLab doesn’t require NFS for storing Git data anymore)
- Blog post: How we spent two weeks hunting an NFS bug in the Linux kernel
Where Gitaly Cluster fits
GitLab accesses repositories through the configured repository storages. Each new repository is stored on one of the repository storages based on their configured weights. Each repository storage is either:
- A Gitaly storage served directly by Gitaly. These map to a directory on the file system of a Gitaly node.
- A virtual storage served by Praefect. A virtual storage is a cluster of Gitaly storages that appear as a single repository storage.
Virtual storages are a feature of Gitaly Cluster. They support replicating the repositories to multiple storages for fault tolerance. Virtual storages can improve performance by distributing requests across Gitaly nodes. Their distributed nature makes it viable to have a single repository storage in GitLab to simplify repository management.
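For illustration, the following `/etc/gitlab/gitlab.rb` sketch shows that, to GitLab, both kinds of repository storage are configured the same way; only the address differs (a Gitaly node or a Praefect node), and new repositories are distributed across storages by weight. The storage names, addresses, and weights here are placeholders.

```ruby
# Sketch only: two repository storages as seen by GitLab. Names, addresses,
# and weights are placeholders.
git_data_dirs({
  # A Gitaly storage, served directly by a Gitaly node.
  'gitaly-storage'  => { 'gitaly_address' => 'tcp://gitaly.internal:8075' },
  # A virtual storage, served by Praefect in front of a Gitaly Cluster.
  'cluster-storage' => { 'gitaly_address' => 'tcp://praefect.internal:2305' },
})

# New repositories are spread across storages based on these weights.
gitlab_rails['repository_storages_weighted'] = {
  'gitaly-storage'  => 20,
  'cluster-storage' => 80,
}
```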
Components of Gitaly Cluster
Gitaly Cluster consists of multiple components:
- Load balancer for distributing requests and providing fault-tolerant access to Praefect nodes.
- Praefect nodes for managing the cluster and routing requests to Gitaly nodes.
- PostgreSQL database for persisting cluster metadata and PgBouncer, recommended for pooling Praefect’s database connections.
- Gitaly nodes to provide repository storage and Git access.
In this example:

- Repositories are stored on a virtual storage called `storage-1`.
- Three Gitaly nodes provide `storage-1` access: `gitaly-1`, `gitaly-2`, and `gitaly-3`.
- The three Gitaly nodes store data on their file systems.
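To make the example concrete, here is a sketch of how such a virtual storage might be declared on the Praefect node in `/etc/gitlab/gitlab.rb`. Addresses and tokens are placeholders, and the exact key layout varies between GitLab versions, so treat this as an outline rather than copy-paste configuration.

```ruby
# Sketch only: Praefect serving the virtual storage `storage-1` from three
# Gitaly nodes. Addresses and tokens are placeholders; key names vary by version.
praefect['virtual_storages'] = {
  'storage-1' => {
    'nodes' => {
      'gitaly-1' => { 'address' => 'tcp://gitaly-1.internal:8075', 'token' => 'GITALY_TOKEN' },
      'gitaly-2' => { 'address' => 'tcp://gitaly-2.internal:8075', 'token' => 'GITALY_TOKEN' },
      'gitaly-3' => { 'address' => 'tcp://gitaly-3.internal:8075', 'token' => 'GITALY_TOKEN' },
    },
  },
}
```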
Virtual storage or direct Gitaly storage
Gitaly supports multiple models of scaling:
- Clustering using Gitaly Cluster, where each repository is stored on multiple Gitaly nodes in the cluster. Read requests are distributed between repository replicas and write requests are broadcast to repository replicas. GitLab accesses virtual storage.
- Direct access to Gitaly storage using repository storage paths, where each repository is stored on the assigned Gitaly node. All requests are routed to this node.
The following is Gitaly set up to use direct access to Gitaly instead of Gitaly Cluster:
In this example:
- Each repository is stored on one of three Gitaly storages: `storage-1`, `storage-2`, or `storage-3`.
- Each storage is serviced by a Gitaly node.
- The three Gitaly nodes store data in three separate hashed storage locations.

In a Gitaly Cluster setup, by contrast, the three Gitaly nodes share data: the replication factor is `3`, so three copies of each repository are maintained.
Generally, virtual storage with Gitaly Cluster can replace direct Gitaly storage configurations, at the expense of additional storage needed to store each repository on multiple Gitaly nodes. The benefits of using Gitaly Cluster over direct Gitaly storage are:
- Improved fault tolerance, because each Gitaly node has a copy of every repository.
- Improved resource utilization, reducing the need for over-provisioning for shard-specific peak loads, because read loads are distributed across replicas.
- Manual rebalancing for performance is not required, because read loads are distributed across replicas.
- Simpler management, because all Gitaly nodes are identical.
Under some workloads, CPU and memory requirements may require a large fleet of Gitaly nodes. It can be uneconomical to have a one-to-one replication factor.
A hybrid approach can be used in these instances, where each shard is configured as a smaller cluster. Variable replication factor is planned to provide greater flexibility for extremely large GitLab instances.
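As a sketch of the hybrid approach, assuming each shard is fronted by its own smaller Praefect cluster, GitLab would simply see several virtual storages. The names and addresses below are hypothetical.

```ruby
# Sketch only: two smaller Gitaly Clusters exposed to GitLab as two virtual
# storages instead of one large cluster. Names and addresses are hypothetical.
git_data_dirs({
  'shard-a' => { 'gitaly_address' => 'tcp://praefect-shard-a.internal:2305' },
  'shard-b' => { 'gitaly_address' => 'tcp://praefect-shard-b.internal:2305' },
})
```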
Architecture
Praefect is a router and transaction manager for Gitaly, and a required component for running a Gitaly Cluster.
For more information, see Gitaly High Availability (HA) Design.
Configure Gitaly Cluster
For more information on configuring Gitaly Cluster, see Configure Gitaly Cluster.
Do not bypass Gitaly
GitLab doesn’t advise directly accessing Gitaly repositories stored on disk with a Git client, because Gitaly is being continuously improved and changed. These improvements may invalidate your assumptions, resulting in performance degradation, instability, and even data loss. For example:
- Gitaly has optimizations, such as the `info/refs` advertisement cache, that rely on Gitaly controlling and monitoring access to repositories by using the official gRPC interface.
- Gitaly Cluster has optimizations, such as fault tolerance and distributed reads, that depend on the gRPC interface and database to determine repository state.
Direct access to Git in GitLab
Direct access to Git uses code in GitLab known as the “Rugged patches”.
Before Gitaly existed, what are now Gitaly clients accessed Git repositories directly, either:
- On a local disk in the case of a single-machine Omnibus GitLab installation.
- Using NFS in the case of a horizontally-scaled GitLab installation.
In addition to running plain `git` commands, GitLab used a Ruby library called Rugged. Rugged is a wrapper around libgit2, a stand-alone implementation of Git in the form of a C library.
Over time it became clear that Rugged, particularly in combination with Unicorn, is extremely efficient. Because libgit2 is a library and not an external process, there was very little overhead between:
- GitLab application code that tried to look up data in Git repositories.
- The Git implementation itself.
Because the combination of Rugged and Unicorn was so efficient, the GitLab application code ended up with lots of duplicate Git object lookups. For example, looking up the default branch commit a dozen times in one request. We could write inefficient code without suffering poor performance.
When we migrated these Git lookups to Gitaly calls, we suddenly had a much higher fixed cost per Git lookup. Even when Gitaly is able to re-use an already-running `git` process (for example, to look up a commit), you still have:

- The cost of a network roundtrip to Gitaly.
- Inside Gitaly, a write/read roundtrip on the Unix pipes that connect Gitaly to the `git` process.
Using GitLab.com to measure, we reduced the number of Gitaly calls per request until the loss of Rugged’s efficiency was no longer felt. It also helped that we run Gitaly itself directly on the Git file servers, rather than by using NFS mounts. This gave us a speed boost that counteracted the negative effect of not using Rugged anymore.
Unfortunately, other deployments of GitLab could not remove NFS like we did on GitLab.com, and they got the worst of both worlds:
- The slowness of NFS.
- The increased inherent overhead of Gitaly.
The code removed from GitLab during the Gitaly migration project affected these deployments. As a performance workaround for these NFS-based deployments, we re-introduced some of the old Rugged code. This re-introduced code is informally referred to as the “Rugged patches”.
How it works
The Ruby methods that perform direct Git access are behind feature flags, disabled by default. It wasn’t convenient to set feature flags to get the best performance, so we added an automatic mechanism that enables direct Git access.
When GitLab calls a function that has a “Rugged patch”, it performs two checks:
- Is the feature flag for this patch set in the database? If so, the feature flag setting controls the GitLab use of “Rugged patch” code.
- If the feature flag is not set, GitLab tries accessing the file system underneath the Gitaly server directly. If it can, it uses the “Rugged patch”:
  - If using Puma and thread count is set to `1`.

The result of these checks is cached.
To see if GitLab can access the repository file system directly, we use the following heuristic:
- Gitaly ensures that the file system has a metadata file in its root with a UUID in it.
- Gitaly reports this UUID to GitLab by using the `ServerInfo` RPC.
- GitLab Rails tries to read the metadata file directly. If it exists, and if the UUIDs match, assume we have direct access.
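The following is a minimal sketch of that heuristic, not GitLab's actual implementation. It assumes the `grpc` and `gitaly` gems that GitLab Rails depends on, an unauthenticated Gitaly listener, and the default Omnibus metadata file path; all of these are assumptions made for illustration.

```ruby
# Sketch only: compare the UUID in the on-disk metadata file with the
# filesystem ID that Gitaly reports over the ServerInfo RPC.
require 'json'
require 'grpc'
require 'gitaly'

# UUID that Gitaly wrote into the root of its storage (default Omnibus path).
metadata_path = '/var/opt/gitlab/git-data/repositories/.gitaly-metadata'
local_id = JSON.parse(File.read(metadata_path))['gitaly_filesystem_id']

# Ask Gitaly which file system it is serving (assumes no Gitaly authentication).
stub = Gitaly::ServerService::Stub.new('gitaly.internal:8075', :this_channel_is_insecure)
info = stub.server_info(Gitaly::ServerInfoRequest.new)
remote_id = info.storage_statuses.first&.filesystem_id

# If both IDs match, GitLab assumes it can access the repositories directly.
puts "Direct Git access possible: #{local_id == remote_id}"
```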
Direct Git access is enabled by default in Omnibus GitLab because it fills in the correct repository paths in the GitLab configuration file `config/gitlab.yml`. This satisfies the UUID check.

When transferring repositories to Gitaly Cluster, ensure the metadata file, with default path `/var/opt/gitlab/git-data/repositories/.gitaly-metadata`, is not included in the transfer. Copying this file causes GitLab to use the Rugged patches for repositories hosted on the Gitaly server, leading to `Error creating pipeline` and `Commit not found` errors, or stale data.

Transition to Gitaly Cluster
For the sake of removing complexity, we must remove direct Git access in GitLab. However, we can't remove it as long as some GitLab installations require Git repositories on NFS.
There are two facets to our efforts to remove direct Git access in GitLab:
- Reduce the number of inefficient Gitaly queries made by GitLab.
- Persuade administrators of fault-tolerant or horizontally-scaled GitLab instances to migrate off NFS.
The second facet presents the only real solution. For this, we developed Gitaly Cluster.
NFS deprecation notice
Engineering support for NFS for Git repositories is deprecated. Technical support is planned to be unavailable from GitLab 15.0. No further enhancements are planned for this feature.
GitLab recommends:
- Creating a Gitaly Cluster as soon as possible.
- Moving your repositories from NFS-based storage to Gitaly Cluster.
We welcome your feedback on this process. You can:
- Raise a support ticket.
- Comment on the epic.