# Configure the GitLab chart with an external object storage
GitLab relies on object storage for highly-available persistent data in Kubernetes.
By default, an S3-compatible storage solution named MinIO is deployed with the
chart. For production-quality deployments, we recommend using a hosted object
storage solution like Google Cloud Storage or AWS S3.
To disable MinIO, set this option and then follow the related documentation below:

```shell
--set global.minio.enabled=false
```
An example of the full configuration has been provided in the examples.
This documentation specifies usage of access and secret keys for AWS. It is also possible to use IAM roles.
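For reference, a connection secret for AWS using access and secret keys might look like the following minimal sketch. The key values are placeholders, and the examples contain the full set of options:

```yaml
# Hypothetical connection contents for AWS S3 using static keys.
# With IAM roles, `use_iam_profile: true` replaces the key pair.
provider: AWS
region: us-east-1
aws_access_key_id: BOGUS_ACCESS_KEY
aws_secret_access_key: BOGUS_SECRET_KEY
```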
## S3 encryption
GitLab supports Amazon KMS to encrypt data stored in S3 buckets. You can enable this in two ways:
- In AWS, configure the S3 bucket to use default encryption.
- In GitLab, enable server-side encryption headers.
These two options are not mutually exclusive. You can set a default encryption policy, but also enable server-side encryption headers to override those defaults.
See the GitLab documentation on encrypted S3 buckets for more details.
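With the consolidated object storage settings, the server-side encryption headers can be set chart-wide through `storage_options`. A sketch, where the KMS key ARN is a placeholder you must supply:

```shell
--set global.appConfig.object_store.storage_options.server_side_encryption=aws:kms
--set global.appConfig.object_store.storage_options.server_side_encryption_kms_key_id=<your-kms-key-arn>
```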
## Azure Blob Storage
Direct support for Azure Blob storage is available for uploaded attachments, CI job artifacts, LFS, and other object types supported via the consolidated settings. In previous GitLab versions, an Azure MinIO gateway was needed.
Although Azure uses the word container to denote a collection of blobs, GitLab standardizes on the term bucket.
Azure Blob storage requires the use of the consolidated object storage settings. A single Azure storage account name and key must be used across multiple Azure blob containers. Customizing individual connection settings by object type (for example, `artifacts`, `uploads`, and so on) is not permitted.
To enable Azure Blob storage, see `rails.azurerm.yaml` as an example to define the Azure connection. You can load this as a secret via:

```shell
kubectl create secret generic gitlab-rails-storage --from-file=connection=rails.azurerm.yml
```
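The connection file itself uses the `AzureRM` provider format. A minimal sketch, where the account name and key are placeholders for your own storage account credentials:

```yaml
# Hypothetical rails.azurerm.yml contents; see the example file for all options.
provider: AzureRM
azure_storage_account_name: YOUR_ACCOUNT_NAME
azure_storage_access_key: BASE64_ENCODED_ACCOUNT_KEY
azure_storage_domain: blob.core.windows.net
```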
Then, disable MinIO and set these global settings:

```shell
--set global.minio.enabled=false
--set global.appConfig.object_store.enabled=true
--set global.appConfig.object_store.connection.secret=gitlab-rails-storage
```
Be sure to create Azure containers for the default names, or set the container names in the bucket configuration.

If you encounter the error `Requests to the local network are not allowed`, see the Troubleshooting section.

## Docker Registry images
Configuration of object storage for the registry chart is done via the `registry.storage` key, and the `global.registry.bucket` key.

```shell
--set registry.storage.secret=registry-storage
--set registry.storage.key=config
--set global.registry.bucket=bucket-name
```
The bucket name must be set in both places: the secret is used by the registry server, and `global.registry.bucket` is used by GitLab backups.

Create the secret per the registry chart documentation on storage, then configure the chart to make use of this secret.
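For illustration, an S3 flavor of `registry-storage.yaml` might look like the following sketch (the bucket name and credentials are placeholders), loaded into the secret with `kubectl`:

```yaml
# Hypothetical registry storage config for the S3 driver.
s3:
  bucket: gitlab-registry-storage
  accesskey: AWS_ACCESS_KEY
  secretkey: AWS_SECRET_KEY
  region: us-east-1
  v4auth: true
```

```shell
kubectl create secret generic registry-storage --from-file=config=registry-storage.yaml
```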
Examples for the S3 (including S3-compatible storages, though the Azure MinIO gateway is not supported; see Azure Blob Storage), Azure, and GCS drivers can be found in `examples/objectstorage`.
### Registry configuration

1. Decide on which storage service to use.
1. Copy the appropriate file to `registry-storage.yaml`.
1. Edit with the correct values for the environment.
1. Follow the registry chart documentation on storage for creating the secret.
1. Configure the chart as documented.
## LFS, Artifacts, Uploads, Packages, External Diffs, Terraform State, Dependency Proxy, Secure Files
Configuration of object storage for LFS, artifacts, uploads, packages, external diffs, Terraform state, dependency proxy, and secure files is done via the following keys:
- `global.appConfig.lfs`
- `global.appConfig.artifacts`
- `global.appConfig.uploads`
- `global.appConfig.packages`
- `global.appConfig.externalDiffs`
- `global.appConfig.dependencyProxy`
- `global.appConfig.terraformState`
- `global.appConfig.ciSecureFiles`
Note also that:

- You must create buckets for the default names, or set custom names in the bucket configuration.
- A different bucket is needed for each object type; otherwise, performing a restore from backup doesn't function properly.
- Storing MR diffs on external storage is not enabled by default. For the object storage settings for `externalDiffs` to take effect, the `global.appConfig.externalDiffs.enabled` key should have a `true` value.
- The dependency proxy feature is not enabled by default. For the object storage settings for `dependencyProxy` to take effect, the `global.appConfig.dependencyProxy.enabled` key should have a `true` value.
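The two feature flags mentioned above can be set alongside the bucket settings, for example:

```shell
--set global.appConfig.externalDiffs.enabled=true
--set global.appConfig.dependencyProxy.enabled=true
```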
Below is an example of the configuration options:

```shell
--set global.appConfig.lfs.bucket=gitlab-lfs-storage
--set global.appConfig.lfs.connection.secret=object-storage
--set global.appConfig.lfs.connection.key=connection

--set global.appConfig.artifacts.bucket=gitlab-artifacts-storage
--set global.appConfig.artifacts.connection.secret=object-storage
--set global.appConfig.artifacts.connection.key=connection

--set global.appConfig.uploads.bucket=gitlab-uploads-storage
--set global.appConfig.uploads.connection.secret=object-storage
--set global.appConfig.uploads.connection.key=connection

--set global.appConfig.packages.bucket=gitlab-packages-storage
--set global.appConfig.packages.connection.secret=object-storage
--set global.appConfig.packages.connection.key=connection

--set global.appConfig.externalDiffs.bucket=gitlab-externaldiffs-storage
--set global.appConfig.externalDiffs.connection.secret=object-storage
--set global.appConfig.externalDiffs.connection.key=connection

--set global.appConfig.terraformState.bucket=gitlab-terraform-state
--set global.appConfig.terraformState.connection.secret=object-storage
--set global.appConfig.terraformState.connection.key=connection

--set global.appConfig.dependencyProxy.bucket=gitlab-dependencyproxy-storage
--set global.appConfig.dependencyProxy.connection.secret=object-storage
--set global.appConfig.dependencyProxy.connection.key=connection

--set global.appConfig.ciSecureFiles.bucket=gitlab-ci-secure-files
--set global.appConfig.ciSecureFiles.connection.secret=object-storage
--set global.appConfig.ciSecureFiles.connection.key=connection
```
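The buckets themselves are not created by the chart. As an illustration, assuming AWS S3 and the bucket names used above, they could be created up front with the AWS CLI:

```shell
# Create one bucket per object type; names match the --set example above.
for bucket in gitlab-lfs-storage gitlab-artifacts-storage gitlab-uploads-storage \
              gitlab-packages-storage gitlab-externaldiffs-storage \
              gitlab-terraform-state gitlab-dependencyproxy-storage \
              gitlab-ci-secure-files; do
  aws s3 mb "s3://${bucket}"
done
```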
See the charts/globals documentation on appConfig for full details.
Create the secret(s) per the connection details documentation, and then configure the chart to use the provided secrets. Note that the same secret can be used for all of them.
Examples for the AWS (any S3-compatible storage, such as Azure using MinIO) and Google providers can be found in `examples/objectstorage`.
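Assuming the S3 example connection file from `examples/objectstorage` is saved as `rails.s3.yaml`, a single shared secret matching the `--set` values above could be created with:

```shell
kubectl create secret generic object-storage --from-file=connection=rails.s3.yaml
```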
### appConfig configuration

1. Decide on which storage service to use.
1. Copy the appropriate file to `rails.yaml`.
1. Edit with the correct values for the environment.
1. Follow the connection details documentation for creating the secret.
1. Configure the chart as documented.
## Backups
Backups are also stored in object storage, and must be configured to point externally rather than to the included MinIO service. The backup/restore procedure uses two separate buckets:

- A bucket for storing backups (`global.appConfig.backups.bucket`)
- A temporary bucket for preserving existing data during the restore process (`global.appConfig.backups.tmpBucket`)
AWS S3-compatible object storage systems, Google Cloud Storage, and Azure Blob Storage are supported backends. You can configure the backend type by setting `global.appConfig.backups.objectStorage.backend` to `s3` for AWS S3, `gcs` for Google Cloud Storage, or `azure` for Azure Blob Storage. You must also provide a connection configuration through the `gitlab.toolbox.backups.objectStorage.config` key.

When using Google Cloud Storage with a secret, the GCP project must be set with the `global.appConfig.backups.objectStorage.config.gcpProject` value.
For S3-compatible storage:

```shell
--set global.appConfig.backups.bucket=gitlab-backup-storage
--set global.appConfig.backups.tmpBucket=gitlab-tmp-storage
--set gitlab.toolbox.backups.objectStorage.config.secret=storage-config
--set gitlab.toolbox.backups.objectStorage.config.key=config
```
For Google Cloud Storage (GCS) with a secret:

```shell
--set global.appConfig.backups.bucket=gitlab-backup-storage
--set global.appConfig.backups.tmpBucket=gitlab-tmp-storage
--set gitlab.toolbox.backups.objectStorage.backend=gcs
--set gitlab.toolbox.backups.objectStorage.config.gcpProject=my-gcp-project-id
--set gitlab.toolbox.backups.objectStorage.config.secret=storage-config
--set gitlab.toolbox.backups.objectStorage.config.key=config
```
For Google Cloud Storage (GCS) with Workload Identity Federation for GKE, only the backend and buckets need to be set. Make sure `gitlab.toolbox.backups.objectStorage.config.secret` and `gitlab.toolbox.backups.objectStorage.config.key` are not set, so that the cluster uses Google's Application Default Credentials:

```shell
--set global.appConfig.backups.bucket=gitlab-backup-storage
--set global.appConfig.backups.tmpBucket=gitlab-tmp-storage
--set gitlab.toolbox.backups.objectStorage.backend=gcs
```
For Azure Blob Storage:

```shell
--set global.appConfig.backups.bucket=gitlab-backup-storage
--set global.appConfig.backups.tmpBucket=gitlab-tmp-storage
--set gitlab.toolbox.backups.objectStorage.backend=azure
--set gitlab.toolbox.backups.objectStorage.config.secret=storage-config
--set gitlab.toolbox.backups.objectStorage.config.key=config
```
See the backup/restore object storage documentation for full details.
### Backups storage example

1. Create the `storage.config` file:

   - On Amazon S3, the contents should be in the s3cmd configuration file format:

     ```ini
     [default]
     access_key = AWS_ACCESS_KEY
     secret_key = AWS_SECRET_KEY
     bucket_location = us-east-1
     multipart_chunk_size_mb = 128 # default is 15 (MB)
     ```

   - On Google Cloud Storage, you can create the file by creating a service account with the `storage.admin` role and then creating a service account key. Below is an example of using the `gcloud` CLI to create the file:

     ```shell
     export PROJECT_ID=$(gcloud config get-value project)
     gcloud iam service-accounts create gitlab-gcs --display-name "GitLab Cloud Storage"
     gcloud projects add-iam-policy-binding --role roles/storage.admin ${PROJECT_ID} --member=serviceAccount:gitlab-gcs@${PROJECT_ID}.iam.gserviceaccount.com
     gcloud iam service-accounts keys create --iam-account gitlab-gcs@${PROJECT_ID}.iam.gserviceaccount.com storage.config
     ```

   - On Azure Storage:

     ```ini
     [default]
     # Setup endpoint: hostname of the Web App
     host_base = https://your_minio_setup.azurewebsites.net
     host_bucket = https://your_minio_setup.azurewebsites.net
     # Leave as default
     bucket_location = us-west-1
     use_https = True
     multipart_chunk_size_mb = 128 # default is 15 (MB)
     # Setup access keys
     # Access Key = Azure Storage Account name
     access_key = AZURE_ACCOUNT_NAME
     # Secret Key = Azure Storage Account Key
     secret_key = AZURE_ACCOUNT_KEY
     # Use S3 v4 signature APIs
     signature_v2 = False
     ```

1. Create the secret:

   ```shell
   kubectl create secret generic storage-config --from-file=config=storage.config
   ```
## Google Cloud CDN
- Introduced in GitLab 15.5.
You can use Google Cloud CDN to cache and fetch data from the artifacts bucket. This can help improve performance and reduce network egress costs.
Configuration of Cloud CDN is done via the following keys:

- `global.appConfig.artifacts.cdn.secret`
- `global.appConfig.artifacts.cdn.key` (default is `cdn`)
To use Cloud CDN:

1. Set up Cloud CDN to use the artifacts bucket as the backend.
1. Create a key for signed URLs.
1. Give the Cloud CDN service account permission to read from the bucket.
1. Prepare a YAML file with the parameters using the example in `rails.googlecdn.yaml`. You will need to fill in the following information:

   - `url`: Base URL of the CDN host from step 1
   - `key_name`: Key name from step 2
   - `key`: The actual secret from step 2

1. Load this YAML file into a Kubernetes secret under the `cdn` key. For example, to create a secret `gitlab-rails-cdn`:

   ```shell
   kubectl create secret generic gitlab-rails-cdn --from-file=cdn=rails.googlecdn.yml
   ```

1. Set `global.appConfig.artifacts.cdn.secret` to `gitlab-rails-cdn`. If you're setting this via a `helm` parameter, use:

   ```shell
   --set global.appConfig.artifacts.cdn.secret=gitlab-rails-cdn
   ```
## Troubleshooting

### Azure Blob: URL [FILTERED] is blocked: Requests to the local network are not allowed

This happens when the Azure Blob hostname resolves to an RFC 1918 (local/private) IP address. As a workaround, allow outbound requests for your Azure Blob hostname (`yourinstance.blob.core.windows.net`).