Gitlab pipelines for Continuous Delivery


Originally posted on Medium. This post refers to my working experience as a Tech Lead in Musement.

Introduction

This article aims to share the tools and practices adopted in Musement to ease the development flow and speed up the release cycle of new backend stacks and services. We've seen the benefits of Continuous Delivery in our previous post; here we take a deeper look at how Continuous Delivery can be achieved using Gitlab, in order to provide more insight into the technical aspects.

Reasons

Over the recent months we started several projects on new stacks and we migrated our repositories to Gitlab. These two factors served as enablers to promote the Continuous Delivery approach and to develop a set of pipelines to achieve this goal. As development teams, we wanted to shape Continuous Integration and Continuous Delivery pipelines based on our varying needs, and to have full control over the steps our code passes through, from a git push command to the deploy in production.

Repeatability and automation

We chose to work with Github flow: short-lived branches are opened and merged (after the code review) into the default branch. Anything in the default branch can always be deployed, and should be deployed as soon as possible.

We required a completely repeatable development environment, both to prevent issues caused by misconfiguration or misalignment and to be able to troubleshoot any problems that may arise in Continuous Integration. With Docker being the leading container platform, the solution was obvious: using containers in the local development environment allows developers to run different stacks without installing anything on their machines, which is particularly effective in a distributed team to avoid misalignments or configuration issues.

Another decision made to ease the development workflow is to use make commands to define the standard operations on a repository, for example make run to run the project locally, or make test to run unit tests. Developers therefore don't have to remember the specific docker run commands for all the projects they're working on, and the commands are kept up to date in the repo. The same set of commands is used in the Continuous Integration pipeline, which leverages Gitlab CI features. All the checks related to the development workflow have been automated and, depending on the project stack, the pipeline can include a different set of steps. The basic set of steps consists of unit tests, static code analysis (lint) and source code compilation.
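The original post embeds the pipeline definition at this point; here is a minimal sketch of what such a .gitlab-ci.yml might look like, where the base image name is a hypothetical placeholder and the make targets mirror the local workflow described above:

```yaml
# .gitlab-ci.yml — an illustrative sketch, not the actual Musement pipeline.
# The base image name is hypothetical; the make targets mirror the local workflow.
stages:
  - test
  - lint
  - build

.base:
  image: registry.example.com/ci/base:latest  # custom image providing the tools needed by the steps

unit-tests:
  extends: .base
  stage: test
  script:
    - make test   # same command developers run locally

lint:
  extends: .base
  stage: lint
  script:
    - make lint   # static code analysis

compile:
  extends: .base
  stage: build
  script:
    - make build  # source code compilation
```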

As we can see from the snippet above, the pipeline definition remains concise and easy to read. All the steps extend a base image, a custom Docker image used across our pipelines that provides the tools needed by the steps. Below, you can see the commands definition.
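The commands definition was also embedded in the original post; a hedged sketch of such a Makefile follows, where the image name and the in-container entrypoints are assumptions:

```makefile
# Makefile — illustrative sketch; image name and entrypoints are hypothetical.
IMAGE ?= my-service:local

build:
	docker build -t $(IMAGE) .

run: build   ## Run the project locally in a container
	docker run --rm -p 8080:8080 $(IMAGE)

test: build  ## Run unit tests inside the container
	docker run --rm $(IMAGE) ./bin/test  # hypothetical test entrypoint

lint: build  ## Run static code analysis inside the container
	docker run --rm $(IMAGE) ./bin/lint  # hypothetical lint entrypoint
```

With this layout, the CI jobs and the developers invoke exactly the same targets, which is what keeps the local and pipeline behaviour aligned.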

Delivery and infrastructure

Once the development flow had been automated, we also automated the delivery of the artifacts, by extending the pipeline to release them. When a pull request gets merged into the default branch, a set of jobs is executed to promote the artifact to the pre-production environment and finally to production. Depending on the project stack and on the architectural choices, these jobs can include OpenAPI or Swagger documentation generation, Docker image builds and pushes to a registry, database migration execution or Lambda function uploads. The following snippet shows the steps for a Docker-based service on ECS Fargate. The steps are executed in an AWS image that provides a specific version of the AWS CLI.
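The snippet itself was embedded in the original post; as a sketch under stated assumptions, the delivery jobs might look like the fragment below. The registry, cluster and service names are illustrative, ECR_REGISTRY is assumed to be a CI/CD variable, and the tooling image is a hypothetical one bundling the docker client with a pinned AWS CLI:

```yaml
# Delivery jobs for a Docker-based service on ECS Fargate — a sketch.
# The release and deploy stages would be added to the pipeline's stages list.
push-image:
  stage: release
  image: registry.example.com/ci/aws-tools:latest  # hypothetical image with docker + pinned AWS CLI
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin "$ECR_REGISTRY"
    - docker build -t "$ECR_REGISTRY/my-service:$CI_COMMIT_SHORT_SHA" .
    - docker push "$ECR_REGISTRY/my-service:$CI_COMMIT_SHORT_SHA"

deploy-qa:
  stage: deploy
  image: amazon/aws-cli  # pin a specific CLI version in practice, as the post describes
  tags:
    - qa  # routes the job to the runner configured for the qa environment
  script:
    # Forces the service to pull the freshly pushed image; a real setup would
    # typically register a new task definition revision instead.
    - aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment
```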

A further detail about tags: to centralize the configuration for interacting with AWS accounts, the pipelines run on custom Gitlab runners, which have been tagged with the corresponding environment. In this way, developers can choose the environment they want to deliver their artifact to simply by specifying the right tag among dev, qa and prod. The runners' configuration is centralized in the infrastructure repository so that it can be changed globally. At the time of writing, the production deploy step still requires manual approval: the goal is to automate this step too, by running a smoke test suite after a successful deploy on the pre-production environment. This is what such a pipeline looks like:

[Image: CI/CD Gitlab pipeline]
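In .gitlab-ci.yml terms, the tag-based environment selection and the manual production gate might look like this fragment; the job names are illustrative, while the dev, qa and prod tags match the runner setup described above:

```yaml
# Sketch of environment selection via runner tags, with a manual prod gate.
deploy-qa:
  stage: deploy
  tags:
    - qa          # executed by the runner holding the qa AWS configuration
  script:
    - make deploy # hypothetical target wrapping the AWS CLI calls

deploy-prod:
  stage: deploy-prod
  tags:
    - prod        # executed by the runner holding the prod AWS configuration
  when: manual    # production deploy still requires manual approval
  script:
    - make deploy
```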

The very same approach is being adopted for infrastructure deploys: as the infrastructure repositories have been migrated to Gitlab, the same flow now applies to contributions to the infrastructure definition (via CloudFormation), allowing developers too to take part in the complete process of developing and running their applications.

You build it, you run it. (W. Vogels)
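Under this flow, an infrastructure job can be a thin wrapper around CloudFormation. A sketch, where the stack name, template path and runner tag are assumptions:

```yaml
# Infrastructure deploy sketch — stack name, template path and tag are illustrative.
deploy-infrastructure:
  image: amazon/aws-cli  # pin a specific CLI version in practice
  tags:
    - qa
  script:
    - aws cloudformation deploy --stack-name my-service-qa --template-file infrastructure/template.yml --capabilities CAPABILITY_NAMED_IAM
```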

Conclusion

We've seen how Continuous Delivery has been approached in Musement and how it can be achieved using Gitlab. Admittedly, this is quite a custom use case, but it can be useful as an overview and as a starting point for shaping pipelines according to your own needs. Gitlab CI is a powerful tool that allows flexibility and full control over the flow you want your software to follow, from the moment its code gets pushed to the moment it gets delivered to production.