Recently, when we migrated the CI/CD pipeline for a client’s application to Azure DevOps Pipelines, we were pleasantly surprised.
But before we get into how we re-engineered the pipeline, let me take you through the original setup with CircleCI.
Each project repository has its own build pipeline and generates a Docker image which is pushed to a Docker registry. Finally, it commits to the Helm Chart repo on the initial branch, Staging. Separating the deployment pipeline from the individual projects is a foundational element of larger-scale microservice architectures: it provides a clear view and history of a logical part of the application stack, archived in version control.
The Helm Chart repo has a branch for each target environment: Staging, UAT and Production. UAT and Production also have a queue branch in front of the deployment branch. For example, the queue branch “uat-queue-1234” is merged into the deployment branch “uat” through a Pull Request, which is approved by the appropriate stakeholders. On each CircleCI run for the Helm Chart repo, the pipeline would look up in a configuration file what the “next” environment was and whether it used a queue branch for propagation. Based on that information, it would commit either to the deployment branch of the next target environment or to its queue branch.
This setup worked well but took quite some time to create. It also required the people performing the manual propagation to understand Pull Requests and how they work. And since nothing prevented anybody from committing directly to any deployment or queue branch, merge conflicts occasionally had to be resolved.
All this logic was contained in a bash script in the CircleCI configuration. This grew to unmanageable proportions because it had to determine the next target environment, decide whether that environment was guarded by a queue branch, and commit to the correct branch in the Helm Chart repo.
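A heavily simplified sketch of what such a propagation script might look like. The config file name, its format and the branch naming scheme are assumptions based on the description above:

```shell
# Hypothetical environments.conf maps each environment to its successor and
# to the propagation mode ("queue" or "direct"), e.g.:
#   staging uat queue
#   uat production queue
next_target() {
  current="$1"; build_id="$2"
  line=$(grep "^$current " environments.conf) || return 1
  next=$(echo "$line" | cut -d' ' -f2)
  mode=$(echo "$line" | cut -d' ' -f3)
  if [ "$mode" = "queue" ]; then
    # The queue branch is later merged into the deployment branch via a PR.
    echo "$next-queue-$build_id"
  else
    echo "$next"
  fi
}
```

The real script also had to perform the actual git commits and handle failure cases, which is where the unmanageable growth came from.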
When it was decided the application was going to run in production on Azure, we also took the opportunity to migrate the pipeline into Azure DevOps.
The Docker build pipeline transferred almost directly to an Azure DevOps build pipeline and contains the exact same steps. In the visual designer of Azure DevOps you can add tasks to a step and select plugins for “npm” or “Docker” from the marketplace.
To start containers for Redis and Postgres we used the Docker plugin and selected the “run” command. This opens a form where you can add all the necessary parameters like “Ports”, “Environment Variables” and “Image Name”: basically the same things you would specify on the Docker command line interface. With these containers running, the tests that relied on datastores were able to run properly.
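Under the hood, these form fields map onto ordinary docker run arguments. A sketch of the equivalent CLI calls, wrapped in a function; the image tags, container names, ports and credentials here are assumptions, not the actual task configuration:

```shell
# Starts the datastore containers the tests depend on. All concrete values
# (image versions, names, credentials) are assumptions; the real ones live
# in the pipeline task's form fields.
start_test_dependencies() {
  docker run -d --name test-redis -p 6379:6379 redis:5
  docker run -d --name test-postgres -p 5432:5432 \
    -e POSTGRES_USER=app -e POSTGRES_PASSWORD=secret postgres:11
}
```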
After a successful test run, an image is built and pushed to the Docker registry running in the Azure cloud. After the image push, we run a one-line bash script that updates the image tag in the Helm values.yaml, which is subsequently published as the artefact of this build. The build number generated by the pipeline instance is used to tag the image, making it easy to relate it back to a specific commit.
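Such a one-liner is essentially a sed substitution. A self-contained sketch, where the file layout and key name are assumptions; BUILD_BUILDNUMBER is the environment variable Azure DevOps derives from Build.BuildNumber:

```shell
# Create a minimal values.yaml so the substitution can be shown in isolation;
# in the real pipeline this file comes from the checked-out chart.
mkdir -p helm
printf 'image:\n  repository: myregistry.azurecr.io/backend\n  tag: "latest"\n' \
  > helm/values.yaml

# Fall back to a dummy build number when run outside Azure DevOps.
BUILD_BUILDNUMBER="${BUILD_BUILDNUMBER:-20190101.1}"

# The one-liner: stamp this build's number into the chart's image tag.
sed -i "s/^  tag: .*/  tag: \"$BUILD_BUILDNUMBER\"/" helm/values.yaml
```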
The frontend build pipeline is much simpler, as it just runs the “npm run build” command to build static files for the client-side React application running in the browser. These static files are then copied into an Nginx image when the Docker image is built using the Dockerfile on the right.
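The Dockerfile itself was shown as an image in the original post and isn’t reproduced here; a minimal sketch of such an Nginx Dockerfile (the base image tag and paths are assumptions) would look like:

```dockerfile
# Serve the pre-built static files (the output of "npm run build") with Nginx.
FROM nginx:1.17-alpine
COPY build/ /usr/share/nginx/html/
```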
This image is also pushed to the registry for consumption by the Kubernetes cluster. Like the backend, this also produces an artefact for the release pipeline to pick up.
I’ve described how you set all these things up through the web interface of Azure DevOps, which isn’t really practising proper Infrastructure as Code. In the main Build Pipeline configuration screen, there is a convenient option to export the entire build pipeline to YAML, which can be added to the root of your code repository as azure-pipelines.yml to capture the pipeline in version control. When this file is discovered in your repository, it replaces any visually configured pipeline. Using the web interface for the initial configuration gives you an easier time learning the Azure DevOps YAML syntax and makes it easy to discover pipeline plugins that ease some of the steps.
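For reference, an abbreviated azure-pipelines.yml along these lines might look as follows. Npm@1 and Docker@2 are real Azure DevOps task names, but the concrete inputs here are assumptions, not our actual configuration:

```yaml
trigger:
  - master
pool:
  vmImage: 'ubuntu-latest'
steps:
  - task: Npm@1
    inputs:
      command: 'custom'
      customCommand: 'run test'
  - task: Docker@2
    inputs:
      command: 'buildAndPush'
      repository: 'backend'
      containerRegistry: 'myAzureRegistry'   # service connection name, assumed
```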
The release flow is triggered whenever an artefact is published from either the frontend or backend builds.
As before, the deployment of the application is done with Helm. The Helm chart setup we defined uses a generic values.yaml which applies to all environments, plus specific values files for each targeted cluster. For instance, the URL defined for the Ingress resource differs per environment and is part of the environment-specific values file. In the projects for frontend and backend, we also prepared a generic Helm values.yaml file. The actual deployment runs “helm upgrade” using the “helm” directory containing the default values.yaml, and we specify the “-f” option to use the environment-specific values.
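The deploy step can be sketched as a single helm upgrade invocation, wrapped in a function here; the release name, chart directory and values file naming scheme are assumptions:

```shell
# Deploys the chart to the given environment. The default values.yaml inside
# ./helm applies everywhere; -f layers the environment-specific file on top.
deploy() {
  env="$1"
  helm upgrade --install "backend-$env" ./helm \
    -f "./helm/values.$env.yaml" --namespace "$env"
}
```

Using --install makes the same command work for both the first deployment and subsequent upgrades.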
The image above shows a successful deployment triggered by an artefact produced by the backend build pipeline.
The deployment to UAT and Production requires a manual step. Propagating changes from Staging to UAT is done by the Development team leader. The main reason for the manual gate is to create a stable UAT environment where subject matter experts can test and align the content they have created in a third party system.
In the screenshot of the entire pipeline, the red arrows indicate where manual gates are implemented. The responsible person at the client will propagate changes from UAT to Production when they are satisfied with the state of the UAT environment.