September 26, 2024

Paul Gradie

Read Time: ~10 minutes

<aside> 📌 At Empower, we seek individuals who challenge personal assumptions, value ownership and trust, and strive for excellence to inspire and empower their team. If this article connected with you, join our team!

Join Empower.

</aside>

<aside> 📎 As companies scale, system pressures increase, resulting in challenges like code conflicts and maintenance complexities. At Empower, our monorepo architecture led to significant maintenance overhead due to the tight coupling of build and deployment processes across services. To address this, we restructured our deployment system by decoupling build from deployment, enabling independent service deployments. This post outlines the key principles we developed to maintain clarity and efficiency in our Azure pipelines.

</aside>

As companies grow, pressure inevitably builds on different parts of their systems. With more customers, certain code paths start to see increased traffic, and developers begin to overlap in unexpected areas, leading to code conflicts.

Empower has recently faced these challenges in its delivery system. Our backend is organized as a monorepo, meaning nearly all backend components are housed within a single repository. Among these components is a service used by our support team. This service shares a data model with our main API, which means we have to version and deploy both of them together. As teams began needing to deploy these services separately, we encountered maintenance challenges due to the coupling of our build and deployment implementations across these services.

We decided to restructure our deployment system around a new model to address the general maintenance overhead. This model was discussed in a previous post, which you can check out if you're interested. In summary, the model is to build the monorepo continuously and decouple the build from deployments: all backend services are built together, but they are deployed independently.

Throughout this process of refactoring and rebuilding, we identified and established several key principles that keep our pipelines comprehensible and maintainable. I'd like to share these principles and the associated insights with you in this post.

Principles for Developing Deployment Systems in Azure

The following are six core principles you can follow to achieve success in your Azure pipeline implementations. You'll come across a fair bit of YAML in this post, so apologies for that! In that YAML, there are variables and parameters referenced. Parameters can come from the pipeline's global scope or from template scopes; variables will typically come from the global scope.
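To make the scoping concrete, here is a minimal, hypothetical sketch of a pipeline that declares a global parameter and a global variable and passes both into a job template. The template path and the values used are assumptions for illustration, not our actual configuration.

```yaml
# Minimal sketch of where parameters and variables come from. Names like
# targetEnvironment, artifactFeed, and templates/deploy-jobs.yml are illustrative only.
parameters:
  - name: targetEnvironment       # pipeline-scope parameter, set when the run is queued
    type: string
    default: staging

variables:
  artifactFeed: empower-artifacts # global-scope variable (hypothetical value)

stages:
  - stage: Deploy
    jobs:
      - template: templates/deploy-jobs.yml    # the template declares its own parameters
        parameters:
          environment: ${{ parameters.targetEnvironment }}   # global parameter passed down
          feed: ${{ variables.artifactFeed }}                # global variable passed down
```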

1. Decouple Build from Deploy

This is more of a reiteration of the general model described in the previous post, but it's worth enshrining as a principle: do not pollute deployment pipelines with build pipelines. Keeping the two separate reduces maintenance complexity and removes build overhead from deployments, making them much faster.

When you need to redeploy quickly, having to wait for a build is a significant time sink. Also, when delivering software, we should always aim to follow a ‘build-and-publish-once’ philosophy. Once your software is built and published, that is the final version prepared for a release. You can test it multiple times, deploy it multiple times, and promote it through environments, but you should only ever build and publish it once. As soon as you recompile (and download external dependencies…) and publish, you should consider it a new version.
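As an illustration of this principle, the sketch below shows a deployment pipeline that consumes an artifact published by a separate, already-completed build pipeline instead of rebuilding it. The pipeline, artifact, and script names (backend-build, support-service, deploy.sh) are assumptions, not our actual setup.

```yaml
# Hedged sketch: a deployment pipeline that downloads a previously published
# artifact from a separate build pipeline, rather than building from source.
resources:
  pipelines:
    - pipeline: backendBuild        # local alias used by the download step
      source: backend-build         # name of the CI (build) pipeline (hypothetical)
      trigger: none                 # deploys are started deliberately, not on every build

jobs:
  - deployment: DeploySupportService
    environment: production
    strategy:
      runOnce:
        deploy:
          steps:
            - download: backendBuild              # fetch the already-built artifact
              artifact: support-service
            - script: ./deploy.sh $(Pipeline.Workspace)/backendBuild/support-service
              displayName: Deploy previously built artifact
```

Because no compilation or dependency restore happens here, a redeploy is just a download and a deployment step.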

2. Design at the Job Level

Pipelines, whether single-stage or multi-stage, are nothing more than a collection of jobs that execute in some sequence. A job represents all of the actions your pipeline takes while on a given agent (i.e., a worker machine). The ability to define distinct jobs allows for things like parallel execution and encapsulation of tasks.

Jobs should be meaningful encapsulations of components of your build or deployment pipelines.
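For example, a deployment might be broken into a database migration job followed by independent service deployment jobs that run in parallel once the migration completes. The job and script names below are hypothetical, just to show the shape of job-level design.

```yaml
# Illustrative sketch: each job encapsulates one meaningful unit of work and
# runs on its own agent. Job and script names are hypothetical.
jobs:
  - job: MigrateDatabase
    pool:
      vmImage: ubuntu-latest
    steps:
      - script: ./run-migrations.sh
        displayName: Apply database migrations

  - job: DeployApi
    dependsOn: MigrateDatabase      # explicit ordering between jobs
    pool:
      vmImage: ubuntu-latest
    steps:
      - script: ./deploy-api.sh
        displayName: Deploy the API service

  - job: DeploySupportService
    dependsOn: MigrateDatabase      # runs in parallel with DeployApi
    pool:
      vmImage: ubuntu-latest
    steps:
      - script: ./deploy-support.sh
        displayName: Deploy the support service
```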