
Atomic deployments: How to update your website with zero downtime

Congratulations, the fine folks responsible for developing your website have just built a cool new feature (or something dull like a mission-critical bug fix, yawn), and they’re ready to deploy it! But how are they going to do that? Join us on this journey to learn more about the wonders of atomic deployments.

[Image: Rocket launch HQ]

Saddle up

The traditional method for deploying changes to your website is one of the signature habits of ‘cowboy coding’. Your typical cowboy coder is the fastest gun in the West, hammering keys by the seat of their pants and answering to no man. A romantic figure in a western movie… a ‘difficult’ one in the office. But the recklessness of this character is usually just a symptom of inexperience. Most of us developers start out as cowboy coders, and here’s what we’d do when deploying new code to a website:

  1. Grab the files that we have changed.
  2. Upload them directly onto the server that hosts the live website.
  3. Mosey on down to the saloon to keep exhausting the metaphor, while the website bursts into flames.

Granted, the website is unlikely to burst into flames. But this workflow does have significant flaws which can result in costly bugs and downtime.

What can go wrong with manual deployments

Pushing even a small change to a website will typically involve adding or replacing many files. This is because the humble website is actually a complex ecosystem of interdependent pieces. Files are written in different programming languages, responsible for different things. They are further segmented into smaller and smaller component pieces as part of our drive towards a ‘separation of concerns’, which helps developers navigate and debug the codebase. It also results in large numbers of files, nested deeply within folders and subfolders, that then need to be compiled into an optimised format for delivery to the end user’s browser. These transformations are performed by delicately balanced ‘build processes’ – a further layer of complexity which developers wrestle with in the interests of delivering a modern experience to the end user.

Immediately we can see a prime opportunity to break stuff if we try to deploy in the error-prone way documented above. You have to work your way through the directory structure of the local development site, and move all the relevant pieces to their corresponding locations on the production server. It’s all too easy to forget something, or misplace something, and one tiny errant cog can derail the whole machine.

Connecting to the server and migrating files via File Transfer Protocol (FTP), with help from a drag-and-drop application such as FileZilla, is still commonplace. Beginners and hobbyists routinely do this, as do many freelancers and small agencies who work on low-stakes projects. If you run a cheap-and-cheerful website with low traffic, you’ll probably live with the consequences of occasional mishaps and downtime. Others would be wise to question whether there are alternatives.
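
For illustration, here’s roughly what that manual push looks like when scripted with Python’s standard-library ftplib. It’s a sketch of the anti-pattern, not a recommendation – the host, credentials and file list are all placeholders:

```python
# A manual FTP deployment, scripted. Host, credentials and paths are
# placeholders for illustration only.
from ftplib import FTP

CHANGED_FILES = [
    ("dist/app.js", "public_html/dist/app.js"),
    ("dist/styles.css", "public_html/dist/styles.css"),
    # ...did we remember every file this change touched?
]

with FTP("ftp.example.com") as ftp:
    ftp.login(user="deploy", passwd="********")
    for local_path, remote_path in CHANGED_FILES:
        with open(local_path, "rb") as f:
            # Each file is overwritten on the live site one at a time, so a
            # visitor can hit the server mid-transfer and be served a mix of
            # old and new code.
            ftp.storbinary(f"STOR {remote_path}", f)
```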

You Git what your devs serve

One alternative for a dev team is to configure Git on the production server (if the host supports this) to pull in code changes from the Git repository. This is a step in the right direction. The bulk of the codebase for any project inevitably lives in a repository somewhere like GitHub already, enabling a team to collaborate and organise effectively – or, at its simplest, safely maintaining a record of a lone developer’s work over time. So why not have a mirror of this repository on the production server? Now, when we push a new feature to our GitHub repo, we can pull that altered batch of files into the production server’s repo with a few keystrokes. Quicker and easier: we’ve improved our ‘developer ergonomics’ and reduced the scope for human error.
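
In practice this might be as simple as the following sketch, assuming SSH access to the production server and a clone of the repository sitting in the web root (the host, path and branch are placeholders):

```python
# A 'git pull' deployment: ask the production server to fetch the latest
# code. Host, path and branch are placeholders for illustration.
import subprocess

SERVER = "deploy@example.com"
WEB_ROOT = "/var/www/site"

subprocess.run(
    ["ssh", SERVER, f"cd {WEB_ROOT} && git pull origin main"],
    check=True,  # raise immediately if the pull fails
)
```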

However, even though we’ve leveraged some degree of automation to accurately move all the right things into the right places, we’re not home and dry. The operation to copy over these files still takes time. While pieces of code are still being added, the codebase is, by definition, incomplete, and thus unstable. Any visitors using your website while this takes place may trigger functions which rely upon other functions elsewhere, which in turn rely upon yet others, and so on. Anything incomplete in this chain will cause errors. What if this happens during the checkout process on your e-commerce site? Or when the user is midway through completing an important form?

Users have high expectations of a seamless experience online, and diminishing attention spans. Website owners should be seeking that holy grail: zero downtime.

Step forward, Atomic Deployments

As we progress, and as the web itself matures, developers should graduate from cowboy coding to more carefully considered, methodical practices. Atomic deployments are just such a warm, safe cocoon of due diligence compared to what we have discussed above. They provide near-instantaneous deployment of fully functioning code, with an added insurance policy: the ability to roll back to an earlier release at any time, as if by magic.

When we trigger an atomic deployment, we send our files to the server and have scripts run some clever processes which ensure that our new version of the site is not ‘published’ until everything is operational; no new code is available for execution until the codebase is complete and the build processes have run successfully. Only then does the new version take over.

In just a little more detail, this is how it looks:

  1. The new version’s files are placed into a new directory on the server. This new directory, like its siblings, is identified by a unique name derived from the Git commit hash, or a timestamp. This allows all the versions to be differentiated.
  2. All the required build processes are initiated. This is where a highly optimised bundle of files takes shape from the myriad source files. Uglifying, tree-shaking, auto-prefixing… cryptic jargon for some of what we do to ship a website trimmed of fat that can be reliably understood and speedily rendered by the user’s browser.
  3. Once the build has succeeded, the web root of the server is symbolically linked to the new version’s folder. This works as though the contents of the web root directory have been instantaneously swapped from the old version to the new version.
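
Putting those three steps together, a deploy script running on the server might look something like this minimal sketch. The repository URL, the npm build commands and the directory layout are all assumptions for illustration; dedicated deployment tools and hosting platforms flesh this pattern out considerably:

```python
# A minimal atomic deployment sketch, run on the server. The repo URL, build
# commands and directory layout are assumptions for illustration.
import os
import subprocess
import time

BASE = "/var/www/site"
REPO = "git@github.com:example/site.git"

# 1. Place the new version's files in a uniquely named release directory.
release = os.path.join(BASE, "releases", str(int(time.time())))
subprocess.run(["git", "clone", "--depth", "1", REPO, release], check=True)

# 2. Run the build inside the new directory. If any step fails, check=True
#    raises and we never reach the swap -- the live site is untouched.
subprocess.run(["npm", "ci"], cwd=release, check=True)
subprocess.run(["npm", "run", "build"], cwd=release, check=True)

# 3. Atomically repoint the web root. Renaming over an existing symlink is
#    atomic on POSIX systems, so visitors see either the old version or the
#    new one, never a half-deployed mix.
current = os.path.join(BASE, "current")
tmp_link = current + ".tmp"
os.symlink(release, tmp_link)
os.replace(tmp_link, current)
```

Here the web server’s document root would point at /var/www/site/current, so the swap takes effect for the very next request.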

Bear in mind that the server holds a chosen number of the most recent versions of the website, to allow for rolling back to any given release by simply linking the web root to it. As such, reverting to an earlier version, for whatever reason, is not a time-consuming re-deployment. It’s like flicking a switch. Let there be light!
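
Under the assumptions of the sketch above, a rollback might be as little as this:

```python
# A minimal rollback sketch: repoint 'current' at the previous release.
# Assumes the directory layout from the deployment sketch above, and that
# at least two releases exist.
import os

BASE = "/var/www/site"
releases_dir = os.path.join(BASE, "releases")
releases = sorted(os.listdir(releases_dir))  # timestamped names sort oldest-first

previous = os.path.join(releases_dir, releases[-2])  # the release before the live one
tmp_link = os.path.join(BASE, "current.tmp")
os.symlink(previous, tmp_link)
os.replace(tmp_link, os.path.join(BASE, "current"))  # the same atomic swap
```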

Admittedly, there’s more work for the development team to do when configuring this whole system. There will be things such as image uploads which are not stored in the Git repository; these need to live in dedicated directories which all releases can share. You can let the developers figure out these finishing touches – you’re busy basking in a warm glow of contentment, reassured by the tidy logic of this brilliant deployment system.
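
For the curious, that shared-directory trick might look something like this sketch – the ‘shared’ layout and the uploads path inside a release are assumptions:

```python
# Wiring a shared uploads directory into a new release, so user-uploaded
# files survive every deployment and rollback. Paths are placeholders.
import os

BASE = "/var/www/site"
release = os.path.join(BASE, "releases", "1718000000")  # placeholder release

shared_uploads = os.path.join(BASE, "shared", "uploads")
os.makedirs(shared_uploads, exist_ok=True)

# Each release gets a symlink to the shared directory instead of its own
# copy of the uploads.
link = os.path.join(release, "public", "uploads")
os.makedirs(os.path.dirname(link), exist_ok=True)
os.symlink(shared_uploads, link)
```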

No need to compromise

Imagine that your dev team has identified a security vulnerability. It’s critical that they deploy their fix immediately, but it has come at the worst moment for your business when website traffic is at its highest. With atomic deployment, you don’t have to weigh up the potential costs of waiting a few hours to make a sensitive change, against serving loads of users a white screen of death. Just deploy.

We’d argue that any development team worth their salt today will have the expertise and foresight to utilise this robust, dependable method for rolling out updates.