At first glance, the headline of "It's Time to Stop Patching Your Servers" might seem a bit sensational. Doesn't that go against every core security principle, and wouldn't it create chaos in your environment? Yes, of course it would, and you should absolutely keep deploying the latest patches for your systems. What we need to do, though, is change the conversation.

Patching is probably one of the least exciting aspects of the IT profession (unless of course you get excited to see what will break when that new patch is deployed!). Over the years I have seen a lot of issues with patching. Some of the common ones include:

  • Some organizations simply forget to deploy the patches.
  • Some believe that, based on where a system sits on the network, patches aren't always needed.
  • Some opt for a selective patching approach on Windows Server systems older than Windows Server 2016.
  • Some worry about application impacts, which delays deployment.
  • Some just lack maturity in their patching processes.

There are, of course, many other reasons; these are just a few examples.

When we consider the power of the cloud, modern DevOps principles, and overall maturity in our processes, patching is really a waste of time: it increases the chance of breaking something, or the chance that it simply won't happen at all, which creates risk. One of the key aspects of modern DevOps practice is the ability to support continuous delivery. What this essentially means is that rather than deploying patches and hoping for the best, we should deploy entirely new systems each time the application is updated or the underlying operating system needs to be patched.
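
To make that concrete, here is a minimal sketch of the "replace, don't patch" idea for a containerized workload. It assumes the Docker CLI is available and uses a hypothetical image name (registry.example.com/myapp); your build tooling will differ, but the principle is the same: every release is rebuilt on the latest patched base image rather than patched in place.

```python
# rebuild_and_tag.py - a minimal sketch of immutable delivery:
# instead of patching running servers, rebuild the image on the latest
# patched base and ship a brand-new artifact.
# Assumptions: the Docker CLI is installed, and "registry.example.com/myapp"
# is a hypothetical registry/image name used only for illustration.

import datetime
import subprocess

IMAGE = "registry.example.com/myapp"  # hypothetical image name


def build_and_push() -> str:
    """Build a fresh image (pulling the newest base image) and push it."""
    tag = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%d%H%M%S")
    image_ref = f"{IMAGE}:{tag}"

    # --pull forces Docker to fetch the latest (already-patched) base image,
    # so OS updates arrive by rebuilding, not by patching in place.
    subprocess.run(
        ["docker", "build", "--pull", "-t", image_ref, "."],
        check=True,
    )
    subprocess.run(["docker", "push", image_ref], check=True)
    return image_ref


if __name__ == "__main__":
    print(f"Built and pushed {build_and_push()}")
```

From there, the deployment step simply rolls the new tag out and retires the old instances; nothing is ever touched by hand.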

This approach builds on the "Pets vs. Cattle" analogy. If you aren't familiar with the comparison: traditional servers are like pets. We give them personal names, we care for them significantly, and if a pet has an issue, we spend a lot of time and money to care for it. Cattle, on the other hand, are generally not named individually; if one has an issue, it is more easily replaced. When we apply this to our servers, traditional servers are pets. We spend a lot of time and effort on them and, when we lose one, it can be painful. The shift we need to make is towards cattle, where the loss of any single server really doesn't matter.

So how do we make this happen? First, we need to look at how we build our applications and transition to a packaging and deployment model that supports continuous delivery. Second, this requires a move towards infrastructure-as-code, where every deployment is completely scripted, making it consistent and repeatable; deployment teams should not be deploying servers and applications manually (see the sketch below). Third, many third-party applications can also be deployed using the infrastructure-as-code approach and should be, wherever possible.
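
As an illustration of what "completely scripted" can look like, here is a minimal sketch using Pulumi's Python SDK with the AWS provider; Terraform, Bicep, or CloudFormation templates would serve the same purpose. The resource names and instance size are purely illustrative assumptions, not a prescribed setup.

```python
"""__main__.py - a minimal Pulumi (Python) sketch of infrastructure-as-code.
Assumptions: the pulumi and pulumi_aws packages are installed and AWS
credentials are configured; names and sizes are illustrative only."""

import pulumi
import pulumi_aws as aws

# Look up the most recent Amazon Linux 2 AMI so every deployment starts
# from an already-patched base image instead of an aging golden image.
ami = aws.ec2.get_ami(
    most_recent=True,
    owners=["amazon"],
    filters=[aws.ec2.GetAmiFilterArgs(
        name="name",
        values=["amzn2-ami-hvm-*-x86_64-gp2"],
    )],
)

# Declare the server; a later `pulumi up` against a newer AMI replaces it
# with a fresh instance rather than patching the old one in place.
web = aws.ec2.Instance(
    "web",                      # logical name, illustrative
    ami=ami.id,
    instance_type="t3.micro",   # illustrative size
    tags={"Name": "web"},
)

pulumi.export("public_ip", web.public_ip)
```

Because the stack definition lives in source control, replacing a server means re-running the deployment against a newer, already-patched image, not logging in to patch the old one.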

Obviously, there will be a set of applications that won't work with these deployment approaches and will still require traditional patching. Over time, though, these should be phased out.

Finally, when we talk about the cloud, while we can deploy and manage these applications using traditional virtual servers, that approach often does not provide much, if any, cost savings and does not deliver the best benefits the cloud has to offer. Moving towards cloud-native applications, in which there are no servers for us to manage at all, gives us more application options, allows us to scale with demand, lets us deliver complete systems anywhere in the world, and, best of all, requires no patching on our part!
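
As a simple illustration of "no servers at all," here is a minimal sketch of an AWS Lambda handler; Azure Functions or Google Cloud Functions follow the same pattern. The function name and payload shape are illustrative assumptions, but note what is missing: there is no operating system for us to maintain.

```python
# handler.py - a minimal sketch of a serverless (AWS Lambda) function
# behind an API Gateway-style event. The route and payload shape are
# illustrative; the point is that there is no server or OS for us to patch,
# because the provider maintains the underlying runtime and hosts.

import json


def handler(event, context):
    """Return a simple JSON response for an API Gateway proxy event."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The cloud provider patches the runtime and the hosts underneath it; our responsibility shrinks to the application code itself.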

Embrace the combined power of DevOps processes, infrastructure-as-code, and the cloud and stop patching those servers!