3 Reasons Continuous Delivery Efforts Fail to Scale

Even with DevOps and Agile adoption on the rise and automation levels at all-time highs, many organizations still struggle to scale continuous delivery across their enterprise applications and continue to face an all-too-familiar scenario.

The development team releases an application to operations, where it fails in pre-prod because a piece of middleware was not updated or an undocumented database configuration change was made. The build fails and gets kicked back to development for rework. At that point, the development team has to stop working on their shiny new feature to fix the issue and resubmit the build to operations.

Awesome! The JIRA ticket is closed. The remediated code is checked into Git, the Jenkins build kicks off, and the development team gets back to work on their shiny new feature. Oh wait, not so fast. Now one of the operating system libraries turns out to have a known vulnerability. The build gets kicked back to development once again to remediate, and so the pattern continues. Next time it may be logins and secrets that fail because they were not maintained. Finally, the build is released to production, where a run-time error occurs due to an unexpected database update. The result is a massive fire drill that delays the next release, along with the associated loss of customer confidence and business.

In the end it feels like the entire IT team is playing a giant game of hot potato!

CI/CD systems like Jenkins are great at automating workflows and orchestrating sequences, but they do nothing to ensure reliability across build environments. Cloud-native application architectures help address some of this, but for most organizations cloud-native applications make up only a small part of the existing application portfolio. The key to scaling continuous delivery in complex enterprise environments is for teams to recognize the core underlying challenges and then work to address them. We’ve consolidated those challenges into the following three core areas:

  1. Unclear and Additive Delivery Requirements
  2. Developer Overload
  3. Build and Delivery Inconsistencies

Unclear and Additive Delivery Requirements

Delivering a multi-tier application that is part of complex business processes and deployed across multiple environments requires knowledge beyond the immediate realm of any one developer or development team. Preventing production failures requires that all of an application’s run-time requirements be exposed and tested before the application is released to production. Traditionally this was handled via a runbook, a written document maintained by domain experts.

Do you know how your organization updates and maintains configuration and environment variable changes across delivery teams and build environments?

Unfortunately, most runbooks are out of date as soon as they are written, and because applications are deployed through a series of additive processes, no single group has a clear understanding of all the delivery requirements. Developers build and test their code, middleware/platform teams build their pieces, operations teams configure infrastructure, security teams administer audits, and so on. By the time they reach production, most applications are so saddled with custom changes that the maintenance burden multiplies and change failure rates skyrocket.
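To make this concrete, here is a minimal sketch of what exposing those run-time requirements as code (rather than in a runbook) might look like: it compares two hypothetical, version-controlled environment manifests and flags any setting that has drifted between dev and pre-prod. The file paths and manifest format are assumptions for illustration, not part of any particular toolchain.

```python
# drift_check.py - minimal sketch; the manifest files and their key/value
# contents are hypothetical examples, not a prescribed format.
import json
import sys

def load_manifest(path):
    """Load a manifest of environment settings, e.g. {"middleware_version": "9.2"}."""
    with open(path) as f:
        return json.load(f)

def diff_manifests(dev, preprod):
    """Return {key: (dev_value, preprod_value)} for every setting that differs."""
    drift = {}
    for key in sorted(set(dev) | set(preprod)):
        if dev.get(key) != preprod.get(key):
            drift[key] = (dev.get(key), preprod.get(key))
    return drift

if __name__ == "__main__":
    dev = load_manifest("env/dev.json")          # hypothetical paths
    preprod = load_manifest("env/preprod.json")
    drift = diff_manifests(dev, preprod)
    for key, (d, p) in drift.items():
        print(f"DRIFT  {key}: dev={d!r}  pre-prod={p!r}")
    sys.exit(1 if drift else 0)  # non-zero exit fails the pipeline stage
```

Run as a pipeline stage, a check like this turns an undocumented configuration change into a visible build failure instead of a production surprise.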

Developer Overload

The intent of DevOps was always to align Dev and Ops teams so they work better together. The wide adoption of Agile development practices and the drive towards Test Driven Development (TDD) have shifted traditional responsibilities from operations teams onto development. Developers working in Agile sprint cycles need ongoing access to “production-like build environments” in order to test their work effectively before it is released.

Do you know how much time and effort the developers in your organization spend configuring and scripting their build environments?

This creates a number of challenges. When overloaded developers start cutting corners, less is documented, less is tested, more ad hoc fixes are applied, organizational changes compound the accrued technical debt, and deployments become more difficult. The opposite of the original intent happens: development environments drift further and further out of sync with production environments, more failures occur, and developers are burdened with continuous rework.
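One hedged way to give developers “production-like build environments” without the hand-scripting is to declare the environment as code and verify it automatically. The sketch below assumes a simple name==version lock file (the file name and format are illustrative) and checks that the packages installed locally match the versions the pipeline will use.

```python
# env_verify.py - minimal sketch: verify a dev environment matches pinned versions.
# Assumes a simple "name==version" lock file; real projects may use richer formats.
from importlib import metadata

def read_lockfile(path):
    """Parse lines like 'requests==2.31.0' into a {name: version} dict."""
    pins = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                pins[name.strip()] = version.strip()
    return pins

def check_environment(pins):
    """Return a list of (package, expected, actual) mismatches."""
    mismatches = []
    for name, expected in pins.items():
        try:
            actual = metadata.version(name)
        except metadata.PackageNotFoundError:
            actual = None
        if actual != expected:
            mismatches.append((name, expected, actual))
    return mismatches

if __name__ == "__main__":
    problems = check_environment(read_lockfile("requirements.lock"))  # hypothetical file
    for name, expected, actual in problems:
        print(f"MISMATCH {name}: expected {expected}, found {actual}")
    raise SystemExit(1 if problems else 0)
```

The same check can run on the build agents, so a mismatch shows up as a failed verification rather than as drift discovered later in pre-prod.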

Build and Delivery Inconsistencies

Ensuring consistency across builds is a core principle of DevOps and continues to be one of the biggest challenges application delivery teams face. The rapid adoption of Agile practices and the loosening of corporate governance in order to innovate faster have only increased the challenges for large, complex organizations.

Do you know how your build and release teams work together to ensure that what is built and run in development will be exactly the same in pre-prod and production?

Separate application teams deliver applications differently. This lack of consistency means there is no uniform way to apply changes, such as OS patches or middleware updates, that touch multiple application teams. And because there is no consistency, run-time failures slip into production, and nobody wants that to happen.
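As a small illustration of one way to enforce that consistency, the sketch below compares checksums of the build artifact at each stage before promotion, so that what was built and tested is provably the same thing that reaches production. The artifact paths are hypothetical.

```python
# promote_check.py - minimal sketch: confirm the same artifact is promoted unchanged.
import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a build artifact."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical artifact locations for each stage of the pipeline.
    built = sha256_of("artifacts/build/app.tar.gz")
    staged = sha256_of("artifacts/preprod/app.tar.gz")
    if built != staged:
        raise SystemExit(f"Artifact drift: build={built} pre-prod={staged}")
    print("Artifacts match; safe to promote the same bits to production.")
```

A gate like this does not replace testing, but it does guarantee that the bits that passed testing are the bits that ship.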

In the end this creates a sense of fear within application release teams and reduces deployment frequency. The fear that an application may fail in production and disrupt the business leads teams to slow down releases so they can test more, often adding more manual checks. This is the exact opposite of why we create a continuous delivery pipeline!

Good news! If, while reading this blog, you were unable to answer these questions or personally related to any of these situations, then you have the opportunity to improve the way your organization delivers applications. To find out how Chef can help you overcome these challenges and others, I invite you to check out the webinar we recently recorded, Eliminate Application Delivery Failures. Or better yet, request a demo and we can talk more about your organization’s specific challenges!

Eric Heiser