Quantifying DevOps Outcomes: Increasing Speed

On March 22, we held the third webinar in our four-part series on digital transformation. That session focused on the considerations around increasing speed. In case you missed it, you can watch a recording of that episode, “Increasing Speed,” below.

The interactive portion of this series lets us tailor each conversation to the discussions most relevant to our audience. Our first webinar set the context: the overall role of DevOps transformation initiatives and how to measure their success. Based on data from that first session, we prioritized covering risk mitigation strategy before anything else. That led us to our third session, where we covered the role of deployment pipelines and how to strike the right balance between speed and safety. There’s good data from that session that I look forward to sharing in the next (and last) webinar in this series on April 5th.

In the meantime, we received a number of questions during the live Q&A that we didn’t have time to cover in the webinar. We’ll get to those now.

Q: How do you measure “time from commit to deploy?” Is that average time, max time, or other? For example, in our environment we currently deploy once a quarter. Some commits are made months before deployment and some are made days or hours before deployment. How do we measure that?

The idea behind this metric is measuring the time between an initial business objective (the idea) and when your company starts realizing its value (shipping a feature). You could have developed a killer new feature, but until your users have it in their hands, it hasn’t moved the needle. The specific answer to your question is: average time. But averages alone can miss the bigger picture.

Part of the reason this metric is a powerful driver is that when you follow it to its logical conclusion, it suggests that small-batch, incremental deployments are critical. That’s the only way to drive the average down into the range of minutes. Achieving that forces you to rethink how features are introduced into your applications, what your workstreams need to look like, and how business objectives should be set to deliver on that cadence. It’s a small metric, but one with big implications. That’s why we refer back to it often in this series and why it’s useful as a guide in transforming organizational practices.
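To make the measurement concrete, here’s a minimal sketch of computing both the average and the maximum commit-to-deploy lead time. The data here is hypothetical; in practice you’d pull these timestamps from your source control history and your pipeline’s deployment log.

```ruby
require 'time'

# Hypothetical (commit, deploy) timestamp pairs for one quarterly release.
deploys = [
  { committed_at: Time.parse('2018-01-10 09:00'), deployed_at: Time.parse('2018-03-30 14:00') },
  { committed_at: Time.parse('2018-03-28 11:00'), deployed_at: Time.parse('2018-03-30 14:00') },
]

# Lead time per change, in seconds.
lead_times = deploys.map { |d| d[:deployed_at] - d[:committed_at] }

seconds_per_day = 86_400.0
average_days = lead_times.sum / lead_times.size / seconds_per_day
max_days     = lead_times.max / seconds_per_day

puts format('average lead time: %.1f days, max: %.1f days', average_days, max_days)
```

With quarterly deployments, both numbers are dominated by batch size. As batches shrink, the average and the max converge, which is exactly the behavior this metric is designed to drive.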

Q: One of the current trends is to store the configuration of the deployment pipeline as an artifact and have a controlled change management process around it. This creates a hierarchy. Where will you stop in building this hierarchy?

At Chef, we solve this problem with ‘build’ cookbooks. Just like your apps have cookbooks, so too can your pipeline configs. The pattern is to have each cookbook in its own pipeline, with the configuration of that pipeline handled by a build cookbook embedded inside the application cookbook. Managed this way, the ‘shape’ of your pipeline rides along with the code it delivers, and any changes to the pipeline config are handled just like changes to the app itself. Adopting that type of solution is what prevents the hierarchy you describe from becoming turtles all the way down.
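As an illustration, here’s roughly what that embedding can look like. The layout and recipe below are a hedged sketch rather than a prescribed structure; the names are illustrative.

```ruby
# Illustrative project layout: the build cookbook lives inside the
# application cookbook it delivers, so pipeline changes version with it.
#
#   my_app/
#   ├── metadata.rb
#   ├── recipes/default.rb
#   └── .delivery/
#       └── build_cookbook/
#           ├── metadata.rb
#           └── recipes/
#               ├── unit.rb      # one recipe per pipeline phase
#               └── deploy.rb
#
# .delivery/build_cookbook/recipes/unit.rb, a minimal phase recipe:
execute 'run unit tests' do
  command 'chef exec rspec'
  cwd node['delivery']['workspace']['repo'] # workspace path attribute; illustrative
end
```

Because the build cookbook is just another versioned artifact in the same repo, a change to the pipeline’s configuration flows through the pipeline itself, and the recursion stops there.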

Q: What is meant by “infrastructure coverage”? How closely does a “stage” environment need to mimic production? How do you manage costs of replicating production and how do you introduce load to test performance?

Mimicking production is a complex game of diminishing returns. Staging environments are only simulations: they can never be the same as production. The main question that sets up this game is: how acceptable is failure in a production scenario? The answer for a low-use, non-strategic app will be different from the answer for something more critical, like a medical life-support system. So there’s no one-size-fits-all answer.

But digging a step deeper, there’s a reason we called that stage of the pipeline a ‘rehearsal’ environment. The goal at that point in the pipeline is a dress rehearsal of your production deployment. Just as in theatre, how formal that rehearsal is differs for an improv sketch group vs. a Broadway ensemble. On one end of the production-facsimile spectrum, it’s worthwhile and necessary for some orgs to build a tightly-coupled near approximation of production. At the other end, some orgs could never come close if they tried (think: web-scale giants), so they don’t. What those two orgs have in common is that in the ‘rehearsal’ stage, the goal is to verify that the automation coordinating change across distributed systems doesn’t fail. In either case, this is the stage where you go from deploying to one host to deploying to many. Even for orgs that couldn’t possibly mimic production, at a minimum you need one representative service for each part of the stack you’re modifying.
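For example, a scaled-down rehearsal stage might be expressed as a Chef environment that pins the same cookbook versions you intend to promote, with one representative node per tier. This is a minimal sketch; the names, versions, and attributes are hypothetical.

```ruby
# environments/rehearsal.rb -- a hypothetical scaled-down rehearsal stage
name 'rehearsal'
description 'Dress rehearsal: one representative service per tier, not a full production replica'

# Pin exactly the versions you intend to promote to production.
cookbook_versions(
  'my_app' => '= 2.1.0',
  'my_db'  => '= 1.4.2'
)

# Enough hosts to exercise coordination across distributed systems,
# without production's full footprint.
override_attributes(
  'my_app' => { 'instance_count' => 2 }
)
```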

Customizing actions in that phase is at the root of your question. If load testing prior to production is an absolute necessity (is it, really?), then you could introduce it in this phase. Managing costs might be possible using ephemeral cloud infrastructure or containerization, but it may not be, depending on the data sets you manage or the load-testing constraints of your applications. Again, this is an exercise in finding your particular threshold for diminishing returns. But those are the places I’d start when peeling that onion.
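If you do need a load-test pass, one hedged approach is to make its infrastructure ephemeral, created and destroyed within the phase itself. The sketch below assumes Test Kitchen driving short-lived cloud instances; the suite name and driver script are hypothetical.

```ruby
# A hypothetical load-test phase recipe: stand up short-lived instances,
# run the test, then tear them down to control costs (a real pipeline
# would also clean up on failure).
execute 'provision ephemeral load-test instances' do
  command 'kitchen converge load-test'
end

execute 'run load test' do
  command 'chef exec ruby scripts/load_test.rb' # hypothetical driver script
end

execute 'destroy ephemeral instances' do
  command 'kitchen destroy load-test'
end
```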

Test coverage for your infrastructure means that along with testing your applications, you’re constantly testing the state of your infrastructure to make sure it hasn’t changed unexpectedly (those humans sure are wily). A sketch of what that can look like is below, and it’s a good place to leave this post for now, because we’re about to cover testing approaches a bit more.
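Here’s a minimal InSpec-style sketch of infrastructure coverage for a hypothetical web tier: assertions that the infrastructure’s state still matches intent, run continuously rather than once. The service, port, and file values are illustrative.

```ruby
# Verify the web service is still present and running.
describe service('nginx') do
  it { should be_installed }
  it { should be_running }
end

# Verify it's still listening where we expect.
describe port(80) do
  it { should be_listening }
end

# Catch 'wily human' edits to config file permissions.
describe file('/etc/nginx/nginx.conf') do
  its('mode') { should cmp '0644' }
end
```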

Up Next

In our next installment, we’ll look specifically at which practices matter when focusing on better efficiency. Join us live on April 5 at 10:00 AM PT so that we can address your specific concerns during the interactive Q&A session.

George Miranda

Former Chef Employee