July 12, 2018 Data, DevOps

How Do You Know if Your DevOps Processes Are Getting Better (or Worse)?


A recent survey by Statista found that 90 percent of the world's software developers have begun implementing DevOps. With that in mind, it's time to develop a plan for measuring the success of those DevOps processes, and that means defining metrics.

This process begins with a clear understanding of DevOps goals. Simply stated, companies demand faster delivery of features and capabilities with no impact to availability or performance: higher quality, velocity and efficiency.

Through better collaboration of all participants in the software development process and full exploitation of automation, businesses should be able to improve time-to-market of needed business functionality. The challenge is balancing the need for speed against the requirement for operational stability; you can’t be successful unless you achieve the balance.

But how do you strike that balance? As the notable management consultant and author Peter Drucker once said, "If you can't measure it, you can't improve it." You have to measure your DevOps processes and key in on what's making them better, or worse.

Defining Success Metrics

Here are a few metrics that will help you understand how well you are doing with DevOps processes. Without these, it’s hard to know what you need to work on to achieve the outcomes you’re striving towards.

  • Number of software releases in a given period
  • Number of defects found during that period
  • Number of defects that escaped testing
  • Number and frequency of outages and performance problems during that period
  • Cost and impact of outages and performance problems during that period
  • Cost per release cycle, including all resources
  • Time from request to deployment per release
  • Mean time to detection and recovery per problem or incident
  • Compliance with service-level agreements
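To make the list concrete, here is a minimal sketch of how a few of these metrics could be computed from release and incident records. The record layout, field names and sample values are all hypothetical, invented for illustration; your own toolchain will expose this data differently.

```python
from datetime import datetime
from statistics import mean

# Hypothetical release records for one measurement period:
# when the change was requested, when it was deployed, and
# whether it caused an outage or performance problem.
releases = [
    {"requested": datetime(2018, 6, 1), "deployed": datetime(2018, 6, 8), "failed": False},
    {"requested": datetime(2018, 6, 5), "deployed": datetime(2018, 6, 18), "failed": True},
    {"requested": datetime(2018, 6, 20), "deployed": datetime(2018, 6, 27), "failed": False},
]

# Hypothetical incident records: detection and recovery timestamps.
incidents = [
    {"detected": datetime(2018, 6, 18, 9, 0), "recovered": datetime(2018, 6, 18, 13, 30)},
]

# Number of software releases in the period
release_frequency = len(releases)

# Average time from request to deployment, in days
lead_time_days = mean((r["deployed"] - r["requested"]).days for r in releases)

# Share of releases that caused an outage or performance problem
change_failure_rate = sum(r["failed"] for r in releases) / len(releases)

# Mean time to recovery per incident, in hours
mttr_hours = mean(
    (i["recovered"] - i["detected"]).total_seconds() / 3600 for i in incidents
)

print(release_frequency, lead_time_days, change_failure_rate, mttr_hours)
```

Even a rough spreadsheet-level calculation like this, run every period, is enough to show whether quality, velocity and efficiency are trending up or down.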

When you have these kinds of numbers, you can determine how well you’re improving the quality, velocity and efficiency of DevOps processes. Our service zAdviser can help. It uses machine learning to find correlations between developer behaviors and key performance indicators (KPIs) based on DevOps toolchain data and Compuware product-usage data.

See zAdviser

In some cases, the solutions become obvious as you determine which factors are not meeting your goals. But in other cases, the cause is more elusive. You may have to look at other factors, such as culture and processes. zAdviser helps here, too, but that’s not the only issue you may face when considering what’s impeding improvements to your DevOps processes.

Data Quality Issues

An increasing number of companies are recognizing that a roadblock to the success of their DevOps processes lies in their testing protocols. In the interests of speed, it’s easy to believe that you have the right testing procedures and the right data to assess the impact of code changes; but do you?

The complexity of the test data management task is only increasing. To get it right, data has to be:

  • The right data. This means gathering data across LPARs, processors and even systems to accurately reflect the way users access the data with your application. To get this right requires that you understand the way the application works and how that functionality changes with each iteration.
  • The right subset of data. You can’t use all the production data, so you need to ensure it is representative of the production set.
  • Accessible. Each database or file has its own protocol, so you have to understand the vehicle necessary to scrape the data you need as well as obtain the right permissions to access it.
  • Protected and secured. Increasing amounts of production data involve customer-sensitive information, which must be masked, scrambled or encrypted to protect the customer and your company.
  • Maintainable. With each iteration, changes to the functioning of the system may require changes to the test data. Even without functional changes, the way people interact with the system may shift, so the test data must be reviewed and refreshed to stay representative.
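Two of the requirements above, drawing a representative subset and protecting sensitive fields, can be sketched in a few lines. This is an illustrative toy, not a substitute for a real test data management tool: the record layout, field names and the choice of a truncated hash as the masking function are all assumptions made for the example.

```python
import hashlib
import random

# Hypothetical production rows containing customer-sensitive fields.
production = [
    {"account": "1234567890", "name": "Alice Smith", "balance": 2500.00},
    {"account": "2345678901", "name": "Bob Jones", "balance": 120.50},
    {"account": "3456789012", "name": "Carol White", "balance": 9800.25},
    {"account": "4567890123", "name": "Dan Brown", "balance": 47.10},
]

def mask(value: str) -> str:
    """Deterministically mask a sensitive value. The same input always
    yields the same token, so relationships between records are preserved
    even though the real value never leaves production."""
    return hashlib.sha256(value.encode()).hexdigest()[:10]

def build_test_set(rows, fraction=0.5, seed=42):
    """Draw a reproducible random subset of production and mask the
    customer-sensitive fields before it goes to the test environment."""
    rng = random.Random(seed)  # fixed seed keeps runs repeatable
    sample = rng.sample(rows, max(1, int(len(rows) * fraction)))
    return [
        {"account": mask(r["account"]), "name": mask(r["name"]), "balance": r["balance"]}
        for r in sample
    ]

test_data = build_test_set(production)
```

In practice a random sample is rarely representative enough on its own; real subsetting also has to preserve referential integrity across files and databases, which is exactly where dedicated tooling earns its keep.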

Learn more about the importance of these and other test data management best practices in our webcast with TCF Bank, “The Importance of Data for DevOps: How TCF Bank Meets Test Data Challenges.”

Register Today!

The challenge of improving test data management to improve your DevOps processes may look daunting and yet it’s oh so necessary. Given the short cycle of change in DevOps, you have to rely on automation of your test data management process.

With an editor that can manage all input data, visualization to help you understand data relationships, and maintenance tools that help you audit and manage the data, you’re well on the way to eliminating data issues as a problem for your DevOps metrics.

Just as zAdviser can help you improve DevOps processes by enabling faster, smarter measurements, Topaz for Enterprise Data can help you eliminate DevOps problems related to test data. Be part of the solution to deliver code faster, while keeping production stable and performing well.

See Topaz for Enterprise Data

Chief Innovator, Denise P. Kalm began her career in biochemical genetics (M.S. University of Michigan). She went on to have a career in IT, beginning with programming, moving to performance and capacity planning and then to work for vendors. Her experience as a performance analyst/capacity planner, software consultant, and then marketing maven at various software companies grounds her work providing contract writing, editing, marketing and speaking services. She is a frequently published author in both the IT world and outside. Kalm is a requested speaker at such venues as SHARE, CMG and ITFMA and has enhanced her skills through Toastmasters where she has earned her ACG/ALB. She is also a personal coach at DPK Coaching.