
What Do Performance and Capacity Roles Have to Do with DevOps?

A recent TechBeacon article discussed the roles you should define to ensure DevOps success, including some relatively new ones, such as the DevOps evangelist and the experience assurance (XA) professional, and some commonplace ones, such as the software developer/tester. Rather than treating the combination of these roles as a DevOps team, the article argues, DevOps should be seen as an approach in which these roles manage the functions that must be carried out for the development process to work.

Some of these functions existed before DevOps emerged, but even those have been redefined to ensure the success of continuous improvement and delivery. But what does this mean for performance and capacity professionals?

One role defined in the article is the utility technical player, which loosely translates to the person or people who keep the sucker running. It goes on to describe the role as being involved in all phases of “improved quality of service, resource management or security, prioritized alongside those delivered from the [business].” Sounds familiar, doesn’t it?

The True Utility Technical Players

No one person could realistically perform all the functions of the utility technical player; it takes a variety of technical experts to cover the skills and abilities the article defines. But inherent in that definition is the role of performance and capacity management. What is that function if not improving quality of service (response time and throughput) and managing resources (ensuring adequate capacity at the best possible price)?

Yet central to the idea of DevOps is the need for speed. Development and deployment are ongoing, so rather than enjoying the luxury of a long development cycle (Waterfall, anyone?), analysts must be able to assess the impact of even small changes quickly, continuously and as early as possible. Read Compuware’s eBook to learn more about moving from Waterfall to DevOps.

How to Ditch Waterfall for DevOps on the Mainframe

While some performance and capacity professionals have always worked closely with developers, reasoning that it’s easier to get them to make performance-focused changes while they are still coding, many wait until the testing phase to understand how changes will affect the system. You can no longer wait until testing; otherwise, you become the bottleneck in the Agile pipeline. And performance and capacity professionals hate bottlenecks, don’t they?
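One way to keep pace is to let a small automated check run with every build rather than waiting for a formal test phase. The sketch below is illustrative only: the metric source, the baseline.json and results.json file names, and the 10 percent tolerance are assumptions, not anything the article prescribes.

```python
#!/usr/bin/env python3
"""Hypothetical per-build performance gate: fail fast on regressions.

Assumes a prior pipeline step has written measured transaction response
times (in milliseconds) to results.json, and that baseline.json holds the
numbers from the last accepted build -- both file names are illustrative.
"""
import json
import sys

TOLERANCE = 1.10  # flag anything more than 10% slower than baseline (assumed policy)


def load(path):
    # Expected shape (illustrative): {"login": 120.0, "checkout": 340.0}
    with open(path) as f:
        return json.load(f)


def main():
    baseline = load("baseline.json")
    current = load("results.json")

    regressions = []
    for txn, base_ms in baseline.items():
        now_ms = current.get(txn)
        if now_ms is not None and now_ms > base_ms * TOLERANCE:
            regressions.append(f"{txn}: {base_ms:.0f} ms -> {now_ms:.0f} ms")

    if regressions:
        print("Performance regressions detected:")
        print("\n".join(regressions))
        sys.exit(1)  # non-zero exit fails the build step
    print("Response times within tolerance.")


if __name__ == "__main__":
    main()
```

Wired into the build as a pass/fail step, a check like this surfaces a regression to the developer within minutes rather than weeks later in a test phase.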

As part of the utility technical player team, performance and capacity professionals need to become more agile, working closely with the software developers and testers, understanding applications as they evolve and advocating for performance-centric yet resource-sparing code choices. As developers learn more about how to code for performance and all are rewarded for producing fast yet lean applications, everyone will step up their game.

Automating Performance and Capacity

As the TechBeacon article notes, almost every role is going to have to rely more on automation to achieve DevOps goals. You won’t have the luxury of running endless SAS jobs against the SMF and RMF data for new code, taking your time to analyze the results. At the same time, keeping your production systems running well is going to be a challenge.
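What replaces those manual passes will vary by shop, but the idea is scripted, repeatable analysis. As a rough sketch only, assuming the relevant SMF/RMF interval records have already been extracted to a CSV file (the rmf_intervals.csv name, its columns and the 70 percent threshold are all invented for illustration):

```python
#!/usr/bin/env python3
"""Hypothetical automated scan of exported RMF interval data.

Assumes utilization records have already been extracted from SMF/RMF
into a CSV with 'interval_start' and 'cpu_busy_pct' columns -- the file
name, the columns and the 70% threshold are illustrative assumptions.
"""
import csv

CPU_BUSY_THRESHOLD = 70.0  # assumed alerting threshold, percent busy


def flag_hot_intervals(path):
    """Return (interval, busy%) pairs that exceed the threshold."""
    hot = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            busy = float(row["cpu_busy_pct"])
            if busy > CPU_BUSY_THRESHOLD:
                hot.append((row["interval_start"], busy))
    return hot


if __name__ == "__main__":
    for when, busy in flag_hot_intervals("rmf_intervals.csv"):
        print(f"{when}: CPU {busy:.1f}% busy exceeds {CPU_BUSY_THRESHOLD}%")
```

Run on a schedule, a scan like this flags capacity hot spots as they appear, leaving more time for the developer-facing work described above.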

Having modern tools for application performance management, failure resolution and automated batch processing will make monitoring and optimizing your production systems a lot easier while freeing up some time to work more closely with developers so you can anticipate what’s coming.

It’s actually fun to work more closely with developers, and as you both learn more about each other’s jobs, you will find more support and interest in performance awareness from the rest of the IT organization. That cross-organization collaboration is what DevOps is all about.