Aligning Dev and Ops with Automation, Integration and Measurement
Overview: Learn how mainframe development and operations teams can align under the same business objectives and leverage automation, integration and measurement to achieve those goals for true mainframe DevOps.
During side conversations at SHARE Phoenix this past March, I spoke with several developers and operators who felt they were on a DevOps journey but not making the progress management had envisioned. Despite the many procedural sessions on the subject, I got the impression there was more to the problem than what was discussed at the conference.
I shared what I believe is the biggest barrier to DevOps success for these people and their organizations: Development and Operations do not share the same goals. Everyone should be striving for greater quality, velocity and efficiency to incrementally deliver new capabilities that meet customer and business needs faster. If these groups don’t see that they’re taking the journey together, they’ll never reach their DevOps destination.
Ultimately, that means working together to build a stable and robust systems delivery architecture that can weather more rapid change without compromising quality. From a technical point of view, both Development and Operations can achieve this through three things: Automation, Integration and Measurement.
Automation

Automation is essential for making massive improvements to the quality, velocity and efficiency with which development and operations teams perform their duties. Manual processes create latency in the delivery pipeline and leave room for errors that result in poor customer experiences.
These constraints decelerate your rate of innovation and reduce the profitability of your business. Fortunately, there are several key development and operations areas where you can implement automation to eliminate these challenges.
Automated Testing

With automated testing, you can support a “shift left” approach and perform testing earlier in the development lifecycle, namely through automated unit testing in the initial stages of development. Finding and fixing low-level bugs at this stage is easier and cheaper than doing so later in the pipeline. Automated functional, integration and regression testing also shortens the development cycle while increasing program and application quality.
Without the need to manually develop tests or create test data, automated testing can operate continuously and be configured to address code changes as they occur, helping your DevOps team feel confident when implementing even the largest, most impactful changes.
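As a concrete illustration of shift-left unit testing, here is a minimal sketch using Python’s standard `unittest` framework. The `apply_interest` function and its behavior are hypothetical stand-ins for a real business rule; the point is that tests like these can run automatically on every commit, long before the change reaches later pipeline stages.

```python
import unittest

def apply_interest(balance, annual_rate):
    """Hypothetical business rule: apply one month of simple interest."""
    if balance < 0 or annual_rate < 0:
        raise ValueError("balance and rate must be non-negative")
    return round(balance + balance * annual_rate / 12, 2)

class ApplyInterestTest(unittest.TestCase):
    """Unit tests a CI job can run on every code change."""

    def test_typical_balance(self):
        self.assertEqual(apply_interest(1200.00, 0.06), 1206.00)

    def test_zero_rate_leaves_balance_unchanged(self):
        self.assertEqual(apply_interest(500.00, 0.0), 500.00)

    def test_negative_balance_rejected(self):
        with self.assertRaises(ValueError):
            apply_interest(-1.00, 0.06)

if __name__ == "__main__":
    # exit=False lets the script continue after the test run completes.
    unittest.main(exit=False)
```

A CI server would execute this suite on each commit and fail the build on any assertion error, which is exactly the early feedback the shift-left approach is after.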
Automated Batch Processing
Another critical area where manual processes dominate is batch processing. Mainframe batch remains business-critical, yet batch management is a difficult, esoteric process, and the experts who know how to do it are retiring.
The only rational option is to implement a solution that automatically and intelligently optimizes batch processing. This empowers new mainframe ops pros to easily balance workloads and increase batch throughput, leading to reduced monthly license costs and even the deferment of costly mainframe upgrades.
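Commercial batch optimizers are far more sophisticated and policy-aware than anything shown here, but the core load-balancing idea can be sketched with a simple greedy heuristic: assign the longest jobs first, each to the currently least-loaded slot. The job names and run times below are invented for illustration.

```python
import heapq

def balance_batch_jobs(jobs, initiators):
    """Greedy longest-job-first assignment of batch jobs to a fixed
    number of initiators (parallel execution slots).

    jobs: dict mapping job name -> estimated run time in minutes.
    Returns (schedule, makespan): the jobs assigned to each slot and
    the total run time of the most heavily loaded slot.
    """
    # Min-heap of (accumulated run time, slot index) so the least-loaded
    # slot is always popped first.
    slots = [(0, i) for i in range(initiators)]
    heapq.heapify(slots)
    schedule = {i: [] for i in range(initiators)}

    # Place the longest jobs first; each goes to the least-loaded slot.
    for name, minutes in sorted(jobs.items(), key=lambda j: -j[1]):
        load, i = heapq.heappop(slots)
        schedule[i].append(name)
        heapq.heappush(slots, (load + minutes, i))

    makespan = max(load for load, _ in slots)
    return schedule, makespan
```

With two initiators and hypothetical jobs such as `{"PAYROLL": 90, "BILLING": 60, "GL": 45, "EXTRACT": 30, "REPORT": 15}`, the heuristic splits the 240 minutes of work into two 120-minute queues, shortening the batch window versus running everything serially.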
Integration

Integration is critical to enabling the cross-platform and cross-team collaboration, communication and transparency required for an advanced systems delivery architecture to work. By integrating modern mainframe and distributed solutions into one DevOps toolchain, you can de-silo platform-based project value streams and unify them to support one product delivery pipeline.
Continuous Integration/Continuous Delivery (CI/CD) Pipeline Orchestration
Unsurprisingly, automation often underpins meaningful integrations. For instance, through integrations with Jenkins, you can support CI/CD pipelines orchestrated by a modern, Agile mainframe source code management tool. This automates the promotion of code through analysis, editing, building, testing and deployment, all the way to production.
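The promotion flow described above can be sketched as a fail-fast stage runner: each stage must pass before the change moves on, so a bad change never reaches deployment. The stage names and the trivial stage functions are hypothetical placeholders for the real tool invocations a Jenkins pipeline would make.

```python
def run_pipeline(change_id, stages):
    """Run each stage in order; stop at the first failure so a bad
    change never reaches the later (deployment) stages.

    stages: list of (name, callable) pairs; each callable takes the
    change id and returns True on success. Returns the names of the
    stages that ran and passed.
    """
    passed = []
    for name, step in stages:
        if not step(change_id):
            print(f"{change_id}: stage '{name}' failed; promotion stopped")
            return passed
        passed.append(name)
    print(f"{change_id}: promoted through all stages")
    return passed

# Hypothetical stages standing in for real tool calls
# (code analysis, build, automated tests, deployment).
stages = [
    ("analyze", lambda c: True),
    ("build",   lambda c: True),
    ("test",    lambda c: True),
    ("deploy",  lambda c: True),
]
```

In a real toolchain each callable would shell out to the SCM, build, and test tools, and the orchestrator (Jenkins, in the example above) would record which stage stopped the promotion.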
Build and Deployment
Likewise, an SCM integration with XebiaLabs XL Release can automate, standardize and monitor cross-platform code deployments at every stage of a delivery pipeline. You can visualize and manage deployments from one place and monitor them with release-flow analytics.
Application Performance Monitoring

Having the ability to assess performance early in the development cycle ensures fewer changes impact production. The right application performance monitoring solution will provide a variety of integrations with performance stalwarts like AppDynamics, BMC and Dynatrace. These give you a holistic view of coding inefficiencies as they arise early in the lifecycle, allowing changes to be made more quickly.
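Full APM suites do this at scale, but the idea of catching inefficiencies before production can be shown in miniature: time each call during test runs and warn when a function exceeds its performance budget. The budget value and the `summarize` function are illustrative assumptions, not part of any product's API.

```python
import functools
import time

def flag_if_slow(budget_seconds):
    """Decorator that times a function and reports when a call exceeds
    its time budget, so inefficiencies surface during development
    rather than in production."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start
            if elapsed > budget_seconds:
                print(f"WARNING: {func.__name__} took {elapsed:.3f}s "
                      f"(budget {budget_seconds:.3f}s)")
            return result
        return wrapper
    return decorator

@flag_if_slow(budget_seconds=0.5)
def summarize(records):
    # Placeholder for the business logic under test.
    return sum(records) / len(records)
```

Running the instrumented code in CI turns a performance regression into a visible warning at the same early stage where functional bugs are caught.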
Measurement

Even with automation and integration in place, neither is meaningful if you can’t measure improvement, or the lack thereof. Development and operations teams should measure their performance against the same quality, velocity and efficiency key performance indicators (KPIs) to determine where they need to improve or pivot.
The best way to accomplish this is with a machine learning solution that correlates user behavior with the above KPIs based on DevOps data and product usage data. This insight allows teams to improve how they use their tools, and it gives them quantifiable data to base intelligent decisions on—rather than going with their gut.
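A machine learning solution goes well beyond this, but the underlying KPIs themselves are straightforward to compute from delivery data. The sketch below derives one velocity, one quality and one efficiency metric from a list of deployment records; the record field names are assumptions for illustration.

```python
def devops_kpis(deployments):
    """Compute simple quality, velocity and efficiency KPIs from a
    list of deployment records.

    Each record is a dict with 'lead_time_hours' (commit to
    production) and 'failed' (True if the change caused an incident).
    """
    n = len(deployments)
    failures = sum(1 for d in deployments if d["failed"])
    return {
        "deployments": n,                      # velocity
        "change_failure_rate": failures / n,   # quality
        "avg_lead_time_hours":                 # efficiency
            sum(d["lead_time_hours"] for d in deployments) / n,
    }
```

Tracking these same numbers for both Development and Operations over successive sprints gives the shared, quantifiable baseline the article argues for, rather than each team judging progress by gut feel.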
On the Same Journey
Whether you call it DevOps or something else, your mainframe team is on a journey to modernize and accelerate application development and delivery. Achieving this isn’t the job of one team or group, but the responsibility of an entire organization.
Through automation, integration and measurement, you can help your teams of developers, testers, operators and more become a single, powerful force to achieve the quality, velocity and efficiency your evolving business requires to succeed.
By Denise Kalm