Cross-platform Code Quality
March 1, 2018 Code Quality, DevOps, Testing

How to Modernize COBOL Testing for Cross-platform Code Quality


Despite mainframe developers’ obsession with quality, a gap has formed between these teams and their non-mainframe colleagues in how accurately they track and validate code coverage for COBOL testing versus that of “mainstream” code.

For years, mainframe teams have gathered code quality information through reports: people read them and make personal judgments about whether code is good or bad, rather than relying on an automated quality gate that makes those determinations against standard criteria.
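To make the contrast concrete, here is a minimal sketch of what an automated quality gate does: the build passes or fails mechanically against standard criteria, with no personal judgment involved. The metric names and thresholds are illustrative assumptions, not Compuware's or SonarQube's actual configuration.

```python
# Hypothetical quality gate: fail the build automatically when standard
# criteria are not met, instead of having a person read a report.
# Metric names and limits below are illustrative only.

def quality_gate(metrics: dict) -> list:
    """Return a list of violations; an empty list means the gate passes."""
    criteria = {
        "coverage_pct": ("min", 80.0),    # unit test code coverage
        "critical_issues": ("max", 0),    # static analysis findings
        "duplication_pct": ("max", 5.0),  # duplicated lines
    }
    violations = []
    for metric, (kind, limit) in criteria.items():
        value = metrics[metric]
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            violations.append(f"{metric}={value} violates {kind} limit {limit}")
    return violations

build_metrics = {"coverage_pct": 72.5, "critical_issues": 1, "duplication_pct": 3.0}
result = quality_gate(build_metrics)
print("GATE FAILED" if result else "GATE PASSED")
for v in result:
    print(" -", v)
```

Because the criteria are data rather than opinion, the same gate applies identically to every build, whichever platform the code came from.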

Those manual processes result in poor testing and quality, and even where they work well, the people who put them together have moved on or are retiring. Today, new talent doesn’t understand those esoteric processes, which slows development and, worse, compromises code quality.

Organizations may have built their own static analysis tools to find bugs and determine whether code meets standards, but there has never been an easy way to correlate static analysis results during the coding process, little visibility into the issues found, and no automated way to consolidate that data with data from other platforms. Everything has been done manually, in a silo, without any out-of-the-box integrations.

Distributed development used to work this way too, but it has advanced through Continuous Integration, continuous code quality and Continuous Delivery. Still, the gap between mainframe and non-mainframe code quality will only hurt both sides.

Closing the Code Quality Gap

Today especially, mainframe teams need their code coverage metrics and test metrics integrated with those of other teams. As applications span platforms, integrating metrics is the only viable means of achieving a holistic view of cross-platform application quality. Doing so allows you to see how you’re trending, enabling you to manage quality rather than being lost in a stream of details that one individual may be aware of but that other team members have no insight into.

Through the integration between Compuware Topaz for Total Test and SonarSource SonarQube, you have a modern answer to poor testing and code quality practices as well as a shrinking skills base. It gives you an accurate, unified view of unit testing code coverage and code quality metrics across platforms.

For an in-depth look at how this integration works, watch “Modernize Your COBOL Testing Processes with Compuware and SonarSource,” an IBM Systems Magazine webinar we hosted with our partner SonarSource. During the webcast, SonarSource CEO Olivier Gaudin and I discuss how:

  • Unit testing can be automated across all platforms
  • COBOL code coverage shows what code has been executed and what percentage of an application has been tested
  • Continuous Integration shortens feedback loops to speed time-to-benefit
  • You can use the same quality metrics across both COBOL and non-mainframe applications
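The second bullet above describes what a code coverage metric reports. As an illustrative sketch only (not Compuware's implementation), coverage boils down to comparing the set of executable lines against the set of lines the tests actually exercised:

```python
# Illustrative sketch of a code coverage metric: which executable lines
# were run by the tests, and what percentage of the program that covers.

def coverage_summary(executable_lines: set, executed_lines: set):
    """Return (percent covered, sorted list of untested line numbers)."""
    covered = executable_lines & executed_lines
    missed = executable_lines - executed_lines
    pct = 100.0 * len(covered) / len(executable_lines)
    return pct, sorted(missed)

# Example: a program with 10 executable statements, 8 exercised by tests.
pct, missed = coverage_summary(set(range(1, 11)), {1, 2, 3, 4, 5, 6, 8, 9})
print(f"{pct:.0f}% covered; untested lines: {missed}")  # 80% covered; untested lines: [7, 10]
```

Expressing coverage this way, as a plain percentage plus a list of untested lines, is what lets the same metric be reported side by side for COBOL and non-mainframe code.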

DevOps teams need an easy way to accurately track and validate the code quality of COBOL applications—especially as mainframe experts retire and less-experienced developers take their places. Compuware and SonarSource can help you accelerate delivery, increase maintainability and close your COBOL skills gap, ensuring your mainframe delivers new services at the same speed as the web and mobile applications it supports.


Steve Kansa

Steve Kansa is a product manager at Compuware. He has extensive experience in Agile application development across a wide variety of technologies and industries. Working with Compuware's Topaz products, he is focused on enabling modern DevOps capabilities for mainframe technologies.