What You Need to Know About Debugging Under COBOL’s TEST Compiler Option
With COBOL programmers migrating to IBM’s Enterprise COBOL Versions 5.2 and 6.1, there has been increased interest, discussion and, unfortunately, confusion around use of the TEST compiler option. Since debugging is a critical step in the COBOL DevOps lifecycle, in this post we’ll look at the recent changes in the use and format of the TEST compiler option and how those will affect your debugging best practices.
In fact, we’ll find that not only TEST, but ARCH and OPTIMIZE compiler options should also be considered and that all three play an integral role when developing your debugging best practices for COBOL Versions 5.2 and 6.1.
Lastly, we’ll cover an example of a COBOL program that demonstrates the good, the bad and the ugly of using the ARCH, OPTIMIZE and TEST compiler options and their variants.
Introduction to TEST Compiler Option
The TEST compiler option generates additional debugging data in the executable binary and adds useful information to the compiler listing to assist with debugging programs. The amount of added data and the extent of the listing changes vary with the sub-options coded on the TEST compiler option. Programmers control the debugging information by specifying TEST (with sub-options) or, alternatively, NOTEST (which adds no information to the executable binary). Let’s look at how the TEST compiler option has changed over the past five years:
For COBOL V4.2 and below, use of the TEST compiler option always adds basic debugging information to the load module:
- Programmers can alter the amount of information added by use of the (NO)HOOK, (NO)SEPARATE and (NO)EJPD sub-options.
- The basic debugging information written to the load module with TEST (exclusive of the HOOK and EJPD information) cannot be altered.
- No debugging information is added if NOTEST is specified.
For COBOL V5.2 and 6.1, the TEST compiler option works against the program object and uses DWARF debugging data:
- TEST(NOEJPD,NOSOURCE) always adds basic debugging information to the program object, but programmers can control the amount of EJPD and SOURCE information.
- NOTEST(NODWARF) does not add debugging information.
- NOTEST(DWARF) adds a subset of debugging information.
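As a sketch, the V5.2/6.1 variants above can be coded on a CBL (PROCESS) statement at the top of the source, or equally in the compile step’s PARM. Each line below is an alternative setting, not a sequence:

```cobol
       CBL TEST(EJPD,SOURCE)        *> fullest debugging data in the program object
       CBL TEST(NOEJPD,NOSOURCE)    *> basic debugging data only
       CBL NOTEST(DWARF)            *> no TEST data; DWARF subset kept for dump tools
       CBL NOTEST(NODWARF)          *> no debugging data at all
```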
DWARF Debugging Data
DWARF debugging data is used in COBOL Versions 5 and 6 and causes specifically named z/OS Binder class data to be written to the program object with the NOLOAD attribute. Programmers can display the name, size and extent of these classes by viewing the z/OS Binder SYSPRINT output: specify “PARM=MAP” when running the z/OS Binder and look for class data names with a “D_” prefix and an “ATTRIBUTES” value of “NOLOAD.”
This DWARF-generated, NOLOAD-attributed class data is not loaded when the program object is executed; it is read and used only by the IBM z/OS Debugger (aka IBM® Debug Tool for z/OS® and its variants). A subset of the DWARF debugging information can be written using the NOTEST(DWARF) option and is used by IBM’s application debugging tools such as CEEDUMP and IBM® Fault Analyzer for z/OS®. Based on customer conversations, we’re not aware of any ISV using this DWARF data.
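As a minimal sketch of inspecting this data, a link-edit step run with PARM=MAP surfaces the class names and sizes in SYSPRINT. The job step, program and dataset names here are illustrative:

```jcl
//* Link-edit with MAP so SYSPRINT shows the module map (class names/sizes)
//LKED     EXEC PGM=IEWBLINK,PARM='MAP,LIST'
//SYSPRINT DD SYSOUT=*
//SYSLIN   DD DSN=&&OBJECT,DISP=(OLD,DELETE)
//SYSLMOD  DD DSN=MY.PDSE.LOAD(MYPROG),DISP=SHR
```

In the resulting module map, the DWARF sections appear as class entries whose names begin with “D_” and whose attributes include NOLOAD.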
Debugging Best Practices
Debuggers always work best with non-optimized programs. For optimized code, the difficulty of associating source code with generated binary code is a well-known problem in the compiler-optimization literature. This is because optimization is not confined to individual COBOL statements but works across them: the IBM COBOL optimizer may move some machine instructions from one COBOL statement into a preceding or following statement, sometimes across more than one statement.
This machine instruction movement resolves performance problems arising from processor and cache designs on z Systems hardware (such problems are not limited to z Systems). For reference, there’s a good slide deck called “IBM z Systems Processor Optimization Primer” from IBM Distinguished Engineer C. Kevin Shum.
The side effect of moving machine instructions around is “jumping” when statement-stepping through a program. This jumping gives the programmer a queasy feeling that something is wrong when, really, nothing is. It’s simply an artifact of the optimization process. Therefore, to keep debugging sessions clear, predictable and efficient, debugging with optimization turned off is preferred.
These debugging issues have fueled a continual discussion of the merits of debugging with optimization enabled, and even of the viability of optimization itself. Regardless of the pros and cons, the simple fact is: if the same problem can be reproduced with optimization off, it’s usually a program bug. If the problem cannot be reproduced with optimization disabled, it is likely an optimization issue and should be reported to IBM for resolution.
COBOL V5+ Optimization and TEST
To invoke the COBOL optimizer, use the OPTIMIZE (abbreviated OPT) compiler option. The TEST compiler option plays a complementary role; using both options together yields additional debugging capability. Per the IBM Enterprise COBOL for z/OS V6.1 Programmer’s Guide, these levels of optimization are currently offered:
- OPT(0) with minimal optimization (not zero) provides the shortest compilation time.
- OPT(1) improves runtime performance with basic inlining, strength reduction, simplification of complex operations, removal of unreachable code, block re-arrangement, and intrablock optimizations such as common subexpression elimination and value propagation.
- OPT(2) applies more aggressive optimization, with the longest compilation times and highest memory usage, adding instruction scheduling and interblock optimizations such as global value propagation and loop-invariant code motion.
The higher optimizations use significantly more CPU and memory resources during compilation, but should produce faster-executing programs. Programs that run repetitively, perhaps hundreds of thousands of times per day/hour, are prime targets to consider. Saving even one-tenth of a second per execution will quickly add up to large savings.
Compuware’s TEST and OPTIMIZE Recommendations
Both TEST and OPTIMIZE and their sub-options can be used with Compuware solutions. We recommend developing and debugging programs with OPT(0), and performing final testing and production deployment with OPT(1) or OPT(2). All sub-options and combinations of the TEST compiler option are compatible with Compuware tools; however, we recommend TEST(NOEJPD,NOSOURCE) with Compuware Xpediter products to enable a smooth debugging experience and to minimize the jumping behavior caused by higher levels of optimization.
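Putting that recommendation into compiler-option form, a sketch (one alternative per lifecycle stage, not a sequence):

```cobol
       CBL OPT(0),TEST(NOEJPD,NOSOURCE)   *> develop and debug (Xpediter-friendly)
       CBL OPT(2),TEST(NOEJPD,NOSOURCE)   *> final test and production deployment
```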
Observations During Test Runs
The observations below were made with a deliberately contrived COBOL program designed around repetitive use of:
- Data types, COMP and COMP-3, PIC, REDEFINES, OCCURS DEPENDING ON, INDEXED BY
- Simple Arithmetic
- Data type conversions
There are observable differences when moving from ARCH(9) to ARCH(10) and again to ARCH(11). This reveals the need to set a standard for use of ARCH across your many different machines, and, of course, don’t forget Disaster Recovery (DR) machines.
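A sketch of such a standard: set ARCH to the oldest processor level anywhere in the environment, DR machines included. Assuming, purely for illustration, that production runs z13-class hardware while the DR site runs zEC12-class hardware:

```cobol
       CBL ARCH(10)   *> zEC12 level: runs on both the production z13 and the DR zEC12
```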
As you can see in Figure 1 below, OPT(0) shows consistent compile times, regardless of the TEST specification. There are no significant changes to the z/OS Binder program module and DASD sizes.
When moving to the OPT(1) level, the compile times vary widely, with the lower values using NOTEST, the next lower values with TEST(NOEJPD…) and the highest with TEST(EJPD…).
Additionally, use of NOTEST gives dramatically smaller z/OS Binder program module and DASD sizes compared to the OPT(0) values.
Lastly, use of TEST(EJPD…) gives the highest compile times and largest z/OS Binder program module and DASD sizes.
Use of OPT(2) shows the highest overall compile times with significant increases from OPT(1) values when using TEST(NOEJPD…).
The z/OS Binder program module and DASD sizes did not appear to change from the OPT(1) values, suggesting this COBOL statement mix offered nothing further for OPT(2) to exploit.
When debugging with the TEST compiler option, programmers need to develop best practices that also include ARCH and OPTIMIZE. Use of TEST involves making both debugging and efficiency decisions. There is no one-size-fits-all answer; your implementation depends on debugging style paired with performance and disk-capacity decisions.
Here’s a summary list and recommendations:
- Sub-options with TEST should be selected depending on the desired need (how much debugging) and use (what features).
- NOTEST and OPT(0) are recommended when load module (program object) disk size is a concern, execution time matters little and debugging isn’t an issue.
- OPT(0) is recommended when the fastest compile time is desired; OPT(2) gives the slowest compile time but has the potential to produce much faster code.
- OPT(0) is recommended during development and testing, especially for larger programs, as MIPS costs of compiles with higher OPT levels can be significant, especially when many repetitive compiles are needed.
- Production-level testing and subsequent production should use OPT(1) or OPT(2), as final testing should match the production level.
For more information related to new COBOL Version 5.2 and 6.1 updates, read this blog series.