Getting to Trust But Verify on the Mainframe
April 9, 2019 Workforce



Overview: Learn how you can transition from a culture of “command and control” to one of “trust but verify” on the mainframe, a platform naturally fit for this environment with the right processes and tools in place.

 

When automobiles first appeared on the roadways of the United Kingdom in the late 19th century, there was a law that required a pedestrian to walk in front of the vehicle waving a red flag to warn others about the vehicle’s approach. As a result, no automobile could go faster than the man with the red flag walking in front of it.


The Locomotive Act of 1865

The law, the Locomotive Act of 1865, was originally aimed at steam locomotives, which were indeed a danger when traveling across or around pedestrian pathways; hence the man with the red flag. The law became absurd, however, when applied to small vehicles powered by the internal combustion engine. Fortunately, the act was repealed in 1896, to the benefit of motorists everywhere.

This story reveals an interesting pattern that holds true even today: the proliferation of a particular technology cannot move at a rate faster than that which can be tolerated by the norms of the culture in which it exists.

This is not only true for technology in mass culture but in corporate culture as well, particularly when it comes to the notion of “trust but verify” when applied to mainframe computing in enterprise IT. There’s a lot of tech out there that promotes the freedom, independence and safeguards necessary to foster developer-driven innovation as advocated by the trust but verify movement. Sadly, the cultural attitudes that prevail in much of mainframe computing are slowing down the adoption.

Allow me to elaborate.

Understanding Trust but Verify

Trust but verify is based on the premise that developers do their best, most innovative work when they have a high degree of freedom. It makes sense. Try as it might, it’s rare for an organization to prescribe its way to creative solutions in today’s marketplace. Innovation comes from thinking outside the box. Developers need to be given the freedom and resources necessary to experiment.

The forward-thinking company gives a developer his or her own playground in which to pursue ideas. Then, when a developer’s work evolves to the point where it’s considered a viable contribution, the code is escalated toward a production release.

Part of that escalation process is to subject the code to a rigorous verification regimen that ensures the work is constructed well, meets the need at hand and is safe to use. The developer is trusted to make the best product possible.

The escalation process verifies that the developer’s work passes muster. Hence the term trust but verify.

Applying Trust but Verify to Software Development in a Mainframe Environment

As surprising as it may seem, mainframe technology is well suited to support trust but verify. Mainframes were designed from their inception to support hundreds, even thousands, of users on a single piece of hardware.


Every person who logs onto the mainframe gets an identical virtual address space. These address spaces are created instantly and are protected from interfering with one another. In terms of coding on the mainframe, each developer automatically gets a development workspace that is separate from every other developer's. This separation is not analogous to the deliberately siloed structure of Waterfall development; it is an inherent, practical virtual separation that accommodates experimental work within a culture of collaboration, communication and transparency.

Just as developers experiment with code in isolation, so too can applications work in this way. Batch jobs, a standard feature of mainframe computing, run in their own address spaces. Developers write batch job scripts that run one or more applications using Job Control Language (JCL). The implication is that since a batch job runs in isolation in its own address space, developers have the flexibility to try new ideas without fear of affecting other work. It’s a significant benefit.
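As a concrete sketch, a minimal batch job might look like the following JCL. The job name, program name and data set names here are illustrative assumptions, not taken from any real system.

```jcl
//DEVJOB1  JOB (ACCT),'EXPERIMENT',CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//* Each submitted job runs in its own address space,
//* isolated from all other work on the system.
//STEP1    EXEC PGM=MYPROG
//STEPLIB  DD  DSN=DEV.SANDBOX.LOADLIB,DISP=SHR
//SYSOUT   DD  SYSOUT=*
```

When the job is submitted, the system creates a fresh address space for it, so a failure in the hypothetical MYPROG cannot corrupt another developer's running work.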

CICS (Customer Information Control System) systems also start in their own address spaces. These multi-user application environments are referred to as CICS regions, and it is very common to have dozens or even hundreds of CICS regions running, including regions dedicated to development only. CICS developers enjoy a high degree of independence not only in terms of computing environment but also in terms of implementation language. A CICS application can be written in any of a variety of programming languages: modern languages such as C, C++ and Java, or languages such as COBOL, Object-Oriented COBOL and PL/I. In fact, should a developer need to work at a very low level, he or she can code in Assembler.

Source code version control has always been an integral part of the mainframe ecosystem. Today source control management (SCM) technologies such as ISPW allow developers to work on a common codebase independently without affecting the work of others. And, let’s not overlook the fact that there are tools for unit testing mainframe code, for example, Topaz for Total Test. Effective unit testing means developers can verify code is running to expectation even before it leaves their hands.

Mainframe technology also has the tools necessary to verify the work of developers within a Continuous Integration/Continuous Deployment (CI/CD) process. Jenkins, the open source CI/CD orchestration solution used by many x86 shops, is also available for integration with mainframe workflows, as Compuware's own Jenkins integration demonstrates.

Jenkins integration means it’s possible to use a well-known, industry-standard tool to automate the process of getting code out of source control; profiling it; running the unit tests as well as integration, performance and security tests; and then, once verified, queuing it all up for release to production.
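To make that concrete, here is a minimal sketch of what such a pipeline could look like as a declarative Jenkinsfile. The stage names and shell commands are placeholders of my own invention, not Compuware's actual integration steps.

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }               // pull the code out of source control
        }
        stage('Unit Test') {
            steps { sh './run-unit-tests.sh' }   // placeholder: e.g., unit tests driven from a CLI
        }
        stage('Verify') {
            steps { sh './run-verification.sh' } // placeholder: integration, performance, security tests
        }
        stage('Promote') {
            steps { sh './queue-for-release.sh' } // placeholder: queue verified code for release
        }
    }
}
```

Each stage runs automatically on every change, which is exactly the "verify" half of trust but verify: the developer is free to commit, and the pipeline proves the work passes muster.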

This is no small feat. Remember that, in order for trust but verify on the mainframe to work effectively, the process needs to be automated. Given the volume of code that’s being produced in the modern enterprise, having ongoing human intervention at a low level in the verification process will create unacceptable bottlenecks. Automating verification is essential.

As you can see, many of the tools and processes required to make trust but verify a viable, valuable part of mainframe software development are already in place. What’s puzzling is that many companies that depend on mainframe technology to do business have yet to realize the benefits of the trust but verify movement. It’s as if they’re the early automobile owners in 19th century England, held back, never being able to move faster than the man with the red flag. The question is, why?

Addressing a History of Risk Aversion

Trust but verify has been slow to catch on in the mainframe world for the same reason the man with the red flag was deemed a necessary precaution in early motoring: historical fear.

Mainframe technology has a historical memory that’s almost twice that of PCs. Its cultural sensibilities were baked in well before the PC came along.

Initially, mainframe customers were government agencies, banks, insurance companies and large businesses–organizations that are by nature risk-averse. The first PC customers were independent hobbyists. Those using mainframes were part of a big, hierarchical organization. The typical PC user, on the other hand, was a single person who bought the machine out of what is fundamentally a desire for self-expression.

Mainframe developers were paid a salary to use the machine for the benefit of the organization. PC developers paid good money to have a machine they could use for their own benefit. The two sensibilities were polar opposites.

And, importantly, the mainframe developer was subject to command and control, which is the nature of big organizations. PC developers were accustomed to doing what they wanted to do when they wanted to do it, at least until the PC made its way onto the corporate desktop. As such, there was, and still is to this day, justifiable anxiety and fear among those in the mainframe world that PC developers move too fast and too recklessly.

Forty years ago, in order for a PC to be considered a viable business asset, it might very well have needed a symbolic man with a red flag preceding it to warn others of the potential dangers at hand. But that was then and this is now.

Today the lone PC has evolved into one of many machines on a rack in the modern data center. As part of that evolution, corporate discipline and organizational safeguards, as well as the spirit of independence, permeate the fabric of modern software development.

Companies realize there’s a lot of benefit to giving developers freedom and independence. Companies also understand that the freedom to code does not translate into a license to ride roughshod through the digital infrastructure. Still, the old sensibilities that were necessary to safeguard the environment during the early days of mainframe computing have not gone away.

So, what’s to be done?


How to Get to Trust but Verify on the Mainframe

The first, best thing a company can do to move from the command and control way of doing development to one that’s based on trust but verify is to adopt Agile.

Agile puts the self-directed, cross-functional team at the center of the software development process. Under Agile, the team is completely accountable for all aspects of the product it makes. The team has complete freedom to meet the demands at hand. It’s also completely responsible for its successes and its failures.

Agile is not a make-it-up-as-you-go-along thing, nor is it corporate anarchy. Management has a definite role in the Agile process. It’s just that the role is not about command and control. It’s more about clearly defining expected results and taking appropriate action when the expected results are not achieved.

Agile has evolved into a well-known process. Volumes have been written on the topic. There are thousands of case studies to read. There are myriad tools that can be used to monitor all aspects of the team's development activities, including those that fall under the label "trust but verify."

Agile works, and for much the same reason the man with the red flag was eventually deemed unnecessary. The powers-that-be figured out that, in order to realize the full benefit of the technology, it was better to trust drivers to drive safely while using a licensing system to verify that they actually could. The rest is history.

The same is true for getting to trust but verify on the mainframe. The tools and processes exist and have been proven to work. It’s just a matter of letting go of old fears.


Bob Reselman

Bob Reselman is a nationally-known software developer, system architect and technical writer/journalist. Bob has written four books on computer programming and dozens of articles about topics related to software development technologies and techniques, as well as the culture of software development. Bob lives in Los Angeles. In addition to his activities in software development, Bob is working on a book about the impact of automation on human employment.