Why Developers Should Measure IMS Transaction Performance
Today, you won’t find many companies adopting IBM’s IMS database, one of the original mainframe databases. Most companies opt instead for the relational structure of DB2, or for a Big Data framework like Hadoop.
However, IMS is still used by some of the largest and most important mainframe customers in the world, and it can still be incredibly fast and efficient. Yet developers often forgo measuring IMS transaction performance, a critical part of developing and delivering mainframe software in a digital economy that expects both speed and quality.
If developers aren’t measuring, or even thinking about, IMS transaction performance, how can there be a reasonable expectation that an IMS transaction will meet its service level agreement (SLA), the promise to end users that a transaction will perform as expected?
If you’re checking your bank account balance or viewing your 401k statement in your browser, your satisfaction dwindles each time the SLA is missed and the time to retrieve the data you need stretches out. This is the essence of why developers should make measuring IMS transaction performance fundamental. So why don’t they take the time to do it?
Why Your Developers Aren’t Measuring IMS Transaction Performance
Measuring the performance of a specific IMS transaction is like looking at a page in Where’s Waldo?—you don’t know what or where to measure. And who has the time to find out? That’s the short answer to why developers typically neglect measuring IMS transaction performance.
Here’s the long answer: There are two ways to access IMS—batch processing and online processing. Online, you can access IMS through CICS or through an IMS message processing region (MPR). The larger the organization, the more IMS MPRs it will have. When you kick off an IMS transaction, you know which IMS subsystem it will run in, but not which specific region. Not knowing which region it will run in is a major inhibitor to developers measuring IMS transaction performance.
For some organizations, there may be only one region for transactions to execute in. But for larger organizations with close to 100 regions, finding where a transaction can run becomes problematic. Common strategies are to force the transaction into a specific region by assigning it a specific class, or to measure multiple regions at the same time. However, it takes time to get the right people involved to set up tests that ensure transactions run in specific regions. There are simply too many time-consuming steps for developers to measure IMS transaction performance, and no easy way to do it.
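To make the class-based strategy concrete, the sketch below shows the general shape of routing a transaction to a known region: the system definition assigns the transaction a class, and a message region is started to serve that class. The transaction code (PAYQRY), class number (7), region name, and procedure name here are hypothetical illustrations, not values from any real system; actual macro operands and procedure parameters vary by site and IMS release, so check your own sysgen source and region procedures.

```
*  IMS stage-1 system definition: assign hypothetical
*  transaction PAYQRY to scheduling class 7
         TRANSACT CODE=PAYQRY,MSGTYPE=(SNGLSEG,RESPONSE,7)

//* Message region JCL (DFSMPR-style procedure) started to
//* serve class 7, so PAYQRY can only be scheduled here
//IMSMSG7 EXEC PROC=DFSMPR,CL1=007
```

With only one region eligible for the class, a developer knows exactly which region to point measurement tooling at—but as noted above, coordinating this setup across teams is what makes the approach costly in practice.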
Making It Easier for Developers to Measure IMS Transaction Performance
One final, significant reason developers neglect measuring IMS transaction performance is that they aren’t held accountable for it. Developers are expected to handle application debugging and to ensure applications do what they’re designed to do. An IMS transaction running one or two seconds longer in testing than it should isn’t a major concern to developers, because no one tells them it should be—and they may assume it will run faster in production anyway.
Anyone involved with IMS and application performance management knows the struggle of convincing developers that performance is an important part of the development lifecycle. It has often been said that addressing a performance issue in production costs eight to ten times more than catching it earlier in the lifecycle. So far, those arguments have largely been deflected. It’s time for a change in the status quo: let’s make it easier for developers to capture IMS transaction data, and curtail the hassle and time existing methods require.
Photo credit: Sean MacEntee (Flickr)