Chapter 8 – Better Code Understanding and Analysis

The Modern Mainframe

Matt: Hello, welcome to the latest installment of building a better software delivery platform. I’m Matt DeLaere. Each week, Compuware Executive DevOps Solution Architect Rick Slade discusses a different aspect of the software delivery system and how you can leverage DevOps and Agile methodologies to create a world-class delivery system. We have a new podcast every Monday, followed by Office Hour with Rick Slade, a live online Q&A session, each Friday. So, let’s say hello to Rick. Hey, Rick, how are you?

Rick: Good, Matt, good. On a cold, chilly day in North Carolina. It’s June, and it’s like 55 degrees out here, so crazy. But anyway, doing well, doing well.

Matt: 55 is not cold and chilly for Michigan. So, today we’re talking about code understanding and analysis. So, Rick, can you give us a brief overview and explain why application and program understanding are so important in the DevOps environment?

Rick: Yeah, glad to, Matt. You think of DevOps, you think of speed—that’s what’s driving a lot of the investment in modernizing your software delivery system, your software delivery environment. It’s about speed. Now, especially in our world, in the mainframe world, it’s speed without sacrificing quality, but you invest that money, you invest those resources and training, in delivering services faster to your customers, to your employees, and to your partners, so it’s about speed. And, from an application and program understanding perspective, it’s one of those items, one of those tasks within that software delivery life cycle, that, quite honestly, consumes a significant amount of time. I’ve done work in several different organizations over the years, and I spent a lot of time, oh, ten years ago, re-engineering or modernizing existing applications to hopefully deliver them on an architecture that would allow changes to be made faster.

Testing & Impact Analysis

And so, if you look at the work required to do software development work, and we used to track that in the re-engineering work that I was responsible for, there were two major tasks, pieces of work, that consumed the most time in that software delivery cycle.

First, probably not a surprise, was testing, and I’m talking about all the testing that occurs from unit all the way through UAT-type testing; it does consume a lot of time. And there are lots of reasons it consumes so much time, and we’re going to spend a couple of upcoming episodes talking about testing, why it takes longer, and what organizations are doing to expedite and increase the velocity of their testing.

But the second item that takes up a significant amount of time in that SDLC is impact analysis and discovery work. And what I mean by that is the work required for an architect or a developer to determine what changes have to be made to an existing code base to accommodate some business change or business requirement. And especially in some of these very large, very complicated, very monolithic, tightly-coupled mainframe applications, that work can be extensive. If you talk to most mainframe developers and ask them how they determine what code changes will need to occur, most of that work is still being done manually, and again, we’re looking to improve the speed of not just development, testing and delivery, but all of the aspects of software delivery. And so, there are things we can do today, leveraging technology, leveraging tools, to have a significant, positive impact on the time required to do impact analysis and discovery and on the creation of the identified work to be done in order to satisfy a new requirement or a change to that particular system.

And so looking at that time and looking at tools that can automate that discovery process, that can automate that impact analysis effort and produce, essentially, a list that can be converted into stories for developers with regards to the changes they have to make, can have a significant impact on reducing the total time within that software delivery life cycle. And so that was why I wanted to spend a little bit of time today talking about that. So, the time that it requires is significant. There are things that we can do, we’ll spend a few minutes talking about that today.

The second reason that this is important is that organizations are starting to bring in new developers, developers who may not have a lot of experience in some of these systems. And again, as we’ve talked about, some of these systems have been around 15, 20, even 30 years or more. And so there may not be a lot of knowledge or experience with that particular code. And so, if you’re going to bring in new resources who are now being charged with the modification of that code base to satisfy these changes, then new developers are spending even more time trying to understand how that code works, what the integration points are, where changes have to be made, and what the impacts are upstream and downstream of that particular processing unit.

And so, with modern technologies, tools that can graphically represent and provide data on that code, you’re going to significantly reduce the amount of time that those developers are spending just trying to figure out how these things work. And so, that’s a significant benefit as we bring in new developers, and especially developers who may not have an extensive background in green screen software development on the mainframe.

For a lot of us who’ve been doing this a long time, I know many—I used to do this—we’d create REXX files, or I’d create dialogs, tools, my own tools to help me find code, find data variables, find files that were being used that were going to be impacted by the change I was being asked to make. Well, those tools worked fine, and we were pretty proficient with them, but they’re not known to most new developers coming in, who now have responsibility for those changes. And so, providing a modern, graphical-type interface that can do that work against an existing code base is going to significantly reduce the learning time and impact analysis time for the developers charged with making those changes. And, as I said earlier, it’s all about speed, speed, we’ve got to get faster at every event in that software delivery lifecycle. We need to be looking at the work that we’re doing very closely and determine if there are ways to automate it. And again, you typically don’t think of application understanding and discovery as an automated task, but there are technologies that can find that information and produce those lists much, much faster and, quite honestly, with a higher level of quality than can be achieved manually.
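To make that concrete, here is a minimal sketch of the kind of homegrown "where is this name used?" search Rick describes, written in Python rather than REXX. It assumes the COBOL members have been copied to a local directory; the directory name and the WS-CUSTOMER-ID field are hypothetical examples, not taken from the episode.

```python
# A minimal sketch (Python standing in for the REXX tooling Rick mentions) of a
# homegrown "where is this name used?" search. Assumes COBOL members have been
# copied to a local directory; the directory and field name below are hypothetical.
import re
from pathlib import Path

SOURCE_DIR = Path("./cobol-source")   # hypothetical local copy of the PDS members
TARGET_NAME = "WS-CUSTOMER-ID"        # hypothetical data name being traced

def find_references(source_dir: Path, name: str):
    """Yield (member, line number, text) for every line that mentions the name."""
    pattern = re.compile(rf"\b{re.escape(name)}\b", re.IGNORECASE)
    for member in sorted(source_dir.glob("*.cbl")):
        for lineno, line in enumerate(member.read_text(errors="ignore").splitlines(), 1):
            if pattern.search(line):
                yield member.name, lineno, line.strip()

if __name__ == "__main__":
    for member, lineno, text in find_references(SOURCE_DIR, TARGET_NAME):
        print(f"{member}:{lineno}: {text}")
```

Run against a folder of .cbl files, it prints member and line pairs, a crude version of the work list a commercial analysis tool produces automatically and with far better precision.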

Key Capabilities

Matt: Right, so what kind of capabilities or features do these tools need to help programmers find these things, to help them understand what they’re working on?

Rick: Well, there are lots of different tools out there. We’ve got one, Topaz for Program Analysis, and again, I won’t spend a lot of time on it today; I hope that folks listening will go to Compuware.com and do a search on Topaz for Program Analysis, and they can get all the information on it that they want. But there are really two, I think, key features, capabilities that can make a difference. The first one I’ll call dynamic runtime data capture and reporting. And that is the ability, as you are looking at code, to actually run that code—potentially through something like Xpediter, where you just run that code—and, with some tools, to toggle a switch, and there are capabilities in Program Analysis to actually capture information from that execution, and it will produce a tremendous amount of data. Now, that data can be reviewed in a table format, which, again, makes it very easy to produce these work lists that I’m talking about.

But there’s also the ability to produce that information in a graphical format that makes it very easy and very quick for a developer to understand how that program is actually executing. And so, I consider that dynamic runtime data capture and graphical reporting, and that’s important. That’s critical.

The other capability and feature is more around static code information collection and, again, reporting, both visual and data driven. What you typically see from static analysis tools is the ability to produce structure charts. If I’m a COBOL programmer and I’ve got a very intense, very complex, very large piece of code or a program, that structure chart will help me at least get an understanding of how information flows through that program, through that structure. And so those charts are informative, they’re easy to understand, and they provide a lot of information graphically that makes learning easier for that developer.
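As a rough illustration of the raw data behind a structure chart, the following sketch (a simplified example for this writeup, not a Compuware tool) walks a COBOL source file and records which paragraphs PERFORM which others. The file path is hypothetical, and real analyzers handle sections, PERFORM THRU, copybooks, CALLs, and fixed-format column rules that this deliberately ignores.

```python
# A simplified look at the raw data behind a structure chart: which paragraphs
# PERFORM which others. The member path is hypothetical; a real analyzer also
# handles sections, PERFORM ... THRU, inline PERFORM loops, copybooks, and CALLs.
import re
from collections import defaultdict
from pathlib import Path

PARAGRAPH = re.compile(r"^([A-Z0-9][A-Z0-9-]*)\s*\.\s*$", re.IGNORECASE)
PERFORM = re.compile(r"\bPERFORM\s+([A-Z0-9][A-Z0-9-]*)", re.IGNORECASE)
NOT_TARGETS = {"UNTIL", "VARYING", "TIMES", "WITH", "TEST"}  # inline-PERFORM keywords
NOT_LABELS = {"EXIT", "GOBACK"}                              # statements that look like labels

def perform_graph(source: str) -> dict:
    """Map each paragraph to the paragraph names it PERFORMs, in source order."""
    graph = defaultdict(list)
    current = "MAIN"                     # code before the first paragraph label
    for raw in source.splitlines():
        line = raw.strip()
        label = PARAGRAPH.match(line)
        if label and label.group(1).upper() not in NOT_LABELS:
            current = label.group(1).upper()
            continue
        for target in PERFORM.findall(line):
            if target.upper() not in NOT_TARGETS:
                graph[current].append(target.upper())
    return dict(graph)

if __name__ == "__main__":
    src = Path("./cobol-source/BILLING.cbl").read_text(errors="ignore")  # hypothetical member
    for para, targets in perform_graph(src).items():
        print(f"{para} -> {', '.join(targets)}")
```

Rendering that paragraph-to-paragraph adjacency list as a diagram is essentially what the graphical structure chart does for the developer.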

Other capabilities… charts, process flows to show the flow of information as it moves through the code. Data flows is an important one, too. If you’re looking to modify a data variable or a field, or you’re going to expand it, or you’re going to make a modification to an algorithm that changes that data field, being able to see how that data flows from file to file, from data structure to data structure, in a graphical representation makes it very, very easy to determine all the different points within that code, and possibly other code, where changes have to be made. Advanced searching capabilities are also critical. Say you’ve been charged with making a change to an element on a screen, and you want to be able to trace where the data entered on that screen is going to be affected throughout that system and throughout that program. Having the ability to search on specific variables or files becomes important.
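For a sense of what even one hop of that data-flow tracing looks like, here is a minimal, hedged sketch: given a field name, it lists the fields that receive its value through a MOVE statement. The BILLING.cbl member and the WS-ACCT-BALANCE field are hypothetical, and real tooling also follows COMPUTE, REDEFINES, group moves, CALL parameters, and file record layouts that this ignores.

```python
# A single-hop approximation of data-flow tracing: given a field, list the fields
# it is MOVEd into, i.e. the next places a change could propagate. The member and
# field names are hypothetical; real tooling also follows COMPUTE, REDEFINES,
# group moves, CALL parameters, and file record layouts.
import re
from pathlib import Path

MOVE_STMT = re.compile(r"\bMOVE\s+(\S+)\s+TO\s+(\S+)", re.IGNORECASE)

def direct_targets(source: str, field: str) -> set:
    """Return the data names that directly receive the field's value via MOVE."""
    targets = set()
    for sender, receiver in MOVE_STMT.findall(source):
        if sender.upper() == field.upper():
            targets.add(receiver.rstrip(".").upper())
    return targets

if __name__ == "__main__":
    src = Path("./cobol-source/BILLING.cbl").read_text(errors="ignore")  # hypothetical member
    print(direct_targets(src, "WS-ACCT-BALANCE"))
```

Repeating that lookup on each result is the transitive trace that a data-flow chart presents graphically in one picture.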

And then, key to a lot of this, I think, is that in these graphical representations, in that structure chart, in that data flow, in some of those advanced search capabilities, I want to be able to link a line of code, or a defined set of lines of code, with my graphics. So, if I’m looking at a structure chart and I see a main line or some branch paragraph, I want to be able to click on that in the chart and go directly to that line of code in my editor. So, again, all of these things are about improving the speed at which work can be done.

Matt: So, working from past experience myself, I know that when you’re passing along a job to somebody who’s coming in or somebody who’s new, you don’t always think of everything you need to pass along to them. So, it’s a great idea to have a set place where they can find these things. Is it a good idea to have the more experienced developers get up to speed with these tools, understand them and how everything works through them, and then pass that along to the newer developers?

Rick: Yeah, it’s a good idea, Matt, because they’ve got… the more experienced developer has, probably, knowledge of that application or of that code. They can validate what they’re seeing in a graphical representation of that, and then passing along a picture is certainly going to be easier to understand for the new developer, than trying to explain it textually or verbally. And so, yeah, absolutely. That makes a huge difference.

It actually brings up another point that’s important when you’re looking at program understanding technologies: you want to make sure that the tool can produce good reports that can be printed out. We want to go as paperless as we can, but sometimes having that structure chart or that data flow on a piece of paper and working with a peer to educate them on how something works can go a long, long way. So, having the ability to report that is critical, and having that information in a centralized repository where it’s easily accessible is another advantage and something else that’s going to speed that process up.

Strategic Benefits

Matt: It certainly helps with redundancy, too, in terms of the people who understand a given process. Yeah, so that’s key. So, what we’ve talked about so far have been the tactical benefits. Are there other benefits, strategic benefits, to using this tooling?

Rick: I think there are, Matt. I haven’t heard a lot of people talk about the strategic benefits of these types of technologies, but I think they’re significant, and let me tell you why. Organizations are working hard, especially mainframe shops, mainframe groups, to transition to more of an Agile approach to managing work within their software development projects. I think the type of information that can be garnered and leveraged here will actually help in that transition to Agile.

One of the core tenets of Agile is being able to segregate work into smaller increments, being able to define tasks that can be completed in a two-week sprint. I think that this information, especially at a system level, at an application level, gives the architect and the product owner what they need to more easily break work apart. If I’ve got a graphical representation of these systems and I can see where the integration points are, and where the upstream and downstream connection points are and what they are, then it’s easier for me, again, as a product owner or as an architect who’s been assigned to define the stories that support this change, to segment that work into smaller increments. So, I think that makes a huge difference, and when I talk about a strategic advantage or strategic benefit, that’s a huge benefit in your transition to using Agile.

One of the things that I hear over and over and over again is, “Well, my systems are too big, they’re too complex or too tightly coupled to leverage Agile techniques like smaller batches. It just can’t be done.” Well, it’s not that it can’t be done, it’s that it is hard, but with the right information, and information that’s provided in an easy-to-understand format, that task becomes much simpler. Not easy, but much simpler.

And so, that adds to an organization’s ability to leverage Agile, because defining work in smaller batches is absolutely critical to Agile software delivery. Being able to break that work into smaller units of change that can be made quickly, again in a two-week sprint, and tested in that same period is going to shorten that overall software delivery lifecycle. I have seen this over and over and over again. But you’ve got to have the ability to segment work, and for organizations without these types of tools, you can imagine the manual effort required to break that work down into small, manageable units or stories. With tools like this, that can provide information at a system level and at a program level, that work becomes easier, and thus shorter. So, I think that’s a huge strategic benefit of application understanding and discovery tools.

The other strategic advantage is not so much around DevOps as it is around modernizing the application stack itself, but it’s worth talking about here. In order to take advantage of DevOps, and in order to get as fast a throughput and as fast a velocity as you can get, having applications architected at a component-based level provides a tremendous advantage with regard to leveraging automation tools and the types of tools that are part of a DevOps ecosystem.

And so, what I’ve seen and what I have done before: I’ve personally used these tools to help in application re-engineering efforts. There are lots of ways to break these very large systems down. Again, I’m not saying it’s easy, but with the right information, in a format that’s easy to understand and digest, I can begin to segment these applications into more of a component-based architecture, and that has tremendous strategic benefits to the organization. The further I can go in breaking these very large applications apart, while they still operate and execute as expected, the faster I can make changes to that code. And so, those tools are invaluable in the restructuring, or what some people would refer to as re-engineering, of these very large, complex systems. And the strategic benefit there is two-fold, as I’ve already said. One, it’s going to increase the speed at which you can apply change. Two, it’s going to be a tremendous advantage or benefit in leveraging modern DevOps automated pipelines. Again, the smaller I can define this work, the faster I can execute my pipes from a build, testing and delivery perspective, and that is significant, significant.

So, two strategic benefits, beyond the tactical ones we talked about, are better application understanding information that can help with an organization’s transition to Agile, and the ability to help modernize not the delivery pipeline, but the application stack itself from an architectural standpoint. Those two things are strategic and will have a tremendous impact on software delivery going forward.

Matt: Is there also an impact on streamlining or on finding, say, dead code or code that isn’t needed anymore?

Rick: Absolutely. People talk about technical debt, and there’s a lot of technical debt in a lot of these older, very complex applications. And the reasons are fairly common and easy to understand: as changes have been made to these applications over time, we go in and sometimes make changes to that code without cleaning it up, I’ll use that term. And so there’s a lot of code, dead code, as you refer to it, that can be left in there. Well, that dead code has a potential impact on future work, and if you’re looking to automate a lot of your testing and you’re trying to build test case scenarios that are as efficient as possible, minimizing that technical debt will, again, reduce the time spent executing those pipes, whether manually or automated. So, having the ability to find that technical debt, that dead code that’s in these existing applications—and I think anybody who has been in the mainframe space will agree with me on this, there’s a significant amount in there—well, that technical debt consumes time because potentially we’re building test cases and testing code that has no value. And so, absolutely, it can make a difference, and it’s another reason for using these tools.
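As a hedged illustration of what the static side of that detection can look like, the sketch below flags COBOL paragraphs that are never the target of a PERFORM or GO TO. The member path is hypothetical, and a real analyzer also accounts for PERFORM THRU, fall-through execution, ALTER, and the program's entry paragraph, so anything this reports is only a candidate for removal.

```python
# A hedged illustration of static dead-code detection: flag paragraphs that are
# never the target of a PERFORM or GO TO. The member path is hypothetical, and a
# real analyzer also accounts for PERFORM ... THRU, fall-through, ALTER, and the
# entry paragraph at the top of the PROCEDURE DIVISION, which will show up here too.
import re
from pathlib import Path

PARAGRAPH = re.compile(r"^\s*([A-Z0-9][A-Z0-9-]*)\s*\.\s*$", re.IGNORECASE | re.MULTILINE)
REFERENCE = re.compile(r"\b(?:PERFORM|GO\s+TO)\s+([A-Z0-9][A-Z0-9-]*)", re.IGNORECASE)
NOT_LABELS = {"EXIT", "GOBACK"}          # statements that look like one-word labels

def unreferenced_paragraphs(source: str) -> set:
    """Return paragraph names that are defined but never referenced."""
    defined = {n.upper() for n in PARAGRAPH.findall(source)} - NOT_LABELS
    referenced = {n.upper() for n in REFERENCE.findall(source)}
    return defined - referenced

if __name__ == "__main__":
    src = Path("./cobol-source/BILLING.cbl").read_text(errors="ignore")  # hypothetical member
    for para in sorted(unreferenced_paragraphs(src)):
        print("possibly dead:", para)
```

The value of a commercial analyzer is precisely in ruling out the cases a simple pattern match like this cannot.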

Program Understanding vs. Application Understanding

Matt: So, is there a difference between program understanding and application understanding?

Rick: Good question, good question. Our Topaz for Program Analysis is targeted at the program level, and that’s a good thing, because developers typically are going to be responsible for the discovery work as well as the change work and testing work at the program level where they’ve been assigned changes. And so, program understanding is providing the analysis at a program level to understand how that program works. It’s a technology or tool that almost every developer is going to need. And so, that’s your broader market; that’s where those technologies can have a significant impact on programmer productivity. Going back to what I said earlier with regard to segmenting work to support the transition to Agile, you’ve got architects, and again, there are not as many of them, but their work is critical; they’ve got to look at change requirements not only from a program level, but from a system level. And so tools that work at an application level give them the ability to look at a potential change from a system perspective, and that information will be critical, again, in defining your stories or your work list of changes to be made. But, just as important, it’s going to help those people prioritize work when you’re doing your sprint planning, or your application or project planning.

And so, having a system view that can be leveraged by architects and product owners is critical for the big picture understanding of an application or a system, and that is critical for those folks. In the definition of work or in an application re-engineering initiative, that information is absolutely critical. Developers may or may not be as interested in that whole perspective as they are in making changes to a program. So, they’re both important, they’re both critical. They are both targeted to different individuals on the team who are doing their specific work.

Matt: Well, it sounds like those tools have a lot of different uses and a lot of different impact on the entire process.

Rick: They really do, Matt, and it’s one of those things that I think excites people when I go in and talk to folks about their transformative efforts with regard to modernizing their software delivery. It especially excites those organizations that are starting to see a transition in resources and are bringing in new people or less experienced folks to do the modification work on these very large, very critical systems that require change because of some new business requirement or a compliance requirement. So, yeah, it’s very important.

Matt: Well, thanks for explaining everything, that was great. And thank you to everyone listening. If you have any questions about today’s topic or about any other aspect of the software delivery system, please join us this Friday at 11 AM Eastern Daylight Time for Office Hour with Rick Slade. That’s the live online session where Rick answers your questions. You can visit https://www.compuware.com/goodcoding/ to set a calendar reminder for the Q&A session, listen to all the episodes in the series, and watch past Office Hour recordings. You can also submit questions on Twitter using the hashtag #goodcoding. So, Rick, we’ll see you on Friday, and I’ll give you the last word.

Rick: Matt, I appreciate it. This topic aligns really well with next week’s topic, where we’ll talk about code quality and security. We work with partners, specifically Sonar, who we’re going to talk with next week about leveraging their tools, in an automated fashion or in a manual fashion, to do a lot of the work that you see done in code reviews. And so, again, that’s another step in the SDLC where there is the opportunity to reduce the time required, and those two go very well together. So, I hope you’ll join us on Friday for the live Q&A if you’ve got questions about this topic or others, and I hope that you will take time to listen to this episode and also next week’s, because the two go hand in hand. And with that, we’ll close this out. Again, thank you for joining us today. We appreciate your time. Look forward to a productive week, and we’ll see you soon. Good coding, everybody. Take care.