Step 4: Provide Developers with Graphical, Intuitive Visibility into Existing Code and Data Structure
Part Four of the “10 Steps to True Mainframe Agility” Series
Mike: Okay, cool. So, I know we do this at Compuware with two products primarily: Topaz for Program Analysis and Topaz for Enterprise Data. I know they do different things, but they both leverage this idea of visualization. So how would you each describe those tools?
I know, Jim, you work a lot with Topaz for Program Analysis; and, Irene, you work a lot with Topaz for Enterprise Data. So, whoever wants to go first, I’d love for you to share with our audience more about these two tools and how they help with this graphical visibility.
Jim: Program analysis really has two parts. It has static analysis and dynamic analysis, and the dynamic analysis is referred to as Runtime Visualizer.
Jim: And this will take an execution of a program and show you exactly which program calls were involved and which I/O was involved. It’s a nice snapshot of what actually occurred during the execution of a program. And this supersedes that institutional knowledge we touched on earlier; it’s a “facts don’t have feelings” thing. No matter how the programmer thinks it works, program visualization gives them a picture of how it really worked.
Jim: And then program analysis has more traditional static analysis, which builds a flowchart of the program as you’re editing and lets you look at how data flows across variables. With complex programs, the full flowchart can be really hard to use, so we offer drill-down capabilities: you can drill into the visualization to bring clarity to the specific area where you’re changing the code.
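Topaz’s actual implementation isn’t public, but the kind of data-flow drill-down Jim describes can be sketched as reachability over a graph of assignments. Everything below is hypothetical illustration: the variable names and the `FLOWS_INTO` edges are made up, not taken from any real program.

```python
from collections import deque

# Hypothetical data-flow edges: each assignment makes the source variables
# flow into the target (e.g. TOTAL-AMT = BASE-AMT + TAX-AMT).
FLOWS_INTO = {
    "BASE-AMT": ["TOTAL-AMT"],
    "TAX-AMT": ["TOTAL-AMT"],
    "TOTAL-AMT": ["INVOICE-AMT"],
    "INVOICE-AMT": ["PRINT-LINE"],
    "CUST-NAME": ["PRINT-LINE"],
}

def downstream(var):
    """Return every variable the given variable's value can reach."""
    seen, queue = set(), deque([var])
    while queue:
        current = queue.popleft()
        for target in FLOWS_INTO.get(current, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

# "Drilling down" on one variable, instead of viewing the whole flowchart,
# shows only the part of the program your change can affect:
print(sorted(downstream("BASE-AMT")))  # ['INVOICE-AMT', 'PRINT-LINE', 'TOTAL-AMT']
```

The point of the drill-down is exactly this narrowing: a change to `BASE-AMT` can only affect the three variables it flows into, so the rest of the flowchart can be hidden.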
Irene: That is super valuable. The counterpart to that is visualization of data, and Topaz for Enterprise Data really does provide visualization in a couple of ways as well. First, it allows developers to really visualize relationships between data, and secondly, it allows visualization of the actual extract process.
Irene: Relationship visualization is important. When you’re working with an application, you usually need a related set of data, but identifying how that data is related can really be challenging. You’ve always got relationships that are defined in the database itself, but there’s a whole other type of relationship that’s maintained only within your application code, such as relationships between Db2 tables and VSAM files. We often refer to this kind of relationship as an application relationship.
So, our relationship visualization component allows developers and testers to see a graphical representation of how that set of objects is related, and they can easily identify which relationships are maintained within the database and which are maintained within the application code itself. The visualization view also lets you drill in to each relationship and see the actual columns involved.
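As a rough sketch of the distinction Irene draws, a relationship can be modeled with a `kind` that records whether the database or the application code maintains it. The object and column names below are invented for illustration; this is not Topaz’s data model.

```python
from dataclasses import dataclass, field

@dataclass
class Relationship:
    parent: str
    child: str
    kind: str                      # "database" (e.g. a foreign key) or "application"
    columns: list = field(default_factory=list)  # (parent column, child column) pairs

# Hypothetical related set: two foreign keys in Db2, plus one relationship
# that exists only in application code, joining a Db2 table to a VSAM file.
RELATIONSHIPS = [
    Relationship("CUSTOMER", "ORDER", "database", [("CUST_ID", "CUST_ID")]),
    Relationship("ORDER", "ORDER_ITEM", "database", [("ORDER_ID", "ORDER_ID")]),
    Relationship("ORDER", "SHIPMENT.VSAM", "application",
                 [("ORDER_ID", "SHIP-ORDER-KEY")]),
]

def by_kind(kind):
    """Filter the visualization down to one kind of relationship."""
    return [r for r in RELATIONSHIPS if r.kind == kind]

# "Drill in" on the application relationships to see the columns involved:
for r in by_kind("application"):
    print(f"{r.parent} -> {r.child} via {r.columns}")
```

Separating the two kinds is what makes the view useful: the database relationships are discoverable from the catalog, while the application relationships would otherwise only be found by reading code.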
Irene: And then the second type of visualization we have on the data side is being able to visualize the actual extract process. That is super important when you’re extracting a subset of data. When we extract a related set of data, we start by extracting one object, then we extract from the objects related to that first object, then from the objects related to each of those, and so on, until we’ve navigated our way through the entire set of related data.
Now if you’ve got a lot of objects in that related set, it’s really easy to end up with way too many records and rows being extracted. Visualization of the extract process lets testers see what happened during each step of the extract, so they can easily pinpoint where the extract went beyond what they were expecting. Armed with that information about where the extract exploded, if you will, they can come back and refine the extract to select just the scope of data actually required to test their application.
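The step-by-step walk Irene describes is essentially a breadth-first traversal of the related set, and the “explosion” comes from fan-out multiplying at each step. Here is a minimal sketch under made-up assumptions: the `RELATED` graph and the `FAN_OUT` factors (child rows pulled per parent row) are invented numbers, not real extract behavior.

```python
from collections import deque

# Hypothetical related set: object -> objects related to it.
RELATED = {
    "CUSTOMER": ["ORDER"],
    "ORDER": ["ORDER_ITEM", "SHIPMENT"],
    "ORDER_ITEM": [],
    "SHIPMENT": [],
}

# Hypothetical fan-out: rows extracted from an object per parent row.
FAN_OUT = {"ORDER": 50, "ORDER_ITEM": 10, "SHIPMENT": 2}

def extract(start, start_rows):
    """Walk the related set breadth-first, reporting rows pulled at each step."""
    steps = []
    queue = deque([(start, start_rows)])
    while queue:
        obj, rows = queue.popleft()
        steps.append((obj, rows))
        for child in RELATED[obj]:
            queue.append((child, rows * FAN_OUT[child]))
    return steps

# Starting from just 5 customers, the per-step report makes it obvious
# which step blew up the extract:
for obj, rows in extract("CUSTOMER", 5):
    print(f"{obj}: {rows} rows")
```

Seeing 5 customer rows turn into 250 orders and 2,500 order items is exactly the kind of per-step view that tells a tester where to add a filter before re-running the extract.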
Both of the visualizations have filters that are built into them, which allow a developer or a tester to easily home in on critical information and easily identify the effect of a change. So, it’s really powerful not only in seeing how data is related but also in seeing how the flow of the extract process itself is working against that set of related data.
Mike: Both tools are clearly very robust with what they offer in terms of functionality. I’m curious what direction these tools are headed in. How can you take something that’s already so awesome and make it even better? Can you provide any insight into that?
Jim: Mike, I think the big push with the program analysis piece is to make it actionable, much like Irene’s piece is. The runtime visualization already gives you a picture of which programs called which programs, which programs did what I/O and which programs made what SQL calls; we’d like to make it much more actionable.
We’d like to add right-click options so that, from the visualization, you could edit the program or edit or browse the Db2 table (some of which you can actually do today). That way you can begin to use the picture not just for understanding but also to drive actions as you work.
Mike: Okay. I can see how that would really increase your agility, too.
Jim: Right. One of our big pushes is what’s known as Team Profiles sharing, where one person does some set-up work and then other users benefit from it. This would be a good example: Topaz for Program Analysis could create a runtime visualization and share it across the team, and every member of the team could then pick it up and right-click to edit a pertinent program, or right-click to edit or browse a VSAM file, that kind of thing. So it becomes a driver across the team for accomplishing common goals.
Irene: It really is true that a picture’s worth a thousand words. We are actively working to integrate visualization into more parts of our product, so that when you create an extract you’ll be able to see how the objects you’ve selected for that extract are related, and even, as Jim said, take actions from that visualization to modify the way the extract will process when you execute it.
Mike: So, bringing this full circle, how does all of this really help developers improve what they do day to day?
Jim: I think Compuware has always been focused on three main components: quality, velocity and efficiency. What we’re trying to do here is level the playing field, so you can get those same attributes from newer programmers that you were traditionally getting from very veteran programmers. We’re providing information in a way that can be easily consumed, so every programmer, at their own desk, can be confident in the work they’re producing based on the input they have.
Mike: Awesome. Thanks a lot, Jim and Irene, for sharing all of this great information. For those listening, I’ll put the link to our “10 Steps to True Mainframe Agility” eBook in the show notes. Otherwise, we’ll keep moving through this series and have a few other experts on to go through the rest of the steps in this series. Thanks, Jim and Irene. I appreciate it.