Matt: Welcome to The Modern Mainframe, I’m Matt DeLaere. Thanks for joining us for chapter two in our series, Building a Better Software Delivery Platform, in which we discuss some of the disciplines and methods behind modernizing your software platform and maximizing efficiency without sacrificing quality. Joining me again is Executive DevOps Solution Architect Rick Slade. Hey, Rick, how are you?
Rick: Hey Matt. Good, man.
Matt: Good, thanks for being here. We also have a special guest for today’s discussion, Compuware Vice President of Product Engineering David Rizzo.
David: Hi Matt, it’s good to be here.
Matt: So, today we will be talking about software delivery system architecture. To lay the groundwork for this discussion, Rick, can you give us a brief definition of what a software delivery system is?
The Software Delivery System
Rick: Thanks, Matt. Yeah, the software delivery system is really nothing more than the software delivery ecosystem or infrastructure and the processes that you use to design, to build, to test, and to deploy software. For the last six months I have been on an absolute mission to get organizations to start thinking about this existing ecosystem as a critical business system. It’s just as important as any other critical business system that’s being supported by your organization, if you think about it. Our organizations’ ability to deliver software is absolutely critical to our success these days.
And look, if you believe that Agile and DevOps methods are the best way to manage and support these existing critical business applications, then it’s my contention: why not leverage those same techniques to support potentially the most critical system within your organization, the software delivery system, or SDS as I like to refer to it? I’m going to go into more detail on this topic in a future episode, but for now, let me briefly talk about the architecture of the modern software delivery system.
You know, 30-40 years ago, all we knew in the development of software was the traditional waterfall method, and it certainly didn’t produce bad code, evidenced by the fact that a significant portion of that code is still very active and running today. I think that’s actually a testament to the quality of work that has been done. So, waterfall as a method doesn’t produce bad software, it just doesn’t produce software at the speed that organizations have to work at today. Again, the problem with traditional software development processes has not been a lack of quality—it’s not to say that bad code hasn’t been written; certainly, I’ve done enough of that in my career—but a lack of speed. So why is it unable to produce at the velocity that organizations need today? I think the problem lies in the total cycle time required to deliver. So, we have to ask ourselves, “What is elongating that cycle time?” And the answer varies.
In my years of software development, what I’ve seen is a tremendous amount of rework that typically occurs within that overall cycle time. And then you have to ask yourself what causes that rework.
Sure, there’s some bad code causing that, but I also think a lot of it stems from miscommunicated and unidentified requirements.
Think about it: has any of us ever worked on a traditional waterfall software project where all the requirements were perfect before the coding effort started? I’ve never been involved in one of those. I haven’t talked to anybody who has. And so with miscommunicated and unidentified requirements, the tendency to get things wrong, and then to have to rework them during the project lifecycle, is fairly typical, and that reduces the time available for other important parts of delivering software, like testing.
I believe that the minimization of miscommunicated requirements and unidentified requirements along with more and better feedback and better collaboration are the primary benefits of Agile as a framework for producing quality software at a faster pace.
And you can’t do this without supporting technology, so there are opportunities throughout the software development lifecycle to gain efficiencies. If you think about some of the key tenets of Agile, one of them is managing and executing work in small batches, or increments. And the reason for that is to get better feedback, better direction, better prioritization before venturing into work that may not be correct or a priority of the effort. Again, that’s typically due to miscommunicated or misunderstood requirements.
Working in smaller batches or increments may seem counter-intuitive to traditional mainframe software developers, whether in development or maintenance of traditional systems. A lot of times, the tasks used to modify and test code changes to existing systems are significant in scope and complexity—they typically are. Because legacy application architectures are typically very tightly coupled and linear in design, it can be hard to segment work on a lot of these systems, but it certainly can be done. Breaking work into smaller units so that it can be completed in two-week sprints can seem impossible, especially if your hair is the same color as mine and you’ve been doing quality work for decades in a way that doesn’t seem to align with small batches of work, when we think about these very large, complex systems.
But look, I certainly believe it’s possible; I’ve seen enough of it over the last 15 years to know it can be done. There will always be naysayers who will tell you that it can’t be done, and I’ll admit there are certain systems and applications where it will be extremely difficult. But I still believe it can be done, based on my experiences over the last 20 years helping organizations achieve this, having seen it in customers who originally thought, “No way.” It does require a resetting of attitudes and habits for large system modification work. Fortunately, I think, today there are tools that can help us break down that work into smaller chunks; tools that can help us execute the development work, the build process, and even the test work in smaller batches, allowing us to do work faster, more often, in an iterative style, which I think produces quality code at a higher velocity. And we’ll certainly be talking about those technologies in much more detail in subsequent episodes.
3 Critical SDS Components
Finally, before I ask David to talk about what he and his team have built here at Compuware, there are three critical software delivery system components that I think are must-haves. First, it begins with automation; automation technologies are critical. Second, technologies that are easily integratable. Not sure if that’s a word, but I think that certainly is key, and I’m sure David will touch on that in talking about our architecture. And finally, the ability to continuously monitor and measure the software delivery system’s key performance indicators. As I think I stated in last week’s episode, I believe automation is the biggest key to effective and efficient software delivery.
You think about all the work that’s required in that software delivery lifecycle, from ideation all the way to production code—a lot of that work can be done by computers, and it’s certainly easier to automate if the work is broken into smaller chunks. Think about all the manual effort typically needed to test a typical mainframe system. Standing up the environment, provisioning test data—those activities by themselves consume huge amounts of time, and we need to start thinking about how we can automate the work performed throughout that software delivery lifecycle.
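As a rough illustration of the point that smaller, well-defined chunks of work are easier to automate, the formerly manual activities Rick lists (standing up an environment, provisioning test data) could each be modeled as a small scripted step that a pipeline runs unattended. This is a hypothetical sketch in Python, not any specific vendor tooling; the step names and result strings are invented:

```python
"""Hypothetical sketch: formerly manual test-setup activities modeled as
small automatable steps that a pipeline can run unattended."""

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    name: str
    action: Callable[[], str]  # each step does its work and reports a result


def run_pipeline(steps: List[Step]) -> List[str]:
    """Run each step in order, collecting a simple audit log."""
    log = []
    for step in steps:
        log.append(f"{step.name}: {step.action()}")
    return log


# Invented examples of activities that used to consume manual effort.
steps = [
    Step("provision-environment", lambda: "test region ready"),
    Step("load-test-data", lambda: "masked data subset loaded"),
    Step("run-smoke-tests", lambda: "12/12 passed"),
]

if __name__ == "__main__":
    for line in run_pipeline(steps):
        print(line)
```

The point of the sketch is only that once an activity is small and scripted, an orchestrator can run it nightly without a person in the loop.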
Second, integration. Again, think about all of those activities within that lifecycle. Honestly, it requires multiple technologies to support—it can be complex and it can be long.
And another huge key to success is minimizing the amount of work you have to do to make those tools work together and to move information from one activity to the next within your lifecycle. Your software delivery system has to be built leveraging supporting technologies that integrate easily and perform well together.
And then finally, measurement. In a future episode, we’re going to dedicate a lot of time to discussing the metrics and KPIs that we’ll need to measure, monitor, and assess on a continuous basis. That information will give us the ability to maintain optimal efficiency and effectiveness in your software delivery systems.
That’s all I wanted to cover for now. At this point I’m going to turn it over to David, or if you guys have questions, I’ll take a breath and we can answer those.
Matt: I do have a couple of questions. I don’t know if they would come better after David, but… So, would a company need to fully replace their software delivery system? Can it be done in fits and starts? Does it have to all be done at once? Can it be incremental?
Rick: Good question. To me, the management of the software delivery system is an evolutionary-type activity. Now, some people will disagree; some people will say you’ve got to blow it up and start all over again. I can certainly see instances where that might be the case, but I think in most large enterprises there is a lot of good work that’s been done over the years, and a lot of that work can be leveraged under a new way of doing things, through the incorporation of Agile and of modern tooling to automate and manage a lot of that work.
I believe that it’s an evolutionary process. And so, when I talk to clients about how to begin their transformation, how to transition to a more Agile, more DevOps-centric way of doing things, I ask them to first start with an assessment of their current-state architecture to understand what methods, technologies, and tools they are leveraging today, and to determine what can be recouped from that existing architecture. Are there tools, are there pieces that we can leverage? Are there methods, from a testing standpoint, that we will be able to leverage?
And my experience tells me that yes, there are. So, the key to this is understanding that existing architecture, and then doing analysis to determine what can be used, what needs to be thrown away, and how it can be executed or leveraged under an automated ecosystem.
So, it’s an evolutionary process to me. I think that the incorporation of new capabilities and features, Matt, into that modern software delivery process should be managed just like any other Agile project, as I stated earlier. That work is defined, and there’s a minimum viable product, so to speak, with regard to the software delivery system. It begins with an IDE for developers: the ability to make code changes or to create code, to test that code, and to commit that code to some source management repository. That might be your minimum viable solution or minimum viable product. And then new features and new capabilities, to make you even more efficient and to help you deliver faster, can be added in an incremental fashion.
I help clients set up sprints. We do planning sessions, just like I imagine David does with his organization in the build-out of our tools. But we’ll execute planning sessions not for the build-out of tools, but for the build-out of the software delivery system, and we’ll define that work in a set of sprints, typically no more than four sprints initially to get started. Then, as an organization begins to manage that software delivery system, again, as a critical business application, those capabilities can be added incrementally, built on top of an architecture that is proven and works well. And so, I think leveraging Agile, doing this in an iterative, evolutionary style is, in my opinion, one of the better ways to manage an organization’s transformation.
Matt: So, adopt an Agile mindset and don’t keep it siloed, basically.
Rick: Yes, that’s exactly right, exactly right. The same methods and techniques that we all agree are working better for the delivery of critical business applications—a retail banking system, a commercial banking system, a manufacturing system—leveraging Agile and DevOps to incorporate change into those systems as an evolution is, I think most will agree, the right way to do it. I’m just saying that those same techniques are just as applicable to an organization’s software delivery capabilities.
Matt: Okay great, thanks for answering. I’ll let you get back into the topic at hand.
Rick: Well, I’m glad today we’re talking about things we want to talk about. I came to Compuware because of the work that these guys, or we, have done, and a lot of that work is attributable to David Rizzo, who is on the podcast with us today. David was one of the first guys I interviewed with when I came to Compuware from IBM, and I was just amazed at his grasp of Agile and building software. That was one of the reasons, actually, that I wanted to join the company. So I asked David to join us today to talk about his role within Compuware and what he has done, along with his team—I’m sure he’ll give credit to all the folks that have been involved in this—to deliver software. What’s Compuware’s software delivery architecture? Its processes, its tools, its organization. What’s it about? What are we doing? And to talk about the measurements that he uses, from an executive management standpoint, to assess our software delivery effectiveness and efficiency. So, Dave, tell us about your role and about Compuware’s software delivery architecture.
David: Well, thank you, Rick, and thank you for that nice introduction. It’s great to have you as part of the Compuware team, giving us insights from your experience into how things are done throughout the industry as we continue to grow.
So for me, I lead the engineering organization for Compuware, which means I’m responsible for the teams that deliver our 57 different products—maintaining them, enhancing them, and making sure that we’re continually meeting the needs of our customers. It’s been a very exciting and interesting time for Compuware over the past several years, thinking about how things have evolved into what we have today. When you talk about the software delivery architecture: I started with Compuware about 23 years ago, and one of the first assignments I was given was to take over building and packaging one of Compuware’s products, because the individual who had been doing that was preparing to retire. That process, when I started it 23 years ago, was to deliver on physical cartridge tapes—something very different. And you talk about waterfall—we were all in waterfall, with its distinct phases: design, code, test, and then deliver. And at that time, it was a two-week process to get from everyone being done coding to a package that could be tested.
Rick: This was after the coding had been done, right? This is the code’s finished, now we’ve got to package it up.
David: Exactly. And it was often looked at as just something that was done; it was not really important, it just had to happen. Moving 20 years into the future, as we went to Agile and started doing our Agile delivery, we made a commitment as a company to deliver every 90 days. We do new feature enhancements, we make updates to our current technology to keep it current, we deliver defect fixes, and we also manage our technical debt. So, we look at four kinds of work that we’re balancing and have to get done, and we do all of that work to be able to deliver a collective package out to our customers every 90 days. Six years ago, when we started on that, our goal was to go from delivering once a year to doing it four times a year, every 90 days. When we looked at that: you talked about the three pieces, and the first is automation. You can’t say enough about what automation does. Automation can be scary for people, because they feel like they won’t have any work to do. But as you realize, automating manual processes and repetitive work allows you to free up your time to do things that deliver value to your end users. And that’s the goal for everybody.
Rick: As I stated earlier, I think it’s probably the key to effective, fast software delivery, and not just from the standpoint of producing code faster. I think it also allows you to take the best practices within your organization: if you can understand how that work is done in the best way and codify it so that it can be executed automatically, then it also improves your quality.
David: Exactly. So, from a Compuware perspective, we built the process understanding that we wanted to continually get better, and we are still always looking to be continually better. So today, six years later at Compuware, every code change that is made and approved to move on by a developer after peer review is incorporated, every night for every product, into a build that goes through a set of automated tests. Ultimately, every morning we have a package for every product that could be released to our customers and end users. We have a few manual processes left, but the big one is pressing the button to release out to our customers. We are still very cautious about that, to make sure it goes through, but we’re actually getting to a point where we will be able to do it automatically using key metrics that come in. Our objectives have been to meet the needs of our customers, to be focused on doing the things that are most important to them, to benefit our end users, and to do that as efficiently as possible.
And what that means is, whenever we do something, once we’re done with the coding piece and getting it good from that perspective, my belief, and our goal, has been that we should be able to give that update to the customer in a formal package in near real time. Now obviously, it has to go through testing and so forth, and today we do that overnight. So essentially, any change to a product can be delivered to a customer, fully validated and tested, in less than 24 hours once the coding is done. How we do that is obviously through a huge amount of automation. After a developer is done developing the code, we have an internal system that we’ve built where we move changes through, and it integrates with ISPW and Bitbucket, our two source change management systems—one for mainframe code, one for non-mainframe code. Once developers have committed that code in the source change management system and put it at the appropriate level in those systems, everything after that is fully automated. There’s no intervention needed to get to the point where we get a report the next morning that gives us a look at the quality. We have dashboards that tell us, for every product and every test suite that was run, how many tests were run, how many passed, and how many failed, and we have criteria set up as to what those numbers need to be to allow us to move that package and push it out to our end users, if we wanted to do that. And we use traditional red, yellow, green to look at certain things and to give us that confidence.
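The red/yellow/green criteria David describes could be sketched, purely as an illustration, as a small gate: each suite’s pass rate maps to a status, and the nightly package is releasable only if no suite is red. The thresholds and status logic below are invented for the example, not Compuware’s actual criteria:

```python
"""Illustrative sketch of a traffic-light release gate driven by
per-suite test results. Thresholds are invented for the example."""


def gate_status(passed: int, run: int,
                green_at: float = 1.0, yellow_at: float = 0.95) -> str:
    """Map a suite's pass rate to a red/yellow/green status."""
    if run == 0:
        return "green"  # nothing ran (no changes), nothing to judge
    rate = passed / run
    if rate >= green_at:
        return "green"
    if rate >= yellow_at:
        return "yellow"
    return "red"


def releasable(suites: dict) -> bool:
    """The package may ship only if no suite is red.

    `suites` maps suite name -> (tests passed, tests run)."""
    return all(gate_status(p, r) != "red" for p, r in suites.values())
```

In a real pipeline the dashboard would aggregate these statuses per product; the point here is just that “criteria set up as to what those numbers need to be” can be codified so a release decision is computable rather than judged by hand.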
And so, we use things like, as I said, ISPW and Bitbucket. We have Jenkins, which is the orchestrator in the middle; it identifies any changes that were introduced and initiates the automated test suites that need to be run based on the changes that were made and put into the source change management system that day. If no changes were made in a product, we don’t run automated tests on it, because we’d be testing the same thing that was tested previously. So, we identify what’s changed through Jenkins, and we get feedback using Sonar and other tools. Then we have our Hiperstation automated testing tool on the mainframe, which executes the scripts that need to be run there; those results are fed back to Sonar. We can get all of that information, so we know all the pass/fail rates and can look at those. So, we have a delivery system that is built on automation. There’s integration between our products; the UI products interface with the back-end products, so that integration of our products requires integration of the delivery process, so that when we put all the things together, we make sure we test the changes on the front-end side and the back-end side, and that all of those integrations work together. A lot of that orchestration is done through Jenkins, and we feed a lot of our results into our zAdviser product as well.
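The change-driven selection described here (only products with commits that day get their suites run) can be sketched as a simple mapping. This is a toy stand-in for what an orchestrator like Jenkins would compute from the source change management system; the suite names are invented placeholders:

```python
"""Toy sketch of change-driven test selection: schedule suites only for
products that had commits today. Suite names are invented placeholders."""


def suites_to_run(changed_products: set, suite_map: dict) -> list:
    """Return the test suites to schedule tonight.

    `changed_products` is the set of products with new commits today;
    `suite_map` maps product -> its automated test suites."""
    selected = []
    for product in sorted(changed_products):  # deterministic order
        selected.extend(suite_map.get(product, []))
    return selected


# Hypothetical mapping; the products are real names from the discussion,
# but the suites are invented for illustration.
suite_map = {
    "ISPW": ["ispw-unit", "ispw-regression"],
    "Hiperstation": ["hiperstation-scripts"],
    "zAdviser": ["zadviser-unit"],
}
```

An unchanged product yields no suites, which captures the rationale David gives: rerunning a suite against identical inputs tells you nothing new.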
So, that’s how we go through our delivery process, and we treat it as a product. For all of our products, we use JIRA to track issues. Any issue that comes in for a product, whether a customer reported it or we’re going to do a new feature, gets a JIRA. We have JIRAs for our software delivery process too: if we’re going to automate something, if we’re going to add a new tool, we create JIRAs for that. We treat it just like we treat a product and work through the process just as if we’re delivering something to an end user, since it facilitates our delivery to end users. So we do planning meetings, we do the whole process; we have a product owner that looks after making sure we’re moving those things forward. Because, as I said, today we can do it overnight; we have automation in place, with a couple of manual steps, and we’re continually looking at how to make it better: can we get those tests running faster, can we be more efficient? So that’s basically what we have in place for software delivery, how we’ve done it, and how we look at it. And again, it’s evolved over time.
Rick: Yeah, it’s an amazing system. It truly is a critical business system for our company. I think software delivery, regardless of what business you’re in—if it’s not already critical, it should be critical for all organizations, because the use of software to communicate with our customers, our employees, and our partners is absolutely critical. So, what I heard there is: in our shop (it can be different depending on the requirements of every shop), we’ve got a 24-hour continuous integration cycle where developers spend their days working on modifications, those are applied daily, and then, as part of an evening batch process, our system takes those changes, does the build, does the appropriate testing, and then delivers or deploys that, as you said, potentially to a production state. Now, we manage our releases and how that code goes out. But the ability to deliver that code almost in real time is really quite amazing.
Keys to Efficiency & Effectiveness
A Common Goal
Let me ask you a question about what you believe are the keys to maximizing the efficiency and effectiveness of our software delivery systems, which obviously are world class; you just don’t see this in a traditional mainframe software provider. So, what do you think are the keys to providing the efficiency and effectiveness that you are delivering for Compuware?
David: I think one of the big keys is that it is looked at by everybody in the organization as a critical function. When we are adding a new feature that is going to change our install process or our build process, the people assigned to our delivery teams will become part of the team that’s working on those features, and they will scrum with them to talk about how things are going to be implemented in the products in ways that affect delivery. And so, it’s part of what we do. It’s not an afterthought where we just have to put the bits and bytes together and get them shipped out. It’s really thinking about: we’re going to have to test this, build this, and deliver this, so how are we changing and impacting that as we develop? And I think that’s one of the keys that has helped us become better, because we don’t look at it as somebody else’s problem.
Just like testing, we don’t look at it as a separate piece that’s done. And using automation, integrating with modern DevOps tools, we use the same tools wherever possible across the enterprise and across the platforms. Jenkins works across the whole enterprise; we’re not using a different tool for mainframe and non-mainframe. We’re consistent in the tools we use where we can, and we’re making sure that we’re using those to the best of their abilities and being smart about what will give us the best ability to deliver in the future. So, we choose tools, we evaluate tools, and we bring new tools on all the time. As you said, it’s continuous learning; to maximize your efficiency and continue to improve, you have to be looking at what’s new and looking to update as needed. Not changing for the sake of changing, but changing where it can truly impact the process in a positive way.
Rick: Continuous improvement is not just a statement; it’s something that I think all organizations need to be constantly doing to improve their effectiveness, because our competition certainly is trying to do that.
The Automation Process
You talked a little bit about automation and what a big part it is of our software delivery system; again, it’s almost completely automated. And you don’t do that overnight. I tell my customers you can do it incrementally, but can you talk a little bit about the process over time that allowed us to automate? Were we able to leverage existing work and existing processes and then incorporate them into an orchestration engine like Jenkins (I know we use Jenkins)? What does that process look like for organizations that are looking to automate? How did we do it? How long did it take? I’m sure we’re probably still doing things, but what does that process look like?
David: Sure. As far as how long it took: when we’re done, I’ll let you know. It’s obviously continuous, but when I look back at how we started, there’s the question of, “Do you go and just build a whole new system, or do you use what you have?” The approach that we took, being from Detroit and liking car analogies: basically, we were given the direction that we couldn’t stop the process, as most companies can’t just stop and wait for things to be done. So, we changed the wheels as we were driving down the road. We looked at the processes we had in place. In the beginning, we took some resources and increased the size of our delivery team for a period of time, and we simply said, “Let’s look at a manual process that can be automated. Let’s think about what takes us the most time, and can we automate that?” And we went piece by piece, beginning to automate what we could and looking at how we could leverage the resources we had to inject automation. So it was continuous. Today I can talk about how we deliver, and that’s really a process that we’ve been able to do for about a year now, so it took us a long time to get to that final state. We did it a product at a time: we took one or two products, or a product family, and then we leveraged what we learned and built that out for others. And the key to being able to do that was consistency, getting a consistent way to do things, so that with over 50 products, we don’t have 50 different ways to do it. We looked at which process was going to give us the most efficiency and was what we felt was best, we went with it, and then we spread it to the other products where we needed to, so that we could leverage the best practices.
Because Compuware, like many mainframe companies, was siloed and working in siloed ways, we took the best practices and implemented them as rapidly as we could to meet the needs and get us to where we are today. And we always look at things in terms of, “Let’s just be a little better tomorrow than we are today.” We didn’t set out with a goal five or six years ago; I didn’t say I wanted a less-than-24-hour turnaround to get a fix out fully tested. I didn’t define the end state that we have today, because I recognized it was just too big to chew.
You want to deliver as rapidly as you can and meet the needs of your customers. Look at step one, look at step two, and just continuously improve and ultimately get better. And, again, if you’d asked people five years ago whether we’d ever get to a state where we could build, test, and deploy every product every night, they would have thought we were crazy, and now we can do that because we’ve taken it a piece at a time.
The Role of Next-Generation Developers
Matt: David, I have a question. In last week’s podcast, we talked a little bit about the evolution of the culture and how that would need to change; Rick mentioned earlier the resetting of attitudes and habits; and I’ve talked to you in the past about working with college students and new graduates coming into the industry, and mentorship, and how, when they’re paired with veteran developers, they can learn from each other. Do you think that the younger generation is kind of helping propel the Agile methodologies and helping to verify for the more veteran developers that this is the way to go?
David: That’s a really interesting question. Certainly, when you look at people who have graduated from college in the last five years, their mindset is what I’ll call a “smartphone app mentality”: a little bit of, “It gets updated every night, what’s the problem?” They’re very accustomed to the speed at which things change, so they bring the mindset that you can continuously change, and they’re much more used to rapid change in their lives because that’s how they’ve been conditioned; that’s how they were brought up. So, I think it definitely has had an impact, because they have what I’ll say is less fear. Some will say veterans are more methodical and thoughtful about how things are done, but sometimes it just comes down to this: the faster you move, sometimes it gets a little scary, and you want to make sure you’re delivering the best for your customers. So, I think they do help to eliminate some of that fear and show that you can move a little bit faster, that you can do things in a different way. And they are geared much more towards automation. Again, it’s been part of what they do; they’re very keen on writing a script that will do something for them rather than doing it by hand. Typically, they won’t do the same process five times; they’ll write something that will do it for them.
Matt: That’s great. Well, this has been a very interesting topic, a really good conversation. David, thank you for joining us.
David: Thank you for having me. It’s been very good. I appreciate it.
Matt: And thank you to our listeners. If you have questions or would like to learn more about software delivery system architecture, please join us Friday, May 15th at 11 AM EDT for Office Hour with Rick Slade. Rick will be answering your questions live. You can find the link in our show notes or at https://www.compuware.com/goodcoding, where you can also catch up on past episodes and look for future episodes. So, until then, Rick, can you take us out?
Rick: Yeah, thanks, Matt. Thanks for the time today, and David, thank you very, very much. I can honestly say I’ve learned so much listening to you, but even more from seeing the results of your work. David’s a doer, and it’s evidenced by the fact that we’re dropping code every 90 days and have been doing it for a long, long time now. It’s fun to watch. It’s fun to be a part of. So, thank you, David, for all you do for Compuware and for our customers. I appreciate you spending time with us today. Look, I’m excited about these podcasts; I hope you find them interesting and will continue to join us. For today, that’s it, and remember, good coding. Take care, folks.