
Mainframe modernization: what changed to make it possible now?

By Jeff Kempiners

For the last thirty-odd years, it’s been safer and cheaper for many businesses to keep operating legacy mainframe systems than to modernize.

But that was never going to be the case forever. Where we stand today in the history of mainframe modernization, we’re past the pivotal moment. The balance of risk and cost has tipped: in most cases, modernizing is now the safer, more economical option than keeping the status quo.

You may be skeptical if you were involved in a mainframe modernization project in the last 10 or 15 years and have vivid memories of the challenges (and perhaps spectacular failures). So, let’s go back through a bit of history and see how we got to this exciting moment.

Note: When I say “we,” I mean “all of us working with computing technology,” from the advent of mainframes to today. Getting here took the combined effort of generations of IT professionals across many companies and functions.

Mainframes are not bad systems

First, I want to emphasize that mainframes are not bad systems. They were built in the best way we knew how to build them at the time, with patterns that are very different from today, for all the right reasons. Mainframes have been at the heart and soul of a lot of organizations.

Back in the day, the mainframe was the only system. Everything happened inside it, and its many terminals were the only point of access. It wasn’t designed to talk to other systems. It’s important to understand this siloed design pattern as we trace the evolution of what came after.


The Y2K bug exposes the need for modernization

So, we built these mainframe systems knowing they would evolve, expecting we’d modernize as new systems became available. We didn’t anticipate how hard that would be.

We've been talking about modernization since the Y2K bug, which exposed some of the limitations of mainframes:

  • It’s difficult to find people with the skills to maintain them.
  • Documentation is missing or incomplete.
  • They're not very flexible, with lengthy dev and test cycles to deploy changes or enhancements.
  • They require maintenance windows to execute large batch programs.
  • Full testing environments don't usually exist. There might be a partition for a test environment, but rarely are there separate dev, staging, and production environments like we have now.

For the Y2K bug, we invented code scanners to help find where we needed code updates. The code scanners would read the COBOL line by line, identify places where a two-digit year occurred, and alert a human to come and figure out what change to make. Where we saw patterns in the required changes, we created code injectors that would make the change automatically, but still bring in a human to do a last verification. And that worked. We made a robot that reads COBOL.
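
Here’s a minimal sketch of that idea in Java. The real scanners were far more sophisticated (and mostly predate Java); the patterns and names below are my own illustrative assumptions. It reads the source line by line, matches heuristics for two-digit year fields, and flags each hit for a human to review:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.regex.Pattern;

    public class Y2kScanner {
        // Heuristics (illustrative, not real scanner rules): a field whose name
        // mentions a year declared as PIC 99 or PIC 9(2), or a year-ish name
        // appearing near a two-digit literal.
        private static final Pattern TWO_DIGIT_YEAR_FIELD =
                Pattern.compile("(?i)\\S*(YY|YEAR)\\S*\\s+PIC\\s+(99|9\\(2\\))");
        private static final Pattern YEAR_NEAR_LITERAL =
                Pattern.compile("(?i)\\b(YY|YEAR)\\b.*\\b\\d{2}\\b");

        public static void main(String[] args) throws IOException {
            List<String> lines = Files.readAllLines(Path.of(args[0]));
            for (int i = 0; i < lines.size(); i++) {
                String line = lines.get(i);
                if (TWO_DIGIT_YEAR_FIELD.matcher(line).find()
                        || YEAR_NEAR_LITERAL.matcher(line).find()) {
                    // Flag for human review; the scanner changes nothing itself.
                    System.out.printf("%s:%d: possible two-digit year: %s%n",
                            args[0], i + 1, line.trim());
                }
            }
        }
    }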

From code converters to replatforming

Around 2002, the scanners and injectors started to get turned into code converters. They could do a syntactical line-by-line port from COBOL to Java and from COBOL to .NET intermediate language. After the line-by-line conversion, we would spend massive manual effort tweaking the output until it ran, and it worked. We could get it to compile, and we could throw it on a server.
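
To see why the result ran poorly, here’s a toy illustration of what a syntactical line-by-line port looked like. This is my own sketch, not output from any real converter: COBOL working storage becomes mutable static state, and control flow is mirrored statement for statement, with nothing idiomatic about the result.

    public class LineByLinePort {
        // COBOL working storage becomes mutable static state:
        //   01 WS-TOTAL   PIC 9(9).
        //   01 WS-I       PIC 9(4).
        //   01 WS-AMOUNT  PIC 9(9) OCCURS 10.
        static int wsTotal;
        static int wsI;
        static int[] wsAmount = new int[11]; // index 0 unused, preserving 1-based subscripts

        public static void main(String[] args) {
            wsTotal = 0;                           // MOVE 0 TO WS-TOTAL.
            wsI = 1;                               // PERFORM VARYING WS-I FROM 1 BY 1
            while (!(wsI > 10)) {                  //   UNTIL WS-I > 10
                wsTotal = wsTotal + wsAmount[wsI]; //   ADD WS-AMOUNT(WS-I) TO WS-TOTAL
                wsI = wsI + 1;
            }                                      // END-PERFORM.
            System.out.println(wsTotal);           // DISPLAY WS-TOTAL.
        }
    }

Multiply that pattern by millions of lines and you get code that compiles and runs but carries the mainframe’s shape everywhere.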

The problem was that we couldn't put it on a machine big enough to make it run well.

Then around 2008, we started to get better at converting the data. Most people were dropping VSAM and hierarchical/sequential data storage and moving it into DB2. Relational databases had matured. We developed architectures where we would replicate data to multiple locations, avoiding round trips from the UI to the mainframe where we could. We were getting the mainframe to work for us and creating flexibility around it, but it was kind of a hack.
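
As a rough sketch of that data move, converting a VSAM record means parsing fixed-width fields (including COBOL’s implied decimals) and loading them into a relational table. The copybook layout, table, and column names here are hypothetical:

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class VsamToDb2 {
        public static void main(String[] args) throws Exception {
            // A fixed-width record as a copybook might define it:
            //   05 CUST-ID    PIC 9(6).
            //   05 CUST-NAME  PIC X(20).
            //   05 BALANCE    PIC 9(7)V99.  (implied decimal point)
            String record = "000123" + String.format("%-20s", "JANE DOE") + "000045099";

            int custId = Integer.parseInt(record.substring(0, 6));
            String custName = record.substring(6, 26).trim();
            // PIC 9(7)V99 stores no decimal point, so shift two places: 000045099 -> 450.99
            BigDecimal balance = BigDecimal.valueOf(Long.parseLong(record.substring(26, 35)), 2);

            // JDBC URL passed in as an argument, e.g. a DB2 (or other relational) target.
            try (Connection conn = DriverManager.getConnection(args[0]);
                 PreparedStatement stmt = conn.prepareStatement(
                         "INSERT INTO CUSTOMER (CUST_ID, CUST_NAME, BALANCE) VALUES (?, ?, ?)")) {
                stmt.setInt(1, custId);
                stmt.setString(2, custName);
                stmt.setBigDecimal(3, balance);
                stmt.executeUpdate();
            }
        }
    }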

By 2013, the code converters got smarter and more sophisticated. They were breaking the COBOL down into blocks and converting it into blocks of Java; instead of a line-by-line port, we were getting output with efficient patterns and structure.


And we found a way to get off the z/OS operating system and create a much more portable wrapper around the mainframe. We call that a replatform now. We could scoop up an entire mainframe ecosystem, run it inside what is effectively a big container, and then move that container into the cloud. Then we could get rid of the MIPS costs and turn off the mainframe, which is often the most expensive machine in the data center. This was a big epiphany, and lots of replatforming happened between 2013 and 2018.

The balance tips in favor of mainframe modernization

Then, suddenly, a beautiful thing happened. Moore’s Law caught up. Processing power grew exponentially to the point where the converted code could be put onto a regular server, and the CPU was now powerful enough to handle the inefficiencies of Java and .NET compared with compiled COBOL on mainframe hardware. In short, it was fast enough to work.

Even for the somewhat intelligently converted COBOL (we sometimes called this “JOBOL”), processing speed had gotten so good that it didn’t matter. It was just as fast. And suddenly the companies that had been building these block-based code converters had something.

They further invested in their products, and now some of them break COBOL down into a semantic model and generate new Java or C# from it. Think of it like code conversion in the age of ChatGPT. The code converter reads and understands the COBOL code, and it recreates a functional equivalent in a modern language. It even makes suggestions about how to break that down into services. It has proper code comments and intelligent variable naming. It’s good.
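
For contrast with the line-by-line port sketched earlier, here’s the kind of output I mean, using the same toy computation. This is my own illustration (the COBOL source names are made up), not any vendor’s actual output; the point is that the converter understands what the paragraph does and regenerates it as idiomatic, commented code:

    /**
     * Totals the line-item amounts for an order.
     * Regenerated from the ADD-AMOUNTS paragraph of ORDERCALC (illustrative only).
     */
    public class OrderTotals {

        /** Returns the sum of all line-item amounts. */
        public static int totalAmount(int[] amounts) {
            int total = 0;
            for (int amount : amounts) {
                total += amount;
            }
            return total;
        }
    }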


The cloud era is forcing the issue

When COVID arrived in 2020, many people signed up for three-year commits to the cloud. We lifted and shifted as fast as we could, and all the easy stuff went first. The workloads and data that were moved are all running well in the cloud, most of them in your IaaS flavor of choice.

That brings us to today, as those three-year commits expire in 2023. You’ve got a bunch of stuff in the cloud. You’re up for renewal. You look at your data center, and what’s left is the big stuff. The hosts, the monoliths, the gooey systems that rely on a hardware spec that doesn’t migrate easily unless you replatform or refactor. Oracle, various mainframes, AS/400, and any system that sits next to them that knows how to talk their language and translate it into what everybody else speaks: APIs. That’s the hard stuff. That’s the future.

The floodgates have opened. For the first time in mainframe modernization history, there is a path from monolith to digital mesh.

So, knowing what’s changed, how do you get a return on investment from mainframe modernization? I’ll cover that in the next article. And in the meantime, if you want to explore the potential for your organization, you can get in touch to set up a conversation or a workshop with us.

Jeff Kempiners, managing director at Slalom, has spent most of his career as a software engineer and integration architect, dating back to the early 90s. Throughout his technical career, he’s worked with large enterprises to modernize technology, including all types of host systems. Jeff is passionate about application and data modernization, and he leads the Amazon Center of Excellence Global team for Slalom, which includes modernization, enablement, security, optimization, and advisory services.