In this podcast, Calvin Fudge of DataDirect and Susan Eustis of WinterGreen Research discuss the DataDirect Shadow TCO Calculator for measuring the cost and resource savings of zIIP exploitation and the business value the calculator brings to mainframe users. The podcast lasts for 16:59.
Click on the following link to listen to the podcast: http://blogs.datadirect.com/media/TCO_Calculator.mp3
Hello everyone, and thank you for joining us on the Progress DataDirect DataConnections blog. Today we’re joined by Susan Eustis, president and senior analyst at WinterGreen Research, and Calvin Fudge, the Director of Marketing for the DataDirect Shadow Product Group. We’re here to discuss the creation and availability of the DataDirect Mainframe TCO Calculator, and the impact it will have on mainframe zIIP users and products.
So let me start by asking Calvin to give us a little bit of background or overview about the DataDirect Shadow product, and why you decided to engage with WinterGreen Research to create the TCO Calculator.
Sure, Mike. And welcome Susan, it’s good to have you on the call. The Shadow product is mainframe middleware software. What that means is it’s a product that allows the mainframe to integrate with other platforms that you commonly would integrate with, something like a Windows platform or a Linux platform, and so on and so forth. And you need that integration so that these proprietary databases or applications that are on the mainframe can interact seamlessly with other applications. So when you go up to an ATM and you want to check your bank balance, the program that runs that ATM will interact through a piece of middleware, go back to the mainframe and then come back. Shadow is that piece in the middle.
The reason we ended up working with WinterGreen Research on the creation of this TCO calculator really starts at the mainframe. The mainframe has a reputation, and I don’t know that it’s deserved, but it has a reputation for being costly. The processing on a mainframe, every bit of it is charged. If you turn the machine on, it’s charged. It’s not like when you work on a Windows platform or a Linux platform where you have pretty much unlimited processing anytime you want and it doesn’t get charged; you just buy the machine and the memory. On a mainframe everything you do is charged to something. So there is a need to always look at how you can lower the processing cost associated with various types of mainframe applications and processing. So on the integration side of supporting this new service-oriented architecture that everyone is using, there’s a lot of processing used to turn a mainframe application or a query to a mainframe database into a reusable web service, but the benefit is really worthwhile. It makes the mainframe much more of an enterprise player, and so organizations are definitely going to SOA as a way to integrate the mainframe.
They are also looking in these economic times, how do we lower the cost associated with something like service-oriented architectures and web services on the mainframe. So we came up with a technology within our Shadow product at DataDirect to leverage a little piece of hardware on the mainframe called the specialty engine. It’s really a coprocessor, similar to the general purpose processor on a mainframe. And our software allows you to take that coprocessor and use it to redirect processing to areas on a mainframe that aren’t charged. So essentially you can take what would be a charge – what would be capacity that you’re being charged for – and move it into an area that you’re not being charged for it. The ability to use a zIIP specialty engine can be a real benefit to an organization wanting to lower their mainframe costs.
WinterGreen had expertise in measuring mainframe total cost of ownership when it comes to the entire ecosystem of utilizing a mainframe: the energy cost, the personnel expense, the hardware itself. Every burdensome cost associated with running a mainframe, WinterGreen had tracked in their ROI engine. And what we wanted to do was leverage that engine to measure a very precise area of mainframe cost, and that’s really the integration-related cost. In particular, we wanted to look at the zIIP specialty engine – that stands for System z Integrated Information Processor, correct me if I’m wrong there, Susan – and the benefit associated with moving workloads to the zIIP, and how that affects mainframe TCO. Long story, but that’s how we ended up getting our collaboration in place with WinterGreen Research.
I think this is probably a good time to ask Susan to give a little bit of a background on WinterGreen and their ROI Engine. So Susan, can you tell us a little bit about your organization?
Thank you, Calvin. It’s so nice to be on this call with you today. We have a research organization, and we have 35 distributors worldwide. We produce about 100 studies a year, market research studies on a whole range of topics. And we developed an ROI Engine; I started putting investments from the research organization into automated processes to build something so that we could make the analyst process a bit more automated, bring down the cost for people in terms of getting good analytical data for software, something you might get from an independent consultant that came in. We’re trying to create this ROI Engine so people can really understand what the costs are in a very concise and very knowledgeable way. So we’re trying to do knowledge transfer out to the industry the same way we write market research studies.
And we have a lot of clients, we’ve been going in and out of IT departments on quite a regular basis for the last year with our mainframe analysis, and we look at – as Calvin said – labor, network service level availability, security, electricity, floor space, scalability, hardware, software costs in the context of what’s it look like in a distributed environment, what’s it look like on the mainframe. Interestingly enough, it’s ten times cheaper on the mainframe even though people get hit with charge backs, and they feel like it’s more expensive. In fact, the mainframe is really a better way to do your computing, your automated processing when you look at all the costs.
So we helped IBM. They had 360 datacenters; they moved into 30 mainframe centers. Obviously they could go either way; they could use mainframes or they could use servers. They build really good equipment on either side, and they saw that there was a huge cost advantage to the mainframe. But once you get things on the mainframe, and you have a services-oriented architecture – which we’ve worked in for a lot of years, looking at the invocations and the new workload that’s been brought by the internet – we could see that the zIIP engines – these specialty engines that Calvin talked about – offer tremendous advantages to people in terms of cost. So that’s why we wanted to work with you.
I think when we started looking at specialty engines, it was probably about two years ago. And really there was only a handful of people out in the industry who even understood what these engines did. Primarily the interest was around the IFL – the Integrated Facility for Linux specialty engine – because there was a lot of excitement around running Linux on the mainframe. Then IBM came out with the zAAP specialty engine, which focuses on Java – it allows you to run Java on the mainframe.
So the zIIP is really exciting. It was originally introduced to support DB2 workloads, allowing you to do large queries related to business intelligence, or to support ERP packages that involved DB2. And so for probably the first 18 months or so of the zIIP’s life, anyone who knew about it understood it to be related to DB2. But increasingly, with middleware like Shadow from DataDirect, vendors have found ways to open up the zIIP, if you will, and use it for different types of workloads. And with Shadow, because we’re focused on integration, one of the biggest areas of need that we saw was: if you could take this SOA capability that’s really turning the mainframe into an industry standard server, and run those costs through the zIIP, what a benefit that would be to organizations. They could accelerate their SOA initiatives, and at the same time not have the burden of increased cost. Because anytime you run up against increased cost, at some point it’s going to slow down that acceleration. So with Shadow we found a way to open up the zIIP and run workloads for CICS web services, IMS web services, Natural web services, and even queries to mainframe databases via a SOAP interface or a SQL interface. So all of the workloads that Shadow handles – and believe me, it’s a wide variety: anything that IBM does on the mainframe, that’s DB2, VSAM, IMS, CICS; anything that CA does, which would be the IDMS products; and anything that Software AG does, Adabas, Natural – all of those proprietary environments can run through Shadow, run on the zIIP.
And Susan, I know you know this because we’ve been doing this on the calculator, but right now we’ve got customers that are using Shadow and offloading 99% of their SOA-related integration costs. Now that’s amazing. Essentially, that’s breaking through the SOA price barrier. It allows you to run with almost no cost associated with integrating your mainframe with service-oriented architectures. This is huge. I really believe this is a paradigm shift in the market, one that will allow the mainframe to continue to accelerate forward and really be the platform of choice when you’re talking about service-oriented architectures supporting financial services applications, insurance applications, manufacturing, government – all the places where the mainframe is typically the dominant platform.
I go on and on about Shadow, but really what I wanted to talk about a little bit was, Susan, you’re out there interacting with various customer organizations. Can you give us a little feedback about what you’re seeing in terms of mainframe usage, cost concerns, and also, how organizations are going about trying to capture some of the understanding involved?
Well, the new workload on the mainframe is the biggest issue, just plain handling new workloads. The services-oriented architecture promises to be a change, a tremendous change, in how IT manages programs. The applications are pretty well siloed. They do what needs to be done now, but what organizations are finding is that they need the new SOA to manage the internet, to open the channel, and then to be able to communicate between their siloed applications. So that’s where we see the growth in the industry, and that’s SOA – services-oriented architecture – and the related invocations. And what we see is IT departments – all over the world, really – struggling to figure out: what’s my platform decision? How much is it going to cost? Where should I put these SOA applications? How am I going to integrate everything?
So I was so interested when I started working with you, because I go in and out of these IT departments, and I could see that Shadow brings a lot to almost everyone. And what was interesting to me is I’m an analyst, and I get to look at the numbers. And I could see that as you move 99% of the workload off the GPP – the general purpose processor of the mainframe – and move it onto the zIIP engine, my goodness! It’s exciting to start putting costs on that. And we did it by looking at invocations and relating invocations back to MIPS – millions of instructions per second – and MSUs. And we could start correlating the number of SOA invocations to the usage on the mainframe both on the GPP, which costs about $3.50 a day, and then if you move that off onto something which is $.02 a day, a zIIP engine. Your costs go to almost nothing. So that’s fun to be able to document.
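The arithmetic Susan describes can be sketched in a few lines. This is a purely illustrative back-of-the-envelope model, not the calculator itself: the daily rates ($3.50 versus $0.02 per unit of capacity) and the 99% offload figure come from the discussion, while the workload size of 1,000 units is a made-up assumption.

```python
# Hypothetical sketch of the cost comparison described above. The per-unit
# daily rates and the 99% offload fraction are from the podcast; the
# 1,000-unit workload is an illustrative assumption.

GPP_COST_PER_UNIT_PER_DAY = 3.50   # general purpose processor (charged)
ZIIP_COST_PER_UNIT_PER_DAY = 0.02  # zIIP specialty engine

def daily_cost(capacity_units, offload_fraction):
    """Split capacity between GPP and zIIP and price each portion."""
    on_ziip = capacity_units * offload_fraction
    on_gpp = capacity_units - on_ziip
    return (on_gpp * GPP_COST_PER_UNIT_PER_DAY
            + on_ziip * ZIIP_COST_PER_UNIT_PER_DAY)

baseline = daily_cost(1000, 0.0)    # all SOA work left on the GPP
offloaded = daily_cost(1000, 0.99)  # 99% redirected to the zIIP

print(f"baseline:  ${baseline:,.2f}/day")
print(f"offloaded: ${offloaded:,.2f}/day")
print(f"savings:   {1 - offloaded / baseline:.1%}")
```

Even with illustrative numbers, the shape of the result matches what Susan observes: once nearly all of the workload is priced at the zIIP rate, the daily cost collapses to a small fraction of the baseline.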
And what we’ve done is developed a software tool that’s credible. The way you make these numbers credible is to expose them out to people in a manner that’s simple enough that they can understand what the numbers are, but there’s no black box. In other words, if you present numbers in a black box, no one understands them; no one believes them. So what we’ve tried to do is develop a tool where an ordinary analyst can grasp what the numbers are, and look at it in context.
And I was going to say, that’s a very good point, because when you set about developing a TCO calculator, you start with some fundamental mathematics to set a framework in place: how do you actually capture usage? How do you actually make it an accurate profile of what a customer might experience in their environment? And working with WinterGreen Research, we built a framework for capturing usage across various sizes and various types of web services. Because if you can imagine, with web services one size does not fit all. Some are very small, some are very complex. And at the same time, some web services involve transforming a mainframe screen. Some involve transforming a pure piece of business logic on the mainframe. Some involve a request for a web service off the mainframe. So by setting up the framework to capture usage accurately, we could then use mathematics under the covers to capture offloads for the conversion of MIPS to MSUs. We could capture the amount of offload that Shadow does, because Shadow – because of the way it’s architected – allows you to move a certain percentage of that workload to the zIIP. And we tested the end result over and over again against customer input – you know, what are they experiencing?
There are some internal capabilities within the z/OS operating system. There’s a thing called Select CPU – correct me if I’m wrong, I think that’s the particular command – but it allows you to gauge zIIP-eligible workloads. So we’ve looked at that in terms of validating our results. Also, we have mainframes in our organization. So we ran tests on our own mainframes in our mainframe labs in Sugar Land and in Raleigh-Durham to see what we would get. And lastly, we actually went out to customer installations that have Shadow and are using it for zIIP offload to gauge what they’re getting. So for it being an online tool, the results out of it, while they are not guaranteed, have been validated in about four or five different ways to ensure that what you’re seeing on the calculator is an accurate representation of what you might see if you took Shadow and applied it to the zIIP in your environment.
And so, rather than just listening to Susan and me talk about it, what we would like to do is invite you to set up a test drive of this calculator. It’s really a simple process. We’ve put a little Flash widget on the DataDirect website that you can look at and play with. It’s not the calculator; it’s a little toy. But you can go to that page on the DataDirect website and click to reserve a spot for a test drive. Someone will walk you through the calculator. So if you want to do that, go to DataDirect Shadow TCO Calculator.
Go there, click on the URL, take a look, play with the Flash widget – there are some sliders there you can check out, some simplistic modeling for usage. You’ll get a feel for what we’re talking about. Then click to reserve a test drive. We’ll set up time for you to take your organization’s own information, plug it into the calculator, and get results back that tell you how much you’ll save per day, how much you can save the first year, and there are even modeling capabilities that let you carry this out over a five-year period. So you can actually use this calculator as a planning tool. There’s no obligation involved. It’s just something we’re trying to do to help you understand the benefit that owning a zIIP specialty engine can bring to you – the benefit it means in terms of enabling your mainframe SOA initiatives to go forward. And sure, we’re going to sell some software in the process. You can’t get the full benefit of the zIIP unless you utilize something like Shadow.
Susan, is there anything else that you’d like to add in our comments today?
I was so impressed with the level of effort that we both went to to validate what we’ve presented in the models. I thought that this lends a lot of credibility to the system.
And you know, the nice thing is that we’re also taking this model forward to create a SQL version of the calculator. What that means is, right now the calculator gauges your usage related to web services. We have in development a calculator that will also gauge your usage in terms of SQL data queries, or data queries to non-relational mainframe databases, to show you just how much you could potentially save by offloading various query-related workloads to the zIIP.
Well, if that’s all the comments, I want to turn this back over to Mike. And Mike, I really appreciate you setting this up today.
Thank you, Calvin. And thank you Susan as well for sharing the partnership between DataDirect Technologies and WinterGreen Research. What a great project in creating the TCO Calculator. It sounds like the calculator you’ve developed will provide the market with fast ROI analysis and demonstrate the value of the mainframe for new workloads through zIIP exploitation. Again, thank you Calvin and Susan. Congratulations on the TCO Calculator.