In this podcast, Gregg Willhoit helps listeners understand IBM's intent with its IFL, zAAP and zIIP specialty engines.
Gregg, there's been a ton of buzz this week around Neon's decision to file suit against IBM, creating even more fear, uncertainty and doubt about how to effectively and properly exploit the specialty engines. Can you remind us why IBM first introduced the specialty engines, and why they are so important to mainframe users?
Sure, I'd be glad to, Mike. I think the specialty engines were all about bringing "new workloads" to System z, making it more cost-effective to run System z, which in turn would enable application developers and companies to actually benefit from the strengths of System z: the reliability, availability and serviceability, the power, all that great stuff that System z is well known for.
The first significant specialty engine that comes to mind is the IFL, the Integrated Facility for Linux. We've seen how successful that has been; the growth in IFL MIPS is very significant. We've read time and time again about various migrations from non-z platforms to z-Linux, which is really cool. Obviously, z-Linux on the IFL is a very compelling story, and when you take into consideration the strength of IBM's VM (Virtual Machine) compared to any other platform, IBM's Virtual Machine is just far, far superior. That allows, of course, for a plethora of z-Linux environments to execute within their own virtual machines, and it greatly eases the consolidation effort when folks move from non-z platforms to a VM implementation of these various Linux environments.
Another significant benefit, if you are a System z customer moving to z-Linux, is that you are typically co-located with your critical data. What that gives you is a high-performance tunnel, if you will, from z-Linux to these legacy and newer applications, DB2 for one, through a high-performing interface called HiperSockets, which basically gives you memory-to-memory kinds of transfer speeds between a z-Linux and z/OS image. It's not memory-to-memory per se, but it's extremely fast. We at Progress DataDirect have done some benchmarking of this and it's really quite incredible, so all the pieces are there for z-Linux applications and z-Linux users to co-locate on the z platform and achieve all the benefits of IBM's hardware, and in particular the Virtual Machine.
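To make the co-location point concrete, here is a minimal JDBC sketch of a z-Linux application reaching DB2 on the neighboring z/OS image. From the application's point of view, HiperSockets is just an ordinary TCP connection to an IP address on an internal subnet; the memory-speed transfer is handled entirely by the hardware. The hostname, port, credentials, and URL format below are illustrative assumptions, not values from the podcast.

```java
// Hedged sketch: connecting from z-Linux to DB2 for z/OS over a
// HiperSockets interface. No special API is needed; the HiperSockets
// interface simply appears as an IP address on an internal subnet
// shared by the z-Linux and z/OS images.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiperSocketsDemo {

    // Build a DataDirect-style JDBC URL (format assumed for illustration);
    // the host is the z/OS image's HiperSockets address.
    static String jdbcUrl(String hiperSocketsHost, int port, String database) {
        return "jdbc:datadirect:db2://" + hiperSocketsHost + ":" + port
             + ";DatabaseName=" + database;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical internal address, DRDA port, and database name.
        String url = jdbcUrl("10.1.1.2", 446, "SAMPLEDB");
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

The point of the sketch is that nothing in the code is HiperSockets-specific; the latency win comes purely from where the two images sit.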
After the IFL, I guess there's the zAAP. I think the zAAP came out somewhere around 2003 or 2004, I don't remember exactly, but the zAAP as originally intended and offered addressed the issue of executing Java code on the mainframe. Prior to that, the attempts made to run Java more efficiently on the mainframe were somewhat successful, but not successful enough, in that Java still had to run on a general-purpose processor (GPP), which we all know is burdened with issues in regard to software licensing costs, MIPS charging, and all that. By moving Java workloads to the zAAP, to a specialty engine, IBM completely ameliorated that situation by eliminating any costs related to running Java; it was great stuff. There are some less significant uses as well: if you're executing XML System Services in TCB mode, you'll get offloaded, I think, to the zAAP, and there are a couple of other similar cases.
And then, last but not least, is the zIIP. The zIIP is one of the most interesting from the standpoint of what the industry is talking about; it's also quite interesting in that the zIIP was made available to independent software vendors so they could take advantage of the offload capabilities and the TCO benefits. Why was the zIIP created? Again, it all goes back to TCO, of course. In this case, what the zIIP did was enable users of DB2, in particular, to offload database-query MIPS consumption to the zIIP. Why that's important is that with DB2 on z/OS you have this wonderful platform, so why not allow business intelligence queries, decision-support-system queries, to access DB2 directly on the mainframe, directly where the data sits, where it belongs, instead of replicating that data from DB2 onto some non-IBM platform? All of a sudden you're working with data that's not current and less relevant, you're burdening the network, and you're performing a bunch of transformations on the data; there's a lot of downside to offloading the data. In particular, the zIIP made keeping the data in DB2, and executing dynamic queries via DRDA, a very cost-effective way for IBM to compete with the other database vendors of note.
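The dynamic-query-via-DRDA point can be sketched as ordinary distributed JDBC code. When a query like this arrives at DB2 for z/OS through DRDA (via DB2's distributed facility), much of its processing becomes zIIP-eligible, with no special application code required. The table, columns, URL format, and connection details below are illustrative assumptions.

```java
// Hedged sketch: a BI-style dynamic SQL query issued to DB2 for z/OS
// from a remote client. Because it travels over DRDA, DB2 can dispatch
// much of the query processing to the zIIP; the application just runs
// standard JDBC. Schema and connection values are hypothetical.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ZiipQueryDemo {

    // Dynamic SQL text, prepared at run time: exactly the kind of
    // decision-support query the podcast describes as zIIP-eligible.
    static final String QUERY =
        "SELECT REGION, SUM(SALES) AS TOTAL " +
        "FROM ORDERS WHERE ORDER_DATE >= ? " +
        "GROUP BY REGION ORDER BY TOTAL DESC";

    public static void main(String[] args) throws Exception {
        // Hypothetical DataDirect-style URL for a DB2 for z/OS subsystem.
        String url = "jdbc:datadirect:db2://zos.example.com:446;LocationName=DSN1";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             PreparedStatement ps = conn.prepareStatement(QUERY)) {
            ps.setDate(1, java.sql.Date.valueOf("2010-01-01"));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("REGION")
                        + ": " + rs.getLong("TOTAL"));
                }
            }
        }
    }
}
```

The design point is that zIIP offload is transparent to the client: the same query run locally in a batch program would be charged differently than when it arrives over DRDA.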
Copyright © 2017 Progress Software Corporation and/or its subsidiaries or affiliates.
All Rights Reserved.
Progress, Telerik, and certain product names used herein are trademarks or registered trademarks of Progress Software Corporation and/or one of its subsidiaries or affiliates in the U.S. and/or other countries. See Trademarks for appropriate markings.