Risk Free With Shadow

Posted on January 18, 2010

In this podcast, Gregg Willhoit explains how Progress DataDirect customers take advantage of moving workloads beyond DB2 to the zIIP specialty engine, and do so risk free and worry free using DataDirect Shadow.

Gregg's podcast runs for 5:22:


Gregg Willhoit:

Customers are able to take advantage of the zIIP beyond DB2 worry-free using DataDirect Shadow because when we built our TCO story – our zIIP offload in Shadow version 7.1 – we did so with IBM's help and guidance. We signed the required paperwork to be able to use the zIIP API, and we followed the agreement to the letter. I have always maintained, and always will, that IBM can look at our code at any time. We've offered that from the very beginning, and they know this. We believe in full disclosure, and we know that we are a 100% supported and approved user of the zIIP. We've had a lot of support from IBM in this area, from Bob Rogers and Mark Anzani, and hopefully we will continue to get it. They've been extremely helpful to us.

What we did in Progress DataDirect Shadow – and you can read about this work in various articles and presentations I've done – is take a very holistic approach. We decided to make all of Shadow zIIP eligible, so workloads can be offloaded whenever a zIIP is available and the appropriate configuration statements allow it. We eliminated all SVCs, wrote our own timer DIEs (disabled interrupt exits), and changed our services to be able to execute in enclave SRB mode – writing our own service wherever there was no z/OS equivalent that could run in enclave SRB mode. We completely changed the DNA of our product to do this. But the benefit is that every workload that runs in our architecture gains from it. I like to call it "the rising tide floats all boats" kind of architecture. When we did this, our web services products benefited, our SQL-to-non-relational engines benefited, our advanced products benefited, and so on. It was a very good decision to do it this way, and it's worked out quite well for us.
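The conversion Gregg describes – moving work out of TCB mode into zIIP-eligible enclave SRBs – follows a pattern z/OS documents for authorized code. As an illustrative sketch only (not DataDirect's actual code), the flow looks roughly like this, using the documented WLM enclave services IWM4ECRE/IWM4EDEL and the IEAMSCHD SRB-scheduling service; the routine names and step ordering here are assumptions for illustration:

```
* Illustrative pseudocode only -- not product code.
* Assumes an authorized (supervisor state, key 0) z/OS address space.

1. Classify the incoming request and create a WLM enclave:
      IWM4ECRE TYPE=INDEPENDENT,...        -> returns enclave_token

2. Schedule the unit of work as an enclave SRB:
      IEAMSCHD EPADDR=srb_routine,ENV=ENCLAVE,
               ENCLAVETOKEN=enclave_token,...

3. Inside srb_routine, use only services valid in SRB mode:
      - no SVCs (so no SVC-based WAIT/POST; use SUSPEND/RESUME
        or PC-entered services instead)
      - timer support via product-written timer DIEs rather
        than STIMER, which is an SVC

4. When the transaction completes, delete the enclave:
      IWM4EDEL ETOKEN=enclave_token,...
```

Whether the enclave SRB work actually runs on a zIIP then depends on zIIP availability and configuration, which is the "worry-free" offload behavior described above.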

One thing that is perhaps more unique about our workload than that of many other ISVs is that we are a transaction manager. We do things similar to what IMS or CICS does – we don't replace IMS or CICS, we work in conjunction with them – but we are in the middle of the unit of work of a transaction running in a high-volume, high-CPU-consumption scenario, because doing web services or SQL to non-relational is extremely computationally intensive. What we're able to do with our offload to the zIIP is expose mainframe assets, both old and new, through standards-based APIs and languages, in a very effective and compelling way in terms of performance and total cost of ownership.

We're offloading all of the web services work to the zIIP – basically 99.9% of it is made zIIP eligible. The same is true for events, and the same for our SQL to non-relational support; it's a very compelling story. There's no need to move your web services off the mainframe, to use appliances, or to use any off-mainframe implementation for web services support. You can execute the web service on the mainframe, in coherent memory, against the backend – against IMS or CICS – and all of that web services work is made zIIP eligible. So it's a very significant story. The same applies to our SQL support for non-relational data, which can also be quite computationally intensive. In the past, folks moved the data – they replicated it to non-mainframe platforms – and we don't believe that's a very cost-effective approach, especially in this day and age, characterized by the need for more agile computing, where timeliness and currency mean so much more – especially with what's going on with regulation and compliance today.

We believe that you should leave the data on the mainframe. Use standards-based APIs to access that data – for data mining, for business analytics – leave the data there on the mainframe, and simply do all the heavy lifting, the SQL transformations from relational to non-relational and so on, on the zIIP. It's extremely compelling: you don't have other platforms to manage, you're on the System z, you're in coherent memory, you don't have to go back and forth over TCP/IP, and the data is current. It's just a really compelling story for us.

Gregg Willhoit

