In this podcast, Gregg Willhoit explains the best practices associated with benchmarking zIIP offload. The podcast runs for 4:40. You can listen to it by clicking the following link: http://blogs.datadirect.com/media/GreggWillhoit_BenchmarksBestPrac_1.MP3
Basically, once we completed the re-architecting of Shadow Version 7, it was imperative for us to be able to demonstrate the relative performance and TCO gains versus the previous version of Shadow, which was Version 6. Again, the main difference between the two versions in terms of TCO was the zIIP enablement of Shadow. In our environment, when we compared the two products we did completely isolated runs using a common benchmark driver – a web services driver tool – which simulated quite a bit of load. We ensured, for example, that the LPAR the load tests were run on was not shared. It had dedicated resources; it had the zIIP dedicated to it. So we tried to eliminate all the variability that we possibly could to make sure that the benchmarks were repeatable. We ran several benchmarks, and with our environment and our techniques we came up with very repeatable results.
As with any benchmark testing, there has to be an agreed-upon method for load testing and measuring – probably the most important aspect of a benchmark. Once you've achieved the ability to isolate the workload from anything which may impact the repeatability of the benchmark, you then have the capability to measure consistently. We experimented with a plethora of options. We looked at RMF Monitor I, RMF Monitor II, RMF Monitor III, SMF type 30 records, and also our own numbers, which we gather in our own monitor, part of the Shadow product. Our monitor basically allows us to measure zIIP efficiency for all the areas – not just web services, but SQL and Event Publishing as well. This monitor aggregates metrics gathered by the threads executing on behalf of a Web Service, SQL, or Event-based thread. These threads execute IWMEQTME calls as well as TIMEUSED calls to gather zIIP qualified time and zIIP eligible time (the sum of time on the zIIP and zIIP-eligible time that ran on a CP). We chose to use zIIP eligible as opposed to zIIP qualified time due to a somewhat arcane issue we discovered with zIIP eligible being greater than zIIP qualified under some circumstances. The gist of the issue is that what actually runs on the zIIP can be greater than what is reported as zIIP qualified. The difference between the two metrics is not large, but we chose to go with zIIP eligible when possible nonetheless.
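To make the metric concrete, here is a minimal sketch of the aggregation described here – zIIP eligible time as the sum of time spent on the zIIP plus zIIP-eligible time that ran on a CP. The class and field names are hypothetical illustrations, not the Shadow monitor's actual data structures:

```python
from dataclasses import dataclass

@dataclass
class ThreadMetrics:
    """Hypothetical per-thread sample, as a monitor might gather
    via IWMEQTME/TIMEUSED-style calls (all times in seconds)."""
    ziip_time: float        # time that actually executed on a zIIP
    ziip_on_cp_time: float  # zIIP-eligible time that ran on a general CP

def ziip_eligible_total(samples):
    """Aggregate zIIP eligible time across Web Service, SQL, and
    Event-based threads: zIIP eligible = zIIP time + zIIP-on-CP time."""
    return sum(s.ziip_time + s.ziip_on_cp_time for s in samples)

# Illustrative numbers only.
samples = [ThreadMetrics(1.5, 0.2), ThreadMetrics(0.8, 0.1)]
print(round(ziip_eligible_total(samples), 2))  # → 2.6
```

This reflects the stated preference for zIIP eligible over zIIP qualified: the eligible figure already folds in work that could have run on the zIIP but was dispatched elsewhere.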
Interestingly enough, in the early days of this project we discovered that the monitoring and measurement of the zIIP wasn't an exact science, especially with RMF Monitor III. I think there were measurement issues with all of the products we were using, and there were various fixes that we had to install to get some of the measurements done correctly. We basically ended up deciding that the gold standard for our project was going to be the SMF type 30 record. We then validated our own measurement numbers against the SMF 30 records. Once we were satisfied through validation that our numbers agreed with the SMF 30 records, we were comfortable publishing numbers based on either the SMF 30 or our own. But again, we treated the SMF 30 records as the gold standard with regard to measuring zIIP efficiency and zIIP offload.
When we calculate the zIIP offloads – our percentages – we use both the time that actually runs on the zIIP and the time that the product is zIIP eligible but the execution was diverted to a General Purpose Processor. The reason we did that is that in the environment we had at the time we ran the tests, there was one zIIP and two General Purpose Processors. So it is quite possible that dispatchable units of work would not be able to be dispatched on the zIIP due to the ratio of the number of zIIPs to General Purpose Processors. The other reason we decided on this particular methodology, or this particular formula, is that if the product is zIIP eligible – and some of the work is being dispatched to a GP – that's really a configuration issue. So our thought was basically: if we're going to report the zIIP eligibility of Shadow, we'll include both the actual time on the zIIP and the time that it could have executed on the zIIP but didn't because the zIIP was busy. By using that technique, or that formula, we came up with a repeatable methodology, one that was not subject to the vagaries of hardware configuration permutations.
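The offload percentage described here can be sketched as follows. The function name and the sample figures are hypothetical, intended only to illustrate the formula (count both actual zIIP time and zIIP-eligible time that was diverted to a GP):

```python
def ziip_offload_pct(ziip_time, ziip_on_cp_time, total_cpu_time):
    """zIIP offload percentage as described in the transcript:
    include both time that actually ran on the zIIP and zIIP-eligible
    time diverted to a General Purpose Processor (e.g. because the
    one configured zIIP was busy). All times in seconds."""
    ziip_eligible = ziip_time + ziip_on_cp_time
    return 100.0 * ziip_eligible / total_cpu_time

# Hypothetical run: 4.2s ran on the zIIP, 0.3s of zIIP-eligible work
# was diverted to a GP, out of 5.0s total CPU time.
print(round(ziip_offload_pct(4.2, 0.3, 5.0), 1))  # → 90.0
```

Including the diverted time is what makes the number stable across hardware configurations: a box with more zIIPs would divert less work, but the reported eligibility would not change.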
So one of the interesting things that came out of this benchmark performance analysis of DataDirect Shadow Version 7 vs. Version 6 was that during this process we came up with so many measurement-gathering anomalies that we actually contemplated doing a skit – kind of like the "Who's on First?" skit with Abbott and Costello, but from a geeky perspective, with measuring zIIPs. Because honestly, in configurations where the zIIPs run at a faster speed than the General Purpose Processors – that is, if the General Purpose Processors are kneecapped – some of the measurement methodologies that were in place just weren't quite up to accurate CPU measurement and gathering. In fact, we found that different monitors were computing vastly different zIIP offload numbers, which is why we decided to use the SMF type 30.
View all posts from Gregg Willhoit on the Progress blog.