In this podcast, Gregg goes into further detail on some of the best practices for zIIP exploitation. The focus is on measurement and zIIP offloads. The podcast runs for 2:01.
To listen to the podcast, please click on the following link: http://blogs.datadirect.com/media/GreggWillhoit_MoreBestPracticeszIIPExploitation_3.mp3
To be truly useful in a TCO context, the product has to be able to present metrics that let the user determine how successful they have been with zIIP offloads. Measurement and metrics are key. The product should be able, at a quick glance, to identify at a detailed level what is being offloaded and what is not. The product should also use SMF facilities to record zIIP offload and CPU consumption metrics by whatever category the user requires. For example, in an SOA environment that might be the web service or the operation; for a JDBC or ODBC type client, that might be the SQL statement or the user ID.
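The per-category metric described above can be sketched in a few lines. This is a hypothetical illustration only: the `CpuSample` record layout and category strings are invented for the example, and a real product would derive these figures from SMF records rather than an in-memory list.

```java
import java.util.*;

// Hypothetical sketch: computing the zIIP offload percentage per workload
// category (e.g. SQL statement or user ID) from CPU-time samples. The record
// layout is invented for illustration; real data would come from SMF records.
public class ZiipOffloadReport {
    record CpuSample(String category, long gpMicros, long ziipMicros) {}

    static Map<String, Double> offloadPercentByCategory(List<CpuSample> samples) {
        Map<String, long[]> totals = new HashMap<>();
        for (CpuSample s : samples) {
            long[] t = totals.computeIfAbsent(s.category(), k -> new long[2]);
            t[0] += s.gpMicros();   // general-purpose CPU time
            t[1] += s.ziipMicros(); // time offloaded to the zIIP
        }
        Map<String, Double> pct = new TreeMap<>();
        totals.forEach((cat, t) -> pct.put(cat, 100.0 * t[1] / (t[0] + t[1])));
        return pct;
    }

    public static void main(String[] args) {
        List<CpuSample> samples = List.of(
            new CpuSample("SELECT * FROM ORDERS", 200, 800),
            new CpuSample("SELECT * FROM ORDERS", 100, 400),
            new CpuSample("userid=APPUSER1",      900, 100));
        // Prints 80.0% for the SQL statement and 10.0% for the user ID.
        offloadPercentByCategory(samples).forEach((cat, p) ->
            System.out.printf("%-22s %.1f%% on zIIP%n", cat, p));
    }
}
```

The point of the "quick glance" requirement is exactly this kind of rollup: one percentage per category, so a poorly offloaded workload stands out immediately.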
To exploit the zIIP on the z platform, work has to operate under both an SRB and a TCB, which are two different dispatchable units. So the product must allow the user end-to-end monitoring and control, which provides a facile method to follow the work through both the TCB and the SRB.
Because zIIP exploitation does require executing in two different dispatchable units, or modes, the product has to have an integrated and coherent recovery strategy, so that if errors occur in either mode they are presented back to the user as if executing under a common dispatchable unit, for easier diagnostics. The product also has to be able to communicate with WLM in a manner that allows the product and the user to define how much of which workloads is offloaded to the zIIP.
Some of these facilities are now available within WLM itself. One of the best practices for using the zIIP should include the ability to determine, at a very granular level, the percentage of zIIP offload based on various WLM classification methods, such as user ID, web service and operation.
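The granular-control idea above can be made concrete with a small sketch. Everything here is an assumption for illustration: the `Rule` structure, attribute names, and first-match semantics are invented, and in practice classification is performed by WLM itself rather than application code.

```java
import java.util.*;

// Hypothetical sketch of granular offload control: each rule maps a WLM-style
// classification attribute (user ID, web service, operation) to a target zIIP
// offload percentage. The rule structure is invented for illustration.
public class OffloadRules {
    record Rule(String attribute, String value, int offloadPercent) {}

    // First matching rule wins; unmatched work gets the default percentage.
    static int targetOffload(Map<String, String> work, List<Rule> rules, int dflt) {
        for (Rule r : rules) {
            if (r.value().equals(work.get(r.attribute()))) return r.offloadPercent();
        }
        return dflt;
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule("userid",     "BATCHUSR", 0),   // keep this batch user on GP CPs
            new Rule("webservice", "getQuote", 90)); // offload this service heavily
        System.out.println(targetOffload(Map.of("webservice", "getQuote"), rules, 50)); // 90
        System.out.println(targetOffload(Map.of("userid", "APPUSER1"), rules, 50));     // 50
    }
}
```

The design point is that the offload percentage is a per-classification knob, not a single global setting, which is what "granular" means in the passage above.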
View all posts from Gregg Willhoit on the Progress blog.
Copyright © 2018 Progress Software Corporation and/or its subsidiaries or affiliates.
All Rights Reserved.
Progress, Telerik, and certain product names used herein are trademarks or registered trademarks of Progress Software Corporation and/or one of its subsidiaries or affiliates in the U.S. and/or other countries. See Trademarks for appropriate markings.