More on Best Practices for zIIP Exploitation (second podcast)

May 13, 2009

In this podcast, Gregg goes into further detail on some of the best practices for zIIP exploitation. The focus is on measurement and zIIP offloads. The podcast runs for 2:01.

To listen to the podcast, please click on the following link: http://blogs.datadirect.com/media/GreggWillhoit_MoreBestPracticeszIIPExploitation_3.mp3

Podcast text:

Gregg, what are some of the best practices for zIIP exploitation?

Gregg Willhoit:

To be truly useful from a TCO standpoint, the product has to present a metric that allows the user to determine how successful they have been with zIIP offloads. Measurement and metrics are key. The product should be able, at a detailed level, to identify at a quick glance what's being offloaded and what's not. The product should also use SMF facilities to record zIIP offload and CPU consumption metrics by whatever category the user requires. For example, in an SOA environment that might be the web service or the operation; for a JDBC or ODBC type client, that might be by SQL statement or user ID, things like that.
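The bookkeeping Gregg describes — accumulating CPU time per user-chosen category, split into zIIP and general-processor buckets — can be sketched conceptually in Python. This is illustrative only; `OffloadMetrics` is a hypothetical name, not a real product or z/OS API, and real products would read these figures from SMF records rather than in-process counters.

```python
from collections import defaultdict

class OffloadMetrics:
    """Accumulates CPU time per category, split into zIIP and
    general-processor (GP) buckets, so offload success can be
    judged at a glance. A category is whatever the user requires:
    a web service, an operation, an SQL statement, a user ID."""

    def __init__(self):
        self.ziip_cpu = defaultdict(float)  # CPU seconds on the zIIP
        self.gp_cpu = defaultdict(float)    # CPU seconds on general processors

    def record(self, category, cpu_seconds, on_ziip):
        bucket = self.ziip_cpu if on_ziip else self.gp_cpu
        bucket[category] += cpu_seconds

    def offload_pct(self, category):
        ziip, gp = self.ziip_cpu[category], self.gp_cpu[category]
        total = ziip + gp
        return 100.0 * ziip / total if total else 0.0

metrics = OffloadMetrics()
metrics.record("SELECT * FROM ORDERS", 0.030, on_ziip=True)
metrics.record("SELECT * FROM ORDERS", 0.010, on_ziip=False)
print(f"{metrics.offload_pct('SELECT * FROM ORDERS'):.0f}% offloaded")  # 75% offloaded
```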

Because exploiting the zIIP on the z platform requires operating under both an SRB and a TCB, which are two different dispatchable units, the product must allow the user to have end-to-end monitoring and control, which provides a facile method to follow the work through both the TCB and the SRB.
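One common way to follow a unit of work across two dispatchable units is a shared correlation token attached to every trace event. The sketch below illustrates that idea in plain Python; the event shape and the `trace` helper are assumptions for illustration, not part of any real monitoring product.

```python
import uuid

def trace(events, token, unit, message):
    # Every event carries the same correlation token, so one unit of
    # work can be followed as it moves between the TCB and the SRB.
    events.append({"token": token, "unit": unit, "message": message})

token = str(uuid.uuid4())
events = []
trace(events, token, "TCB", "request received")
trace(events, token, "SRB", "zIIP-eligible work dispatched")
trace(events, token, "SRB", "work completed")
trace(events, token, "TCB", "results returned to caller")

# Reconstruct the end-to-end path for this unit of work:
path = [e["unit"] for e in events if e["token"] == token]
print(path)  # ['TCB', 'SRB', 'SRB', 'TCB']
```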

Because the zIIP does require executing in two different dispatchable units, or modes, the product has to have an integrated and coherent recovery strategy, so that if errors occur in either mode they are presented back to the user as if executing under a common dispatchable unit, for easier diagnostics. The product also has to be able to communicate with WLM in a manner that allows the product and the user to define how much of which workloads is offloaded to the zIIP.
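The recovery idea — trap a failure in whichever mode it occurs and re-present it to the user under one common error type — can be sketched as follows. `WorkUnitError` and `run_in_mode` are hypothetical names for illustration; real z/OS recovery uses facilities such as FRRs and ESTAEs, not Python exceptions.

```python
class WorkUnitError(Exception):
    """Common error type: failures from either mode surface identically,
    tagged with where they happened, for easier diagnostics."""
    def __init__(self, mode, original):
        super().__init__(f"[{mode}] {original!r}")
        self.mode = mode
        self.original = original

def run_in_mode(mode, fn):
    # Integrated recovery: trap the failure in the mode where it
    # occurred and re-present it under the common error type.
    try:
        return fn()
    except Exception as exc:
        raise WorkUnitError(mode, exc) from exc

try:
    run_in_mode("SRB", lambda: 1 / 0)
except WorkUnitError as err:
    print(err.mode, type(err.original).__name__)  # SRB ZeroDivisionError
```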

Some of these facilities are now available within WLM itself. One of the best practices for using the zIIP is the ability, at a very granular level, to determine the percentage of zIIP offload based upon various WLM classification attributes, such as user ID, web service, and operation.
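The granular, classification-driven offload percentage can be sketched as a most-specific-rule-wins lookup. The rule table, attribute names, and percentages below are invented for illustration; they are not real WLM classification syntax.

```python
# Hypothetical classification rules: attributes to match -> zIIP offload %.
# The most specific matching rule (most attributes) wins.
RULES = [
    ({"userid": "BATCH01"}, 0),                                      # keep on GP
    ({"web_service": "OrderService"}, 90),
    ({"web_service": "OrderService", "operation": "getStatus"}, 100),
]

DEFAULT_OFFLOAD_PCT = 50  # assumed fallback when no rule matches

def classify(work):
    """Return the offload percentage of the most specific matching rule."""
    best_pct, best_specificity = DEFAULT_OFFLOAD_PCT, 0
    for attrs, pct in RULES:
        matches = all(work.get(k) == v for k, v in attrs.items())
        if matches and len(attrs) > best_specificity:
            best_pct, best_specificity = pct, len(attrs)
    return best_pct

print(classify({"web_service": "OrderService", "operation": "getStatus"}))  # 100
print(classify({"web_service": "OrderService", "operation": "create"}))     # 90
print(classify({"userid": "BATCH01"}))                                      # 0
```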

Gregg Willhoit
