4.1.0 archive

Note: This version of Hybrid Data Pipeline has reached end of life. These release notes are for reference purposes.

Changes Since Release 4.1.0

Enhancements

Hybrid Data Pipeline server
  • Account Lockout Policy (Limits API). Support has been added for implementing an account lockout policy. An account lockout policy allows the administrator to set the number of consecutive failed authentication attempts that result in a user account being locked, as well as the lockout period and the duration of time that failed attempts are counted. When a lockout occurs, the user is unable to authenticate until the specified period of time has passed or until the administrator unlocks the account.
  • Configurable CORS Behavior (Limits API). Support has been added for disabling the cross-origin resource sharing (CORS) filter in environments that do not require it. Since Hybrid Data Pipeline does not currently support filtering of cross-origin requests, disabling the CORS filter can provide added security against cross-site request forgery attacks.
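An account lockout policy of this kind is typically configured by sending a JSON body to the Limits API. The sketch below is illustrative only: the endpoint path and limit names (`MaxFailedAttempts`, `LockoutPeriod`, `FailureCountInterval`) are assumptions, not the documented API; consult the Hybrid Data Pipeline API reference for the actual names.

```python
import json

# Assumed admin endpoint for system-level limits (hypothetical URL).
BASE_URL = "https://hdp.example.com/api/admin/limits"

def lockout_policy_payload(max_failed_attempts, lockout_minutes, count_window_minutes):
    """Build the JSON body for an account lockout policy (field names assumed)."""
    return {
        "MaxFailedAttempts": max_failed_attempts,      # consecutive failures before lockout
        "LockoutPeriod": lockout_minutes,              # how long the account stays locked
        "FailureCountInterval": count_window_minutes,  # window in which failures are counted
    }

# An administrator would PUT/POST this body to BASE_URL with admin credentials.
payload = lockout_policy_payload(5, 30, 15)
print(json.dumps(payload))
```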
Apache Hive
  • Certified with Apache Hive 2.0 and 2.1.
IBM DB2
  • Certified with DB2 for i 7.3.
Oracle Database
  • Certified with Oracle 12c R2 (12.2).

Resolved Issues

Hybrid Data Pipeline server
  • Version 4.1.0.44. Bug 71841. Resolved an issue where the Hybrid Data Pipeline server failed to honor the START_ON_INSTALL environment variable to stop and start Tomcat services.
  • Version 4.1.0.44. Resolved an issue where the installer accepted an SSL certificate only in the PEM file format during the installation of the server for a cluster environment. The installer now accepts the SSL certificate (root certificate) in PEM, DER, or base64 encodings for a cluster installation.
  • Version 4.1.0.44. Resolved an issue where an SSL certificate was required for a cluster installation. An SSL certificate is no longer required for a cluster installation.
  • Version 4.1.0.44. Resolved an issue that prevented the installer from supporting a number of upgrade scenarios.
JDBC driver
  • Version 4.1.0.7. Resolved an issue where the JDBC driver was not connecting to the Hybrid Data Pipeline server by default when running on a UNIX/Linux system.

4.1.0 Release Notes

Security

OpenSSL
  • The default OpenSSL library has been updated to 1.0.2k, which fixes the following security vulnerabilities.
    • Truncated packet could crash via OOB read (CVE-2017-3731)
    • BN_mod_exp may produce incorrect results on x86_64 (CVE-2017-3732)
    • Montgomery multiplication may produce incorrect results (CVE-2016-7055)

    OpenSSL 1.0.2k addresses vulnerabilities resolved by earlier versions of the library. For more information on OpenSSL vulnerabilities resolved by this upgrade, refer to OpenSSL announcements.

SSL Enabled Data Stores
  • The default value for Crypto Protocol Version has been updated to TLSv1, TLSv1.1, TLSv1.2 for data stores that support the option. This change improves the security of the connectivity service by employing only the most secure cryptographic protocols as the default behavior. At connection, the connectivity service will attempt to use the most secure protocol first, TLS 1.2, then fall back to use 1.1 and then 1.0.
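For data stores that expose the option, the new default corresponds to setting the crypto protocol list on the data source. A minimal sketch of the relevant settings is shown below; the exact option names and accepted values may vary by data store, so treat these as illustrative:

```
# Illustrative data source security settings (option names assumed)
CryptoProtocolVersion=TLSv1,TLSv1.1,TLSv1.2
EncryptionMethod=SSL
```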
On-Premises Connector
  • The On-Premises Connector has been enhanced to resolve a security vulnerability. We strongly recommend upgrading to the latest version to take advantage of this fix.
Apache Hive Data Store
  • Hybrid Data Pipeline now supports SSL for Apache Hive data stores running Apache Hive 0.13.0 or higher.
SQL Server Data Store
  • Support for NTLMv2 authentication has been added for the SQL Server data store. NTLMv2 authentication can be specified in the Authentication Method field under the Security tab.

Enhancements

Hybrid Data Pipeline server
  • Hybrid Data Pipeline Cluster. To support scalability, the Hybrid Data Pipeline service can be deployed on multiple nodes behind a load balancer. Incoming requests can be evenly distributed across cluster nodes. SSL communication is supported if the load balancer supports SSL termination. Session affinity is supported to bind a client query to a single node for improved performance. (Session affinity must be enabled in the load balancer to support the Web UI and ODBC and JDBC clients.) HTTP health checks are supported via the Health Check API.
  • MySQL Community Edition Data Store. Support for MySQL Community Edition has been added to Hybrid Data Pipeline. During installation of the Hybrid Data Pipeline server and the On-Premises Connector, you provide the location of the MySQL Connector/J driver. After installation, you may then configure data sources that connect to a MySQL Community Edition data store and execute queries with ODBC, JDBC, and OData applications.
  • MySQL Community Edition System Database. Support for MySQL Community Edition as an external system database has been added. During the installation process, you are prompted to select either an internal database or an external database to store system information necessary for the operation of Hybrid Data Pipeline. With this enhancement, you can choose either Oracle or MySQL Community Edition as an external database.
  • Installation Procedures and Response File. The installation procedures have been modified with the introduction of support for the Hybrid Data Pipeline cluster, the MySQL Community Edition data store, and the MySQL Community Edition system database. New prompts have been added to the installation process. Several of these prompts have corresponding settings that must be used in the response file for silent installation of the server. If you are performing silent installations of the server, your response file must be modified accordingly. The following list provides the new settings. The settings may differ depending on whether you generate the response file with a GUI or console installation.
    Note: Values for the SKIP_HOSTNAME_VALIDATION and SKIP_PORT_VALIDATION options have been changed from false | true to 0 | 1. These options have the same name in GUI-generated and console-generated response files.
    Note: Values for the SKIP_LB_HOSTNAME_VALIDATION option are currently 0 for disable and true for enable. In a future release, the values will be 0 for disable and 1 for enable. This option has the same name in GUI-generated and console-generated response files.
    New response file options. The first name in the list is the name of the response file option generated by the GUI installer. The second name is the name generated by the console mode installer. (If only one name is provided, there is no corresponding option for console mode.)
    • USING_LOAD_BALANCING_YES | D2C_USING_LOAD_BALANCING_CONSOLE - Specifies whether you are installing the service on a node behind a load balancer.
    • LOAD_BALANCING_HOST_NAME | LOAD_BALANCING_HOST_NAME_CONSOLE - Specifies the hostname of the load balancer appliance or the machine hosting the load balancer service.
    • USING_LOAD_BALANCING_NO - Specifies whether you are installing the service on a node behind a load balancer. For console installations, only D2C_USING_LOAD_BALANCING_CONSOLE is used.
    • SKIP_LB_HOSTNAME_VALIDATION | SKIP_LB_HOSTNAME_VALIDATION - Specifies whether the installer should validate the load balancer hostname during the installation of a node.
    • D2C_CERT_FILE | D2C_CERT_FILE_CONSOLE - Specifies the fully qualified path of the Certificate Authority certificate that signed the load balancer server certificate. This certificate is used to create the trust store used by ODBC and JDBC clients.
    • D2C_DB_MYSQL_COMMUNITY_SUPPORT_YES | D2C_DB_MYSQL_COMMUNITY_SUPPORT_CONSOLE - Specifies whether the service will support the MySQL Community Edition data store.
    • D2C_DB_MYSQL_JAR_PATH | D2C_DB_MYSQL_JAR_PATH_CONSOLE - Specifies the fully qualified path of the MySQL Connector/J jar file used to support a MySQL Community Edition data store.
    • D2C_DB_MYSQL_COMMUNITY_SUPPORT_NO - Specifies whether the service will support the MySQL Community Edition data store. For console installations, only D2C_DB_MYSQL_COMMUNITY_SUPPORT_CONSOLE is used.
    • D2C_DB_VENDOR_MYSQL - Specifies whether a MySQL Community Edition database will be used as the external system database. For console mode installations, D2C_DB_VENDOR_CONSOLE is used to specify an Oracle or MySQL Community Edition external system database.
    • D2C_DB_PORT_MYSQL - Specifies the port number of the MySQL Community Edition external system database. For console mode installations, D2C_DB_PORT_CONSOLE is used to specify the port of either an Oracle or MySQL Community Edition external system database.
    • USER_INPUT_KEY_LOCATION | USER_INPUT_KEY_LOCATION_CONSOLE - Specifies the fully qualified path of the encryption key to be shared by the nodes in a cluster environment.
  • Throttling (Limits API). Support for throttling to prevent a user or group of users from adversely impacting the performance of the connectivity service has been added. The Limits API allows administrators to set limits on how many rows can be returned for ODBC, JDBC, and OData requests. An error is returned if an application fetches rows beyond the specified limit.
  • Refresh Map. A new refresh map button has been added to the Mapping tab. This button allows you to refresh the map without connecting to the data store. This feature is useful when you are developing your application and have made changes to the objects in your backend data store. Pressing the button forces the data store to rebuild the map, allowing the new objects to appear in the relational map the next time your application connects to the data source. (The map can also be refreshed with a Management API call or when establishing a connection.)
  • SQL Editor. The SQL editor in the SQL Testing view has been upgraded. The functionality of the new editor is similar to that of the previous editor. However, the history panel is not currently supported with the new editor.
  • OpenAccess Server. The OpenAccess server component has been deprecated. The OpenAccess server is no longer required to connect with Oracle Eloqua.
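Putting the new installer settings together, a silent cluster installation with MySQL Community Edition support might use a response file excerpt like the following. All hostnames, paths, and values are illustrative assumptions; verify the accepted values for each option against a response file generated by your own GUI or console installation:

```
# Excerpt from a GUI-generated response file (illustrative values only)
USING_LOAD_BALANCING_YES=1
LOAD_BALANCING_HOST_NAME=lb.example.com
SKIP_LB_HOSTNAME_VALIDATION=0
D2C_CERT_FILE=/opt/hdp/certs/lb_ca.pem
D2C_DB_MYSQL_COMMUNITY_SUPPORT_YES=1
D2C_DB_MYSQL_JAR_PATH=/opt/hdp/lib/mysql-connector-java.jar
USER_INPUT_KEY_LOCATION=/mnt/shared/hdp/encryption.key
SKIP_HOSTNAME_VALIDATION=0
SKIP_PORT_VALIDATION=0
```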
On-Premises Connector
  • Upgraded to use Tomcat 8.0.41
  • Upgraded to use Java SE 8
  • Support for Windows Server 2003 has been deprecated
Hybrid Data Pipeline ODBC Driver
  • Certified with CentOS Linux 4.x, 5.x, 6.x, and 7.x
  • Certified with Debian Linux 7.11, 8.5
  • Certified with Oracle Linux 4.x, 5.x, 6.x, and 7.x
  • Certified with Ubuntu Linux 14.04, 16.04
  • Support for Windows Server 2003 has been deprecated
Apache Hive
  • Added SSL support for Apache Hive 0.13.0 and higher
  • Certified with Apache Hive 0.13, 0.14, 1.0, 1.1, 1.2
  • Certified with Amazon (AMI) 3.2, 3.3.1, 3.7
  • Certified with Cloudera (CDH) 5.0, 5.1, 5.2, 5.3, 5.4, 5.6, 5.7
  • Certified with Hortonworks (HDP) 2.1, 2.2
  • Certified with IBM BigInsights 4.1
  • Certified with Pivotal HD (PHD) 2.1
Greenplum
  • Made generally available
  • Certified with Greenplum 4.3
  • Certified with Pivotal HAWQ 1.2, 2.0
IBM DB2
  • Certified with IBM DB2 V11.1 for LUW
  • Certified with DB2 for i 7.2
Informix
  • Made generally available
  • Certified with Informix 12.10
  • Certified with Informix 11.7, 11.5, 11.0
  • Certified with Informix 10.0
  • Certified with Informix 9.4, 9.3, 9.2
Oracle Marketing Cloud (Oracle Eloqua)

The Oracle Marketing Cloud data store provides access to Oracle Eloqua. Improved features and functionality for this data store are available with this Hybrid Data Pipeline release.

  • Write Access
    • Support for INSERT/UPDATE/DELETE operations on CONTACT, ACCOUNT and CustomObjects_XXX
  • Bulk Calls
    • Performance improvement for bulk calls
    • Supports fetching more than 5 million records
    • Supports fetching up to 250 columns for bulk calls
    • Supports pushing OR operators for bulk calls (This does not apply to Activities)
  • REST Calls
    • Some queries with OR and AND operators have been optimized.
  • Metadata
    • The data store now uses null as the catalog name. Previously, ECATALOG was used as the catalog name.
    • The current version of the data store maps columns with integer data to type INTEGER. The previous version mapped the integer type to string.
  • In contrast to the previous version, the current version of the data store cannot split OR queries and push them separately to Oracle Eloqua APIs. Therefore, compared to the previous version, the current version may take longer to return results involving OR queries.
  • The previous version of the data store used the ActivityID field as the primary key for Activity_EmailXXX objects, such as Activity_EmailOpen, Activity_EmailClickthrough, and Activity_EmailSend. In contrast, the current version of the data store uses the ExternalID field as the primary key instead of ActivityID.
PostgreSQL
  • Certified with PostgreSQL 9.3, 9.4, 9.5, 9.6
Progress OpenEdge
  • Certified with Progress OpenEdge 11.4, 11.5, 11.6
Salesforce
  • Certified with Salesforce API 38
SAP Sybase ASE
  • Made generally available
  • Certified with SAP Adaptive Server Enterprise 16.0
SQL Server
  • Added support for NTLMv2 authentication. NTLMv2 authentication can be specified in the Authentication Method field under the Security tab.
  • Certified with Microsoft SQL Server 2016

Resolved Issues

Web UI
  • Resolved an issue where the SQL editor in the SQL Testing view returned errors when executing SQL commands against Google Analytics data sources
OData
  • Resolved an issue where OData requests were timing out before the application could finish retrieving the results
Hybrid Data Pipeline Management API
  • Resolved an issue where a 201 status code was returned when adding members to a group data source through the Management API
  • Resolved an issue where a normal user would receive a 400 error instead of a 404 error when using the user query parameter with Management API calls
  • Resolved an issue where the user creation API allowed invalid values for the status field
DB2
  • Resolved an issue where the error "Numeric value out of range" occurred when calling SQLStatistics against DB2 with the ODBC driver
Google Analytics
  • Resolved an issue where the SQL editor in the SQL Testing view returned errors when executing SQL commands against Google Analytics data sources