4.3.0 archive

Note: This version of Hybrid Data Pipeline has reached end of life. These release notes are for reference purposes.





LDAP authentication


Hybrid Data Pipeline now supports integration with Active Directory for user authentication using the LDAP protocol. Administrators can create an LDAP authentication configuration by supplying the server details, and can then configure users to authenticate with LDAP instead of the default internal authentication.

To get started with LDAP authentication, do the following:

  1. Create an Authentication Service of type 3 using the Authentication APIs. Once the authentication service has been created, note the authentication service ID.
  2. Create users tagged to the authentication service ID. There are several ways to create users; refer to the User's Guide for details.
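Step 1 can be sketched as building the JSON body for a REST call to the Authentication APIs. The field names and server details below are illustrative assumptions, not the documented contract; consult the Authentication API reference for the exact payload.

```python
import json

# Hypothetical payload for creating an LDAP (type 3) authentication service.
# Field names and LDAP details are assumptions for illustration only.
auth_service = {
    "name": "CorpLDAP",
    "authDefinitionType": 3,  # 3 = LDAP/Active Directory authentication service
    "attributes": {
        "targetUrl": "ldaps://ldap.example.com:636",  # assumed server detail
        "securityPrincipal": "uid={user},ou=people,dc=example,dc=com",
    },
}

# Serialized body for the POST request; the response would contain the
# authentication service ID needed when creating users in step 2.
body = json.dumps(auth_service)
print(body)
```

The ID returned by the service-creation call is what gets attached to each user created in step 2.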
Permissions API
• Support for a Permissions API has been added. The Permissions API enables administrators to manage permissions through the Users, Roles, and DataSources APIs. It also allows administrators to create data sources on behalf of users and manage end user access to data source details. Administrators can additionally specify whether to expose change password functionality and the SQL editor in the Web UI.
Password policy
• Support for a password policy has been added.
Tomcat Upgrade
• The Hybrid Data Pipeline server and On-Premises Connector have been upgraded to install and use Tomcat 8.5.28.


Hybrid Data Pipeline Server
  • OData Version 4 functions.
    Added OData Version 4 function support for the IBM DB2 and Microsoft SQL Server data
    stores. (Note: This functionality was previously added for Oracle Database.) If a data store contains stored functions, they can be exposed through an OData Version 4 service. As part of OData function support, the OData schema map version has changed. The Web UI automatically migrates the existing OData schema map to the newer OData schema map version when the OData schema is modified for an OData Version 4 data source.

    The following aspects of OData Version 4 functions are supported:

    • Functions that are unbound (static operations)
    • Function imports
    • Functions that return primitive types
    • Function invocation with the OData system query option $filter

    The following aspects of OData Version 4 functions are currently NOT supported:

    • Functions that return complex types and entities
    • Functions that are bound to entities
    • Built-in functions
    • Functions with OUT/INOUT parameters
    • Overloaded functions
    • OData system query options using $select
    • OData system query options using $orderby
    • Function invocation with parameter values
    • Parameter aliases. Consequently, function invocation with function parameters passed as URL query parameters is not supported.
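Putting the supported pieces together, an unbound function import invoked with $filter might look like the request below. The service root and function name are made-up examples, not real endpoints.

```python
from urllib.parse import quote

# Illustrative OData Version 4 function import invocation with $filter.
# The host, data source name, and function name are hypothetical.
service_root = "https://hdp.example.com/api/odata4/MySqlServerSource"
function_import = "GetActiveOrders()"       # unbound function import
query = "$filter=" + quote("Total gt 100")  # the supported system query option

url = f"{service_root}/{function_import}?{query}"
print(url)
```

Note that, per the limitations above, the same function could not be invoked with a parameter alias (for example, `@region` passed as a URL query parameter) or combined with $select or $orderby.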

  • Installation procedures and response file. The installation program workflow has been modified. The Hybrid Data Pipeline service has two default users, "d2cadmin" and "d2cuser". The installer now prompts you to enter a password for each default user. When generating a response file for a silent installation, the installer does not include values for these properties. Therefore, you must add the passwords manually to the response file before proceeding with a silent installation. Note also that a password policy is not enforced during the installation process; the installer only ensures that a value has been specified. The following settings are new. The settings differ depending on whether you generate the response file with a GUI or console installation. Further details are available in the Progress DataDirect Hybrid Data Pipeline Installation Guide.
New response file options:

• D2C_ADMIN_PASSWORD (GUI) / D2C_ADMIN_PASSWORD_CONSOLE (console): Specifies the password for the default administrator.
• D2C_USER_PASSWORD (GUI) / D2C_USER_PASSWORD_CONSOLE (console): Specifies the password for the default user.
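Before a silent installation, the password properties therefore have to be added to the generated response file by hand. A minimal fragment might look like the following; the GUI property names shown are taken from the settings above, so verify them against your own generated response file:

```
# Added manually before running the silent install
D2C_ADMIN_PASSWORD=<admin-password>
D2C_USER_PASSWORD=<user-password>
```

For a response file generated with a console installation, use the corresponding _CONSOLE property names instead.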


Web UI

• Product information. When you are running an evaluation version of the product, the Web UI now displays evaluation timeout information as 'xx Days Remaining'.
• Version information. The product version information now includes the license type, shown under the version information section of the Web UI. The license type is also returned when you query for version information via the Version API.

Beta support for third party JDBC drivers

• With the 4.3 release, Hybrid Data Pipeline enables users to plug third-party JDBC drivers into Hybrid Data Pipeline and access data using those drivers. This beta feature supports access via JDBC, ODBC, and OData clients with the Teradata JDBC driver. If you are interested in setting up this feature as you evaluate Hybrid Data Pipeline, please contact our sales department.

Apache Hive


• Enhanced to optimize the performance of fetches.

• Enhanced to support the Binary, Char, Date, Decimal, and Varchar data types.

• Enhanced to support HTTP mode, which allows you to access Apache Hive data sources using HTTP/HTTPS requests. HTTP mode can be configured using the new Transport Mode and HTTP Path parameters.

• Enhanced to support cookie based authentication for HTTP connections. Cookie based authentication can be configured using the new Enable Cookie Authentication and Cookie Name parameters.

• Enhanced to support Apache Knox.
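The HTTP mode and cookie authentication parameters above can be pictured as a data source definition like the one below. The key names mirror the Web UI parameter labels and the values are assumptions for the sketch, not the exact option names used by the product.

```python
# Illustrative Hive data source settings for HTTP mode with cookie based
# authentication. Keys mirror Web UI labels; values are assumed examples.
hive_options = {
    "Transport Mode": "http",            # switch from binary to HTTP/HTTPS transport
    "HTTP Path": "cliservice",           # HiveServer2 Thrift-over-HTTP endpoint path
    "Enable Cookie Authentication": True,
    "Cookie Name": "hive.server2.auth",  # assumed cookie name
}

for name, value in hive_options.items():
    print(f"{name} = {value}")
```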

• Enhanced to support Impersonation and Trusted Impersonation using the Impersonate User parameter.

• The Batch Mechanism parameter has been added. When Batch Mechanism is set to multiRowInsert, the driver executes a single insert for all the rows contained in a parameter array. MultiRowInsert is the default setting and provides substantial performance gains when performing batch inserts.

• The Catalog Mode parameter allows you to determine whether the native catalog functions are used to retrieve information returned by DatabaseMetaData functions. In the default setting, Hybrid Data Pipeline employs a balance of native functions and driver-discovered information for the optimal balance of performance and accuracy when retrieving catalog information.

• The Array Fetch Size parameter improves performance and reduces out of memory errors. Array Fetch Size can be used to increase throughput or, alternately, improve response time in Web-based applications.

• The Array Insert Size parameter provides a workaround for memory and server issues that can sometimes occur when inserting a large number of rows that contain large values.
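The multiRowInsert mechanism described above folds a parameter array into one multi-row INSERT statement. The function below is a sketch of that rewrite, not the driver's actual implementation, to show why a single round trip replaces one insert per row.

```python
def multi_row_insert(table, columns, rows):
    """Build a single multi-row INSERT for a parameter array,
    the same idea as the multiRowInsert batch mechanism."""
    placeholders = "(" + ", ".join("?" * len(columns)) + ")"
    values = ", ".join([placeholders] * len(rows))
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES {values}"
    params = [v for row in rows for v in row]  # flattened parameter array
    return sql, params

# Two batched rows collapse into one statement instead of two executions.
sql, params = multi_row_insert("emp", ["id", "name"], [(1, "a"), (2, "b")])
print(sql)
```

Because the server parses and executes one statement for the whole batch, this is where the substantial performance gain for batch inserts comes from.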

• Certifications

• Certified with Hive 2.0.x, 2.1.x

• Apache Hive data store connectivity has been certified with the following distributions:

• Cloudera (CDH) 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 5.10, 5.11, 5.12

• Hortonworks (HDP) 2.3, 2.4, 2.5

• IBM BigInsights 4.1, 4.2, 4.3

• MapR 5.2


Version and distribution support

• Hive versions 1.0 and higher are supported. Support for earlier versions has been deprecated.

• The HiveServer2 protocol and higher is supported. As a result:

• Support for the HiveServer1 protocol has been deprecated.

• The Wire Protocol Version parameter has been deprecated.

• Support has been deprecated for the following distributions:

• Amazon Elastic MapReduce (Amazon EMR) 2.1.4, 2.24-3.1.4, 3.2-3.7

• Cloudera's Distribution Including Apache Hadoop (CDH) 4.0, 4.1, 4.2, 4.5, 5.0, 5.1, 5.2, 5.3

• Hortonworks (HDP), versions 1.3, 2.0, 2.1, 2.2

• IBM BigInsights 3.0

• MapR Distribution for Apache Hadoop 1.2, 2.0

• Pivotal Enterprise HD 2.0.1, 2.1



IBM DB2

• Certified with DB2 V12 for z/OS

• Certified with dashDB (IBM Db2 Warehouse on Cloud)

Oracle Marketing Cloud (Oracle Eloqua)

Data type support. Additional data types are now supported for the Oracle Eloqua data store; refer to the product documentation for the list of supported data types.

Oracle Sales Cloud

Data type support. Additional data types are now supported for the Oracle Sales Cloud data store; refer to the product documentation for the list of supported data types.





