Release 4.6.0

An asterisk (*) indicates support that was added in a hotfix or software patch subsequent to a release. 

Resolved Issues

Issue HDP-3878 OData model creation failure

OData model creation was failing when the connectivity service was building an OData model from a very large database. Additionally, if unable to read metadata from unique or unusual tables, the creation of the OData model would result in either no rows returned or only partial rows returned. Hybrid Data Pipeline now builds the OData model from the tables selected to be in the model, as opposed to all the tables in the database.

Enhancements

ODBC driver branded installation*

The ODBC driver installation program has been enhanced to support branded installations for OEM customers. The branded driver can then be distributed with OEM customer client applications. For the Hybrid Data Pipeline ODBC driver distribution guide, visit the Progress DataDirect Product Books page on the Progress PartnerLink website (login required).

Tomcat upgrade*

The Hybrid Data Pipeline server and On-Premises Connector have been upgraded to install and use Tomcat 9.0.20.

Transactions

Hybrid Data Pipeline supports transactions against data stores that provide transaction support such as DB2, MySQL, Oracle, and SQL Server. Transactions are supported for JDBC, ODBC, and OData client applications. For JDBC and ODBC applications, transactions are handled via the TransactionMode property and Transaction Mode option, respectively. For OData client applications, Hybrid Data Pipeline supports transactions for OData Version 4 batch requests.
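
The following minimal JDBC sketch illustrates transaction demarcation against a Hybrid Data Pipeline connection. It is an illustration only: the connection URL prefix, host, port, data source name, credentials, and table names are assumptions rather than values from this documentation, and the snippet uses standard JDBC transaction calls instead of setting TransactionMode explicitly. Consult the JDBC driver documentation for the exact URL format and supported TransactionMode values.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class TransactionSketch {
        public static void main(String[] args) throws Exception {
            // Assumed URL format and data source name; substitute your own values.
            String url = "jdbc:datadirect:ddhybrid://myserver:8443;hybridDataPipelineDataSource=MyOracleSource";
            try (Connection con = DriverManager.getConnection(url, "hdpuser", "hdppassword")) {
                con.setAutoCommit(false);                      // begin an explicit transaction
                try (Statement stmt = con.createStatement()) {
                    stmt.executeUpdate("INSERT INTO orders (id, status) VALUES (101, 'NEW')");
                    stmt.executeUpdate("UPDATE inventory SET qty = qty - 1 WHERE item_id = 7");
                    con.commit();                              // both statements succeed or neither does
                } catch (Exception e) {
                    con.rollback();                            // undo partial work on failure
                    throw e;
                }
            }
        }
    }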

REST connectivity

Hybrid Data Pipeline supports SQL read-only access to JSON-based REST services through the Autonomous REST Connector. When you create a REST data source, the connector creates a relational model of the returned JSON data and translates SQL statements to REST API requests.

Web UI multitenant user management

The Web UI now supports multitenant user management functionality. System administrators can use the Web UI to isolate groups of users, such as organizations or departments, that are being hosted on Hybrid Data Pipeline. In addition, administrators can create roles and provision users using the Web UI. Depending on permissions, administrators may also use the Web UI to manage data sources, specify throttling and other limits, and set system configurations.

PostgreSQL system database

Hybrid Data Pipeline requires an internal or external system database for storing user and configuration information. PostgreSQL 11 is now supported as an external system database.

JDBC and ODBC throttling

A beta version of a new throttling limit has been introduced in the System Limits view. The XdbcMaxResponse limit can be used to set the approximate maximum size of JDBC and ODBC HTTP result data.

Java configuration

Hybrid Data Pipeline uses an embedded JRE at runtime. However, you can integrate an external JRE with a standing deployment of Hybrid Data Pipeline. The following JREs are currently supported.

  • Oracle Java 8 JRE
  • OpenJDK 8 JRE

Tomcat upgrade

The Hybrid Data Pipeline server and On-Premises Connector have been upgraded to install and use Tomcat 9.0.19.

Changed Behavior

OData throttling
The OData concurrent queries limit has been renamed from MaxConcurrentQueries to ODataMaxConcurrentQueries. This limit determines the maximum number of concurrent active OData queries per data source.
JDBC driver JVM requirements
  • The following JVM implementations are now supported.
    • Oracle Java 8 and 11
    • OpenJDK 8 and 11
  • Java SE 6 and 7 JVM implementations are no longer supported.
Windows platform support
The following Windows platforms have reached the end of their product life cycle and are no longer supported by the drivers or the On-Premises Connector.
  • Windows 8.0 (versions 8.1 and higher are still supported)
  • Windows Vista (all versions)
  • Windows XP (all versions)
  • Windows Server 2003 (all versions)
Oracle
The following enhancements and changes have been made to support Oracle connectivity.
  • The LOB Prefetch Size option has been added to the Advanced tab. LOB prefetch is supported for Oracle database versions 12.1.0.1 and higher. This option allows you to specify the size of prefetch data the driver returns for BLOBs and CLOBs. With LOB prefetch enabled, the driver can return LOB metadata and the beginning of LOB data along with the LOB locator during a fetch operation. This can significantly improve performance, especially for small LOBs that can be prefetched in their entirety, because the data is available without a separate round trip through the LOB protocol. (A sketch of a fetch that benefits from LOB prefetch appears after this list.)
  • The default value for the Data Integrity Level has been updated to accepted.
  • The default value for the Encryption Level has been updated to accepted.
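
The following JDBC sketch shows the kind of LOB fetch that benefits from prefetching. It does not set the LOB Prefetch Size option itself (that is configured on the data source's Advanced tab); the connection URL format, data source name, table, and column names are hypothetical.

    import java.sql.Clob;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class LobFetchSketch {
        public static void main(String[] args) throws Exception {
            // Assumed URL format and data source name; substitute your own values.
            String url = "jdbc:datadirect:ddhybrid://myserver:8443;hybridDataPipelineDataSource=MyOracleSource";
            try (Connection con = DriverManager.getConnection(url, "hdpuser", "hdppassword");
                 Statement stmt = con.createStatement();
                 // DOCUMENTS and BODY_TEXT are hypothetical table and column names.
                 ResultSet rs = stmt.executeQuery("SELECT doc_id, body_text FROM documents")) {
                while (rs.next()) {
                    Clob body = rs.getClob("body_text");
                    // Small CLOBs that fit within the prefetch size arrive with the row,
                    // avoiding a separate round trip through the LOB protocol.
                    String text = body.getSubString(1, (int) Math.min(body.length(), 4096));
                    System.out.println(rs.getInt("doc_id") + ": " + text.length() + " chars");
                }
            }
        }
    }
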
Salesforce
The following enhancements and changes have been made to support Salesforce connectivity.
  • The Salesforce Bulk API, including PK chunking, is now supported for bulk fetch operations. This functionality can be configured with the following parameters.
    • Enable Bulk Fetch specifies whether the Salesforce Bulk API will be used for selects based on the value of the Bulk Fetch Threshold parameter.
    • Bulk Fetch Threshold specifies a number of rows that, if exceeded, signals that the Salesforce Bulk API should be used for select operations.
    • Enable Primary Key Chunking specifies whether primary key chunking is used for select operations.
    • Primary Key Chunk Size specifies the size, in rows, of a primary key chunk when primary key chunking has been enabled.
  • The Enable Bulk Load default has been updated to ON. By default, the bulk load protocol can be used for inserts, updates, and deletes based on the Bulk Load Threshold parameter. 
  • The Map System Column Names default has been updated to 0. By default, the names of the Salesforce system columns are not changed when mapping the Salesforce data model.
  • The Custom Suffix default has been updated to include. By default, the "_c" and "_x" suffixes are included for table and column names when mapping the Salesforce data model.

Known Issues

The following are notes, known issues, or restrictions associated with Hybrid Data Pipeline.
Driver Files API

The Driver Files API cannot be used to retrieve the output REST file when the On-Premises Connector is used to connect to a REST service via an Autonomous REST Connector data source.

FIPS mode
  • The On-Premises Connector is not currently FIPS compliant. Therefore, any connections made to an on-premises data source through an On-Premises Connector will not be fully FIPS compliant.
  • When running in FIPS mode, Hybrid Data Pipeline does not support JDBC third-party connectivity to Snowflake. Hybrid Data Pipeline uses the Bouncy Castle libraries to provide FIPS 140-2 compliant cryptography, but the Snowflake JDBC driver is incompatible with Bouncy Castle FIPS. The Snowflake JDBC driver uses the default Sun Security Provider to create its own SSL context, and the Sun Security Provider is not available in BC FIPS. In addition, the BC FIPS Security Provider does not allow creating custom SSL contexts.
Performing a silent installation - Log file issue

When performing a silent installation, if the deployment script fails, no 'SilentInstallError.log' file is written. Check '<installation directory>/ddcloud/final.log' to determine the status of the installation.

The use of wildcards in SSL server certificates

The Hybrid Data Pipeline service will not by default connect to a backend data store that has been configured for SSL when a wildcard is used to identify the server name in the SSL certificate. If a server certificate contains a wildcard, the following error will be returned.

There is a problem connecting to the DataSource. SSL handshake failed:
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpaths.SunCertPathBuilderException: unable to find
valid certification path to requested target

To work around this issue, the exact string (with wildcard) in the server certificate can be specified with the Host Name in Certificate option when configuring your data source through the Hybrid Data Pipeline user interface or management API.

Load balancer port limitation

The following requirements and limitations apply to the use of non-standard ports in a cluster environment behind a load balancer.

  • For OData connections, the load balancer must supply the X-Forwarded-Port header.
  • In the Web UI, the OData tab will not display the correct port in the URL.
  • For JDBC connections, a non-standard port must be specified with the PortNumber connection property. The connection URL syntax is: //<host_name>:<port_number>;<key>=<value>;.... (See the sketch after this list.)
  • For ODBC connections, a non-standard port must be specified with the Port Number connection option.
  • If you are using the On-Premises Connector, a non-standard port must be specified with the On-Premises Connector Configuration Tool.
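
For the JDBC case above, the following sketch shows one way to supply a non-standard port through the PortNumber connection property. Only PortNumber is taken from this note; the URL prefix, data source property name, and credentials are assumptions.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class NonStandardPortSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "hdpuser");
            props.setProperty("password", "hdppassword");
            // PortNumber carries the non-standard port exposed by the load balancer.
            props.setProperty("PortNumber", "9443");

            // Assumed URL format and data source property name; substitute your own values.
            String url = "jdbc:datadirect:ddhybrid://loadbalancer.example.com;hybridDataPipelineDataSource=MySource";
            try (Connection con = DriverManager.getConnection(url, props)) {
                System.out.println("Connected: " + !con.isClosed());
            }
        }
    }
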
Web UI
  • If the entry page is blank after successfully logging in to the Web UI, refresh the page to load it properly.
  • When working with an Autonomous REST Connector data source and clicking the Generate Configuration button, the Web UI does not open the REST configuration dialog and no error message is displayed. The user should review the REST endpoints and confirm basic authentication values. To see error details, the user can execute the following operation where {id} is the ID of the data source.
    GET https://MyServer:8443/api/mgmt/datasources/{id}/export/driverfiles/outputrest
  • An OAuth profile cannot be created for a Google Analytics data source when using Microsoft Edge. To work around this issue, another supported browser such as Chrome or Firefox should be used.
  • If an error is encountered when executing a SQL statement via the SQL Editor view, the Web UI exhibits unexpected behavior. In particular, if the user proceeds to the Data Sources view and clicks the New Data Source button, the Web UI does not return the Data Stores page. To work around this issue, the user must refresh the Data Sources view before clicking on the New Data Source button.
  • When using IE 11 to access the Web UI, the domain URL must be qualified (for example, http(s)://domain-qualified-url/hdpui). Alternatively, the "Display intranet sites in compatibility view settings" in IE 11 must be turned off to use a hostname without a domain address.
  • If there are any '%' or '_' characters in the HDPMetadataExposedSchema option, they will not be treated as wildcard characters. The option value specified is considered a literal.
  • When a data source is configured with OData Version 4, the OData Schema Map version is 'odata_mapping_v3', and the map does not contain an "entityNameMode", any further editing of the OData Schema Map adds "entityNameMode":"pluralize". This affects how entity names are referred to in OData queries. To avoid this, set entityNameMode to the preferred mode whenever a data source is created or edited. Alternatively, if you want to use the default "Guess" mode, remove the "entityNameMode" property from the OData schema map JSON when saving the data source.
Management API
  • When the Limits API (throttling) is used to set a row limit and createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE) is used, a row-limit-exceeded error is returned at the row limit instead of one row beyond the limit. For example, if a row limit of 45 rows is set and a scrollable, insensitive result set exceeds that limit, the connectivity service returns the following error on the 45th row instead of the expected 46th row: The limit on the number of rows that can be returned from a query -- 45 -- has been exceeded. (See the sketch below.)
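
The following sketch shows the pattern that triggers this behavior, assuming an administrator has set a row limit of 45 through the Limits API. The connection URL format, data source name, and table name are illustrative assumptions.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class RowLimitSketch {
        public static void main(String[] args) throws Exception {
            // Assumed URL format and data source name; substitute your own values.
            String url = "jdbc:datadirect:ddhybrid://myserver:8443;hybridDataPipelineDataSource=MySource";
            try (Connection con = DriverManager.getConnection(url, "hdpuser", "hdppassword");
                 Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                                      ResultSet.CONCUR_READ_ONLY);
                 ResultSet rs = stmt.executeQuery("SELECT * FROM orders")) {   // hypothetical table
                int rows = 0;
                while (rs.next()) {
                    rows++;   // with a 45-row limit, the error surfaces on row 45, not row 46
                }
                System.out.println("Fetched " + rows + " rows");
            } catch (SQLException e) {
                System.err.println("Row limit error: " + e.getMessage());
            }
        }
    }
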
OData
  • Functions are not currently supported for $orderby.
  • OData functions are not supported with the On-Premises Connector.
  • Functions with default parameters do not work.
  • For DB2, the BOOLEAN data type does not work with functions in OData.
  • For SQL Server and DB2, the OData data types Edm.Date and Edm.TimeOfDay do not work in Power BI if the function is selected from the list of function imports and parameter values are provided. However, Power BI allows Edm.Date and Edm.TimeOfDay types for function imports when they are passed directly in the OData feed. One workaround is available for the Edm.TimeOfDay type: columns exposed as Edm.TimeOfDay should be mapped as TimeAsString in the ODataSchemaMap. In this case, Power BI works as expected.
  • In a load balancer environment, when invoking a function import (as opposed to a function) that takes a datetimeoffset parameter, the colon (:) characters in the time value must be URL encoded. For example, the following request returns an error:
    http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
    (DATEIN=1999-12-31T00:00:00Z,INTEGERIN=5)
    The correctly URL-encoded request looks like the following:
    http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
    (DATEIN=1999-12-31T00%3A00%3A00Z,INTEGERIN=5)
  • When a function import (as opposed to a function) that returns null is invoked using Power BI, a data format error is returned. The resolution to this issue is being discussed internally as well as with Microsoft.
  • OData 4.0 support for $expand does not work with the following data stores: Salesforce, Dynamics CRM, SugarCRM, Rollbase, Google Analytics, and Oracle Service Cloud.
  • $expand is supported only one level deep. Take, for example, the following entity hierarchy:
    Customers
    |-- Orders
    | |-- OrderItems
    |-- Contacts


    The following queries are supported:
    Customers?$expand=Orders
    Customers?$expand=Contacts
    Customers?$expand=Orders,Contacts


    However, this query is not supported:
    Customers?$expand=Orders,OrderItems

    OrderItems is a second level entity with respect to Customers. To query Orders and OrderItems, the query must be rooted at Orders. For example:
    Orders?$expand=OrderItems
    Orders(id)?$expand=OrderItems

  • When manually editing the ODataSchemaMap value, the table names and column names specified in the value are case-sensitive. The case of the table and column names must match the case of the tables and column names reported by the data source.
    Note: It is highly recommended that you use the OData Schema Editor to generate the value for the ODataSchemaMap data source option. The Schema Editor takes care of table and column name casing and other syntactic details.
  • The $expand clause is not supported with OpenEdge data sources when filtering for more than a single table.
  • The day, endswith, and cast functions are not working when specified in a $filter clause when querying a DB2 data source.
On-Premises Connector
  • The On-Premises Connector is not currently FIPS compliant. Therefore, any connections made to an on-premises data source through an On-Premises Connector will not be fully FIPS compliant.
  • External authentication services are not currently supported when connecting to data sources using the On-Premises Connector.
  • If User Account Control is enabled on your Windows machine and you installed the On-Premises Connector in a system folder (such as Windows or Program Files), you must run the On-Premises Connector Configuration Tool in administrator mode.
  • When using Kerberos with Microsoft Dynamics, the JRE installed with the On-Premises Connector must be configured to run with Kerberos. Take the following steps to configure the JRE.
    • Download a zip file containing a new version of the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for JDK/JRE 8 at http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html.
    • Unzip the file into the \jre\lib\security directory to update the Java security policy files to support 256-bit encryption:
      • C:\<installdir>\jre\lib\security\local_policy.jar
      • C:\<installdir>\jre\lib\security\US_export_policy.jar
  • Uninstalling and re-installing the On-Premises Connector causes the Connector ID of the On-Premises Connector to change. Any Hybrid Data Pipeline data sources using the old Connector ID must be updated to use the new Connector ID. Installing to a new directory allows both the old and new On-Premises Connector to exist side-by-side. However, you must update the Connector ID option in previously-defined Hybrid Data Pipeline data sources to point to the new On-Premises Connector. In addition, you must update Connector Id wherever it was used, such as the definitions of Group Connectors and Authorized Users. Note that upgrading an existing installation of the On-Premises Connector maintains the Connector ID.
  • When upgrading the On-Premises Connector, if the specified user installation directory contains a hyphen “-”, the upgrade will fail. To work around this issue, avoid using hyphen “-” in the user installation directory name. If your existing On-Premises Connector installation directory name contains a hyphen, you must uninstall the existing On-Premises Connector and then perform a new install rather than attempting to upgrade the existing On-Premises Connector installation.
JDBC driver
  • If you attempt to install the JDBC driver in GUI mode to the default installation directory but do not have appropriate permissions for the default directory, the installer indicates that the installation has succeeded when in fact the driver has not been installed. When attempting an installation under the same circumstance but in console mode, the proper error message is displayed.
  • The JDBC 32-bit silent installation fails on Windows 10. Use the standard installation instead.
  • Console mode installation is supported only on UNIX and Linux.
  • The default value for the Service connection property does not connect to the Hybrid Data Pipeline server. To connect, set Service to the Hybrid Data Pipeline server in your connection URL.
  • Using JNDI data sources, encryptionMethod must be configured through setExtendedOptions.

  • MS Dynamics CRM
    • Executing certain queries against MS Dynamics CRM may result in a "Communication failure. Protocol error."
  • OpenEdge
    • The values returned for isWritable and isReadOnly result set metadata attributes are not correct when connected to an OpenEdge data source.
    • Fetching or describing stored procedure parameter metadata from a callable statement does not work. You can obtain stored procedure parameter metadata using DatabaseMetaData.getProcedureColumns (see the sketch after this list).
    • The maximum precision reported by the driver for the LONGVARCHAR data type is incorrect. The correct maximum precision is 1073741824.
  • Salesforce.com, Force.com, and Database.com
    • The SELECT...INTO statement is not supported for remote tables.
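
For the OpenEdge callable statement issue noted above, the following sketch shows the DatabaseMetaData.getProcedureColumns workaround. The connection URL format, data source name, and procedure name are hypothetical.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class ProcedureMetadataSketch {
        public static void main(String[] args) throws Exception {
            // Assumed URL format and data source name; substitute your own values.
            String url = "jdbc:datadirect:ddhybrid://myserver:8443;hybridDataPipelineDataSource=MyOpenEdgeSource";
            try (Connection con = DriverManager.getConnection(url, "hdpuser", "hdppassword")) {
                DatabaseMetaData md = con.getMetaData();
                // Query parameter metadata directly instead of describing a CallableStatement.
                // "ORDER_TOTALS" is a hypothetical stored procedure name.
                try (ResultSet rs = md.getProcedureColumns(null, null, "ORDER_TOTALS", "%")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("COLUMN_NAME") + " "
                                + rs.getString("TYPE_NAME") + " (mode " + rs.getShort("COLUMN_TYPE") + ")");
                    }
                }
            }
        }
    }
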
ODBC driver
  • When you first install a driver, you are given the option to install a default data source for that driver. We recommend that you install default data sources when you first install the drivers. If you do not install the default data source at this time, you will be unable to install a default data source for this driver later. To install a default data source for a driver after the initial installation, you must uninstall the driver and then reinstall it.
  • Console mode installation is supported only on UNIX.
  • The default ODBC.INI generated by the installer is missing required entries for Service=, PortNumber=, and HybridDataPipelineDataSource=.
  • Salesforce.com, Force.com, and Database.com
    • SQLGetDescField(SQL_DESC_NAME) and SQLColAttributes(SQL_DESC_NAME) do not return a column alias. The column name is always returned.
    • Several SQLGetInfo calls for maximum length and maximum count values return unknown instead of the actual maximum length or count.
    • The TEXTAREA data type takes a maximum length create parameter, which is not reported in the TypeInfo.
    • Binary data (SQL_C_BINARY) inserted into character columns (SQL_CHAR, SQL_VARCHAR, SQL_LONGVARCHAR) is not inserted correctly.
  • For SQLColAttribute, the column attributes 1001 and 1002, which were assigned as DataDirect-specific attributes, were inadvertently used as system attributes by the Microsoft 3.0 ODBC implementation. Applications using those attributes must now use 1901 and 1902, respectively.
  • Because of inconsistencies in the ODBC specification, users attempting to use SQL_C_NUMERIC parameters must set the precision and scale values of the corresponding structure and the descriptor fields in the Application Parameter Descriptor.
  • One of the most common connectivity issues encountered while using IIS (Microsoft Internet Information Server) concerns the use and settings of the account permissions. If you encounter problems using Hybrid Data Pipeline drivers with an IIS server, refer to the following KnowledgeBase article: https://knowledgebase.progress.com/articles/Article/4274
All data stores
  • It is recommended that Login Timeout not be disabled (set to 0) for a data source.
  • Using setByte to set parameter values fails when the data store does not support the TINYINT SQL type. Use setShort or setInt to set the parameter value instead of setByte (see the sketch after this list).
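
The following sketch shows the setShort workaround for the TINYINT limitation noted above. The connection URL format, data source name, table, and column names are hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class TinyIntWorkaroundSketch {
        public static void main(String[] args) throws Exception {
            // Assumed URL format and data source name; substitute your own values.
            String url = "jdbc:datadirect:ddhybrid://myserver:8443;hybridDataPipelineDataSource=MySource";
            byte statusCode = 3;   // value originally held as a byte

            try (Connection con = DriverManager.getConnection(url, "hdpuser", "hdppassword");
                 PreparedStatement ps = con.prepareStatement(
                         "UPDATE orders SET status_code = ? WHERE id = ?")) {  // hypothetical table
                // setByte can fail when the backend has no TINYINT type;
                // widen the value and bind it with setShort (or setInt) instead.
                ps.setShort(1, statusCode);
                ps.setInt(2, 101);
                ps.executeUpdate();
            }
        }
    }
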
Autonomous REST Connector
  • When working with an Autonomous REST Connector data source and clicking the Generate Configuration button, the Web UI does not open the REST configuration dialog and no error message is displayed. The user should review the REST endpoints and confirm basic authentication values. To see error details, the user can execute the following operation where {id} is the ID of the data source.
    GET https://MyServer:8443/api/mgmt/datasources/{id}/export/driverfiles/outputrest

Google Analytics

  • An OAuth profile cannot be created for a Google Analytics data source when using Microsoft Edge. To work around this issue, another supported browser such as Chrome or Firefox should be used.
  • A validation message is not displayed when a user enters a Start Date value less than the End Date value on the Create/Update Google Analytics page.
  • Once a Google Analytics OAuth profile is created for a specific Google account, changing the Google Account associated with the profile results in "the configuration options used to open the database do not match the options used to create the database" error being returned for any existing data sources.
Microsoft Dynamics CRM
  • Executing certain queries against MS Dynamics CRM with the JDBC driver may result in a “Communication failure. Protocol error."
  • Testing has shown the following two errors from Microsoft Dynamics CRM Online when executing queries against the ImportData and TeamTemplate tables:
    • Attribute errortype on Entity ImportData is of type picklist but has Child Attributes Count 0
    • Attribute issystem on Entity TeamTemplate is of type bit but has Child Attributes Count 0
    Note: We have filed a case with Microsoft and are waiting to hear back about the cause of the issue.

  • The initial on-premises connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
OpenEdge 10.2b
  • Setting the MaxPooledStatements data source option in an OpenEdge data store to a value other than zero can cause statement not prepared errors to be returned in some situations.
Oracle Marketing Cloud (Oracle Eloqua)
  • Data store issues
    • There are known issues with Batch Operations.
    • The Update/Delete implementation can update only one record at a time. Because of this, the number of API calls executed depends on the number of records that are updated or deleted by the query plus the number of API calls required to fetch the IDs of those records.
    • Lengths of certain text fields are reported as higher than the actual lengths supported in Oracle Eloqua.
  • We are currently working with Oracle to resolve the following issues with the Oracle Eloqua REST API.
    • AND operators that involve different columns are optimized. In other cases, the queries are only partially optimized.
    • OR operators on the same column are optimized. In other cases, the queries are completely post-processed.
    • The data store is not able to explicitly insert or update a NULL value in any field.
    • The data store is unable to update some fields; they are always reported as NULL after an update.
    • Oracle Eloqua uses a double colon (::) as an internal delimiter for multivalued Select fields. Hence, when a value containing a semicolon character (;) is inserted or updated in a multivalued Select field, the semicolon character is converted into the double colon delimiter.
    • The query SELECT count(*) FROM template returns incorrect results.
    • Oracle Eloqua APIs do not populate the correct values in the CreatedBy and UpdatedBy fields. Instead of user names, they contain a timestamp value.
    • Only equality filters on ID fields are optimized. Other filter conditions do not work correctly with the Oracle Eloqua APIs, so the data store post-processes such filters.
    • Filters on non-ID integer fields and Boolean fields do not work correctly. Therefore, the driver must post-process these queries.
    • The data store does not distinguish between NULL and an empty string. Therefore, null fields are often reported back as empty strings.
    • Values with special characters such as curly braces ({, }), backslash (\), colon (:), slash star (/*), and star slash (*/) are not supported in WHERE clause filter values.
Oracle Sales Cloud
  • Currently, passing filter conditions to Oracle Sales Cloud works only for simple, single column conditions. If there are multiple filters with 'AND' and 'OR', only partial or no filters are passed to Oracle Sales Cloud.
  • Oracle Sales Cloud reports the data type of String and Date fields as String. Therefore, when such fields are filtered or ordered in Hybrid Data Pipeline, they are treated as String values. However, when filter conditions are passed to Oracle Sales Cloud, Oracle Sales Cloud can distinguish between the actual data types and apply Date specific comparisons to Date fields. Therefore, query results can differ depending on whether filters have been passed down to Oracle Sales Cloud or processed by Hybrid Data Pipeline.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning the >=, <=, and != comparison operators when querying String fields. Therefore, Hybrid Data Pipeline has not been optimized to pass these comparison operators to Oracle Sales Cloud. We are working with Oracle on this issue.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning queries with filter operations on Boolean fields. Therefore, Hybrid Data Pipeline has not been optimized to pass filter operations on Boolean fields to Oracle Sales Cloud. We are working with Oracle on this issue.
  • The drivers currently report ATTACHMENT type fields in the metadata but do not support retrieving data for these fields. These fields are set to NULL.
  • Join queries between parent and child tables are not supported.
  • Queries on child tables whose parent has a composite primary key are not supported. For example, the children of ACTIVITIES_ACTIVITYCONTACT and LEADS_PRODUCTS are not accessible.
  • Queries on the children of relationship objects are not supported. For example, the children of ACCOUNTS_RELATIONSHIP, CONTACTS_RELATIONSHIP, and HOUSEHOLDS_RELATIONSHIP are not accessible.
  • Queries on grandchildren with multiple sets of Parent IDs and Grand Parent IDs used in an OR clause are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS_ADDRESSPURPOSE
    Where (ACCOUNTS_PARTYNUMBER = 'OSC_12343' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2XZKGH')
    OR (ACCOUNTS_PARTYNUMBER = 'OSC_12344' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2YZKGH')
  • When querying documented objects like "CATALOGPRODUCTITEMS" and "CATEGORYPRODUCTITEMS", no more than 500 records are returned, even when more records may be present. This behavior is also seen with some custom objects. We are currently working with Oracle support to resolve this issue.
  • A query on OPPORTUNITIES_CHILDREVENUE_PRODUCTS or LEADS_PRODUCTGROUPS with a filter on the primary key column returns 0 records even when more records are present. We are currently working with Oracle support to resolve this issue.
  • Queries that contain subqueries returning more than 100 records are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS
    Where ACCOUNTS_PARTYNUMBER
    In (Select Top 101 PARTYNUMBER From ACCOUNTS)
  • When you create custom objects, your Oracle Sales Cloud administrator must enable these objects for REST API access through Application Composer. Otherwise, you will not be able to query against these custom objects.
Oracle Service Cloud
  • When you create a custom object, your Oracle Service Cloud administrator must enable all four columns of the Object Fields tab of the Object Designer, or you cannot query against the custom objects.
  • The initial connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
Salesforce
  • If you have existing Salesforce data sources and are upgrading from an earlier version of Hybrid Data Pipeline to version 4.6, then you must recreate the relational map of each Salesforce data source.

    To recreate the relational map using the Web UI, select the Salesforce data source from the list of data sources in the Manage Data Sources view. Then, under the Mapping tab, select Force New from the Create Mapping dropdown, and click the Update button to save the change. Next, click the Test button to test the connection. Once you have confirmed the connection, the Create Mapping option should be changed back to Not Exist.

    To recreate the relational map using the Data Sources API, execute the following operation and payload where {datasourceId} is the ID of the data source.
    POST https://MyServer:8443/api/mgmt/datasources/{datasourceId}/map
    {
    "map": "recreate"
    }
SugarCRM
  • Data sources that are using the deprecated enableExportMode option will still see a problem until they are migrated to the new data source configuration.
  • Data source connections by default now use Export Mode to communicate with the SugarCRM server, providing increased performance when querying large sets of data. Bulk export mode causes NULL values for currency columns to be returned as the value 0. Because of this, there is no way to differentiate between a NULL value and 0 when operating in export mode. This can be a problem when using currency columns in SQL statements, because Hybrid Data Pipeline must satisfy filter conditions on queries, such as operations like =, <>, >, >=, <, <=, IS NULL, and IS NOT NULL. For example, suppose a currency column in a table in SugarCRM has 3 NULL values and 5 values that are 0. When a query is executed to return all NULL values (SELECT * FROM <table> WHERE <currency column> IS NULL), then 3 rows are returned. However, if a query is executed to return all rows where the column satisfies an arithmetic condition (SELECT * FROM <table> WHERE <currency column> + 1 = 1), then all 8 records are returned because the 3 NULL values are treated as 0.

Release 4.5.0

Progress DataDirect Hybrid Data Pipeline is a data access server that provides simple, secure access to cloud and on-premises data sources, such as RDBMS, Big Data, and NoSQL. Hybrid Data Pipeline allows business intelligence tools and applications to use ODBC, JDBC, or OData to access data from supported data sources. Hybrid Data Pipeline can be installed in the cloud or behind a firewall. Hybrid Data Pipeline can then be configured to work with applications and data sources in nearly any business environment. Progress DataDirect Hybrid Data Pipeline consists of four primary, separately installed components.

  • The Hybrid Data Pipeline server provides access to multiple data sources through a single, unified interface. The server can be hosted on premises or in the cloud.

  • The On-Premises Connector enables the Hybrid Data Pipeline to establish a secure connection from the cloud to an on-premises data source.

  • The ODBC driver enables ODBC applications to communicate to a data source through the Hybrid Data Pipeline server.

  • The JDBC driver enables JDBC applications to communicate to a data source through the Hybrid Data Pipeline server.

Resolved Issues

The following issues have been resolved. An asterisk (*) indicates an issue that was resolved in a software patch subsequent to the GA release.

Issue HDP-3974 Installation fails when choosing a unicode external database*
When a unicode external database was selected during the installation process, the Hybrid Data Pipeline server failed to install. This fix is available in build 4.5.0.71.

Issue HDP-3989 Validate Server Certificate persistence in the Web UI*
The Web UI was not persisting the value of the Validate Server Certificate parameter after it had been set to OFF. As a result, after exiting the data source and returning to it, the test connection failed. This fix is available in build 4.5.0.65.

Issue HDP-3785 Data source password replacing plus sign (+) with space*
When creating a password for a MySQL CE data source in the Web UI, the plus sign (+) was incorrectly being replaced with a space. This fix is available in build 4.5.0.65.

Issue HDP-3878 OData model creation failure*
OData model creation was failing when the connectivity service was building an OData model from a very large database. Additionally, if unable to read metadata from unique or unusual tables, the creation of the OData model would result in either no rows returned or only partial rows returned. Hybrid Data Pipeline now builds the OData model from the tables selected to be in the model, as opposed to all the tables in the database. This enhancement is available in build 4.5.0.61.

Enhancements

Multitenancy

Hybrid Data Pipeline now supports multitenancy. Multitenancy allows a system administrator to isolate groups of users, such as organizations or departments, that are being hosted through the Hybrid Data Pipeline service. The provider maintains a physical instance of Hybrid Data Pipeline, while each tenant (group of users) is provided with its own logical instance of the service. In a multitenant environment, the default system tenant contains multiple child tenants. The user accounts that reside in one tenant are isolated from those in other tenants.

Data source sharing

Hybrid Data Pipeline now supports data source sharing via the Data Sources API. Data source owners can now share data sources with other users. Standard users can share data sources with individual user accounts. Administrators can share data sources with tenants and individual user accounts. Data source sharing allows administrators to provision users for limited or query-only access to Hybrid Data Pipeline resources.

Third-party JDBC support and validation tool

Hybrid Data Pipeline support for third-party JDBC drivers is now GA. Administrators can use a command line validation tool to determine whether a third-party JDBC driver will work with the Hybrid Data Pipeline server and On-Premises Connector. If validated, a third-party driver can be used to support OData, JDBC, and ODBC connectivity in the Hybrid Data Pipeline environment. Once the driver is integrated with the Hybrid Data Pipeline environment, users can create Hybrid Data Pipeline data sources for the backend data store supported by the third-party JDBC driver.

IP address whitelists

Administrators can now restrict access to Hybrid Data Pipeline by creating an IP address whitelist to determine which IP addresses (either individual IP addresses or a range of IP addresses) can access resources such as the Data Sources API, the Users API, and the Web UI. IP address whitelists can be implemented at system, tenant, and user levels.

Web UI
  • The Web UI has been refreshed with a modern look and feel to provide an improved user experience. As part of the refresh, the Web UI URL has been changed to http(s)://<servername>:<portnumber>/hdpui.
  • The OData Configure Schema editor has been enhanced and now provides a better way to configure an OData schema map.
  • The process for creating Google Analytics data sources has also been improved. Refer to the Hybrid Data Pipeline help system for further information. 
SQL Server data store

Hybrid Data Pipeline now supports the following features.

  • Transparent connectivity to Microsoft Azure SQL Data Warehouse and Microsoft Analytics Platform System data sources
  • Always On Availability Groups via the Multi-Subnet Failover, Application Intent, and Server Name options
  • Azure Active Directory authentication (Azure AD authentication) via the Authentication Method, User, Password, Server Name, and Port Number options

Exporting non-relational data source files
The Data Source API now supports operations to export the relational map files for non-relational data sources. When a data source is created for a web service such as Salesforce, Hybrid Data Pipeline generates files to map the object model to a relational model. These files may be used to resolve issues that can arise when performing queries against data sources such as these.
Evaluation period
The evaluation period for Hybrid Data Pipeline has been changed from 90 to 30 days. 

 

Known Issues

Sharing group data sources

Sharing a group data source requires that the member data sources of the group also be shared.

Configuring logging from the Web UI

Data source logging cannot be configured through the Web UI in Hybrid Data Pipeline 4.5. Nevertheless, data source logging can still be configured using the Logging API.

FIPS compliance with the On-Premises Connector

The On-Premises Connector is not currently FIPS compliant. Therefore, any connections made to an on-premises data source through an On-Premises Connector will not be fully FIPS compliant.

Performing a silent installation - Log file issue

When performing a silent installation, if the deployment script fails, no 'SilentInstallError.log' file is written. Check '<installation directory>/ddcloud/final.log' to determine the status of the installation.

The use of wildcards in SSL server certificates

The Hybrid Data Pipeline service will not by default connect to a backend data store that has been configured for SSL when a wildcard is used to identify the server name in the SSL certificate. If a server certificate contains a wildcard, the following error will be returned.

There is a problem connecting to the DataSource. SSL handshake failed:
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpaths.SunCertPathBuilderException: unable to find
valid certification path to requested target

To work around this issue, the exact string (with wildcard) in the server certificate can be specified with the Host Name in Certificate option when configuring your data source through the Hybrid Data Pipeline user interface or management API.

Load balancer port limitation

The following requirements and limitations apply to the use of non-standard ports in a cluster environment behind a load balancer.

  • For OData connections, the load balancer must supply the X-Forwarded-Port header.
  • In the Web UI, the OData tab will not display the correct port in the URL.
  • For JDBC connections, a non-standard port must be specified with the PortNumber connection property. The connection URL syntax is: //<host_name>:<port_number>;<key>=<value>;....
  • For ODBC connections, a non-standard port must be specified with the Port Number connection option.
  • If you are using the On-Premises Connector, a non-standard port must be specified with the On-Premises Connector Configuration Tool.
Web UI
  • When using IE 11 to access the Web UI, the domain URL must be qualified (for example, http(s)://domain-qualified-url/hdpui). Alternatively, the "Display intranet sites in compatibility view settings" in IE 11 must be turned off to use a hostname without a domain address.
  • If there are any '%' or '_' characters in the HDPMetadataExposedSchema option, they will not be treated as wildcard characters. The option value specified is considered a literal.
  • When a data source is configured with OData Version 4, the OData Schema Map version is 'odata_mapping_v3', and the map does not contain an "entityNameMode", any further editing of the OData Schema Map adds "entityNameMode":"pluralize". This affects how entity names are referred to in OData queries. To avoid this, set entityNameMode to the preferred mode whenever a data source is created or edited. Alternatively, if you want to use the default "Guess" mode, remove the "entityNameMode" property from the OData schema map JSON when saving the data source.
Management API
  • When the Limits API (throttling) is used to set a row limit and createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE) is used, a row-limit-exceeded error is returned at the row limit instead of one row beyond the limit. For example, if a row limit of 45 rows is set and a scrollable, insensitive result set exceeds that limit, the connectivity service returns the following error on the 45th row instead of the expected 46th row: The limit on the number of rows that can be returned from a query -- 45 -- has been exceeded.
OData
  • Functions are not currently supported for $orderby.
  • OData functions are not supported with the On-Premises Connector.
  • Functions with default parameters do not work.
  • For DB2, the BOOLEAN data type does not work with functions in OData.
  • For SQL Server and DB2, the OData data types Edm.Date and Edm.TimeOfDay do not work in Power BI if the function is selected from the list of function imports and parameter values are provided. However, Power BI allows Edm.Date and Edm.TimeOfDay types for function imports when they are passed directly in the OData feed. One workaround is available for the Edm.TimeOfDay type: columns exposed as Edm.TimeOfDay should be mapped as TimeAsString in the ODataSchemaMap. In this case, Power BI works as expected.
  • In a load balancer environment, when invoking a function import (as opposed to a function) that takes a datetimeoffset parameter, the colon (:) characters in the time value must be URL encoded. For example, the following request returns an error:
    http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
    (DATEIN=1999-12-31T00:00:00Z,INTEGERIN=5)
    The correctly URL-encoded request looks like the following:
    http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
    (DATEIN=1999-12-31T00%3A00%3A00Z,INTEGERIN=5)
  • When a function import (as opposed to a function) that returns null is invoked using Power BI, a data format error is returned. The resolution to this issue is being discussed internally as well as with Microsoft.
  • OData 4.0 support for $expand does not work with the following data stores: Salesforce, Dynamics CRM, SugarCRM, Rollbase, Google Analytics, and Oracle Service Cloud.
  • $expand is supported only one level deep. Take, for example, the following entity hierarchy:
    Customers
    |-- Orders
    | |-- OrderItems
    |-- Contacts


    The following queries are supported:
    Customers?$expand=Orders
    Customers?$expand=Contacts
    Customers?$expand=Orders,Contacts


    However, this query is not supported:
    Customers?$expand=Orders,OrderItems

    OrderItems is a second level entity with respect to Customers. To query Orders and OrderItems, the query must be rooted at Orders. For example:
    Orders?$expand=OrderItems
    Orders(id)?$expand=OrderItems


  • The Hybrid Data Pipeline OData model asynchronous API incorrectly returns zero instead of the actual percent complete when querying the status of a model that is being generated.
  • When manually editing the ODataSchemaMap value, the table names and column names specified in the value are case-sensitive. The case of the table and column names must match the case of the tables and column names reported by the data source.
    Note: It is highly recommended that you use the OData Schema Editor to generate the value for the ODataSchemaMap data source option. The Schema Editor takes care of table and column name casing and other syntactic details.
  • The $expand clause is not supported with OpenEdge data sources when filtering for more than a single table.
  • The day, endswith, and cast functions are not working when specified in a $filter clause when querying a DB2 data source.
On-Premises Connector
  • The On-Premises Connector is not currently FIPS compliant. Therefore, any connections made to an on-premises data source through an On-Premises Connector will not be fully FIPS compliant.
  • External authentication services are not currently supported when connecting to data sources using the On-Premises Connector.
  • If User Account Control is enabled on your Windows machine and you installed the On-Premises Connector in a system folder (such as Windows or Program Files), you must run the On-Premises Connector Configuration Tool in administrator mode.
  • When using Kerberos with Microsoft Dynamics, the JRE installed with the On-Premises Connector must be configured to run with Kerberos. Take the following steps to configure the JRE.
    • Download a zip file containing a new version of the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for JDK/JRE 8 at http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html.
    • Unzip the file into the \jre\lib\security directory to update the Java security policy files to support 256-bit encryption:
      • C:\<installdir>\jre\lib\security\local_policy.jar
      • C:\<installdir>\jre\lib\security\US_export_policy.jar
  • Uninstalling and re-installing the On-Premises Connector causes the Connector ID of the On-Premises Connector to change. Any Hybrid Data Pipeline data sources using the old Connector ID must be updated to use the new Connector ID. Installing to a new directory allows both the old and new On-Premises Connector to exist side-by-side. However, you must update the Connector ID option in previously-defined Hybrid Data Pipeline data sources to point to the new On-Premises Connector. In addition, you must update Connector Id wherever it was used, such as the definitions of Group Connectors and Authorized Users. Note that upgrading an existing installation of the On-Premises Connector maintains the Connector ID.
  • When upgrading the On-Premises Connector, if the specified user installation directory contains a hyphen “-”, the upgrade will fail. To work around this issue, avoid using hyphen “-” in the user installation directory name. If your existing On-Premises Connector installation directory name contains a hyphen, you must uninstall the existing On-Premises Connector and then perform a new install rather than attempting to upgrade the existing On-Premises Connector installation.
JDBC driver
  • If you attempt to install the JDBC driver in GUI mode to the default installation directory but do not have appropriate permissions for the default directory, the installer indicates that the installation has succeeded when in fact the driver has not been installed. When attempting an installation under the same circumstance but in console mode, the proper error message is displayed.
  • The default value for the Service connection property does not connect to the Hybrid Data Pipeline server. To connect, set Service to the Hybrid Data Pipeline server in your connection URL.
  • The JDBC 32-bit silent installation fails on Windows 10. Use the standard installation instead.
  • Console mode installation is supported only on UNIX and Linux.
  • Executing certain queries against MS Dynamics CRM may result in a "Communication failure. Protocol error."
  • Using JNDI data sources, encryptionMethod must be configured through setExtendedOptions.
  • See the d2cjdbcreadme.txt file installed with the JDBC driver for more information.
ODBC driver
  • The default ODBC.INI generated by the installer is missing required entries for Service=, PortNumber=, and HybridDataPipelineDataSource=.
  • Console mode installation is supported only on UNIX.
  • When you first install a driver, you are given the option to install a default data source for that driver. We recommend that you install default data sources when you first install the drivers. If you do not install the default data source at this time, you will be unable to install a default data source for this driver later. To install a default data source for a driver after the initial installation, you must uninstall the driver and then reinstall it.
  • See the d2codbcreadme.txt file installed with the ODBC driver for more information.
All data stores
  • It is recommended that Login Timeout not be disabled (set to 0) for a data source.
  • Using setByte to set parameter values fails when the data store does not support the TINYINT SQL type. Use setShort or setInt to set the parameter value instead of setByte.
Google Analytics
  • A validation message is not displayed when a user enters a Start Date value less than the End Date value on the Create/Update Google Analytics page.
  • Once a Google Analytics OAuth profile is created for a specific Google account, changing the Google Account associated with the profile results in "the configuration options used to open the database do not match the options used to create the database" error being returned for any existing data sources.
Microsoft Dynamics CRM
  • Executing certain queries against MS Dynamics CRM with the JDBC driver may result in a “Communication failure. Protocol error."
  • Testing has shown the following two errors from Microsoft Dynamics CRM Online when executing queries against the ImportData and TeamTemplate tables:
    • Attribute errortype on Entity ImportData is of type picklist but has Child Attributes Count 0
    • Attribute issystem on Entity TeamTemplate is of type bit but has Child Attributes Count 0
    Note: We have filed a case with Microsoft and are waiting to hear back about the cause of the issue.

  • The initial on-premises connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
OpenEdge 10.2b
  • Setting the MaxPooledStatements data source option in an OpenEdge data store to a value other than zero can cause statement not prepared errors to be returned in some situations.
Oracle Marketing Cloud (Oracle Eloqua)
  • Data store issues
    • There are known issues with Batch Operations.
    • The Update/Delete implementation can update only one record at a time. Because of this, the number of API calls executed depends on the number of records that are updated or deleted by the query plus the number of API calls required to fetch the IDs of those records.
    • Lengths of certain text fields are reported as higher than the actual lengths supported in Oracle Eloqua.
  • We are currently working with Oracle to resolve the following issues with the Oracle Eloqua REST API.
    • AND operators that involve different columns are optimized. In other cases, the queries are only partially optimized.
    • OR operators on the same column are optimized. In other cases, the queries are completely post-processed.
    • The data store is not able to explicitly insert or update a NULL value in any field.
    • The data store is unable to update some fields; they are always reported as NULL after an update.
    • Oracle Eloqua uses a double colon (::) as an internal delimiter for multivalued Select fields. Hence, when a value containing a semicolon character (;) is inserted or updated in a multivalued Select field, the semicolon character is converted into the double colon delimiter.
    • The query SELECT count(*) FROM template returns incorrect results.
    • Oracle Eloqua APIs do not populate the correct values in the CreatedBy and UpdatedBy fields. Instead of user names, they contain a timestamp value.
    • Only equality filters on ID fields are optimized. Other filter conditions do not work correctly with the Oracle Eloqua APIs, so the data store post-processes such filters.
    • Filters on non-ID integer fields and Boolean fields do not work correctly. Therefore, the driver must post-process these queries.
    • The data store does not distinguish between NULL and an empty string. Therefore, null fields are often reported back as empty strings.
    • Values with special characters such as curly braces ({, }), backslash (\), colon (:), slash star (/*), and star slash (*/) are not supported in WHERE clause filter values.
Oracle Sales Cloud
  • Currently, passing filter conditions to Oracle Sales Cloud works only for simple, single column conditions. If there are multiple filters with 'AND' and 'OR', only partial or no filters are passed to Oracle Sales Cloud.
  • Oracle Sales Cloud reports the data type of String and Date fields as String. Therefore, when such fields are filtered or ordered in Hybrid Data Pipeline, they are treated as String values. However, when filter conditions are passed to Oracle Sales Cloud, Oracle Sales Cloud can distinguish between the actual data types and apply Date specific comparisons to Date fields. Therefore, query results can differ depending on whether filters have been passed down to Oracle Sales Cloud or processed by Hybrid Data Pipeline.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning the >=, <=, and != comparison operators when querying String fields. Therefore, Hybrid Data Pipeline has not been optimized to pass these comparison operators to Oracle Sales Cloud. We are working with Oracle on this issue.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning queries with filter operations on Boolean fields. Therefore, Hybrid Data Pipeline has not been optimized to pass filter operations on Boolean fields to Oracle Sales Cloud. We are working with Oracle on this issue.
  • The drivers currently report ATTACHMENT type fields in the metadata but do not support retrieving data for these fields. These fields are set to NULL.
  • Join queries between parent and child tables are not supported.
  • Queries on child tables whose parent has a composite primary key are not supported. For example, the children of ACTIVITIES_ACTIVITYCONTACT and LEADS_PRODUCTS are not accessible.
  • Queries on the children of relationship objects are not supported. For example, the children of ACCOUNTS_RELATIONSHIP, CONTACTS_RELATIONSHIP, and HOUSEHOLDS_RELATIONSHIP are not accessible.
  • Queries on grandchildren with multiple sets of Parent IDs and Grand Parent IDs used in an OR clause are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS_ADDRESSPURPOSE
    Where (ACCOUNTS_PARTYNUMBER = 'OSC_12343' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2XZKGH')
    OR (ACCOUNTS_PARTYNUMBER = 'OSC_12344' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2YZKGH')
  • When querying documented objects like "CATALOGPRODUCTITEMS" and "CATEGORYPRODUCTITEMS", no more than 500 records are returned, even when more records may be present. This behavior is also seen with some custom objects. We are currently working with Oracle support to resolve this issue.
  • A query on OPPORTUNITIES_CHILDREVENUE_PRODUCTS or LEADS_PRODUCTGROUPS with a filter on the primary key column returns 0 records even when more records are present. We are currently working with Oracle support to resolve this issue.
  • Queries that contain subqueries returning more than 100 records are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS
    Where ACCOUNTS_PARTYNUMBER
    In (Select Top 101 PARTYNUMBER From ACCOUNTS)
  • When you create custom objects, your Oracle Sales Cloud administrator must enable these objects for REST API access through Application Composer. Otherwise, you will not be able to query against these custom objects.
Oracle Service Cloud
  • When you create a custom object, your Oracle Service Cloud administrator must enable all four columns of the Object Fields tab of the Object Designer, or you cannot query against the custom objects.
  • The initial connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
SugarCRM
  • Data sources that use the deprecated enableExportMode option will continue to see this problem until they are migrated to the new data source configuration.
  • Data source connections now use export mode by default to communicate with the SugarCRM server, which improves performance when querying large sets of data. However, export mode causes NULL values in currency columns to be returned as 0, so there is no way to differentiate between a NULL value and 0 when operating in export mode. This can be a problem when currency columns are used in SQL statements, because Hybrid Data Pipeline must satisfy filter conditions on queries, such as =, <>, >, >=, <, <=, IS NULL, and IS NOT NULL. For example, suppose a currency column in a SugarCRM table has 3 NULL values and 5 values that are 0. A query that returns all NULL values (SELECT * FROM <table> WHERE <currency column> IS NULL) returns 3 rows. However, a query that applies an arithmetic operation to the column (SELECT * FROM <table> WHERE <currency column> + 1 = 1) returns all 8 records, because the 3 NULL values are treated as 0.

Release 4.4.0

Progress DataDirect Hybrid Data Pipeline is a data access server that provides simple, secure access to cloud and on-premises data sources, such as RDBMS, Big Data, and NoSQL. Hybrid Data Pipeline allows business intelligence tools and applications to use ODBC, JDBC, or OData to access data from supported data sources. Hybrid Data Pipeline can be installed in the cloud or behind a firewall. Hybrid Data Pipeline can then be configured to work with applications and data sources in nearly any business environment. Progress DataDirect Hybrid Data Pipeline consists of four primary, separately installed components.

  • The Hybrid Data Pipeline server provides access to multiple data sources through a single, unified interface. The server can be hosted on premises or in the cloud.

  • The On-Premises Connector enables Hybrid Data Pipeline to establish a secure connection from the cloud to an on-premises data source.

  • The ODBC driver enables ODBC applications to communicate with a data source through the Hybrid Data Pipeline server.

  • The JDBC driver enables JDBC applications to communicate with a data source through the Hybrid Data Pipeline server.

Enhancements

The Hybrid Data Pipeline 4.4 release simplifies the deployment of cluster environments. Enhanced messaging removes the external dependency on a Kafka message queue, and integration with application load balancers is now supported in public cloud environments.

Integration with cloud load balancers

Hybrid Data Pipeline has added support for multi-node clusters that integrate with cloud load balancers. Hybrid Data Pipeline supports cloud load balancers that support the WebSocket protocol, such as the AWS application load balancer and the Azure application gateway.

Enhanced Messaging

Hybrid Data Pipeline now provides enhanced messaging, so deployments no longer rely on a Kafka cluster to support highly available inter-node communication.

Support for OAuth 2.0

Hybrid Data Pipeline now supports OAuth 2.0 authorization for OData API access, in addition to basic authentication. Customers using client applications or third-party applications such as Salesforce Connect and Power BI can invoke Hybrid Data Pipeline OData endpoints by passing the required tokens rather than storing user credentials in the application, as sketched below.
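
The following minimal Java sketch illustrates the pattern, assuming a hypothetical server host, OData endpoint path, and access token value; refer to the product documentation for the exact way to obtain and pass OAuth 2.0 tokens.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ODataWithOAuth {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint and token; replace with values for your deployment.
            String endpoint = "https://myserver:8443/api/odata4/MyDataSource/Customers";
            String accessToken = "<access token from your OAuth 2.0 provider>";

            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("GET");
            // The token is passed with the request instead of storing user credentials in the application.
            conn.setRequestProperty("Authorization", "Bearer " + accessToken);
            conn.setRequestProperty("Accept", "application/json");

            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }
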

Support for Installation using Docker image

You can now install a single node Hybrid Data Pipeline server for evaluation purposes using a Docker image. Docker is a tool that makes it easier to deploy and run applications. The use of a Docker image means that no prior machine setup is required. You may choose to use this method if you want to get started without spending time on installation and configuration.

Response File changes

The following properties have been removed from the response file for both console and GUI modes:

  • USING_LOAD_BALANCING_YES
  • USING_LOAD_BALANCING_NO
  • D2C_USING_KAFKA_CONFIG
  • D2C_USING_KAFKA_CONFIG_CONSOLE
  • D2C_MESSAGE_QUEUE_SERVERS
  • D2C_MESSAGE_QUEUE_SERVERS_CONSOLE
  • D2C_HDP_CLUSTER_NAME

The following properties have been added to the response file for both console and GUI modes:

  • D2C_NO_LOAD_BALANCER (GUI and console): specifies whether no load balancer is used.
  • D2C_NETWORK_LOAD_BALANCER (or D2C_NETWORK_LOAD_BALANCER_CONSOLE): specifies whether a network load balancer is used.
  • D2C_CLOUD_LOAD_BALANCER (or D2C_CLOUD_LOAD_BALANCER_CONSOLE): specifies whether a cloud load balancer is used.

Further details are available in the Progress DataDirect Hybrid Data Pipeline Installation Guide.

Web UI Enhancements

Limit GetSchema in Hybrid Data Pipeline

Users can now configure the additional property Metadata Exposed Schemas in the data source configuration to restrict the schemas they see in the SQL Editor and the OData Editor.

GUID for SQL Server

Added support for exposing GUID data type as a GUID in OData for SQL Server data source.

On-Premises Connector

The On-Premises Connector installation has been simplified such that the Cloud Access service is no longer installed. Only the single Cloud Connector service is installed.

Known Issues

FIPS compliance with the On-Premises Connector
  • The On-Premises Connector is not currently FIPS compliant. Therefore, any connections made to an on-premises data source through an On-Premises Connector will not be fully FIPS compliant.
Performing a silent installation - Log file issue
  • When performing a silent install, if the deployment script fails, no 'SilentInstallError.log' is written. You can check 'Installation directory/ddcloud/final.log' to determine the installation status.
The use of wildcards in SSL server certificates
  • The Hybrid Data Pipeline service will not by default connect to a backend data store that has been configured for SSL when a wildcard is used to identify the server name in the SSL certificate. If a server certificate contains a wildcard, the following error will be returned.
    There is a problem connecting to the DataSource. SSL handshake failed:
    sun.security.validator.ValidatorException: PKIX path building failed:
    sun.security.provider.certpaths.SunCertPathBuilderException: unable to find
    valid certification path to requested target

    To work around this issue, the exact string (with wildcard) in the server certificate can be specified with the Host Name in Certificate option when configuring your data source through the Hybrid Data Pipeline user interface or management API.

Load balancer port limitation
  • Either port 80 for non-SSL environments, or port 443 for SSL environments, must be used in the configuration of a load balancer used to support a Hybrid Data Pipeline cluster. Non-standard ports in the configuration of a load balancer are not currently supported.
Web UI
  • When a data source is configured with OData Version 4, the OData Schema Map version is 'odata_mapping_v3', and the map does not contain an "entityNameMode" setting, any further editing of the OData Schema Map adds "entityNameMode":"pluralize". This affects how entity names are referenced in OData queries. To avoid this, set the entityNameMode to the preferred mode whenever a data source is created or edited. Alternatively, if you want to use the default "Guess" mode, remove the "entityNameMode" property from the OData schema map JSON when saving the data source.
  • If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may encounter issues when trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
  • When an administrator tries to add new users using the Add Users window, the Password and Confirm Password fields occasionally do not appear properly in the popup window.
  • COPY DETAILS functionality is not currently working in Internet Explorer 11 due to a limitation with the third party plugin Clipboard.js on bootstrap modals. More details on this can be found at https://github.com/zenorocha/clipboard.js/wiki/Known-Issues.
Management API
  • When the Limits API (throttling) is used to set a row limit and createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE) is used, the row-limit-exceeded error is returned at the row limit instead of one row beyond the limit (a JDBC sketch of this pattern follows this list). For example, if a row limit of 45 rows is set and a scrollable, insensitive result set exceeds that limit, the connectivity service returns the following error on the 45th row instead of the expected 46th row: The limit on the number of rows that can be returned from a query -- 45 -- has been exceeded.
  • If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may encounter issues when trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
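
  The following minimal sketch shows the JDBC pattern the row-limit item above refers to: a scrollable, insensitive result set created through the Hybrid Data Pipeline JDBC driver. The connection URL, credentials, and table name are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ScrollInsensitiveExample {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; see the JDBC driver documentation for the exact format.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:datadirect:ddhybrid://myserver:8080;hybridDataPipelineDataSource=MyDataSource",
                    "myuser", "mypassword");
                 Statement stmt = conn.createStatement(
                         ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
                 ResultSet rs = stmt.executeQuery("SELECT * FROM MyTable")) {
                while (rs.next()) {
                    // With a row limit configured, the limit-exceeded error is currently
                    // raised on the row at the limit rather than one row past it.
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
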
OData
  • Functions are not currently supported for $orderby.
  • OData functions are not supported with the On-Premises Connector.
  • Functions with default parameters are not working.
  • For DB2, BOOLEAN Data type does not work with functions in OData.
  • For SQL Server and DB2, the OData data types Edm.Date and Edm.TimeOfDay do not work in Power BI if the function is selected from the list of function imports and parameter values are provided. However, Power BI allows the Edm.Date and Edm.TimeOfDay types for function imports when they are passed directly in the OData feed. A workaround is available for Edm.TimeOfDay: columns exposed as Edm.TimeOfDay should be mapped as "TimeAsString" in the ODataSchemaMap. In this case, Power BI works as expected.
  • In a load balancer environment, when invoking a function import (as opposed to a function) that takes a datetimeoffset parameter, the colon (:) characters in the time parameter must be URL encoded (a short encoding sketch follows this list). Consequently, the following request returns an error:
    http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
    (DATEIN=1999-12-31T00:00:00Z,INTEGERIN=5)
    The correct URL encoded example must look like the following:
    http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
    (DATEIN=1999-12-31T00%3A00%3A00Z,INTEGERIN=5)
  • When invoking function import (and not function) that returns null using Power BI, a data format error is returned. The resolution to this issue is being discussed internally as well as with Microsoft.
  • OData 4.0 support for $expand does not work with the following data stores: Salesforce, Dynamics CRM, SugarCRM, Rollbase, Google Analytics, and Oracle Service Cloud.
  • $expand only supports one level deep. Take for example the following entity hierarchy:
    Customers
    |-- Orders
    | |-- OrderItems
    |-- Contacts


    The following queries are supported:
    Customers?$expand=Orders
    Customers?$expand=Contacts
    Customers?$expand=Orders,Contacts


    However, this query is not supported:
    Customers?$expand=Orders,OrderItems

    OrderItems is a second level entity with respect to Customers. To query Orders and OrderItems, the query must be rooted at Orders. For example:
    Orders?$expand=OrderItems
    Orders(id)?$expand=OrderItems


  • The Hybrid Data Pipeline OData model asynchronous API incorrectly returns zero instead of the actual percent complete when querying the status of a model that is being generated.
  • When manually editing the ODataSchemaMap value, the table names and column names specified in the value are case-sensitive. The case of the table and column names must match the case of the tables and column names reported by the data source.
    Note: It is highly recommended that you use the OData Schema Editor to generate the value for the ODataSchemaMap data source option. The Schema Editor takes care of table and column name casing and other syntactic details.
  • The $expand clause is not supported with OpenEdge data sources when filtering for more than a single table.
  • The day, endswith, and cast functions are not working when specified in a $filter clause when querying a DB2 data source.
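
  Regarding the datetimeoffset function import item above, the following minimal Java sketch shows one way to produce the encoded parameter value; the host and function names are taken from the example above. URLEncoder is sufficient here because the value contains no spaces, so the colon characters simply become %3A.

    import java.net.URLEncoder;

    public class EncodeDateParam {
        public static void main(String[] args) throws Exception {
            String dateIn = URLEncoder.encode("1999-12-31T00:00:00Z", "UTF-8");
            String url = "http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/"
                    + "ODATA_FUNC_GTABLE_DATE(DATEIN=" + dateIn + ",INTEGERIN=5)";
            // Prints the correctly encoded request shown above (colons encoded as %3A).
            System.out.println(url);
        }
    }
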
On-Premises Connector
  • FIPS compliance with the On-Premises Connector: The On-Premises Connector is not currently FIPS compliant. Therefore, any connections made to an on-premises data source through an On-Premises Connector will not be fully FIPS compliant.
  • External authentication with the On-Premises Connector: External authentication services are not currently supported when connecting to data sources using the On-Premises Connector.
  • If User Account Control is enabled on your Windows machine and you installed the On-Premises Connector in a system folder (such as Windows or Program Files), you must run the On-Premises Connector Configuration Tool in administrator mode.
  • When using Kerberos with Microsoft Dynamics, the JRE installed with the On-Premises Connector must be configured to run with Kerberos. Take the following steps to configure the JRE.
    • Download a zip file containing new version of the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for JDK/JRE 8 at http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html.
    • Unzip the file into the \jre\lib\security directory to update the Java security policy files to support 256-bit encryption:
      • C:\<installdir>\jre\lib\security\local_policy.jar
      • C:\<installdir>\jre\lib\security\US_export_policy.jar
  • Uninstalling and re-installing the On-Premises Connector causes the Connector ID of the On-Premises Connector to change. Any Hybrid Data Pipeline data sources using the old Connector ID must be updated to use the new Connector ID. Installing to a new directory allows both the old and new On-Premises Connector to exist side-by-side. However, you must update the Connector ID option in previously-defined Hybrid Data Pipeline data sources to point to the new On-Premises Connector. In addition, you must update Connector Id wherever it was used, such as the definitions of Group Connectors and Authorized Users. Note that upgrading an existing installation of the On-Premises Connector maintains the Connector ID.
  • When upgrading the On-Premises Connector, if the specified user installation directory contains a hyphen “-”, the upgrade will fail. To work around this issue, avoid using hyphen “-” in the user installation directory name. If your existing On-Premises Connector installation directory name contains a hyphen, you must uninstall the existing On-Premises Connector and then perform a new install rather than attempting to upgrade the existing On-Premises Connector installation.
JDBC Driver
  • If you attempt to install the JDBC driver in GUI mode to the default installation directory but do not have appropriate permissions for the default directory, the installer indicates that the installation has succeeded when in fact the driver has not been installed. When attempting an installation under the same circumstance but in console mode, the proper error message is displayed.
  • The default value for the Service connection property does not connect to the Hybrid Data Pipeline server. To connect, set Service to the Hybrid Data Pipeline server in your connection URL.
  • The JDBC 32-bit silent installation fails on Windows 10. Use the standard installation instead.
  • Executing certain queries against MS Dynamics CRM may result in a "Communication failure. Protocol error."
  • When using JNDI data sources, encryptionMethod must be configured through setExtendedOptions (a hedged sketch follows this list).
  • See the d2cjdbcreadme.txt file installed with the JDBC driver for more information.
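
  The following minimal sketch shows the JNDI pattern the item above refers to. The data source class name and property values are placeholders, not the documented API (check d2cjdbcreadme.txt for the actual class shipped with the driver); the point is that encryptionMethod is supplied through setExtendedOptions rather than a dedicated setter.

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class RegisterJndiDataSource {
        public static void main(String[] args) throws Exception {
            // Placeholder class name; verify against the driver documentation.
            com.ddtek.jdbcx.ddhybrid.DDHybridDataSource ds =
                    new com.ddtek.jdbcx.ddhybrid.DDHybridDataSource();
            // Configure encryption through the extended options string.
            ds.setExtendedOptions("EncryptionMethod=SSL");

            Context ctx = new InitialContext();
            ctx.bind("jdbc/HDPDataSource", (DataSource) ds);
        }
    }
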
ODBC Driver
  • The default ODBC.INI generated by the installer is missing required entries for Service=, PortNumber=, and HybridDataPipelineDataSource=.
  • Console mode installation is supported only on UNIX.
  • When you first install a driver, you are given the option to install a default data source for that driver. We recommend that you install default data sources when you first install the drivers. If you do not install the default data source at this time, you will be unable to install a default data source for this driver later. To install a default data source for a driver after the initial installation, you must uninstall the driver and then reinstall it.
  • See the d2codbcreadme.txt file installed with the ODBC driver for more information.
All data stores
  • It is recommended that Login Timeout not be disabled (set to 0) for a data source.
  • Using setByte to set parameter values fails when the data store does not support the TINYINT SQL type. Use setShort or setInt to set the parameter value instead of setByte, as in the sketch below.
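
  A minimal sketch of the workaround, assuming a hypothetical table MyTable with a small integer column; the connection URL and credentials are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class SetShortInsteadOfSetByte {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; see the JDBC driver documentation for the exact format.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:datadirect:ddhybrid://myserver:8080;hybridDataPipelineDataSource=MyDataSource",
                    "myuser", "mypassword");
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO MyTable (small_value) VALUES (?)")) {
                byte value = 7;
                // Widen to short (or int) rather than calling ps.setByte(1, value),
                // which fails when the backend has no TINYINT type.
                ps.setShort(1, value);
                ps.executeUpdate();
            }
        }
    }
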
Google Analytics
  • Validation message is not displayed when a user enters a Start Date value less than the End Date value in Create/Update Google Analytics page.
  • Once a Google Analytics OAuth profile is created for a specific Google account, changing the Google Account associated with the profile results in "the configuration options used to open the database do not match the options used to create the database" error being returned for any existing data sources.
Microsoft Dynamics CRM
  • Executing certain queries against MS Dynamics CRM with the JDBC driver may result in a “Communication failure. Protocol error."
  • Testing has shown the following two errors from Microsoft Dynamics CRM Online when executing queries against the ImportData and TeamTemplate tables:
    • Attribute errortype on Entity ImportData is of type picklist but has Child Attributes Count 0
    • Attribute issystem on Entity TeamTemplate is of type bit but has Child Attributes Count 0
    Note: We have filed a case with Microsoft and are waiting to hear back about the cause of the issue.

  • The initial on-premises connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
OpenEdge 10.2b
  • Setting the MaxPooledStatements data source option in an OpenEdge data store to a value other than zero can cause statement not prepared errors to be returned in some situations.
Oracle Marketing Cloud (Oracle Eloqua)
  • Data store issues
    • There are known issues with Batch Operations.
    • The Update/Delete implementation can update only one record at a time. Because of this, the number of APIs executed depends on the number of records that get updated or deleted by the query plus the number of API calls required to fetch the IDs for those records.
    • Lengths of certain text fields are reported as higher than the actual lengths supported in Oracle Eloqua.
  • We are currently working with Oracle to resolve the following issues with the Oracle Eloqua REST API.
    • AND operators that involve different columns are optimized. In other cases, the queries are only partially optimized.
    • OR operators on the same column are optimized. In other cases, the queries are completely post-processed.
    • The data store cannot explicitly insert or update a NULL value in any field.
    • The data store is unable to update a few fields; they are always reported as NULL after an update.
    • Oracle Eloqua uses a double colon (::) as an internal delimiter for multivalued Select fields. Therefore, when a value containing the semicolon character (;) is inserted or updated in a multivalued Select field, the semicolon character is converted into the double colon character.
    • Query SELECT count (*) from template returns incorrect results.
    • Oracle Eloqua APIs do not populate the correct values in CreatedBy and UpdatedBy fields. Instead of user names, they contain a Timestamp value.
    • Only equality filters on id fields are optimized. All other filter conditions are not working correctly with Oracle Eloqua APIs and the data store is doing post-processing for such filters.
    • Filters on non-ID Integer fields and Boolean fields do not work correctly. As a result, the driver post-processes these queries.
    • The data store does not distinguish between NULL and empty string. Therefore, NULL fields are often reported back as empty strings.
    • Values with special characters such as curly braces ({,}), backslash (\), colon (:), slash star (/*), and star slash (*/) are not supported in WHERE clause filter values.
Oracle Sales Cloud
  • Currently, passing filter conditions to Oracle Sales Cloud works only for simple, single column conditions. If there are multiple filters with 'AND' and 'OR', only partial or no filters are passed to Oracle Sales Cloud.
  • Oracle Sales Cloud reports the data type of String and Date fields as String. Therefore, when such fields are filtered or ordered in Hybrid Data Pipeline, they are treated as String values. However, when filter conditions are passed to Oracle Sales Cloud, Oracle Sales Cloud can distinguish between the actual data types and apply Date specific comparisons to Date fields. Therefore, query results can differ depending on whether filters have been passed down to Oracle Sales Cloud or processed by Hybrid Data Pipeline.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning the >=, <=, and != comparison operators when querying String fields. Therefore, Hybrid Data Pipeline has not been optimized to pass these comparison operators to Oracle Sales Cloud. We are working with Oracle on this issue.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning queries with filter operations on Boolean fields. Therefore, Hybrid Data Pipeline has not been optimized to pass filter operations on Boolean fields to Oracle Sales Cloud. We are working with Oracle on this issue.
  • The drivers currently report ATTACHMENT type fields in the metadata but do not support retrieving data for these fields. These fields are set to NULL.
  • Join queries between parent and child tables are not supported.
  • Queries on child tables whose parent has a composite primary key are not supported. For example, the children of ACTIVITIES_ACTIVITYCONTACT and LEADS_PRODUCTS are not accessible.
  • Queries on the children of relationship objects are not supported. For example, the children of ACCOUNTS_RELATIONSHIP, CONTACTS_RELATIONSHIP, and HOUSEHOLDS_RELATIONSHIP are not accessible.
  • Queries on grandchildren with multiple sets of Parent IDs and Grand Parent IDs used in an OR clause are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS_ADDRESSPURPOSE
    Where (ACCOUNTS_PARTYNUMBER = 'OSC_12343' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2XZKGH')
    OR (ACCOUNTS_PARTYNUMBER = 'OSC_12344' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2YZKGH')
  • When querying documented objects like "CATALOGPRODUCTITEMS" and "CATEGORYPRODUCTITEMS", no more than 500 records are returned, even when more records may be present. This behavior is also seen with some custom objects. We are currently working with Oracle support to resolve this issue.
  • A query on OPPORTUNITIES_CHILDREVENUE_PRODUCTS or LEADS_PRODUCTGROUPS with a filter on the primary key column returns 0 records even when more records are present. We are currently working with Oracle support to resolve this issue.
  • Queries that contain subqueries returning more than 100 records are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS
    Where ACCOUNTS_PARTYNUMBER
    In (Select Top 101 PARTYNUMBER From ACCOUNTS)
  • When you create custom objects, your Oracle Sales Cloud administrator must enable these objects for REST API access through Application Composer. Otherwise, you will not be able to query against these custom objects.
Oracle Service Cloud
  • When you create a custom object, your Oracle Service Cloud administrator must enable all four columns of the Object Fields tab of the Object Designer, or you cannot query against the custom objects.
  • The initial connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
SugarCRM
  • Data sources that use the deprecated enableExportMode option will continue to see this problem until they are migrated to the new data source configuration.
  • Data source connections now use export mode by default to communicate with the SugarCRM server, which improves performance when querying large sets of data. However, export mode causes NULL values in currency columns to be returned as 0, so there is no way to differentiate between a NULL value and 0 when operating in export mode. This can be a problem when currency columns are used in SQL statements, because Hybrid Data Pipeline must satisfy filter conditions on queries, such as =, <>, >, >=, <, <=, IS NULL, and IS NOT NULL. For example, suppose a currency column in a SugarCRM table has 3 NULL values and 5 values that are 0. A query that returns all NULL values (SELECT * FROM <table> WHERE <currency column> IS NULL) returns 3 rows. However, a query that applies an arithmetic operation to the column (SELECT * FROM <table> WHERE <currency column> + 1 = 1) returns all 8 records, because the 3 NULL values are treated as 0.

Third Party Acknowledgments

Refer to Hybrid Data Pipeline Third Party Acknowledgments.



Release 4.3.0

Progress DataDirect Hybrid Data Pipeline is a data access server that provides simple, secure access to cloud and on-premises data sources, such as RDBMS, Big Data, and NoSQL. Hybrid Data Pipeline allows business intelligence tools and applications to use ODBC, JDBC, or OData to access data from supported data sources. Hybrid Data Pipeline can be installed in the cloud or behind a firewall. Hybrid Data Pipeline can then be configured to work with applications and data sources in nearly any business environment. Progress DataDirect Hybrid Data Pipeline consists of four primary, separately installed components.

  • The Hybrid Data Pipeline server provides access to multiple data sources through a single, unified interface. The server can be hosted on premises or in the cloud.

  • The On-Premises Connector enables Hybrid Data Pipeline to establish a secure connection from the cloud to an on-premises data source.

  • The ODBC driver enables ODBC applications to communicate with a data source through the Hybrid Data Pipeline server.

  • The JDBC driver enables JDBC applications to communicate with a data source through the Hybrid Data Pipeline server.

4.3.0 Release Notes

Enhancements

Security

LDAP authentication

Hybrid Data Pipeline has added support for integrating with Active Directory for user authentication using the LDAP protocol. Customers can create an LDAP authentication configuration by providing the details of the LDAP server and can then configure users to use LDAP authentication instead of the default internal authentication.

In order to get started with LDAP Authentication, you need to do the following:

  1. Create an Authentication Service of type 3 using the Authentication APIs. Once your authentication service has been created, note the authentication service ID.
  2. Create users tagged to the authentication service ID. There are several ways to create users; refer to the User guide for details. A hedged sketch of step 1 follows this list.
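
The following minimal Java sketch illustrates the general shape of step 1. The endpoint path and the payload attribute names shown here are assumptions for illustration only; consult the Authentication API documentation for the actual resource and payload. Only the authentication type value (3) comes from the steps above.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class CreateLdapAuthService {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint and payload; verify both against the Authentication API documentation.
            URL url = new URL("https://myserver:8443/api/admin/auth/services");
            String payload = "{\"name\": \"CorpLDAP\", \"authTypeId\": 3}"; // LDAP server details are also required

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            String credentials = Base64.getEncoder()
                    .encodeToString("d2cadmin:adminpassword".getBytes(StandardCharsets.UTF_8));
            conn.setRequestProperty("Authorization", "Basic " + credentials);
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(payload.getBytes(StandardCharsets.UTF_8));
            }
            // The response body contains the ID of the new authentication service;
            // note it for use when creating users in step 2.
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }
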
Permissions
• Support for a permissions API has been added. The Permissions API enables administrators to manage permissions through the Users, Roles, and DataSource APIs. In addition, the Permissions API allows administrators to create data sources on behalf of users and manage end user access to data source details. Administrators can also specify whether to expose change password functionality in the Web UI and SQL editor functionality.
Password policy
• Support for a password policy has been added.
Tomcat Upgrade
• The Hybrid Data Pipeline server and On-Premises Connector have been upgraded to install and use Tomcat 8.5.28.


Hybrid Data Pipeline Server
  • OData Version 4 functions.
    Added OData Version 4 function support for IBM DB2 and Microsoft SQL Server data
    stores. (Note: This functionality was previously added for Oracle Database.) If the data stores contain stored functions, they can be exposed using an OData Version 4 service. As part of OData function support, OData schema map version has been changed. The Web UI will automatically migrate the existing OData schema map to a newer OData schema map version when the OData schema is modified for OData Version 4 data sources.

    The following aspects of OData Version 4 functions are supported:

    • Functions that are unbound (static operations)
    • Function imports
    • Functions that return primitive types
    • Function invocation with OData system query options $filter

    The following aspects of OData Version 4 functions are currently NOT supported:

    • Functions that return complex types and entities
    • Functions that are bound to entities
    • Built-in functions
    • Functions with OUT/INOUT parameters
    • Overloaded functions
    • OData system query options using $select
    • OData system query options using $orderby
    • Functions that invoke Parameter value
    • Parameter aliases are not supported. Hence, function invocation with function parameters as URL query parameters is not supported.

  • Installation procedures and response file. The installation program workflow has been modified. The Hybrid Data Pipeline service has two default users, "d2cadmin" and "d2cuser". The installer now prompts you to enter passwords for each default user. When generating a response file to perform a silent installation, the installer will not include values for these properties. Therefore, you must add the passwords manually to the response file before proceeding with a silent installation. Also, note that a password policy is not enforced during the installation process. The installer only ensures that a value has been specified. The following table provides the new settings. The settings differ depending on whether you generate the response file with a GUI or console installation. Further details are available in the Progress DataDirect Hybrid Data Pipeline Installation Guide.
New response file options

  GUI                   Console                        Definition
  D2C_ADMIN_PASSWORD    D2C_ADMIN_PASSWORD_CONSOLE    Specifies the password for the default administrator.
  D2C_USER_PASSWORD     D2C_USER_PASSWORD_CONSOLE     Specifies the password for the default user.

Web UI


• Product information: If you are using an evaluation version of the product, the Web UI now displays evaluation timeout information as 'xx Days Remaining'.
• Version information: The product version information now includes details about the license type. This can be seen under the version information section of the Web UI. The license type is also returned when you query for version information via the version API.

Beta support for third party JDBC drivers


• With the 4.3 release, Hybrid Data Pipeline enables users to plug JDBC drivers into Hybrid Data Pipeline and access data using those drivers. This beta feature supports accessibility via JDBC, ODBC and OData clients with the Teradata JDBC driver. If you are interested in setting up this feature as you evaluate Hybrid Data Pipeline, please contact our sales department.

Apache Hive

Enhancements

• Enhanced to optimize the performance of fetches.

• Enhanced to support the Binary, Char, Date, Decimal, and Varchar data types.

• Enhanced to support HTTP mode, which allows you to access Apache Hive data sources using HTTP/HTTPS requests. HTTP mode can be configured using the new Transport Mode and HTTP Path parameters.

• Enhanced to support cookie-based authentication for HTTP connections. Cookie-based authentication can be configured using the new Enable Cookie Authentication and Cookie Name parameters.

• Enhanced to support Apache Knox.

• Enhanced to support Impersonation and Trusted Impersonation using the Impersonate User parameter.

• The Batch Mechanism parameter has been added. When Batch Mechanism is set to multiRowInsert, the driver executes a single insert for all the rows contained in a parameter array. MultiRowInsert is the default setting and provides substantial performance gains when performing batch inserts (see the sketch after this list of parameters).

• The Catalog Mode parameter allows you to determine whether the native catalog functions are used to retrieve information returned by DatabaseMetaData functions. In the default setting, Hybrid Data Pipeline employs a balance of native functions and driver-discovered information for the optimal balance of performance and accuracy when retrieving catalog information.

• The Array Fetch Size parameter improves performance and reduces out of memory errors. Array Fetch Size can be used to increase throughput or, alternately, improve response time in Web-based applications.

• The Array Insert Size parameter provides a workaround for memory and server issues that can sometimes occur when inserting a large number of rows that contain large values.
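
A minimal JDBC batch insert sketch follows, illustrating the "parameter array" that multiRowInsert sends as a single insert. The connection URL, credentials, and table are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class HiveBatchInsert {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; see the JDBC driver documentation for the exact format.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:datadirect:ddhybrid://myserver:8080;hybridDataPipelineDataSource=MyHiveSource",
                    "myuser", "mypassword");
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO sales (id, amount) VALUES (?, ?)")) {
                for (int i = 1; i <= 1000; i++) {
                    ps.setInt(1, i);
                    ps.setDouble(2, i * 1.5);
                    ps.addBatch();      // each row is added to the parameter array
                }
                // With Batch Mechanism set to multiRowInsert (the default), the batch
                // is executed as a single multi-row insert against Apache Hive.
                ps.executeBatch();
            }
        }
    }
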

• Certifications

• Certified with Hive 2.0.x, 2.1.x

• Apache Hive data store connectivity has been certified with the following distributions:

• Cloudera (CDH) 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 5.10, 5.11, 5.12

• Hortonworks (HDP) 2.3, 2.4, 2.5

• IBM BigInsights 4.1, 4.2, 4.3

• MapR 5.2

 

Version and distribution support

• Hive versions 1.0 and higher are supported. Support for earlier versions has been deprecated.

• The HiveServer2 protocol and higher is supported. As a result:

• Support for the HiveServer1 protocol has been deprecated.

• The Wire Protocol Version parameter has been deprecated.

• Support has been deprecated for the following distributions:

• Amazon Elastic MapReduce (Amazon EMR) 2.1.4, 2.24-3.1.4, 3.2-3.7

• Cloudera's Distribution Including Apache Hadoop (CDH) 4.0, 4.1, 4.2, 4.5, 5.0, 5.1, 5.2, 5.3

• Hortonworks (HDP), versions 1.3, 2.0, 2.1, 2.2

• IBM BigInsights 3.0

• MapR Distribution for Apache Hadoop 1.2, 2.0

• Pivotal Enterprise HD 2.0.1, 2.1

IBM DB2

Certifications

• Certified with DB2 V12 for z/OS

• Certified with dashDB (IBM Db2 Warehouse on Cloud)

Oracle Marketing Cloud (Oracle Eloqua)

Data type support. The following data types are supported for the Oracle Eloqua data store.

• BOOLEAN

• DECIMAL

• INTEGER

• LONG

• LONGSTRING

• STRING

Oracle Sales Cloud

Data type support. The following data types are supported for the Oracle Sales Cloud data store.

• ARRAY

• BOOLEAN

• DATETIME

• DECIMAL

• DURATION

• INTEGER

• LARGETEXT

• LONG

• TEXT

• URL

 

Known Issues


FIPS compliance with the On-Premises Connector

• The On-Premises Connector is not currently FIPS compliant. Therefore, any connections made to an on-premises data source through an On-Premises Connector will not be fully FIPS compliant.

Performing a silent installation - Log file issue

• When performing a silent install, if the deployment script fails, no 'SilentInstallError.log' is written. You can check 'Installation directory/ddcloud/final.log' to determine the installation status.



The use of wildcards in SSL server certificates
  • The Hybrid Data Pipeline service will not by default connect to a backend data store that has been configured for SSL when a wildcard is used to identify the server name in the SSL certificate. If a server certificate contains a wildcard, the following error will be returned.

    There is a problem connecting to the DataSource. SSL handshake failed:
    sun.security.validator.ValidatorException: PKIX path building failed:
    sun.security.provider.certpaths.SunCertPathBuilderException: unable to find
    valid certification path to requested target
    To work around this issue, the exact string (with wildcard) in the server certificate can be specified with the Host Name in Certificate option when configuring your data source through the Hybrid Data Pipeline user interface or management API.

Load balancer port limitation
  • Either port 80 for non-SSL environments, or port 443 for SSL environments, must be used in the configuration of a load balancer used to support a Hybrid Data Pipeline cluster. Non-standard ports in the configuration of a load balancer are not currently supported.
Web UI
  • When a data source is configured with OData Version 4, the OData Schema Map version is 'odata_mapping_v3', and the map does not contain an "entityNameMode" setting, any further editing of the OData Schema Map adds "entityNameMode":"pluralize". This affects how entity names are referenced in OData queries. To avoid this, set the entityNameMode to the preferred mode whenever a data source is created or edited. Alternatively, if you want to use the default "Guess" mode, remove the "entityNameMode" property from the OData schema map JSON when saving the data source.
  • If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may encounter issues when trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
  • When an administrator tries to add new users using the Add Users window, the Password and Confirm Password fields occasionally do not appear properly in the popup window.
  • COPY DETAILS functionality is not currently working in Internet Explorer 11 due to a limitation with the third party plugin Clipboard.js on bootstrap modals. More details on this can be found at https://github.com/zenorocha/clipboard.js/wiki/Known-Issues.
Management API
  • When the Limits API (throttling) is used to set a row limit and createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE) is used, the row-limit-exceeded error is returned at the row limit instead of one row beyond the limit. For example, if a row limit of 45 rows is set and a scrollable, insensitive result set exceeds that limit, the connectivity service returns the following error on the 45th row instead of the expected 46th row: The limit on the number of rows that can be returned from a query -- 45 -- has been exceeded.
  • If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may encounter issues when trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
OData
  • Functions are not currently supported for $orderby.
  • OData functions are not supported with the On-Premises Connector.
  • Functions with default parameters are not working.
  • For DB2, BOOLEAN Data type does not work with functions in OData.
  • For SQL Server and DB2, the OData data types Edm.Date and Edm.TimeOfDay do not work in Power BI if the function is selected from the list of function imports and parameter values are provided. However, Power BI allows the Edm.Date and Edm.TimeOfDay types for function imports when they are passed directly in the OData feed. A workaround is available for Edm.TimeOfDay: columns exposed as Edm.TimeOfDay should be mapped as "TimeAsString" in the ODataSchemaMap. In this case, Power BI works as expected.
  • In a load balancer environment, when invoking a function import (as opposed to a function) that takes a datetimeoffset parameter, the colon (:) characters in the time parameter must be URL encoded. Consequently, the following request returns an error:
    http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
    (DATEIN=1999-12-31T00:00:00Z,INTEGERIN=5)
    The correct URL encoded example must look like the following:
    http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
    (DATEIN=1999-12-31T00%3A00%3A00Z,INTEGERIN=5)
  • When invoking function import (and not function) that returns null using Power BI, a data format error is returned. The resolution to this issue is being discussed internally as well as with Microsoft.
  • OData 4.0 support for $expand does not work with the following data stores: Salesforce, Dynamics CRM, SugarCRM, Rollbase, Google Analytics, and Oracle Service Cloud.
  • $expand only supports one level deep. Take for example the following entity hierarchy:
    Customers
    |-- Orders
    | |-- OrderItems
    |-- Contacts


    The following queries are supported:
    Customers?$expand=Orders
    Customers?$expand=Contacts
    Customers?$expand=Orders,Contacts


    However, this query is not supported:
    Customers?$expand=Orders,OrderItems

    OrderItems is a second level entity with respect to Customers. To query Orders and OrderItems, the query must be rooted at Orders. For example:
    Orders?$expand=OrderItems
    Orders(id)?$expand=OrderItems


  • The Hybrid Data Pipeline OData model asynchronous API incorrectly returns zero instead of the actual percent complete when querying the status of a model that is being generated.
  • When manually editing the ODataSchemaMap value, the table names and column names specified in the value are case-sensitive. The case of the table and column names must match the case of the tables and column names reported by the data source.
    Note: It is highly recommended that you use the OData Schema Editor to generate the value for the ODataSchemaMap data source option. The Schema Editor takes care of table and column name casing and other syntactic details.
  • The $expand clause is not supported with OpenEdge data sources when filtering for more than a single table.
  • The day, endswith, and cast functions are not working when specified in a $filter clause when querying a DB2 data source.
On-Premises Connector
  • FIPS compliance with the On-Premises Connector: The On-Premises Connector is not currently FIPS compliant. Therefore, any connections made to an on-premises data source through an On-Premises Connector will not be fully FIPS compliant.
  • External authentication with the On-Premises Connector: External authentication services are not currently supported when connecting to data sources using the On-Premises Connector.
  • If User Account Control is enabled on your Windows machine and you installed the On-Premises Connector in a system folder (such as Windows or Program Files), you must run the On-Premises Connector Configuration Tool in administrator mode.
  • When using Kerberos with Microsoft Dynamics, the JRE installed with the On-Premises Connector must be configured to run with Kerberos. Take the following steps to configure the JRE.
    • Download a zip file containing new version of the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for JDK/JRE 8 at http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html.
    • Unzip the file into the \jre\lib\security directory to update the Java security policy files to support 256-bit encryption:
      • C:\<installdir>\jre\lib\security\local_policy.jar
      • C:\<installdir>\jre\lib\security\US_export_policy.jar
  • Uninstalling and re-installing the On-Premises Connector causes the Connector ID of the On-Premises Connector to change. Any Hybrid Data Pipeline data sources using the old Connector ID must be updated to use the new Connector ID. Installing to a new directory allows both the old and new On-Premises Connector to exist side-by-side. However, you must update the Connector ID option in previously-defined Hybrid Data Pipeline data sources to point to the new On-Premises Connector. In addition, you must update Connector Id wherever it was used, such as the definitions of Group Connectors and Authorized Users. Note that upgrading an existing installation of the On-Premises Connector maintains the Connector ID.
  • When upgrading the On-Premises Connector, if the specified user installation directory contains a hyphen “-”, the upgrade will fail. To work around this issue, avoid using hyphen “-” in the user installation directory name. If your existing On-Premises Connector installation directory name contains a hyphen, you must uninstall the existing On-Premises Connector and then perform a new install rather than attempting to upgrade the existing On-Premises Connector installation.
JDBC Driver
  • If you attempt to install the JDBC driver in GUI mode to the default installation directory but do not have appropriate permissions for the default directory, the installer indicates that the installation has succeeded when in fact the driver has not been installed. When attempting an installation under the same circumstance but in console mode, the proper error message is displayed.
  • The default value for the Service connection property does not connect to the Hybrid Data Pipeline server. To connect, set Service to the Hybrid Data Pipeline server in your connection URL.
  • The JDBC 32-bit silent installation fails on Windows 10. Use the standard installation instead.
  • Executing certain queries against MS Dynamics CRM may result in a "Communication failure. Protocol error."
  • Using JNDI data sources, encryptionMethod must be configured through setExtendedOptions.
  • See the d2cjdbcreadme.txt file installed with the JDBC driver for more information.
ODBC Driver
  • The default ODBC.INI generated by the installer is missing required entries for Service=, PortNumber=, and HybridDataPipelineDataSource=.
  • Console mode installation is supported only on UNIX.
  • When you first install a driver, you are given the option to install a default data source for that driver. We recommend that you install default data sources when you first install the drivers. If you do not install the default data source at this time, you will be unable to install a default data source for this driver later. To install a default data source for a driver after the initial installation, you must uninstall the driver and then reinstall it.
  • See the d2codbcreadme.txt file installed with the ODBC driver for more information.
All data stores
  • It is recommended that Login Timeout not be disabled (set to 0) for a data source.
  • Using setByte to set parameter values fails when the data store does not support the TINYINT SQL type. Use setShort or setInt to set the parameter value instead of setByte.
Google Analytics
  • Validation message is not displayed when a user enters a Start Date value less than the End Date value in Create/Update Google Analytics page.
  • Once a Google Analytics OAuth profile is created for a specific Google account, changing the Google Account associated with the profile results in "the configuration options used to open the database do not match the options used to create the database" error being returned for any existing data sources.
Microsoft Dynamics CRM
  • Executing certain queries against MS Dynamics CRM with the JDBC driver may result in a “Communication failure. Protocol error."
  • Testing has shown the following two errors from Microsoft Dynamics CRM Online when executing queries against the ImportData and TeamTemplate tables:
    • Attribute errortype on Entity ImportData is of type picklist but has Child Attributes Count 0
    • Attribute issystem on Entity TeamTemplate is of type bit but has Child Attributes Count 0
    Note: We have filed a case with Microsoft and are waiting to hear back about the cause of the issue.

  • The initial on-premises connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
OpenEdge 10.2b
  • Setting the MaxPooledStatements data source option in an OpenEdge data store to a value other than zero can cause statement not prepared errors to be returned in some situations.
Oracle Marketing Cloud (Oracle Eloqua)
  • Data store issues
    • There are known issues with Batch Operations.
    • The Update/Delete implementation can update only one record at a time. Because of this, the number of APIs executed depends on the number of records that get updated or deleted by the query plus the number of API calls required to fetch the IDs for those records.
    • Lengths of certain text fields are reported as higher than the actual lengths supported in Oracle Eloqua.
  • We are currently working with Oracle to resolve the following issues with the Oracle Eloqua REST API.
    • AND operators that involve different columns are optimized. In other cases, the queries are only partially optimized.
    • OR operators on the same column are optimized. In other cases, the queries are completely post-processed.
    • The data store cannot explicitly insert or update a NULL value in any field.
    • The data store is unable to update a few fields; they are always reported as NULL after an update.
    • Oracle Eloqua uses a double colon (::) as an internal delimiter for multivalued Select fields. Therefore, when a value containing the semicolon character (;) is inserted or updated in a multivalued Select field, the semicolon character is converted into the double colon character.
    • Query SELECT count (*) from template returns incorrect results.
    • Oracle Eloqua APIs do not populate the correct values in CreatedBy and UpdatedBy fields. Instead of user names, they contain a Timestamp value.
    • Only equality filters on id fields are optimized. All other filter conditions are not working correctly with Oracle Eloqua APIs and the data store is doing post-processing for such filters.
    • Filters on non-ID Integer fields and Boolean fields do not work correctly. As a result, the driver post-processes these queries.
    • The data store does not distinguish between NULL and empty string. Therefore, NULL fields are often reported back as empty strings.
    • Values with special characters such as curly braces ({,}), backslash (\), colon (:), slash star (/*), and star slash (*/) are not supported in WHERE clause filter values.
Oracle Sales Cloud
  • Currently, passing filter conditions to Oracle Sales Cloud works only for simple, single column conditions. If there are multiple filters with 'AND' and 'OR', only partial or no filters are passed to Oracle Sales Cloud.
  • Oracle Sales Cloud reports the data type of String and Date fields as String. Therefore, when such fields are filtered or ordered in Hybrid Data Pipeline, they are treated as String values. However, when filter conditions are passed to Oracle Sales Cloud, Oracle Sales Cloud can distinguish between the actual data types and apply Date specific comparisons to Date fields. Therefore, query results can differ depending on whether filters have been passed down to Oracle Sales Cloud or processed by Hybrid Data Pipeline.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning the >=, <=, and != comparison operators when querying String fields. Therefore, Hybrid Data Pipeline has not been optimized to pass these comparison operators to Oracle Sales Cloud. We are working with Oracle on this issue.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning queries with filter operations on Boolean fields. Therefore, Hybrid Data Pipeline has not been optimized to pass filter operations on Boolean fields to Oracle Sales Cloud. We are working with Oracle on this issue.
  • The drivers currently report ATTACHMENT type fields in the metadata but do not support retrieving data for these fields. These fields are set to NULL.
  • Join queries between parent and child tables are not supported.
  • Queries on child tables whose parent has a composite primary key are not supported. For example, the children of ACTIVITIES_ACTIVITYCONTACT and LEADS_PRODUCTS are not accessible.
  • Queries on the children of relationship objects are not supported. For example, the children of ACCOUNTS_RELATIONSHIP, CONTACTS_RELATIONSHIP, and HOUSEHOLDS_RELATIONSHIP are not accessible.
  • Queries on grandchildren with multiple sets of Parent IDs and Grand Parent IDs used in an OR clause are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS_ADDRESSPURPOSE
    Where (ACCOUNTS_PARTYNUMBER = 'OSC_12343' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2XZKGH')
    OR (ACCOUNTS_PARTYNUMBER = 'OSC_12344' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2YZKGH')
  • When querying documented objects like "CATALOGPRODUCTITEMS" and "CATEGORYPRODUCTITEMS", no more than 500 records are returned, even when more records may be present. This behavior is also seen with some custom objects. We are currently working with Oracle support to resolve this issue.
  • A query on OPPORTUNITIES_CHILDREVENUE_PRODUCTS or LEADS_PRODUCTGROUPS with a filter on the primary key column returns 0 records even when more records are present. We are currently working with Oracle support to resolve this issue.
  • Queries that contain subqueries returning more than 100 records are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS
    Where ACCOUNTS_PARTYNUMBER
    In (Select Top 101 PARTYNUMBER From ACCOUNTS)
  • When you create custom objects, your Oracle Sales Cloud administrator must enable these objects for REST API access through Application Composer. Otherwise, you will not be able to query against these custom objects.
Oracle Service Cloud
  • When you create a custom object, your Oracle Service Cloud administrator must enable all four columns of the Object Fields tab of the Object Designer, or you cannot query against the custom objects.
  • The initial connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
SugarCRM
  • Data sources that use the deprecated enableExportMode option will continue to see this problem until they are migrated to the new data source configuration.
  • Data source connections now use export mode by default to communicate with the SugarCRM server, which improves performance when querying large sets of data. However, export mode causes NULL values in currency columns to be returned as 0, so there is no way to differentiate between a NULL value and 0 when operating in export mode. This can be a problem when currency columns are used in SQL statements, because Hybrid Data Pipeline must satisfy filter conditions on queries, such as =, <>, >, >=, <, <=, IS NULL, and IS NOT NULL. For example, suppose a currency column in a SugarCRM table has 3 NULL values and 5 values that are 0. A query that returns all NULL values (SELECT * FROM <table> WHERE <currency column> IS NULL) returns 3 rows. However, a query that applies an arithmetic operation to the column (SELECT * FROM <table> WHERE <currency column> + 1 = 1) returns all 8 records, because the 3 NULL values are treated as 0.

Third Party Acknowledgments

Refer to Hybrid Data Pipeline Third Party Acknowledgments.



Release 4.2.1

Progress DataDirect Hybrid Data Pipeline is a data access server that provides simple, secure access to cloud and on-premises data sources, such as RDBMS, Big Data, and NoSQL. Hybrid Data Pipeline allows business intelligence tools and applications to use ODBC, JDBC, or OData to access data from supported data sources. Hybrid Data Pipeline can be installed in the cloud or behind a firewall. Hybrid Data Pipeline can then be configured to work with applications and data sources in nearly any business environment. Progress DataDirect Hybrid Data Pipeline consists of four primary, separately installed components.

  • The Hybrid Data Pipeline server provides access to multiple data sources through a single, unified interface. The server can be hosted on premises or in the cloud.

  • The On-Premises Connector enables the Hybrid Data Pipeline to establish a secure connection from the cloud to an on-premises data source.

  • The ODBC driver enables ODBC applications to communicate to a data source through the Hybrid Data Pipeline server.

  • The JDBC driver enables JDBC applications to communicate to a data source through the Hybrid Data Pipeline server.

Changes Since Release 4.2.1

Enhancements

Change password functionality
  • Hybrid Data Pipeline change password functionality has been enhanced. When changing passwords, users must now provide a current password as well as a new password by default. The Administrator's API has been modified to support this functional change. The changepassword API now includes the currentPassword parameter, as well as the newPassword parameter, in the payload.
       {
       "currentPassword": "<mycurrentpassword>",
       "newPassword": "<mynewpassword>"
       }
    Administrators can fall back to the old functionality by setting the new secureChangePassword attribute (configuration ID 2) through the Configurations API. For example, the following PUT operation would configure the system to use the old functionality, where the user must provide only a new password.
       https://myserver:port/api/admin/configurations/2
       {
       "value": "false"
       }

Resolved Issues

  • 4.2.1.59. Issue 83987. Resolved an issue where editing of the OData schema map resulted in the addition of "entityNameMode":"pluralize" when a data source had been configured with OData Version 4, the OData schema map version was odata_mapping_v3, and entityNameMode had not been included.
  • 4.2.1.59. Issue 84061. Resolved issues where the Web UI was not displaying function synonyms for read-only users and where the Web UI duplicated function parameters when synonyms were created for read-only users.
  • 4.2.1.59. Issue 84480. Resolved an issue where the data access service, when configured with a delimiter for external authentication, required the user login to contain the user name, a delimiter, and the identifier for the internal authentication service, for any users authenticating with the internal authentication service. For example, if the data access service was configured with the @ character as the delimiter, then authenticating as an internal user might look like user1@Internal. Now the user login only needs to contain the user name for any users authenticating with the internal authentication service, for example, user1. When only the user name is provided, the data access service uses the internal authentication service to authenticate the user.
  • 4.2.1.59. Issue 84496. Resolved an issue where the data access server was not running in FIPS approved mode when FIPS was enabled. The Bouncy Castle BCFIPS security provider now ensures that the data access service is running in FIPS approved mode.
    When the data access and notification services start, they check to see if they are running in FIPS approved mode. You can confirm that the services are running in FIPS approved mode by checking their corresponding log files: das/server/logs/catalina.out and notification/logs/palatte/datestamp-SYSTEM.log. With result=true, the log entry confirms that the service is running in FIPS approved mode:
    Check for BouncyCastle Approved Only Mode [result=true]
    NOTE: Because the installer program is not capable of regenerating encryption keys for existing users and data sources, we currently recommend a new, clean installation of Hybrid Data Pipeline with FIPS enabled when upgrading from a non-FIPS-compliant server to a FIPS-compliant server. With a new installation, users and data sources must be re-created.
  • 4.2.1.59. Issue 84499. Resolved an issue where a log file was created for each external user when the data access service is configured to use an external authentication service. The data access service now produces a single log file for each internal user and data source with logging details for each external user associated with that internal user and data source.
  • 4.2.1.59. Issue 84527. Resolved an issue where the database host name and port numbers were included in an error message when a query was made against the data access service with the database down.

4.2.1 Release Notes

Security

FIPS compliance
  • Hybrid Data Pipeline is now FIPS 140-2 compliant. By default, Hybrid Data Pipeline is installed with FIPS disabled. We recommend a new, clean installation with FIPS enabled for production environments. With a new installation, users and data sources must be re-created. For information on how to enable FIPS, refer to the Progress DataDirect Hybrid Data Pipeline Installation Guide.

    Note: The On-Premises Connector is not currently FIPS compliant. Therefore, any connections made to an on-premises data source through an On-Premises Connector will not be fully FIPS compliant.

Support for external authentication
  • Hybrid Data Pipeline now supports two types of authentication: the Hybrid Data Pipeline internal authentication mechanism and external authentication. The external authentication feature is supported as a Java plugin. Administrators can create their own implementation and plug it into Hybrid Data Pipeline either at the time of installation or at a later time. After external authentication is set up, administrators can use APIs to configure users so that they are authenticated against an external authentication system. Optionally, multiple external authentication users can be mapped to a single Hybrid Data Pipeline user to get access to data sources.
Tomcat Upgrade
  • The Hybrid Data Pipeline server and On-Premises Connector have been upgraded to install and use Tomcat 8.5.23.

Enhancements

Hybrid Data Pipeline Server
  • OData Version 4 functions. With 4.2.1, Hybrid Data Pipeline supports OData Version 4 functions for Oracle data sources only. If the Oracle database contains stored functions, they can be exposed using an OData Version 4 service. As part of OData function support, the OData schema map version has been changed. The Web UI will automatically migrate the existing OData schema map to a newer OData schema map version when the OData schema is modified for OData Version 4 data sources.

    The following aspects of OData Version 4 functions are supported:

    • Functions that are unbound (static operations)
    • Function imports
    • Functions that return primitive types
    • Function invocation with OData system query options $filter

    The following aspects of OData Version 4 functions are currently NOT supported:

    • Functions that return complex types and entities
    • Functions that are bound to entities
    • Built-in functions
    • Functions with OUT/INOUT parameters
    • Overloaded functions
    • OData system query options using $select
    • OData system query options using $orderby
    • Functions that invoke Parameter value
    • Parameter aliases are not supported. Hence, function invocation with function parameters as URL query parameters is not supported.
  • Log file cleanup. Hybrid Data Pipeline now enables you to configure the number of days for which log files are stored, which prevents log files from completely filling up your directories. You can use the Limits API to specify the number of days for log file retention.
  • Support for Ubuntu. Hybrid Data Pipeline Server now supports Ubuntu Linux version 16 and higher.
  • Installation procedures and response file. The installation procedures have been modified with the introduction of support for FIPS and External Authentication. New prompts have been added to the installation process. One of these prompts has a corresponding option that appears in the response file generated by the latest installer for silent installation. If you are using a response file generated by an earlier version of the installer, you should regenerate the response file with the latest installer. The new response file should then be used for silent installations. The following table provides the new settings. The settings differ depending on whether you generate the response file with a GUI or console installation. Further details are available in the Progress DataDirect Hybrid Data Pipeline Installation Guide.
New response file options
  • D2C_USING_FIPS_CONFIG (GUI) / D2C_USING_FIPS_CONFIG_CONSOLE (console): Specifies if you want to configure the server to be FIPS-compliant.

Known Issues

The use of wildcards in SSL server certificates
  • The Hybrid Data Pipeline service will not by default connect to a backend data store that has been configured for SSL when a wildcard is used to identify the server name in the SSL certificate. If a server certificate contains a wildcard, the following error will be returned.
    There is a problem connecting to the DataSource. SSL handshake failed:
    sun.security.validator.ValidatorException: PKIX path building failed:
    sun.security.provider.certpath.SunCertPathBuilderException: unable to find
    valid certification path to requested target
    To work around this issue, the exact string (with wildcard) in the server certificate can be specified with the Host Name in Certificate option when configuring your data source through the Hybrid Data Pipeline user interface or the management API.
Load balancer port limitation
  • Either port 80 for non-SSL environments, or port 443 for SSL environments, must be used in the configuration of a load balancer used to support a Hybrid Data Pipeline cluster. Non-standard ports in the configuration of a load balancer are not currently supported.
Web UI
  • When a data source is configured with OData Version 4, the OData Schema Map version is 'odata_mapping_v3', and the map does not contain an "entityNameMode" property, any further editing of the OData schema map adds "entityNameMode":"pluralize". This affects how entity names are referred to in OData queries. To avoid this, set entityNameMode to the preferred mode whenever a data source is created or edited. Alternatively, if you want to use the default "Guess" mode, remove the "entityNameMode" property from the OData schema map JSON when saving the data source.
  • If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may face issues while trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
  • When an administrator tries to add new users using the Add Users window, the Password and Confirm Password fields occasionally do not appear properly in the popup window.
  • COPY DETAILS functionality is not currently working in Internet Explorer 11 due to a limitation with the third-party plugin Clipboard.js on Bootstrap modals. More details can be found at https://github.com/zenorocha/clipboard.js/wiki/Known-Issues.
Management API
  • When the Limits API (throttling) is used to set a row limit and createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE) is being used, a row-limit-exceeded error is returned at the row limit instead of one row beyond the limit. For example, if a row limit of 45 rows is set and a scrollable, insensitive result set returns more rows than the specified limit, the connectivity service returns the following error on the 45th row instead of the expected 46th row: The limit on the number of rows that can be returned from a query -- 45 -- has been exceeded. (A sketch of this scenario follows this list.)
  • If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may face issues while trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
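    For context on the row-limit behavior described above, the following minimal sketch shows the kind of JDBC call that produces it. The connection URL, credentials, and table name are placeholders, not values from this product's documentation.

    // Sketch only: illustrates the scenario; the table name is hypothetical.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ScrollInsensitiveRowLimit {
        public static void main(String[] args) throws Exception {
            // args[0] = connection URL, args[1] = user, args[2] = password
            Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
            // A scrollable, insensitive result set is where the early row-limit error appears.
            Statement stmt = con.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
            ResultSet rs = stmt.executeQuery("SELECT * FROM orders");
            int rows = 0;
            while (rs.next()) {
                rows++;  // with a 45-row limit, the error surfaces on row 45, not row 46
            }
            System.out.println("Rows read: " + rows);
            con.close();
        }
    }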
OData
  • Functions are not currently supported for $orderby.
  • In a load balancer environment, when invoking a function import (as opposed to a function) that takes a datetimeoffset parameter, the colon (:) characters in the time value must be URL encoded. For example, the following request returns an error:
    http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
    (DATEIN=1999-12-31T00:00:00Z,INTEGERIN=5)
    The correctly URL-encoded request looks like the following:
    http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
    (DATEIN=1999-12-31T00%3A00%3A00Z,INTEGERIN=5)
  • When a function import (as opposed to a function) that returns null is invoked using Power BI, a data format error is returned. The resolution to this issue is being discussed internally as well as with Microsoft.
  • OData 4.0 support for $expand does not work with the following data stores: Salesforce, Dynamics CRM, SugarCRM, Rollbase, Google Analytics, and Oracle Service Cloud.
  • $expand is supported only one level deep. Take, for example, the following entity hierarchy:
    Customers
    |-- Orders
    | |-- OrderItems
    |-- Contacts


    The following queries are supported:
    Customers?$expand=Orders
    Customers?$expand=Contacts
    Customers?$expand=Orders,Contacts


    However, this query is not supported:
    Customers?$expand=Orders,OrderItems

    OrderItems is a second level entity with respect to Customers. To query Orders and OrderItems, the query must be rooted at Orders. For example:
    Orders?$expand=OrderItems
    Orders(id)?$expand=OrderItems


  • The Hybrid Data Pipeline OData model async API incorrectly returns zero instead of the actual percentage complete when querying the status of a model that is being generated.
  • When manually editing the ODataSchemaMap value, the table names and column names specified in the value are case-sensitive. The case of the table and column names must match the case of the table and column names reported by the data source.
    Note: It is highly recommended that you use the OData Schema Editor to generate the value for the ODataSchemaMap data source option. The Schema Editor takes care of table and column name casing and other syntactic details.
  • The $expand clause is not supported with OpenEdge data sources when filtering for more than a single table.
  • The day, endswith, and cast functions are not working when specified in a $filter clause when querying a DB2 data source.
On-Premise Connector
  • FIPS compliance with the On-Premises Connector: The On-Premises Connector is not currently FIPS compliant. Therefore, any connections made to an on-premises data source through an On-Premises Connector will not be fully FIPS compliant.
  • External authentication with the On-Premises Connector: External authentication services are not currently supported when connecting to data sources using the On-Premises Connector.
  • If User Account Control is enabled on your Windows machine and you installed the On-Premises Connector in a system folder (such as Windows or Program Files), you must run the On-Premises Connector Configuration Tool in administrator mode.
  • When using Kerberos with Microsoft Dynamics, the JRE installed with the On-Premises Connector must be configured to run with Kerberos. Take the following steps to configure the JRE.
    • Download a zip file containing a new version of the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for JDK/JRE 8 at http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html.
    • Unzip the file into the \jre\lib\security directory to update the Java security policy files to support 256-bit encryption:
      • C:\<installdir>\jre\lib\security\local_policy.jar
      • C:\<installdir>\jre\lib\security\US_export_policy.jar
  • Uninstalling and re-installing the On-Premises Connector causes the Connector ID of the On-Premises Connector to change. Any Hybrid Data Pipeline data sources using the old Connector ID must be updated to use the new Connector ID. Installing to a new directory allows both the old and new On-Premises Connector to exist side-by-side. However, you must update the Connector ID option in previously-defined Hybrid Data Pipeline data sources to point to the new On-Premises Connector. In addition, you must update Connector Id wherever it was used, such as the definitions of Group Connectors and Authorized Users. Note that upgrading an existing installation of the On-Premises Connector maintains the Connector ID.
  • When upgrading the On-Premises Connector, if the specified user installation directory contains a hyphen “-”, the upgrade will fail. To work around this issue, avoid using hyphen “-” in the user installation directory name. If your existing On-Premises Connector installation directory name contains a hyphen, you must uninstall the existing On-Premises Connector and then perform a new install rather than attempting to upgrade the existing On-Premises Connector installation.
JDBC Driver
  • If you attempt to install the JDBC driver in GUI mode to the default installation directory but do not have appropriate permissions for the default directory, the installer indicates that the installation has succeeded when in fact the driver has not been installed. When attempting an installation under the same circumstance but in console mode, the proper error message is displayed.
  • The default value for the Service connection property does not connect to the Hybrid Data Pipeline server. To connect, set Service to the Hybrid Data Pipeline server in your connection URL.
  • The JDBC 32-bit silent installation fails on Windows 10. Use the standard installation instead.
  • Executing certain queries against MS Dynamics CRM may result in a "Communication failure. Protocol error."
  • When using JNDI data sources, encryptionMethod must be configured through setExtendedOptions (see the sketch following this list).
  • See the d2cjdbcreadme.txt file installed with the JDBC driver for more information.
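    The following is a minimal sketch, not the documented API, of configuring encryptionMethod through setExtendedOptions when registering a JNDI data source. The data source class name (com.ddtek.jdbcx.ddhybrid.DDHybridDataSource), the JNDI name, and the extended options string shown here are assumptions made for illustration; consult the JDBC driver documentation for the actual class name and option syntax.

    // Sketch only: the DataSource class name and option string are assumed for illustration.
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class RegisterHybridDataSource {
        public static void main(String[] args) throws Exception {
            // Assumed DataSource class; check the driver documentation for the actual name.
            com.ddtek.jdbcx.ddhybrid.DDHybridDataSource ds =
                new com.ddtek.jdbcx.ddhybrid.DDHybridDataSource();
            // encryptionMethod cannot be set directly on a JNDI data source;
            // pass it through setExtendedOptions instead.
            ds.setExtendedOptions("EncryptionMethod=SSL");
            // Bind the data source under a JNDI name for application lookups.
            Context ctx = new InitialContext();
            ctx.bind("jdbc/HybridDataPipeline", ds);
        }
    }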
ODBC Driver
  • The default ODBC.INI generated by the installer is missing required entries for Service=, PortNumber=, and HybridDataPipelineDataSource=.
  • Console mode installation is supported only on UNIX.
  • When you first install a driver, you are given the option to install a default data source for that driver. We recommend installing the default data source at this time. If you do not, you will not be able to add a default data source for the driver later without uninstalling and reinstalling the driver.
  • See the d2codbcreadme.txt file installed with the ODBC driver for more information.
All data stores
  • It is recommended that Login Timeout not be disabled (set to 0) for a data source.
  • Using setByte to set parameter values fails when the data store does not support the TINYINT SQL type. Use setShort or setInt to set the parameter value instead of setByte, as sketched below.
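    As an illustration of this workaround, the following minimal sketch binds a small integer value with setShort instead of setByte. The connection URL, table, and column names are hypothetical placeholders.

    // Sketch only: table and column names are hypothetical.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class TinyIntWorkaround {
        public static void main(String[] args) throws Exception {
            // args[0] = connection URL, args[1] = user, args[2] = password
            Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
            PreparedStatement ps =
                con.prepareStatement("UPDATE employees SET status_code = ? WHERE id = ?");
            byte statusCode = 1;
            // ps.setByte(1, statusCode) can fail when the backend has no TINYINT type;
            // widen the value and bind it with setShort (or setInt) instead.
            ps.setShort(1, statusCode);
            ps.setInt(2, 1001);
            ps.executeUpdate();
            con.close();
        }
    }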
Google Analytics
  • A validation message is not displayed when a user enters a Start Date value less than the End Date value on the Create/Update Google Analytics page.
  • Once a Google Analytics OAuth profile is created for a specific Google account, changing the Google Account associated with the profile results in "the configuration options used to open the database do not match the options used to create the database" error being returned for any existing data sources.
Microsoft Dynamics CRM
  • Executing certain queries against MS Dynamics CRM with the JDBC driver may result in a "Communication failure. Protocol error."
  • Testing has shown the following two errors from Microsoft Dynamics CRM Online when executing queries against the ImportData and TeamTemplate tables:
    • Attribute errortype on Entity ImportData is of type picklist but has Child Attributes Count 0
    • Attribute issystem on Entity TeamTemplate is of type bit but has Child Attributes Count 0
    Note: We have filed a case with Microsoft and are waiting to hear back about the cause of the issue.

  • The initial on-premises connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
OpenEdge 10.2b
  • Setting the MaxPooledStatements data source option in an OpenEdge data store to a value other than zero can cause statement not prepared errors to be returned in some situations.
Oracle Marketing Cloud (Oracle Eloqua)
  • Data store issues
    • There are known issues with Batch Operations.
    • The Update/Delete implementation can update only one record at a time. Because of this, the number of API calls executed depends on the number of records that get updated or deleted by the query plus the number of API calls required to fetch the IDs for those records.
    • Lengths of certain text fields are reported as higher than the actual lengths supported in Oracle Eloqua.
  • We are currently working with Oracle to resolve the following issues with the Oracle Eloqua REST API.
    • AND operators that involve different columns are optimized. In other cases, the queries are only partially optimized.
    • OR operators on the same column are optimized. In other cases, the queries are completely post-processed.
    • The data store cannot explicitly insert or update a NULL value in any field.
    • The data store is unable to update certain fields. They are always reported as NULL after an update.
    • Oracle Eloqua uses a double colon (::) as an internal delimiter for multivalued Select fields. Hence, when a value containing a semicolon character (;) is inserted into or updated in a multivalued Select field, the semicolon character gets converted into the double colon character.
    • The query SELECT COUNT(*) FROM template returns incorrect results.
    • Oracle Eloqua APIs do not populate the correct values in the CreatedBy and UpdatedBy fields. Instead of user names, these fields contain a timestamp value.
    • Only equality filters on ID fields are optimized. All other filter conditions do not work correctly with Oracle Eloqua APIs, and the data store post-processes such filters.
    • Filters on non-ID integer fields and Boolean fields do not work correctly. Hence, the driver needs to post-process all such queries.
    • The data store does not distinguish between NULL and empty string. Therefore, null fields are often reported back as empty strings.
    • Values with special characters such as curly braces ({, }), backslash (\), colon (:), slash star (/*), and star slash (*/) are not supported in WHERE clause filter values.
Oracle Sales Cloud
  • Currently, passing filter conditions to Oracle Sales Cloud works only for simple, single column conditions. If there are multiple filters with 'AND' and 'OR', only partial or no filters are passed to Oracle Sales Cloud.
  • Oracle Sales Cloud reports the data type of String and Date fields as String. Therefore, when such fields are filtered or ordered in Hybrid Data Pipeline, they are treated as String values. However, when filter conditions are passed to Oracle Sales Cloud, Oracle Sales Cloud can distinguish between the actual data types and apply Date specific comparisons to Date fields. Therefore, query results can differ depending on whether filters have been passed down to Oracle Sales Cloud or processed by Hybrid Data Pipeline.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning the >=, <=, and != comparison operators when querying String fields. Therefore, Hybrid Data Pipeline has not been optimized to pass these comparison operators to Oracle Sales Cloud. We are working with Oracle on this issue.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning queries with filter operations on Boolean fields. Therefore, Hybrid Data Pipeline has not been optimized to pass filter operations on Boolean fields to Oracle Sales Cloud. We are working with Oracle on this issue.
  • The drivers currently report ATTACHMENT type fields in the metadata but do not support retrieving data for these fields. These fields are set to NULL.
  • Join queries between parent and child tables are not supported.
  • Queries on child tables whose parent has a composite primary key are not supported. For example, the children of ACTIVITIES_ACTIVITYCONTACT and LEADS_PRODUCTS are not accessible.
  • Queries on the children of relationship objects are not supported. For example, the children of ACCOUNTS_RELATIONSHIP, CONTACTS_RELATIONSHIP, and HOUSEHOLDS_RELATIONSHIP are not accessible.
  • Queries on grandchildren with multiple sets of Parent IDs and Grand Parent IDs used in an OR clause are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS_ADDRESSPURPOSE
    Where (ACCOUNTS_PARTYNUMBER = 'OSC_12343' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2XZKGH')
    OR (ACCOUNTS_PARTYNUMBER = 'OSC_12344' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2YZKGH')
  • When querying documented objects like "CATALOGPRODUCTITEMS" and "CATEGORYPRODUCTITEMS", no more than 500 records are returned, even when more records may be present. This behavior is also seen with some custom objects. We are currently working with Oracle support to resolve this issue.
  • A query on OPPORTUNITIES_CHILDREVENUE_PRODUCTS or LEADS_PRODUCTGROUPS with a filter on the primary key column returns 0 records even when more records are present. We are currently working with Oracle support to resolve this issue.
  • Queries that contain subqueries returning more than 100 records are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS
    Where ACCOUNTS_PARTYNUMBER
    In (Select Top 101 PARTYNUMBER From ACCOUNTS)
  • When you create custom objects, your Oracle Sales Cloud administrator must enable these objects for REST API access through Application Composer. Otherwise, you will not be able to query against these custom objects.
Oracle Service Cloud
  • When you create a custom object, your Oracle Service Cloud administrator must enable all four columns of the Object Fields tab of the Object Designer, or you cannot query against the custom objects.
  • The initial connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
SugarCRM
  • Data sources that are using the deprecated enableExportMode option will still see a problem until they are migrated to the new data source configuration.
  • Data source connections now use export mode by default to communicate with the SugarCRM server, providing increased performance when querying large sets of data. However, bulk export mode causes NULL values in currency columns to be returned as the value 0, so there is no way to differentiate between a NULL value and 0 when operating in export mode. This can be a problem when currency columns are used in SQL filter conditions with operations such as =, <>, >, >=, <, <=, IS NULL, and IS NOT NULL. For example, suppose a currency column in a SugarCRM table has 3 NULL values and 5 values that are 0. A query that returns all NULL values (SELECT * FROM <table> WHERE <currency column> IS NULL) returns 3 rows. However, a query that applies an arithmetic operation to the column (SELECT * FROM <table> WHERE <currency column> + 1 = 1) returns all 8 records, because the 3 NULL values are treated as 0.

Third Party Acknowledgments

Refer to Hybrid Data Pipeline Third Party Acknowledgments.



Release 4.2.0

Progress DataDirect Hybrid Data Pipeline is a data access server that provides simple, secure access to cloud and on-premises data sources, such as RDBMS, Big Data, and NoSQL. Hybrid Data Pipeline allows business intelligence tools and applications to use ODBC, JDBC, or OData to access data from supported data sources. Hybrid Data Pipeline can be installed in the cloud or behind a firewall. Hybrid Data Pipeline can then be configured to work with applications and data sources in nearly any business environment. Progress DataDirect Hybrid Data Pipeline consists of four primary, separately installed components.

  • The Hybrid Data Pipeline server provides access to multiple data sources through a single, unified interface. The server can be hosted on premises or in the cloud.

  • The On-Premises Connector enables the Hybrid Data Pipeline to establish a secure connection from the cloud to an on-premises data source.

  • The ODBC driver enables ODBC applications to communicate to a data source through the Hybrid Data Pipeline server.

  • The JDBC driver enables JDBC applications to communicate to a data source through the Hybrid Data Pipeline server.


4.2.0 Release Notes

Security

On-Premises Connector in a Hybrid Data Pipeline Cluster
  • Support for the On-Premises Connector in a cluster environment in which multiple Hybrid Data Pipeline nodes run behind a load balancer. The On-Premises Connector allows cloud applications to securely query on-premises data sources without requiring a VPN or other gateway.
Account Lockout Policy
  • Support for implementing an account lockout policy. An account lockout policy can be implemented with the Administrator Limits API. An account lockout policy allows the administrator to set the number of consecutive failed authentication attempts that result in a user account being locked, as well as the lockout period and the duration of time that failed attempts are counted. When a lockout occurs, the user is unable to authenticate until the specified period of time has passed or until the administrator unlocks the account.
    With Release 4.2.0, the Hybrid Data Pipeline account lockout policy is enabled by default in accordance with Federal Risk and Authorization Management Program (FedRAMP) low- and medium-risk guidelines. The number of failed authentication attempts is limited to 3 in a 15-minute period. Once this limit is met, the user account is locked for 30 minutes. A sketch of the kind of API call involved in adjusting these limits follows.
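    As a rough sketch only: the following example shows the general shape of an HTTP call an administrator might use to adjust an account lockout limit (it assumes Java 11 or later for java.net.http). The URI path, limit name (MaxFailedLoginAttempts), and payload are illustrative assumptions, not the documented Limits API; refer to the Hybrid Data Pipeline documentation for the actual limit names and endpoints.

    // Sketch only: the URI, limit name, and payload are illustrative assumptions.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SetLockoutLimit {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Hypothetical limits endpoint: raise the failed-attempt threshold from 3 to 5.
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://myserver:8443/api/admin/limits/system/MaxFailedLoginAttempts"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic <base64-encoded-credentials>")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"value\": 5}"))
                .build();
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }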
CORS Filters
  • Support for cross-origin resource sharing (CORS) filters that allow the sharing of web resources across domains. While the default CORS setting is off, CORS filters can be enabled with the Administrator Limits API and a list of trusted origins can be enabled with the Whitelist APIs to fully implement CORS filtering. CORS provides several advantages over sites with a single-origin policy, including improved resource management and two-way integration between third-party sites.
JVM Upgrade
  • The Hybrid Data Pipeline server and On-Premises Connector have been upgraded to install and use Java SE 8 (8u131).
Tomcat Upgrade
  • The Hybrid Data Pipeline server and On-Premises Connector have been upgraded to install and use Tomcat 8.0.46.


Enhancements

Hybrid Data Pipeline Server
  • OData 4.0 Support. Support for the OData 4.0 specification. OData 4.0 support includes the following:
    • Support for the $search clause
    • Support for $batch requests
    • The $expand clause has been enhanced to support $select, *, $filter, and $top operations
    • The $metadata clause has been enhanced to support the full, minimal, and none arguments
    • Support for the date, dateTimeOffset, and timeOfDay data types
    • Only supports JSON format for payloads
  • OData Model Status
    • OData Model status now displays the timestamp of the OData model creation. The timestamp is displayed only when model creation is completed successfully.
    • Users can now view the details of the tables and/or columns that were dropped while generating the OData Model for a given schema map of a Data Source. You can view the warnings through the Web UI as well as through the API.
  • Version Information. Support for version information to be returned from the /api/mgmt/version endpoint. This feature is now accessible via all user accounts. The response is returned in a JSON-style format with the following syntax.

{
"HDPVersion": "major.minor.service_pack.build_number",
"WAPVersion": "major.minor.service_pack.build_number",
"DASVersion": "major.minor.service_pack.build_number"
}

 

  • OData Query Throttling. Support for OData query throttling. OData throttling can be implemented with the Administrator Limits API. When OData throttling is enabled, rows are fetched one page in advance of application requests. In addition, administrators can specify a maximum number of concurrent OData queries to prevent users from exhausting system and database resources.
  • Apache Kafka Message Queue. Support for the Apache Kafka message queue in a Hybrid Data Pipeline cluster. Apache Kafka allows you to distribute your message queue over multiple nodes for a high level of availability and fault tolerance. Note that Apache Kafka is not included as part of the product installation. For download information, refer to https://kafka.apache.org/.
  • Logging. Support for configuring logging at the data source and user level. Administrators can configure logging using the Web UI or the Administrator Logging API.
  • Microsoft SQL Server System Database. Support for SQL Server as an external system database. During the installation process, you are prompted to select either an internal database or an external database to store system information necessary for the operation of Hybrid Data Pipeline. With this enhancement, you can choose Oracle, MySQL Community Edition, or SQL Server as an external database.
  • Shared Key Location. During the installation of the Hybrid Data Pipeline server, you are prompted to specify a "Key location" for the generated key. The directory specified serves as the location for additional internal files used in the installation and operation of the server. These files include properties files, encryption keys, and system information. In particular, the files located in the redist subdirectory after installation of the server must be used in the installation of the On-Premises Connector, the ODBC driver, and the JDBC driver. See the Progress DataDirect Hybrid Data Pipeline Installation Guide for details.
  • Installation Procedures and Response File. The installation procedures have been modified with the introduction of support for the On-Premises Connector in a Hybrid Data Pipeline cluster, support for Apache Kafka message queue in a Hybrid Data Pipeline cluster, and support for the Microsoft SQL Server system database. New prompts have been added to the installation process. Several of these prompts have corresponding options that appear in a response file generated by the latest installer for silent installation. If you are using a response file generated by an earlier version of the installer, you should regenerate the response file with the latest installer. The new response file should then be used for silent installations. The following table provides the new settings. The settings may differ depending on whether you generate the response file with a GUI or console installation. Further details are available in the Progress DataDirect Hybrid Data Pipeline Installation Guide.

Note: Values for the SKIP_HOSTNAME_VALIDATION and SKIP_PORT_VALIDATION options are now false | true, where false disables validation and true enables it. These options have the same name in GUI-generated and console-generated response files.



Note: Values for the SKIP_LB_HOSTNAME_VALIDATION option are now false | true, where false disables validation and true enables it. This option has the same name in GUI-generated and console-generated response files.


New response file options
  • D2C_USING_KAFKA_CONFIG (GUI) / D2C_USING_KAFKA_CONFIG_CONSOLE (console): Specifies whether you are using an Apache Kafka message queue service.
  • D2C_MESSAGE_QUEUE_SERVERS (GUI) / D2C_MESSAGE_QUEUE_SERVERS_CONSOLE (console): Specifies the servers in your Apache Kafka cluster.
  • D2C_HDP_CLUSTER_NAME (GUI) / D2C_HDP_CLUSTER_NAME_CONSOLE (console): Specifies a name for your Hybrid Data Pipeline cluster used by the Apache Kafka message queue service.
  • D2C_DB_VENDOR_MSSQLSERVER (GUI only): Specifies whether you are using SQL Server as an external systems database. In a console mode response file, the external database is specified with the D2C_DB_VENDOR_CONSOLE option.
  • D2C_DB_PORT_MSSQLSERVER (GUI only): Specifies the port number of a SQL Server external systems database. In a console mode response file, the external database port is specified with the D2C_DB_PORT_CONSOLE option.
  • D2C_SCHEMA_NAME (GUI) / D2C_SCHEMA_NAME_CONSOLE (console): Specifies the name of the schema to be used to store systems information when a SQL Server external systems database is being used.

On-Premises Connector
  • Support for the On-Premises Connector in a cluster environment in which multiple Hybrid Data Pipeline nodes run behind a load balancer. The On-Premises Connector allows cloud applications to securely query on-premises data sources without requiring a VPN or other gateway.
Apache Hive
  • Certified with Hive 2.0 and 2.1.
IBM DB2
  • Certified with DB2 for i 7.3.
Oracle Database
  • Certified with Oracle 12c R2 (12.2).
Oracle Sales Cloud
  • Support for proxy server connections.

Resolved Issues

Hybrid Data Pipeline server
  • Issue 71841. Resolved an issue where the Hybrid Data Pipeline server failed to honor the START_ON_INSTALL environment variable to stop and start Tomcat services.
  • Resolved an issue where the installer accepted an SSL certificate only in the PEM file format during the installation of the server for a cluster environment. The installer now accepts the SSL certificate (root certificate) in PEM, DER, or base64 encodings for a cluster installation.
  • Resolved an issue where an SSL certificate was required for a cluster installation. An SSL certificate is no longer required for a cluster installation.
  • Resolved an issue that prevented the installer from supporting a number of upgrade scenarios.
JDBC driver
  • Resolved an issue where the JDBC driver was not connecting to the Hybrid Data Pipeline server by default when running on a UNIX/Linux system.

Known Issues

Load Balancer Port Limitation
  • Either port 80 for non-SSL environments, or port 443 for SSL environments, must be used in the configuration of a load balancer used to support a Hybrid Data Pipeline cluster. Non-standard ports in the configuration of a load balancer are not currently supported.
Web UI
  • If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may face issues while trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
  • When an administrator tries to add new users using the Add Users window, the Password and Confirm Password fields occasionally do not appear properly in the popup window.
  • COPY DETAILS functionality is not currently working in Internet Explorer 11 due to a limitation with the third-party plugin Clipboard.js on Bootstrap modals. More details can be found at https://github.com/zenorocha/clipboard.js/wiki/Known-Issues.
Management API
  • When the Limits API (throttling) is used to set a row limit and createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE) is being used, a row-limit-exceeded error is returned at the row limit instead of one row beyond the limit. For example, if a row limit of 45 rows is set and a scrollable, insensitive result set returns more rows than the specified limit, the connectivity service returns the following error on the 45th row instead of the expected 46th row: The limit on the number of rows that can be returned from a query -- 45 -- has been exceeded.
  • If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may face issues while trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
OData
  • OData 4.0 support for $expand does not work with the following data stores: Salesforce, Dynamics CRM, SugarCRM, Rollbase, Google Analytics, and Oracle Service Cloud.
  • $expand is supported only one level deep. Take, for example, the following entity hierarchy:
    Customers
    |-- Orders
    | |-- OrderItems
    |-- Contacts


    The following queries are supported:
    Customers?$expand=Orders
    Customers?$expand=Contacts
    Customers?$expand=Orders,Contacts


    However, this query is not supported:
    Customers?$expand=Orders,OrderItems

    OrderItems is a second level entity with respect to Customers. To query Orders and OrderItems, the query must be rooted at Orders. For example:
    Orders?$expand=OrderItems
    Orders(id)?$expand=OrderItems


  • The Hybrid Data Pipeline OData model async API incorrectly returns zero instead of the actual percentage complete when querying the status of a model that is being generated.
  • When manually editing the ODataSchemaMap value, the table names and column names specified in the value are case-sensitive. The case of the table and column names must match the case of the table and column names reported by the data source.
    Note: It is highly recommended that you use the OData Schema Editor to generate the value for the ODataSchemaMap data source option. The Schema Editor takes care of table and column name casing and other syntactic details.
  • The $expand clause is not supported with OpenEdge data sources when filtering for more than a single table.
  • The day, endswith, and cast functions are not working when specified in a $filter clause when querying a DB2 data source.
On-Premise Connector
  • If User Account Control is enabled on your Windows machine and you installed the On-Premises Connector in a system folder (such as Windows or Program Files), you must run the On-Premises Connector Configuration Tool in administrator mode.
  • When using Kerberos with Microsoft Dynamics, the JRE installed with the On-Premises Connector must be configured to run with Kerberos. Take the following steps to configure the JRE.
    • Download a zip file containing a new version of the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for JDK/JRE 8 at http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html.
    • Unzip the file into the \jre\lib\security directory to update the Java security policy files to support 256-bit encryption:
      • C:\<installdir>\jre\lib\security\local_policy.jar
      • C:\<installdir>\jre\lib\security\US_export_policy.jar
  • Uninstalling and re-installing the On-Premises Connector causes the Connector ID of the On-Premises Connector to change. Any Hybrid Data Pipeline data sources using the old Connector ID must be updated to use the new Connector ID. Installing to a new directory allows both the old and new On-Premises Connector to exist side-by-side. However, you must update the Connector ID option in previously-defined Hybrid Data Pipeline data sources to point to the new On-Premises Connector. In addition, you must update Connector Id wherever it was used, such as the definitions of Group Connectors and Authorized Users. Note that upgrading an existing installation of the On-Premises Connector maintains the Connector ID.
  • When upgrading the On-Premises Connector, if the specified user installation directory contains a hyphen “-”, the upgrade will fail. To work around this issue, avoid using hyphen “-” in the user installation directory name. If your existing On-Premises Connector installation directory name contains a hyphen, you must uninstall the existing On-Premises Connector and then perform a new install rather than attempting to upgrade the existing On-Premises Connector installation.
JDBC Driver
  • If you attempt to install the JDBC driver in GUI mode to the default installation directory but do not have appropriate permissions for the default directory, the installer indicates that the installation has succeeded when in fact the driver has not been installed. When attempting an installation under the same circumstance but in console mode, the proper error message is displayed.
  • The default value for the Service connection property does not connect to the Hybrid Data Pipeline server. To connect, set Service to the Hybrid Data Pipeline server in your connection URL.
  • The JDBC 32-bit silent installation fails on Windows 10. Use the standard installation instead.
  • Executing certain queries against MS Dynamics CRM may result in a "Communication failure. Protocol error."
  • When using JNDI data sources, encryptionMethod must be configured through setExtendedOptions.
  • See the d2cjdbcreadme.txt file installed with the JDBC driver for more information.
ODBC Driver
  • The default ODBC.INI generated by the installer is missing required entries for Service=, PortNumber=, and HybridDataPipelineDataSource=.
  • Console mode installation is supported only on UNIX.
  • When you first install a driver, you are given the option to install a default data source for that driver. We recommend installing the default data source at this time. If you do not, you will not be able to add a default data source for the driver later without uninstalling and reinstalling the driver.
  • See the d2codbcreadme.txt file installed with the ODBC driver for more information.
All Data Stores
  • It is recommended that Login Timeout not be disabled (set to 0) for a data source.
  • Using setByte to set parameter values fails when the data store does not support the TINYINT SQL type. Use setShort or setInt to set the parameter value instead of setByte.
Google Analytics
  • A validation message is not displayed when a user enters a Start Date value less than the End Date value on the Create/Update Google Analytics page.
  • Once a Google Analytics OAuth profile is created for a specific Google account, changing the Google Account associated with the profile results in "the configuration options used to open the database do not match the options used to create the database" error being returned for any existing data sources.
Microsoft Dynamics CRM
  • Executing certain queries against MS Dynamics CRM with the JDBC driver may result in a "Communication failure. Protocol error."
  • Testing has shown the following two errors from Microsoft Dynamics CRM Online when executing queries against the ImportData and TeamTemplate tables:
    • Attribute errortype on Entity ImportData is of type picklist but has Child Attributes Count 0
    • Attribute issystem on Entity TeamTemplate is of type bit but has Child Attributes Count 0
    Note: We have filed a case with Microsoft and are waiting to hear back about the cause of the issue.

  • The initial on-premises connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
OpenEdge 10.2b
  • Setting the MaxPooledStatements data source option in an OpenEdge data store to a value other than zero can cause statement not prepared errors to be returned in some situations.
Oracle Marketing Cloud (Oracle Eloqua)
  • Data store issues
    • There are known issues with Batch Operations.
    • The Update/Delete implementation can update only one record at a time. Because of this, the number of API calls executed depends on the number of records that get updated or deleted by the query plus the number of API calls required to fetch the IDs for those records.
    • Lengths of certain text fields are reported as higher than the actual lengths supported in Oracle Eloqua.
  • We are currently working with Oracle to resolve the following issues with the Oracle Eloqua REST API.
    • AND operators that involve different columns are optimized. In other cases, the queries are only partially optimized.
    • OR operators on the same column are optimized. In other cases, the queries are completely post-processed.
    • The data store cannot explicitly insert or update a NULL value in any field.
    • The data store is unable to update certain fields. They are always reported as NULL after an update.
    • Oracle Eloqua uses a double colon (::) as an internal delimiter for multivalued Select fields. Hence, when a value containing a semicolon character (;) is inserted into or updated in a multivalued Select field, the semicolon character gets converted into the double colon character.
    • The query SELECT COUNT(*) FROM template returns incorrect results.
    • Oracle Eloqua APIs do not populate the correct values in the CreatedBy and UpdatedBy fields. Instead of user names, these fields contain a timestamp value.
    • Only equality filters on ID fields are optimized. All other filter conditions do not work correctly with Oracle Eloqua APIs, and the data store post-processes such filters.
    • Filters on non-ID integer fields and Boolean fields do not work correctly. Hence, the driver needs to post-process all such queries.
    • The data store does not distinguish between NULL and empty string. Therefore, null fields are often reported back as empty strings.
    • Values with special characters such as curly braces ({, }), backslash (\), colon (:), slash star (/*), and star slash (*/) are not supported in WHERE clause filter values.
Oracle Sales Cloud
  • Currently, passing filter conditions to Oracle Sales Cloud works only for simple, single column conditions. If there are multiple filters with 'AND' and 'OR', only partial or no filters are passed to Oracle Sales Cloud.
  • Oracle Sales Cloud reports the data type of String and Date fields as String. Therefore, when such fields are filtered or ordered in Hybrid Data Pipeline, they are treated as String values. However, when filter conditions are passed to Oracle Sales Cloud, Oracle Sales Cloud can distinguish between the actual data types and apply Date specific comparisons to Date fields. Therefore, query results can differ depending on whether filters have been passed down to Oracle Sales Cloud or processed by Hybrid Data Pipeline.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning the >=, <=, and != comparison operators when querying String fields. Therefore, Hybrid Data Pipeline has not been optimized to pass these comparison operators to Oracle Sales Cloud. We are working with Oracle on this issue.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning queries with filter operations on Boolean fields. Therefore, Hybrid Data Pipeline has not been optimized to pass filter operations on Boolean fields to Oracle Sales Cloud. We are working with Oracle on this issue.
  • The drivers currently report ATTACHMENT type fields in the metadata but do not support retrieving data for these fields. These fields are set to NULL.
  • Join queries between parent and child tables are not supported.
  • Queries on child tables whose parent has a composite primary key are not supported. For example, the children of ACTIVITIES_ACTIVITYCONTACT and LEADS_PRODUCTS are not accessible.
  • Queries on the children of relationship objects are not supported. For example, the children of ACCOUNTS_RELATIONSHIP, CONTACTS_RELATIONSHIP, and HOUSEHOLDS_RELATIONSHIP are not accessible.
  • Queries on grandchildren with multiple sets of Parent IDs and Grand Parent IDs used in an OR clause are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS_ADDRESSPURPOSE
    Where (ACCOUNTS_PARTYNUMBER = 'OSC_12343' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2XZKGH')
    OR (ACCOUNTS_PARTYNUMBER = 'OSC_12344' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2YZKGH')
  • When querying documented objects like "CATALOGPRODUCTITEMS" and "CATEGORYPRODUCTITEMS", no more than 500 records are returned, even when more records may be present. This behavior is also seen with some custom objects. We are currently working with Oracle support to resolve this issue.
  • A query on OPPORTUNITIES_CHILDREVENUE_PRODUCTS or LEADS_PRODUCTGROUPS with a filter on the primary key column returns 0 records even when more records are present. We are currently working with Oracle support to resolve this issue.
  • Queries that contain subqueries returning more than 100 records are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS
    Where ACCOUNTS_PARTYNUMBER
    In (Select Top 101 PARTYNUMBER From ACCOUNTS)
  • When you create custom objects, your Oracle Sales Cloud administrator must enable these objects for REST API access through Application Composer. Otherwise, you will not be able to query against these custom objects.
Oracle Service Cloud
  • When you create a custom object, your Oracle Service Cloud administrator must enable all four columns of the Object Fields tab of the Object Designer, or you cannot query against the custom objects.
  • The initial connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
SugarCRM
  • Data sources that are using the deprecated enableExportMode option will still see a problem until they are migrated to the new data source configuration.
  • Data source connections now use export mode by default to communicate with the SugarCRM server, providing increased performance when querying large sets of data. However, bulk export mode causes NULL values in currency columns to be returned as the value 0, so there is no way to differentiate between a NULL value and 0 when operating in export mode. This can be a problem when currency columns are used in SQL filter conditions with operations such as =, <>, >, >=, <, <=, IS NULL, and IS NOT NULL. For example, suppose a currency column in a SugarCRM table has 3 NULL values and 5 values that are 0. A query that returns all NULL values (SELECT * FROM <table> WHERE <currency column> IS NULL) returns 3 rows. However, a query that applies an arithmetic operation to the column (SELECT * FROM <table> WHERE <currency column> + 1 = 1) returns all 8 records, because the 3 NULL values are treated as 0.

Third Party Acknowledgments

Refer to Hybrid Data Pipeline Third Party Acknowledgments.



Release 4.1.0

Progress DataDirect Hybrid Data Pipeline is a data access server that provides simple, secure access to cloud and on-premises data sources, such as RDBMS, Big Data, and NoSQL. Hybrid Data Pipeline allows business intelligence tools and applications to use ODBC, JDBC, or OData to access data from supported data sources. Hybrid Data Pipeline can be installed in the cloud or behind a firewall. Hybrid Data Pipeline can then be configured to work with applications and data sources in nearly any business environment. Progress DataDirect Hybrid Data Pipeline consists of four primary, separately installed components.

  • The Hybrid Data Pipeline server provides access to multiple data sources through a single, unified interface. The server can be hosted on premises or in the cloud.

  • The On-Premises Connector enables the Hybrid Data Pipeline to establish a secure connection from the cloud to an on-premises data source.

  • The ODBC driver enables ODBC applications to communicate to a data source through the Hybrid Data Pipeline server.

  • The JDBC driver enables JDBC applications to communicate to a data source through the Hybrid Data Pipeline server.

Changes Since Release 4.1.0

Enhancements

Hybrid Data Pipeline server
  • Account Lockout Policy (Limits API). Support has been added for implementing an account lockout policy. An account lockout policy allows the administrator to set the number of consecutive failed authentication attempts that result in a user account being locked, as well as the lockout period and the duration of time that failed attempts are counted. When a lockout occurs, the user is unable to authenticate until the specified period of time has passed or until the administrator unlocks the account.
  • Configurable CORS Behavior (Limits API). Support has been added for disabling the cross-origin resource sharing (CORS) filter in environments that do not require it. Because Hybrid Data Pipeline does not currently support filtering of cross-origin requests, disabling the CORS filter can provide added security against cross-site request forgery attacks.
Apache Hive
  • Certified with Apache Hive 2.0 and 2.1.
IBM DB2
  • Certified with DB2 for i 7.3
Oracle Database
  • Certified with Oracle 12c R2 (12.2).

Resolved Issues

Hybrid Data Pipeline server
  • Version 4.1.0.44. Bug 71841. Resolved an issue where the Hybrid Data Pipeline server failed to honor the START_ON_INSTALL environment variable to stop and start Tomcat services.
  • Version 4.1.0.44. Resolved an issue where the installer accepted an SSL certificate only in the PEM file format during the installation of the server for a cluster environment. The installer now accepts the SSL certificate (root certificate) in PEM, DER, or base64 encodings for a cluster installation.
  • Version 4.1.0.44. Resolved an issue where an SSL certificate was required for a cluster installation. An SSL certificate is no longer required for a cluster installation.
  • Version 4.1.0.44. Resolved an issue that prevented the installer from supporting a number of upgrade scenarios.
JDBC driver
  • Version 4.1.0.7. Resolved an issue where the JDBC driver was not connecting to the Hybrid Data Pipeline server by default when running on a UNIX/Linux system.

4.1.0 Release Notes

Security

OpenSSL
  • The default OpenSSL library has been updated to 1.0.2k, which fixes the following security vulnerabilities.
    • Truncated packet could crash via OOB read (CVE-2017-3731)
    • BN_mod_exp may produce incorrect results on x86_64 (CVE-2017-3732)
    • Montgomery multiplication may produce incorrect results (CVE-2016-7055)

    OpenSSL 1.0.2k addresses vulnerabilities resolved by earlier versions of the library. For more information on OpenSSL vulnerabilities resolved by this upgrade, refer to OpenSSL announcements.

SSL Enabled Data Stores
  • The default value for Crypto Protocol Version has been updated to TLSv1, TLSv1.1, TLSv1.2 for data stores that support the option. This change improves the security of the connectivity service by employing only the most secure cryptographic protocols as the default behavior. At connection, the connectivity service will attempt to use the most secure protocol first, TLS 1.2, then fall back to use 1.1 and then 1.0.
On-Premises Connector
  • The On-Premises Connector has been enhanced to resolve a security vulnerability. We strongly recommend upgrading to the latest version to take advantage of this fix.
Apache Hive Data Store
  • Hybrid Data Pipeline now supports SSL for Apache Hive data stores running Apache Hive 0.13.0 or higher.
SQL Server Data Store
  • Support for NTLMv2 authentication has been added for the SQL Server data store. NTLMv2 authentication can be specified in the Authentication Method field under the Security tab.

Enhancements

Hybrid Data Pipeline server
  • Hybrid Data Pipeline Cluster. To support scalability, the Hybrid Data Pipeline service can be deployed on multiple nodes behind a load balancer. Incoming requests can be evenly distributed across cluster nodes. SSL communication is supported if the load balancer supports SSL termination. Session affinity is supported to bind a client query to a single node for improved performance. (Session affinity must be enabled in the load balancer to support the Web UI and ODBC and JDBC clients.) HTTP health checks are supported via the Health Check API.
  • MySQL Community Edition Data Store. Support for MySQL Community Edition has been added to Hybrid Data Pipeline. During installation of the Hybrid Data Pipeline server and the On-Premises Connector, you provide the location of the MySQL Connector/J driver. After installation, you may then configure data sources that connect to a MySQL Community Edition data store and execute queries with ODBC, JDBC, and OData applications.
  • MySQL Community Edition System Database. Support for MySQL Community Edition as an external system database has been added. During the installation process, you are prompted to select either an internal database or an external database to store system information necessary for the operation of Hybrid Data Pipeline. With this enhancement, you can choose either Oracle or MySQL Community Edition as an external database.
  • Installation Procedures and Response File. The installation procedures have been modified with the introduction of support for the Hybrid Data Pipeline cluster, the MySQL Community Edition data store, and the MySQL Community Edition system database. New prompts have been added to the installation process. Several of these prompts have corresponding settings that must be used in the response file for silent installation of the server. If you are performing silent installations of the server, your response file must be modified accordingly. The following list provides the new settings. The settings may differ depending on whether you generate the response file with a GUI or console installation. Further details are available in the Progress DataDirect Hybrid Data Pipeline Installation Guide.
    Note: Values for the SKIP_HOSTNAME_VALIDATION and SKIP_PORT_VALIDATION options have been changed from false | true to 0 | 1. These options have the same name in GUI-generated and console-generated response files.
    Note: Values for the SKIP_LB_HOSTNAME_VALIDATION option are currently 0 for disable and true for enable. In a future release, the values will be 0 for disable and 1 for enable. This option has the same name in GUI-generated and console-generated response files.
    New response file options. The first name in the list is the name of the response file option generated by the GUI installer. The second name in the list is the name generated by the console mode installer. (If only one value is provided, there is no corresponding value for console mode.) A sample response file excerpt follows this list.
    • USING_LOAD_BALANCING_YES | D2C_USING_LOAD_BALANCING_CONSOLE - Specifies whether you are installing the service on a node behind a load balancer.
    • LOAD_BALANCING_HOST_NAME | LOAD_BALANCING_HOST_NAME_CONSOLE - Specifies the hostname of the load balancer appliance or the machine hosting the load balancer service.
    • USING_LOAD_BALANCING_NO - Specifies whether you are installing the service on a node behind a load balancer. For console installation, only D2C_USING_LOAD_BALANCING_CONSOLE is used.
    • SKIP_LB_HOSTNAME_VALIDATION | SKIP_LB_HOSTNAME_VALIDATION - Specifies whether the installer should validate the load balancer hostname during the installation of a node.
    • D2C_CERT_FILE | D2C_CERT_FILE_CONSOLE - Specifies the fully qualified path of the Certificate Authority certificate that signed the load balancer server certificate. This certificate is used to create the trust store used by ODBC and JDBC clients.
    • D2C_DB_MYSQL_COMMUNITY_SUPPORT_YES | D2C_DB_MYSQL_COMMUNITY_SUPPORT_CONSOLE - Specifies whether the service will support MySQL Community Edition data store.
    • D2C_DB_MYSQL_JAR_PATH | D2C_DB_MYSQL_JAR_PATH_CONSOLE - Specifies the fully qualified path of the MySQL Connector/J jar file used to support a MySQL Community Edition data store.
    • D2C_DB_MYSQL_COMMUNITY_SUPPORT_NO - Specifies whether the service will support MySQL Community Edition data store. For console installation, only D2C_DB_MYSQL_COMMUNITY_SUPPORT_CONSOLE is used.
    • D2C_DB_VENDOR_MYSQL - Specifies whether a MySQL Community Edition database will be used as the external system database. For console mode installations, D2C_DB_VENDOR_CONSOLE is used to specify an Oracle or MySQL Community Edition external system database.
    • D2C_DB_PORT_MYSQL - Specifies the port number of the MySQL Community Edition external database. For console mode installations, D2C_DB_PORT_CONSOLE is used to specify the port of either an Oracle or MySQL Community Edition external system database.
    • USER_INPUT_KEY_LOCATION | USER_INPUT_KEY_LOCATION_CONSOLE - Specifies the fully qualified path of the encryption key to be shared by the nodes in a cluster environment.
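    For reference, a minimal response file excerpt for a GUI-generated silent installation of a node behind a load balancer with MySQL Community Edition support might look like the following sketch. The values shown are illustrative placeholders and assumptions, not documented defaults; consult the Installation Guide for the complete set of required settings.
      USING_LOAD_BALANCING_YES=1
      LOAD_BALANCING_HOST_NAME=loadbalancer.example.com
      SKIP_LB_HOSTNAME_VALIDATION=0
      D2C_CERT_FILE=/opt/certs/lb_ca_cert.pem
      D2C_DB_MYSQL_COMMUNITY_SUPPORT_YES=1
      D2C_DB_MYSQL_JAR_PATH=/opt/mysql/mysql-connector-java.jar
      USER_INPUT_KEY_LOCATION=/opt/hdp/shared/encryption.key
      SKIP_HOSTNAME_VALIDATION=0
      SKIP_PORT_VALIDATION=0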
  • Throttling (Limits API). Support for throttling to prevent a user or group of users from adversely impacting the performance of the connectivity service has been added. The Limits API allows administrators to set limits on how many rows can be returned for ODBC, JDBC, and OData requests. An error is returned if an application fetches rows beyond the specified limit.
  • Refresh Map. The new refresh map button has been added to the Mapping tab. This button allows you to refresh the map without connecting to the data store. This feature is useful when you are in the process of developing your application and you have made changes to the objects in your backend data store. Pressing this button forces the data store to rebuild the map allowing the new objects to show up in the relational map the next time your application connects to the data source. (The map can also be refreshed with a Management API call or when establishing a connection.)
  • SQL Editor. The SQL editor in the SQL Testing view has been upgraded. The functionality of the new editor is similar to that of the previous editor. However, the history panel is not currently supported with the new editor.
  • OpenAccess Server. The OpenAccess server component has been deprecated. The OpenAccess server is no longer required to connect with Oracle Eloqua.
On-Premises Connector
  • Upgraded to use Tomcat 8.0.41
  • Upgraded to use Java SE 8
  • Support for Windows Server 2003 has been deprecated
Hybrid Data Pipeline ODBC Driver
  • Certified with CentOS Linux 4.x, 5.x, 6.x, and 7.x
  • Certified with Debian Linux 7.11, 8.5
  • Certified with Oracle Linux 4.x, 5.x, 6.x, and 7.x
  • Certified with Ubuntu Linux 14.04, 16.04
  • Support for Windows Server 2003 has been deprecated
Apache Hive
  • Added SSL support for Apache Hive 0.13.0 and higher
  • Certified with Apache Hive 0.13, 0.14, 1.0, 1.1, 1.2
  • Certified with Amazon (AMI) 3.2, 3.3.1, 3.7
  • Certified with Cloudera (CDH) 5.0, 5.1, 5.2, 5.3, 5.4, 5.6, 5.7
  • Certified with Hortonworks (HDP) 2.1, 2.2
  • Certified with IBM BigInsights 4.1
  • Certified with Pivotal HD (PHD) 2.1
Greenplum
  • Made generally available
  • Certified with Greenplum 4.3
  • Certified with Pivotal HAWQ 1.2, 2.0
IBM DB2
  • Certified with IBM DB2 V11.1 for LUW
  • Certified with DB2 for i 7.2
Informix
  • Made generally available
  • Certified with Informix 12.10
  • Certified with Informix 11.7, 11.5, 11.0
  • Certified with Informix 10.0
  • Certified with Informix 9.4, 9.3, 9.2
Oracle Marketing Cloud (Oracle Eloqua)

The Oracle Marketing Cloud data store provides access to Oracle Eloqua. Improved features and functionality for this data store are available with this Hybrid Data Pipeline release.

  • Write Access
    • Support for INSERT/UPDATE/DELETE operations on CONTACT, ACCOUNT and CustomObjects_XXX
  • Bulk Calls
    • Performance improvement for bulk calls
    • Supports fetching more than 5 million records
    • Supports fetching up to 250 columns for bulk calls
    • Supports pushing OR operators for bulk calls (This does not apply to Activities)
  • REST Calls
    • Some queries with OR and AND operators have been optimized.
  • Metadata
    • The data store now uses null as the catalog name. Previously, ECATALOG was used as the catalog name.
    • The current version of the data store maps columns with integer data to type INTEGER. The previous version mapped the integer type to string.
  • In contrast to the previous version, the current version of the data store cannot split OR queries and push them separately to Oracle Eloqua APIs. Therefore, compared to the previous version, the current version may take longer to return results involving OR queries.
  • The previous version of the data store used the ActivityID field as the primary key for Activity_EmailXXX objects, such as Activity_EmailOpen, Activity_EmailClickthrough, and Activity_EmailSend. In contrast, the current version of the data store uses the ExternalID field as the primary key instead of ActivityID.
PostgreSQL
  • Certified with PostgreSQL 9.3, 9.4, 9.5, 9.6
Progress OpenEdge
  • Certified with Progress OpenEdge 11.4, 11.5, 11.6
Salesforce
  • Certified with Salesforce API 38
SAP Sybase ASE
  • Made generally available
  • Certified with SAP Adaptive Server Enterprise 16.0
  • Added support for NTLMv2 authentication. NTLMv2 authentication can be specified in the Authentication Method field under the Security tab.
SQL Server
  • Certified with Microsoft SQL Server 2016

Resolved Issues

Web UI
  • Resolved an issue where the SQL editor in the SQL Testing view returned errors when executing SQL commands against Google Analytics data sources
OData
  • Resolved an issue where OData requests were timing out before the application could finish retrieving the results
Hybrid Data Pipeline Management API
  • Resolved an issue where a 201 was returned when adding members to a group data source through the Management API
  • Resolved an issue where a normal user would receive a 400 instead of a 404 error when using the user query parameter in Management API calls
  • Resolved an issue where the user creation API allowed invalid values for the status field
DB2
  • Resolved an issue where the error "Numeric value out of range" occurred when calling SQLStatistics in DB2 with the ODBC driver
Google Analytics
  • Resolved an issue where the SQL editor in the SQL Testing view returned errors when executing SQL commands against Google Analytics data sources

Known Issues

Hybrid Data Pipeline server installation
  • When installing the server in a load balancing environment, a .pem file with a private key and trusted CA certificate must be specified even though a private key is not required for a load balancing environment.
  • Silent installation of the server for an On-Premises Connector implementation is not currently supported. You must perform a GUI or console mode installation of the server to install the server for an On-Premises Connector implementation.
  • A silent installation in console mode with the server configured to use MySQL Community Edition as an external system database is not currently supported. However, a silent installation with this configuration can be performed using the installer in GUI mode.
JDBC driver installation
  • If you attempt to install the JDBC driver in GUI mode to the default installation directory but do not have appropriate permissions for the default directory, the installer indicates that the installation has succeeded when in fact the driver has not been installed. When attempting an installation under the same circumstances but in console mode, the proper error message is displayed.
  • On UNIX/Linux, the JDBC driver does not accept the values specified in the redistribution files generated during the installation of the Hybrid Data Pipeline server. In turn, the driver does not connect to the Hybrid Data Pipeline server by default.
Web UI
  • If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may face issues while trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
  • When an administrator tries to add new users using the Add Users window, the Password and Confirm Password fields occasionally do not appear properly in the popup window.
  • COPY DETAILS functionality is not currently working in Internet Explorer 11 due to a limitation with the third party plugin Clipboard.js on bootstrap modals. More details on this can be found at https://github.com/zenorocha/clipboard.js/wiki/Known-Issues.
Management API
  • When the Limits API (throttling) is used to set a row limit and createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE) is being used, a row-limit-exceeded error is returned at the row limit instead of one row beyond the limit. For example, if a row limit is set at 45 rows when returning a scrollable, insensitive result set beyond the specified limit, the connectivity service returns the following error on the 45th row as opposed to the expected 46th row: The limit on the number of rows that can be returned from a query -- 45 -- has been exceeded. (See the sketch following this list.)
  • If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may face issues while trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
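    The row-limit behavior described above can be reproduced with a scrollable, insensitive JDBC statement. The following minimal sketch is illustrative only; the connection URL, credentials, and table name are placeholders, and the configured row limit is assumed to be 45 as in the example above.

    import java.sql.*;

    public class RowLimitExample {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection details; replace with your Hybrid Data Pipeline
            // JDBC connection URL, user, and password.
            try (Connection conn = DriverManager.getConnection(
                     "<your Hybrid Data Pipeline JDBC connection URL>", "user", "password");
                 // A scrollable, insensitive result set triggers the row-limit error
                 // on the limit row itself (row 45) rather than one row beyond it (row 46).
                 Statement stmt = conn.createStatement(
                     ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
                 ResultSet rs = stmt.executeQuery("SELECT * FROM Customers")) {
                int rows = 0;
                while (rs.next()) {
                    rows++;  // expect an SQLException once the configured limit is reached
                }
                System.out.println("Rows fetched: " + rows);
            }
        }
    }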
OData
  • $expand only supports one level deep.
  • For example, with the entity hierarchy:
    Customers
    |-- Orders
    | |-- OrderItems
    |-- Contacts


    The following queries are supported:
    Customers?$expand=Orders
    Customers?$expand=Contacts
    Customers?$expand=Orders,Contacts


    However, this query is not supported:
    Customers?$expand=Orders,OrderItems

    OrderItems is a second level entity with respect to Customers. To query Orders and OrderItems, the query must be rooted at Orders. For example:
    Orders?$expand=OrderItems
    Orders(id)?$expand=OrderItems


  • When manually editing the ODataSchemaMap value, the table names and column names specified in the value are case-sensitive. The case of the table and column names must match the case of the tables and column names reported by the data source.
    Note: It is highly recommended that you use the OData Schema Editor to generate the value for the ODataSchemaMap data source option. The Schema Editor takes care of table and column name casing and other syntactic details.
  • When using the substring function on properties that map to a CHAR column in the data source, it is data source dependent as to whether the substring function treats the trailing spaces as significant. When going against Oracle, the trailing spaces are preserved. When going against other data sources, the trailing spaces are discarded.
  • The $expand clause is not supported with OpenEdge data sources.
  • The day scalar function is not working when specified in a $filter clause when querying a DB2 data source.
On-Premises Connector
  • If User Account Control is enabled on your Windows machine and you installed the On-Premises Connector in a system folder (such as Windows or Program Files), you must run the On-Premises Connector Configuration Tool in administrator mode.
  • When using Kerberos with Microsoft Dynamics, the JRE installed with the On-Premises Connector must be configured to run with Kerberos. Take the following steps to configure the JRE.
    • Download a zip file containing a new version of the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for JDK/JRE 8 at http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html.
    • Unzip the file into the \jre\lib\security directory to update the Java security policy files to support 256-bit encryption:
      • C:\<installdir>\jre\lib\security\local_policy.jar
      • C:\<installdir>\jre\lib\security\US_export_policy.jar
  • Uninstalling and re-installing the On-Premises Connector causes the Connector ID of the On-Premises Connector to change. Any Hybrid Data Pipeline data sources using the old Connector ID must be updated to use the new Connector ID. Installing to a new directory allows both the old and new On-Premises Connector to exist side-by-side. However, you must update the Connector ID option in previously-defined Hybrid Data Pipeline data sources to point to the new On-Premises Connector. In addition, you must update the Connector ID wherever it was used, such as in the definitions of Group Connectors and Authorized Users.
JDBC Driver
  • If you attempt to install the JDBC driver in GUI mode to the default installation directory but do not have appropriate permissions for the default directory, the installer indicates that the installation has succeeded when in fact the driver has not been installed. When attempting an installation under the same circumstance but in console mode, the proper error message is displayed.
  • On UNIX/Linux, the JDBC driver does not accept the values specified in the redistribution files generated during the installation of the Hybrid Data Pipeline server. In turn, the driver does not connect to the Hybrid Data Pipeline server by default.
  • The default value for the Service connection property does not connect to the Hybrid Data Pipeline server. To connect, set Service to the Hybrid Data Pipeline server in your connection URL.
  • The JDBC 32-bit silent installation fails on Windows 10. Use the standard installation instead.
  • Executing certain queries against MS Dynamics CRM may result in a “Communication failure. Protocol error."
  • Using JNDI data sources, encryptionMethod must be configured through setExtendedOptions.
  • For additional notes on the JDBC driver, see the JDBC driver readme file.
Hybrid Data Pipeline ODBC Driver
  • The default ODBC.INI generated by the installer is missing required entries for Service=, PortNumber=, and HybridDataPipelineDataSource=. (See the example entries following this list.)
  • Console mode installation is supported only on UNIX.
  • When you first install a driver, you are given the option to install a default data source for that driver. We recommend that you install default data sources when you first install the drivers. If you do not install the default data source at this time, you will be unable to install a default data source for this driver later. To install a default data source for a driver after the initial installation, you must uninstall the driver and then reinstall it.
  • For additional notes on the ODBC driver, see the ODBC driver readme file.
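    As a workaround for the missing ODBC.INI entries noted above, the required keys can be added to the data source section by hand. The following excerpt is a sketch only; the data source section name, host, port, and Hybrid Data Pipeline data source name are illustrative placeholders, not values generated by the installer.
      [Hybrid Data Pipeline]
      Service=<your Hybrid Data Pipeline server host>
      PortNumber=<your server port>
      HybridDataPipelineDataSource=<your Hybrid Data Pipeline data source name>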
All Data Sources
  • It is recommended that Login Timeout not be disabled (set to 0) for a Data Source.
  • Using setByte to set parameter values fails when the data source does not support the TINYINT SQL type. Use setShort or setInt to set the parameter value instead of setByte.
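    For example, the following minimal sketch shows the recommended workaround; the connection, table, and column names are illustrative placeholders.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TinyIntWorkaround {
        // Binds a small integer value without using setByte, which can fail when the
        // data source does not support the TINYINT SQL type.
        static void insertTinyValue(Connection conn, short value) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO MyTable (tiny_col) VALUES (?)")) {  // placeholder table/column
                ps.setShort(1, value);  // use setShort or setInt instead of setByte
                ps.executeUpdate();
            }
        }
    }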
Google Analytics
  • Validation message is not displayed when a user enters a Start Date value greater than the End Date value in the Create/Update Google Analytics page.
  • Once a Google Analytics OAuth profile is created for a specific Google account, changing the Google Account associated with the profile results in "the configuration options used to open the database do not match the options used to create the database" error being returned for any existing data sources.
Microsoft Dynamics CRM
  • Executing certain queries against MS Dynamics CRM with the JDBC driver may result in a “Communication failure. Protocol error."
  • Testing has shown the following two errors from Microsoft Dynamics CRM Online when executing queries against the ImportData and TeamTemplate tables:
    • Attribute errortype on Entity ImportData is of type picklist but has Child Attributes Count 0
    • Attribute issystem on Entity TeamTemplate is of type bit but has Child Attributes Count 0
    Note: We have filed a case with Microsoft and are waiting to hear back about the cause of the issue.

  • The initial on-premises connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
OpenEdge 10.2b
  • Setting the MaxPooledStatements data source option in an OpenEdge data store to a value other than zero can cause statement not prepared errors to be returned in some situations.
Oracle Marketing Cloud (Oracle Eloqua)
  • Data store issues
    • There are known issues with Batch Operations.
    • The Update/Delete implementation can update only one record at a time. Because of this, the number of API calls executed depends on the number of records that get updated or deleted by the query plus the number of API calls required to fetch the IDs for those records.
    • Lengths of certain text fields are reported as higher than the actual lengths supported in Oracle Eloqua.
  • We are currently working with Oracle to resolve the following issues with the Oracle Eloqua REST API.
    • Only AND operators that involve different columns are optimized. In other cases, the queries are only partially optimized.
    • Only OR operators on the same column are optimized. In other cases, the queries are completely post-processed.
    • The data store is not able to insert or update the NULL value to any field explicitly.
    • The data store is unable to update a few fields; they are always reported as NULL after an update.
    • Oracle Eloqua uses a double colon (::) as an internal delimiter for multivalued Select fields. Hence when a value with the semi-colon character (;) is inserted or updated into a multivalued Select field, the semicolon character gets converted into the double colon character.
    • Query SELECT count (*) from template returns incorrect results.
    • Oracle Eloqua APIs do not populate the correct values in CreatedBy and UpdatedBy fields. Instead of user names, they contain a Timestamp value.
    • Only equality filters on id fields are optimized. All other filter conditions are not working correctly with Oracle Eloqua APIs and the data store is doing post-processing for such filters.
    • Filters on Non-ID Integer fields and Boolean fields are not working correctly. Hence the driver needs to post-process all these queries.
    • The data store does not distinguish between NULL and empty string. Therefore, null fields are often reported back as empty strings.
    • Values with special characters such as curly braces ({,}), backslash (\), colon (:), slash star (/*), and star slash (*/) are not supported in WHERE clause filter values.
Oracle Sales Cloud
  • Currently, passing filter conditions to Oracle Sales Cloud works only for simple, single column conditions. If there are multiple filters with 'AND' and 'OR', only partial or no filters are passed to Oracle Sales Cloud.
  • Oracle Sales Cloud reports the data type of String and Date fields as String. Therefore, when such fields are filtered or ordered in Hybrid Data Pipeline, they are treated as String values. However, when filter conditions are passed to Oracle Sales Cloud, Oracle Sales Cloud can distinguish between the actual data types and apply Date specific comparisons to Date fields. Therefore, query results can differ depending on whether filters have been passed down to Oracle Sales Cloud or processed by Hybrid Data Pipeline.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning the >=, <=, and != comparison operators when querying String fields. Therefore, Hybrid Data Pipeline has not been optimized to pass these comparison operators to Oracle Sales Cloud. We are working with Oracle on this issue.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning queries with filter operations on Boolean fields. Therefore, Hybrid Data Pipeline has not been optimized to pass filter operations on Boolean fields to Oracle Sales Cloud. We are working with Oracle on this issue.
  • The drivers currently report ATTACHMENT type fields in the metadata but do not support retrieving data for these fields. These fields are set to NULL.
  • Join queries between parent and child tables are not supported.
  • Queries on child tables whose parent has a composite primary key are not supported. For example, the children of ACTIVITIES_ACTIVITYCONTACT and LEADS_PRODUCTS are not accessible.
  • Queries on the children of relationship objects are not supported. For example, the children of ACCOUNTS_RELATIONSHIP, CONTACTS_RELATIONSHIP, and HOUSEHOLDS_RELATIONSHIP are not accessible.
  • Queries on grandchildren with multiple sets of Parent IDs and Grand Parent IDs used in an OR clause are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS_ADDRESSPURPOSE
    Where (ACCOUNTS_PARTYNUMBER = 'OSC_12343' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2XZKGH')
    OR (ACCOUNTS_PARTYNUMBER = 'OSC_12344' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2YZKGH')
  • When querying documented objects like "CATALOGPRODUCTITEMS" and "CATEGORYPRODUCTITEMS", no more than 500 records are returned, even when more records may be present. This behavior is also seen with some custom objects. We are currently working with Oracle support to resolve this issue.
  • A query on OPPORTUNITIES_CHILDREVENUE_PRODUCTS or LEADS_PRODUCTGROUPS with a filter on the primary key column returns 0 records even when more records are present. We are currently working with Oracle support to resolve this issue.
  • Queries that contain subqueries returning more than 100 records are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS
    Where ACCOUNTS_PARTYNUMBER
    In (Select Top 101 PARTYNUMBER From ACCOUNTS)
  • When you create custom objects, your Oracle Sales Cloud administrator must enable these objects for REST API access through Application Composer. Otherwise, you will not be able to query against these custom objects.
Oracle Service Cloud
  • When you create a custom object, your Oracle Service Cloud administrator must enable all four columns of the Object Fields tab of the Object Designer, or you cannot query against the custom objects.
  • The initial connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
SugarCRM
  • Data sources that are using the deprecated enableExportMode option will still see a problem until they are migrated to the new data source configuration.
  • Data source connections now use Export Mode by default to communicate with the SugarCRM server, which improves performance when querying large sets of data. However, bulk export mode returns NULL values in currency columns as the value 0, so there is no way to differentiate between a NULL value and 0 when operating in export mode. This can be a problem when currency columns are used in filter conditions such as =, <>, >, >=, <, <=, IS NULL, and IS NOT NULL. For example, suppose a currency column in a SugarCRM table has 3 NULL values and 5 values that are 0. A query that returns all NULL values (SELECT * FROM <table> WHERE <currency column> IS NULL) returns 3 rows. However, a query that applies an arithmetic operation to the column (SELECT * FROM <table> WHERE <currency column> + 1 = 1) returns all 8 records, because the 3 NULL values are treated as 0.

Third Party Acknowledgments

Refer to Hybrid Data Pipeline Third Party Acknowledgments.



Release 4.0.1

Progress DataDirect Hybrid Data Pipeline is a data access server that provides simple, secure access to cloud and on-premises data sources, such as RDBMS, Big Data, and NoSQL. Hybrid Data Pipeline allows business intelligence tools and applications to use ODBC, JDBC, or OData to access data from supported data sources. Hybrid Data Pipeline can be installed in the cloud or behind a firewall. Hybrid Data Pipeline can then be configured to work with applications and data sources in nearly any business environment. Progress DataDirect Hybrid Data Pipeline consists of four primary, separately installed components.
  • The Hybrid Data Pipeline server provides access to multiple data sources through a single, unified interface. The server can be hosted on premises or in the cloud.

  • The On-Premises Connector enables the Hybrid Data Pipeline to establish a secure connection from the cloud to an on-premises data source.

  • The ODBC driver enables ODBC applications to communicate to a data source through the Hybrid Data Pipeline server.

  • The JDBC driver enables JDBC applications to communicate to a data source through the Hybrid Data Pipeline server.

In addition to these four primary components, Progress DataDirect also provides a customized version of OpenAccess server. The OpenAccess server is a connectivity layer required for an Eloqua data store in a Hybrid Data Pipeline environment.

4.0.1 Release Notes

Enhancements

Hybrid Data Pipeline server
  • Added support for bypassing hostname and port validation when performing a silent installation. When hostname validation fails during the interactive installation process, you are prompted to reenter the hostname or skip validation. If you choose to skip validation, the hostname and port validation properties in your response file will have the following settings.
    SKIP_HOSTNAME_VALIDATION=true
    SKIP_PORT_VALIDATION=true
    Running an installation in silent mode with a response file containing these settings allows the silent installation to continue even if hostname or port validation fails. When validation fails during the silent installation process, the installer generates the file SilentInstallInfo.log in the home directory of the target machine but completes a full installation.
  • Added support for version information to be returned from the /api/admin/version endpoint. This feature is only accessible via admin accounts. The response is returned in JSON format with the following syntax. A sample call follows the response syntax.
    {
      "HDPVersion": "<major>.<minor>.<service_pack>.<build_number>",
      "DASVersion": "<major>.<minor>.<service_pack>.<build_number>"
    }
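    A sample invocation of the version endpoint is sketched below; the host, port, admin credentials, and the use of HTTP basic authentication are illustrative assumptions, not values taken from this document.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Base64;

    public class VersionCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder host, port, and admin credentials.
            URL url = new URL("https://myserver:8443/api/admin/version");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            String auth = Base64.getEncoder()
                    .encodeToString("adminUser:adminPassword".getBytes("UTF-8"));
            conn.setRequestProperty("Authorization", "Basic " + auth);
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);  // prints the HDPVersion/DASVersion JSON
                }
            }
        }
    }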
  • Upgraded to Tomcat 8.0.39
Oracle Sales Cloud
  • Enhanced to support queries with equality filters on non-indexed fields. Previously, equality filter conditions were passed to the Oracle Sales Cloud server for indexed fields only.
  • Enhanced performance for queries with non-equality filters. Previously, the entire table (or tables) were retrieved before the data from a non-equality operator could be filtered. These filters are now passed to Oracle Sales Cloud for faster processing.
  • Enhanced to support the columns specified in a Select list. Previously, all fields were retrieved from the Oracle Sales Cloud. Now only the fields specified in the Select list are retrieved.
  • Proxy options have been enabled for Oracle Sales Cloud Driver. You define the proxy server options in the data source definition, in the Extended Options field of the Advanced tab. The syntax is shown in the following example:
    ProxyHost=Server1;ProxyPort=5122;ProxyUser=JohnDoe;ProxyPassword=John'sPW;
    • ProxyHost identifies a proxy server to use for the connection, using either the server name or an IP address specified in either IPv4 or IPv6 format.
    • ProxyPort specifies the port number where the proxy server is listening for HTTPS requests.
    • ProxyUser specifies the user name needed to connect to a proxy server when authentication on the proxy server is enabled.
    • ProxyPassword specifies the password needed to connect to a proxy server when authentication on the proxy server is enabled.

Resolved Issues

Hybrid Data Pipeline installer
  • Resolved an issue where the final.log file was generated even though the installation succeeded
  • Resolved an issue with console installation where the user was prompted for OpenAccess options even though OpenAccess was not selected
  • Resolved an issue where silent installation failed with a non-standard port for an external Oracle database
  • Resolved an issue where silent installation failed when a response file generated from console mode was used for the silent installation
  • Resolved an issue where silent installation failed when an external database was selected and configured
  • Resolved an issue where the schema was not created during a silent installation with an external database configuration
Hybrid Data Pipeline server
  • Enabled compression
  • Resolved an issue where the HDPVersion in the version API response did not include the build number for the package
  • Resolved an issue where the server would make calls to external resources
  • Resolved an issue where the server would not start in an environment that already had CATALINA_HOME configured
  • Resolved an issue where shutdown scripts would shut down processes not related to the server
  • Resolved an issue where the server did not shut down completely when executing the shutdown script
Credentials database
  • Resolved an issue where only the DB Admin credentials were being validated when Oracle was selected as an external database
  • Resolved an issue where the embedded database was started when the environment was configured to use an external database
  • Resolved a failure when creating users with an external database configured for Oracle version 11.2.0.4 patch 7
Hybrid Data Pipeline Management API
  • Resolved an issue where a 201 was returned when adding members to a group data source through the Management API
  • Resolved an issue where a normal user would receive a 400 instead of a 404 error when using the user query parameter in Management API calls
  • Resolved an issue where the user creation API allowed invalid values for the status field
OData
  • Resolved an issue where OData requests were timing out before the application could finish retrieving the results
Oracle Sales Cloud
  • Resolved a file input/output error when connecting to an Oracle Sales Cloud data source
  • Resolved an issue where an empty folder named OracleSalesCloud_Schema was created in the installation directory
  • Resolved an issue where decimal values with a precision greater than 7 were not returned correctly when retrieved from an Oracle Sales Cloud data source

Known Issues

Web UI
  • If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may face issues while trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
  • Google Analytics data sources return an error when used in the SQL Editor. However, they work with Hybrid Data Pipeline ODBC, JDBC and OData clients.
  • When an administrator tries to add new users using the Add Users window, the Password and Confirm Password fields occasionally do not appear properly in the popup window.
Hybrid Data Pipeline Management API
  • If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may face issues while trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
On-Premises Connector
  • If User Account Control is enabled on your Windows machine and you installed the On-Premises Connector in a system folder (such as Windows or Program Files), you must run the On-Premises Connector Configuration Tool in administrator mode.
  • When using Kerberos with Microsoft Dynamics, the JRE installed with the On-Premises Connector must be configured to run with Kerberos. Take the following steps to configure the JRE.
    • Download a zip file containing a new version of the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for JDK/JRE 7 at http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html.
    • Unzip the file into the \jre\lib\security directory to update the Java security policy files to support 256-bit encryption:
      • C:\<installdir>\jre\lib\security\local_policy.jar
      • C:\<installdir>\jre\lib\security\US_export_policy.jar
  • Uninstalling and re-installing the On-Premises Connector causes the Connector ID of the On-Premises Connector to change. Any Hybrid Data Pipeline data sources using the old Connector ID must be updated to use the new Connector ID. Installing to a new directory allows both the old and new On-Premises Connector to exist side-by-side. However, you must update the Connector ID option in previously-defined Hybrid Data Pipeline data sources to point to the new On-Premises Connector. In addition, you must update the Connector ID wherever it was used, such as in the definitions of Group Connectors and Authorized Users.
OData
  • $expand only supports one level deep.
  • For example, with the entity hierarchy:
    Customers
    |-- Orders
    | |-- OrderItems
    |-- Contacts


    The following queries are supported:
    Customers?$expand=Orders
    Customers?$expand=Contacts
    Customers?$expand=Orders,Contacts


    However, this query is not supported:
    Customers?$expand=Orders,OrderItems

    OrderItems is a second level entity with respect to Customers. To query Orders and OrderItems, the query must be rooted at Orders. For example:
    Orders?$expand=OrderItems
    Orders(id)?$expand=OrderItems


  • When manually editing the ODataSchemaMap value, the table names and column names specified in the value are case-sensitive. The case of the table and column names must match the case of the tables and column names reported by the data source.
    Note: It is highly recommended that you use the OData Schema Editor to generate the value for the ODataSchemaMap data source option. The Schema Editor takes care of table and column name casing and other syntactic details.
  • When using the substring function on properties that map to a CHAR column in the data source, it is data source dependent as to whether the substring function treats the trailing spaces as significant. When going against Oracle, the trailing spaces are preserved. When going against other data sources, the trailing spaces are discarded.
  • The $expand clause is not supported with OpenEdge data sources.
  • The day scalar function is not working when specified in a $filter clause when querying a DB2 data source.
All Data Sources
  • It is recommended that Login Timeout not be disabled (set to 0) for a Data Source.
  • Using setByte to set parameter values fails when the data source does not support the TINYINT SQL type. Use setShort or setInt to set the parameter value instead of setByte.
DB2
  • "Numeric value out of range” error occurs when calling SQLStatistics with the Hybrid Data Pipeline ODBC driver.
Google Analytics
  • Once a Google Analytics OAuth profile is created for a specific Google account, changing the Google Account associated with the profile results in "the configuration options used to open the database do not match the options used to create the database" error being returned for any existing data sources.
  • Google Analytics data sources return an error when used in the SQL Editor. However, they work with Hybrid Data Pipeline ODBC, JDBC and OData clients.
  • Validation message is not displayed when a user enters a Start Date value greater than the End Date value in the Create/Update Google Analytics page.
Microsoft Dynamics CRM
  • Executing certain queries against MS Dynamics CRM with the JDBC driver may result in a “Communication failure. Protocol error."
  • Testing has shown the following two errors from Microsoft Dynamics CRM Online when executing queries against the ImportData and TeamTemplate tables:
    • Attribute errortype on Entity ImportData is of type picklist but has Child Attributes Count 0
    • Attribute issystem on Entity TeamTemplate is of type bit but has Child Attributes Count 0
    Note: We have filed a case with Microsoft and are waiting to hear back about the cause of the issue.

  • The initial on-premises connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
OpenEdge 10.2b
  • Setting the MaxPooledStatements data source option in an OpenEdge data store to a value other than zero can cause statement not prepared errors to be returned in some situations.
Oracle Database
  • Executing queries against a column of type xmltype results in the following error: “This column type is not currently supported by this driver.”
Oracle Sales Cloud
  • Currently, passing filter conditions to Oracle Sales Cloud works only for simple, single column conditions. If there are multiple filters with 'AND' and 'OR', only partial or no filters are passed to Oracle Sales Cloud.
  • Oracle Sales Cloud reports the data type of String and Date fields as String. Therefore, when such fields are filtered or ordered in Hybrid Data Pipeline, they are treated as String values. However, when filter conditions are passed to Oracle Sales Cloud, Oracle Sales Cloud can distinguish between the actual data types and apply Date specific comparisons to Date fields. Therefore, query results can differ depending on whether filters have been passed down to Oracle Sales Cloud or processed by Hybrid Data Pipeline.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning the >=, <=, and != comparison operators when querying String fields. Therefore, Hybrid Data Pipeline has not been optimized to pass these comparison operators to Oracle Sales Cloud. We are working with Oracle on this issue.
  • There appears to be a limitation with the Oracle Sales Cloud REST API concerning queries with filter operations on Boolean fields. Therefore, Hybrid Data Pipeline has not been optimized to pass filter operations on Boolean fields to Oracle Sales Cloud. We are working with Oracle on this issue.
  • The drivers currently report ATTACHMENT type fields in the metadata but do not support retrieving data for these fields. These fields are set to NULL.
  • Join queries between parent and child tables are not supported.
  • Queries on child tables whose parent has a composite primary key are not supported. For example, the children of ACTIVITIES_ACTIVITYCONTACT and LEADS_PRODUCTS are not accessible.
  • Queries on the children of relationship objects are not supported. For example, the children of ACCOUNTS_RELATIONSHIP, CONTACTS_RELATIONSHIP, and HOUSEHOLDS_RELATIONSHIP are not accessible.
  • Queries on grandchildren with multiple sets of Parent IDs and Grand Parent IDs used in an OR clause are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS_ADDRESSPURPOSE
    Where (ACCOUNTS_PARTYNUMBER = 'OSC_12343' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2XZKGH')
    OR (ACCOUNTS_PARTYNUMBER = 'OSC_12344' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2YZKGH')
  • When querying documented objects like "CATALOGPRODUCTITEMS" and "CATEGORYPRODUCTITEMS", no more than 500 records are returned, even when more records may be present. This behavior is also seen with some custom objects. We are currently working with Oracle support to resolve this issue.
  • A query on OPPORTUNITIES_CHILDREVENUE_PRODUCTS or LEADS_PRODUCTGROUPS with a filter on the primary key column returns 0 records even when more records are present. We are currently working with Oracle support to resolve this issue.
  • Queries that contain subqueries returning more than 100 records are not supported. For example, the following query is not supported.
    Select * From ACCOUNTS_ADDRESS
    Where ACCOUNTS_PARTYNUMBER
    In (Select Top 101 PARTYNUMBER From ACCOUNTS)
  • When you create custom objects, your Oracle Sales Cloud administrator must enable these objects for REST API access through Application Composer. Otherwise, you will not be able to query against these custom objects.
Oracle Service Cloud
  • When you create a custom object, your Oracle Service Cloud administrator must enable all four columns of the Object Fields tab of the Object Designer, or you cannot query against the custom objects.
  • The initial connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
SugarCRM
  • Data sources that are using the deprecated enableExportMode option will still see a problem until they are migrated to the new data source configuration.
  • Data source connections now use Export Mode by default to communicate with the SugarCRM server, which improves performance when querying large sets of data. However, bulk export mode returns NULL values in currency columns as the value 0, so there is no way to differentiate between a NULL value and 0 when operating in export mode. This can be a problem when currency columns are used in filter conditions such as =, <>, >, >=, <, <=, IS NULL, and IS NOT NULL. For example, suppose a currency column in a SugarCRM table has 3 NULL values and 5 values that are 0. A query that returns all NULL values (SELECT * FROM <table> WHERE <currency column> IS NULL) returns 3 rows. However, a query that applies an arithmetic operation to the column (SELECT * FROM <table> WHERE <currency column> + 1 = 1) returns all 8 records, because the 3 NULL values are treated as 0.
Hybrid Data Pipeline JDBC Driver
  • Executing certain queries against MS Dynamics CRM may result in a “Communication failure. Protocol error."
  • Using JNDI data sources, encryptionMethod must be configured through setExtendedOptions.
  • The default value for the Service connection option does not connect to the Hybrid Data Pipeline server. Set Service=<my hybrid data pipeline server> in your connection URL to successfully connect to your server.
  • For additional notes on the JDBC driver, see the JDBC driver readme file.
Hybrid Data Pipeline ODBC Driver
  • When calling SQLStatistics in DB2 with the ODBC driver, the error "Numeric value out of range” occurs.
  • The default ODBC.INI generated by the installer is missing required entries for Service=, PortNumber=, and HybridDataPipelineDataSource=.
  • For additional notes on the ODBC driver, see the ODBC driver readme file.
OpenAccess Server for Hybrid Data Pipeline

Third Party Acknowledgments

Refer to Hybrid Data Pipeline Third Party Acknowledgments.



Release 4.0.0

Progress DataDirect Hybrid Data Pipeline is a data access server that provides simple, secure access to cloud and on-premises data sources, such as RDBMS, Big Data, and NoSQL. Hybrid Data Pipeline allows business intelligence tools and applications to use ODBC, JDBC, or OData to access data from supported data sources. Hybrid Data Pipeline can be installed in the cloud or behind a firewall. Hybrid Data Pipeline can then be configured to work with applications and data sources in nearly any business environment. Progress DataDirect Hybrid Data Pipeline consists of four primary, separately installed components.
  • The Hybrid Data Pipeline server provides access to multiple data sources through a single, unified interface. The server can be hosted on premises or in the cloud.

  • The On-Premises Connector enables the Hybrid Data Pipeline to establish a secure connection from the cloud to an on-premises data source.

  • The ODBC driver enables ODBC applications to communicate to a data source through the Hybrid Data Pipeline server.

  • The JDBC driver enables JDBC applications to communicate to a data source through the Hybrid Data Pipeline server.

In addition to these four primary components, Progress DataDirect also provides a customized version of OpenAccess server. The OpenAccess server is a connectivity layer required for an Eloqua data store in a Hybrid Data Pipeline environment.

4.0.0 Release Notes

Known Issues

Hybrid Data Pipeline Server Installer
  • When choosing an External database under “Custom Installation”, the admin user and user fields are pre-populated with erroneous values.
  • When configuring OpenAccess server under “Custom Installation”, the Enable OpenAccess Integration check box is checked, but the Hostname and Eloqua Port boxes are disabled. To enable them, uncheck the check box and then check it again.
  • When performing an upgrade install and using an external database, you must choose the Custom installation path and re-enter the information for your external database.
Web UI
  • Google Analytics data sources return an error when used in the SQL Editor. However, they work with Hybrid Data Pipeline ODBC, JDBC and OData clients.
  • When an administrator tries to add new users using the Add Users window, the Password and Confirm Password fields occasionally do not appear properly in the popup window.
On-Premises Connector
  • If User Account Control is enabled on your Windows machine and you installed the On-Premises Connector in a system folder (such as Windows or Program Files), you must run the On-Premises Connector Configuration Tool in administrator mode.
  • When using Kerberos with Microsoft Dynamics, the JRE installed with the On-Premises Connector must be configured to run with Kerberos. Take the following steps to configure the JRE.
    • Download a zip file containing a new version of the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for JDK/JRE 7 at http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html.
    • Unzip the file into the \jre\lib\security directory to update the Java security policy files to support 256-bit encryption:
      • C:\<installdir>\jre\lib\security\local_policy.jar
      • C:\<installdir>\jre\lib\security\US_export_policy.jar
  • Uninstalling and re-installing the On-Premises Connector causes the Connector ID of the On-Premises Connector to change. Any Hybrid Data Pipeline data sources using the old Connector ID must be updated to use the new Connector ID. Installing to a new directory allows both the old and new On-Premises Connector to exist side-by-side. However, you must update the Connector ID option in previously-defined Hybrid Data Pipeline data sources to point to the new On-Premises Connector. In addition, you must update the Connector ID wherever it was used, such as in the definitions of Group Connectors and Authorized Users.
All Data Sources
  • It is recommended that Login Timeout not be disabled (set to 0) for a Data Source.
  • Using setByte to set parameter values fails when the data source does not support the TINYINT SQL type. Use setShort or setInt to set the parameter value instead of setByte.
DB2
  • "Numeric value out of range” error when calling SQLStatistics with the Hybrid Data Pipeline ODBC driver.
Google Analytics
  • Once a Google Analytics OAuth profile is created for a specific Google account, changing the Google Account associated with the profile results in "the configuration options used to open the database do not match the options used to create the database" error being returned for any existing data sources.
  • Validation message is not displayed when a user enters a Start Date value greater than the End Date value in the Create/Update Google Analytics page.
OData
  • $expand only supports one level deep.
  • For example, with the entity hierarchy:
    Customers
    |-- Orders
    | |-- OrderItems
    |-- Contacts


    The following queries are supported:
    Customers?$expand=Orders
    Customers?$expand=Contacts
    Customers?$expand=Orders,Contacts


    However, this query is not supported:
    Customers?$expand=Orders,OrderItems

    OrderItems is a second level entity with respect to Customers. To query Orders and OrderItems, the query must be rooted at Orders. For example:
    Orders?$expand=OrderItems
    Orders(id)?$expand=OrderItems


  • When manually editing the ODataSchemaMap value, the table names and column names specified in the value are case-sensitive. The case of the table and column names must match the case of the tables and column names reported by the data source.
    Note: It is highly recommended that you use the OData Schema Editor to generate the value for the ODataSchemaMap data source option. The Schema Editor takes care of table and column name casing and other syntactic details.
  • When using the substring function on properties that map to a CHAR column in the data source, it is data source dependent as to whether the substring function treats the trailing spaces as significant. When going against Oracle, the trailing spaces are preserved. When going against other data sources, the trailing spaces are discarded.
  • The $expand clause is not supported with OpenEdge data sources.
  • The day scalar function is not working when specified in a $filter clause when querying a DB2 data source.
Oracle Database
  • Executing queries against a column of type XMLType results in the following error: "This column type is not currently supported by this driver."
Oracle Sales Cloud
  • Create Mapping is not fully supported for the Oracle Sales Cloud data source. Typically, when editing a data source from the Data Sources page, a user would need to select "Force New" for Create Mapping under the Mapping tab to refresh a schema. However, this currently results in an input/output error. As a workaround, create a new data source with the desired configuration.
  • External storage for processing large results is not currently supported for Oracle Sales Cloud. All processing currently takes place in memory. This primarily impacts queries with post processing options and limits the size of the query that can be successfully processed to the system resources available to the Hybrid Data Pipeline connectivity service.
  • The drivers currently report ATTACHMENT type fields in the metadata but do not support retrieving data for these fields. These fields are set to NULL.
  • Join queries between parent and child tables are not supported.
  • Queries on child tables whose parent has a composite primary key are not supported. For example, the children of ACTIVITIES_ACTIVITYCONTACT and LEADS_PRODUCTS are not accessible.
  • Queries on grandchildren with multiple sets of Parent IDs and Grand Parent IDs used in an OR clause are not supported. For example, the following query is not supported.
    Select * from ACCOUNTS_ADDRESS_ADDRESSPURPOSE
    where (ACCOUNTS_PARTYNUMBER = 'OSC_12343' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2XZKGH')
    or (ACCOUNTS_PARTYNUMBER = 'OSC_12344' AND
    ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2YZKGH')
  • When querying documented objects like "CATALOGPRODUCTITEMS" and "CATEGORYPRODUCTITEMS", no more than 500 records are returned, even when more records may be present. This behavior is also seen with some custom objects. We are currently working with Oracle support to resolve this issue.
  • A query on OPPORTUNITIES_CHILDREVENUE_PRODUCTS or LEADS_PRODUCTGROUPS with a filter on the primary key column returns 0 records even when more records are present. We are currently working with Oracle support to resolve this issue.
  • Queries that contain subqueries returning more than 100 records are not supported. For example, the following query is not supported.
    select * from ACCOUNTS_ADDRESS
    where ACCOUNTS_PARTYNUMBER
    in (select top 101 PARTYNUMBER from ACCOUNTS)
  • When you create custom objects, your Oracle Sales Cloud administrator must enable these objects for REST API access through Application Composer. Otherwise, you will not be able to query against these custom objects.
Oracle Service Cloud
  • When you create a custom object, your Oracle Service Cloud administrator must enable all four columns of the Object Fields tab of the Object Designer, or you cannot query against the custom objects.
  • The initial connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
Microsoft Dynamics CRM
  • Testing has shown the following two errors from Microsoft Dynamics CRM Online when executing queries against the ImportData and TeamTemplate tables:
    • Attribute errortype on Entity ImportData is of type picklist but has Child Attributes Count 0
    • Attribute issystem on Entity TeamTemplate is of type bit but has Child Attributes Count 0
    Note: We have filed a case with Microsoft and are waiting to hear back about the cause of the issue.

  • The initial on-premises connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
OpenEdge 10.2b
  • Setting the MaxPooledStatements data source option in an OpenEdge data store to a value other than zero can cause "statement not prepared" errors to be returned in some situations.
SugarCRM
  • Data sources that are using the deprecated enableExportMode option will still see a problem until they are migrated to the new data source configuration.
  • Data source connections now use Export Mode by default to communicate with the SugarCRM server, providing increased performance when querying large sets of data. In export mode, NULL values in currency columns are returned as the value 0, so there is no way to differentiate between a NULL value and 0. This can be a problem when using currency columns in SQL statements, because Hybrid Data Pipeline must satisfy filter conditions on queries that use operations such as =, <>, >, >=, <, <=, IS NULL, and IS NOT NULL. For example, suppose a currency column in a SugarCRM table has 3 NULL values and 5 values that are 0. A query that returns all NULL values (SELECT * FROM <table> WHERE <currency column> IS NULL) returns 3 rows. However, a query in which the column is used in an arithmetic operation (SELECT * FROM <table> WHERE <currency column> + 1 = 1) returns all 8 records, because the 3 NULL values are treated as 0.
Hybrid Data Pipeline JDBC Driver
  • Executing certain queries against Microsoft Dynamics CRM may result in a "Communication failure. Protocol error." message.
  • When using JNDI data sources, the encryptionMethod option must be configured through setExtendedOptions.
  • The default value for the Service connection option does not connect to the Hybrid Data Pipeline server. Set Service=<my hybrid data pipeline server> in your connection URL to connect to your server (see the sketch after this list).
  • For additional notes on the JDBC driver, see the JDBC driver readme file.
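    The following sketch shows a connection in which Service has been set, assuming a hypothetical server host and data source name. The URL prefix and the hybridDataPipelineDataSource option shown here are assumptions for illustration only; consult the JDBC driver readme for the exact URL format.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class HdpServiceExample {
        public static void main(String[] args) throws Exception {
            // Service must point at your Hybrid Data Pipeline server; the default value does not connect.
            String url = "jdbc:datadirect:ddhybrid://myhdpserver.example.com:8080;"
                       + "Service=myhdpserver.example.com;"
                       + "hybridDataPipelineDataSource=MyDataSource";   // assumed option name and value
            Properties props = new Properties();
            props.setProperty("user", "<user>");
            props.setProperty("password", "<password>");
            try (Connection con = DriverManager.getConnection(url, props)) {
                System.out.println("Connected: " + !con.isClosed());
            }
        }
    }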
Hybrid Data Pipeline ODBC Driver
  • The default ODBC.INI file generated by the installer is missing required entries for Service=, PortNumber=, and HybridDataPipelineDataSource= (see the example entries after this list).
  • For additional notes on the ODBC driver, see the ODBC driver readme file.
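    For illustration, a data source section in ODBC.INI can be completed with entries along the following lines. The DSN name, driver path, host, port, and data source name are placeholders; the ODBC driver readme documents the full set of required keywords.

    [MyHDPDataSource]
    Driver=<path to the Hybrid Data Pipeline ODBC driver library>
    Service=myhdpserver.example.com
    PortNumber=8080
    HybridDataPipelineDataSource=MyDataSource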
OpenAccess Server for Hybrid Data Pipeline

Third Party Acknowledgments

Third party acknowledgments are listed on the following Web page.


