An asterisk (*) indicates support that was added in a hotfix or software patch subsequent to a release.
4.6.0 Release Notes
Issue HDP-3878 OData model creation failure
OData model creation was failing when the connectivity service built an OData model from a very large database. Additionally, if the service was unable to read metadata from unique or unusual tables, OData model creation would return either no rows or only partial rows. Hybrid Data Pipeline now builds the OData model from the tables selected to be in the model, as opposed to all the tables in the database.
The Hybrid Data Pipeline server and On-Premises Connector have been upgraded to install and use Tomcat 9.0.20.
Hybrid Data Pipeline supports transactions against data stores that provide transaction support such as DB2, MySQL, Oracle, and SQL Server. Transactions are supported for JDBC, ODBC, and OData client applications. For JDBC and ODBC applications, transactions are handled via the TransactionMode property and Transaction Mode option, respectively. For OData client applications, Hybrid Data Pipeline supports transactions for OData Version 4 batch requests.
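For OData Version 4, requests that belong to the same atomicity group in a JSON-format batch are treated as one transaction (all succeed or all roll back). The sketch below builds such a batch body; the entity names and URLs are illustrative assumptions, not part of the Hybrid Data Pipeline API.

```python
import json

def build_batch(requests, group="txn1"):
    """Build an OData v4 JSON-format batch body in which all requests
    share one atomicity group, so the service processes them as a
    single transaction. Entity names below are hypothetical."""
    body = {"requests": []}
    for i, (method, url, payload) in enumerate(requests):
        req = {
            "id": str(i + 1),
            "atomicityGroup": group,   # same group => one transaction
            "method": method,
            "url": url,
            "headers": {"content-type": "application/json"},
        }
        if payload is not None:
            req["body"] = payload
        body["requests"].append(req)
    return json.dumps(body)

# Two changes that must succeed or fail together:
batch = build_batch([
    ("POST", "Accounts", {"Name": "Acme"}),
    ("PATCH", "Accounts('1')", {"Status": "active"}),
])
```

The resulting JSON string can be sent as the payload of a `$batch` request against an OData v4 endpoint that supports the JSON batch format.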
Hybrid Data Pipeline supports SQL read-only access to JSON-based REST services through the Autonomous REST Connector. When you create a REST data source, the connector creates a relational model of the returned JSON data and translates SQL statements to REST API requests.
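Conceptually, building a relational model from JSON responses means mapping nested objects to flat column names. The sketch below illustrates that idea only; the connector's actual naming and normalization rules may differ.

```python
def flatten(record, prefix=""):
    """Flatten a nested JSON object into column -> value pairs, the way
    a relational model of REST results might name nested fields.
    The underscore-joined naming scheme is an assumption for
    illustration, not the connector's documented behavior."""
    row = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            row.update(flatten(value, name + "_"))
        else:
            row[name] = value
    return row

# A JSON array of objects becomes a set of rows:
rows = [flatten(r) for r in [
    {"id": 1, "address": {"city": "Boston", "zip": "02110"}},
    {"id": 2, "address": {"city": "Raleigh", "zip": "27601"}},
]]
```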
Web UI multitenant user management
The Web UI now supports multitenant user management functionality. System administrators can use the Web UI to isolate groups of users, such as organizations or departments, that are being hosted on Hybrid Data Pipeline. In addition, administrators can create roles and provision users using the Web UI. Depending on permissions, administrators may also use the Web UI to manage data sources, specify throttling and other limits, and set system configurations.
PostgreSQL system database
Hybrid Data Pipeline requires an internal or external system database for storing user and configuration information. PostgreSQL 11 is now supported as an external system database.
JDBC and ODBC throttling
A beta version of a new throttling limit has been introduced in the System Limits view. The XdbcMaxResponse limit can be used to set the approximate maximum size of JDBC and ODBC HTTP result data.
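As a sketch of how such a limit might be set programmatically, the snippet below constructs (but does not send) an HTTP request. The endpoint path and payload shape are assumptions for illustration, not the documented Hybrid Data Pipeline Limits API.

```python
import json

def set_limit_request(base_url, limit_name, value):
    """Construct (but do not send) an HTTP request that would set a
    system-wide throttling limit such as XdbcMaxResponse.
    NOTE: the /admin/limits/system path is hypothetical."""
    return {
        "method": "PUT",
        "url": f"{base_url}/admin/limits/system/{limit_name}",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"value": value}),
    }

# Cap JDBC/ODBC HTTP result data at roughly 10 MB:
req = set_limit_request("https://hdp.example.com/api",
                        "XdbcMaxResponse", 10_485_760)
```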
Hybrid Data Pipeline uses an embedded JRE at runtime. However, you can integrate an external JRE with a standing deployment of Hybrid Data Pipeline. The following JREs are currently supported.
- Oracle Java 8 JRE
- OpenJDK 8 JRE
The Hybrid Data Pipeline server and On-Premises Connector have been upgraded to install and use Tomcat 9.0.19.
The OData concurrent queries limit has been renamed from MaxConcurrentQueries to ODataMaxConcurrentQueries. This limit determines the maximum number of concurrent active OData queries per data source.
JDBC driver JVM requirements
- The following JVM implementations are now supported.
- Oracle Java 8 and 11
- OpenJDK 8 and 11
- Java SE 6 and 7 JVM implementations are no longer supported.
Windows platform support
The following Windows platforms have reached the end of their product life cycle and are no longer supported by the drivers or the On-Premises Connector.
- Windows 8.0 (versions 8.1 and higher are still supported)
- Windows Vista (all versions)
- Windows XP (all versions)
- Windows Server 2003 (all versions)
The following enhancements and changes have been made to support Oracle connectivity.
- The LOB Prefetch Size option has been added to the Advanced tab. LOB prefetch is supported for Oracle database versions 12.1.0.2 and higher. This option allows you to specify the size of prefetch data the driver returns for BLOBs and CLOBs. With LOB prefetch enabled, the driver can return LOB metadata and the beginning of LOB data along with the LOB locator during a fetch operation. This can significantly improve performance, especially for small LOBs that can be entirely prefetched, because the data is available without a round trip through the LOB protocol.
- The default value for the Data Integrity Level has been updated to accepted.
- The default value for the Encryption Level has been updated to accepted.
The following enhancements and changes have been made to support Salesforce connectivity.
- The Salesforce Bulk API, including PK chunking, is now supported for bulk fetch operations. This functionality can be configured with the following parameters.
- Enable Bulk Fetch specifies whether the Salesforce Bulk API will be used for selects based on the value of the Bulk Fetch Threshold parameter.
- Bulk Fetch Threshold specifies a number of rows that, if exceeded, signals that the Salesforce Bulk API should be used for select operations.
- Enable Primary Key Chunking specifies whether primary key chunking is used for select operations.
- Primary Key Chunk Size specifies the size, in rows, of a primary key chunk when primary key chunking has been enabled.
- The Enable Bulk Load default has been updated to ON. By default, the bulk load protocol can be used for inserts, updates, and deletes based on the Bulk Load Threshold parameter.
- The Map System Column Names default has been updated to 0. By default, the names of the Salesforce system columns are not changed when mapping the Salesforce data model.
- The Custom Suffix default has been updated to "include". By default, the "__c" and "__x" suffixes are included for table and column names when mapping the Salesforce data model.
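The bulk fetch parameters above can be summarized as a simple decision: use the Bulk API when the estimated result exceeds the threshold, and split the fetch into primary-key chunks when chunking is enabled. The sketch below models that logic; it is an illustration of the parameter interactions, not the driver's implementation.

```python
def choose_fetch_strategy(estimated_rows, bulk_fetch_enabled,
                          bulk_fetch_threshold, pk_chunking_enabled,
                          pk_chunk_size):
    """Illustrative decision logic for the bulk fetch parameters:
    fall back to the regular REST API below the threshold; otherwise
    use the Bulk API, chunked by primary key when enabled."""
    if not bulk_fetch_enabled or estimated_rows <= bulk_fetch_threshold:
        return {"api": "rest"}
    strategy = {"api": "bulk"}
    if pk_chunking_enabled:
        # Ceiling division: number of PK chunks needed to cover the rows.
        strategy["chunks"] = -(-estimated_rows // pk_chunk_size)
    return strategy
```

For example, a 500,000-row select with a 30,000-row threshold and a 100,000-row chunk size would use the Bulk API with five primary-key chunks.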
See Hybrid Data Pipeline known issues for details.