Progress DataDirect Hybrid Data Pipeline is a data access server that provides simple, secure access to cloud and on-premises data sources, such as RDBMS, Big Data, and NoSQL. Hybrid Data Pipeline allows business intelligence tools and applications to use ODBC, JDBC, or OData to access data from supported data sources. Hybrid Data Pipeline can be installed in the cloud or behind a firewall. Hybrid Data Pipeline can then be configured to work with applications and data sources in nearly any business environment. Progress DataDirect Hybrid Data Pipeline consists of four primary, separately installed components.
- The Hybrid Data Pipeline server provides access to multiple data sources through a single, unified interface. The server can be hosted on premises or in the cloud.
- The On-Premises Connector enables Hybrid Data Pipeline to establish a secure connection from the cloud to an on-premises data source.
- The ODBC driver enables ODBC applications to communicate with a data source through the Hybrid Data Pipeline server.
- The JDBC driver enables JDBC applications to communicate with a data source through the Hybrid Data Pipeline server.
Changes Since Release 4.1.0
Enhancements
Hybrid Data Pipeline server
- Account Lockout Policy (Limits API). Support has been added for implementing an account lockout policy. An account lockout policy allows the administrator to set the number of consecutive failed authentication attempts that result in a user account being locked, as well as the lockout period and the duration of time that failed attempts are counted. When a lockout occurs, the user is unable to authenticate until the specified period of time has passed or until the administrator unlocks the account.
- Configurable CORS Behavior (Limits API). Support has been added for disabling the cross-origin resource sharing (CORS) filter in environments that do not require it. Because Hybrid Data Pipeline does not currently support filtering of cross-origin requests, disabling the CORS filter can provide added protection against cross-site request forgery attacks.
Apache Hive
- Certified with Apache Hive 2.0 and 2.1.
IBM DB2
- Certified with DB2 for i 7.3
Oracle Database
- Certified with Oracle 12c R2 (12.2).
Resolved Issues
Hybrid Data Pipeline server
- Version 4.1.0.44. Bug 71841. Resolved an issue where the Hybrid Data Pipeline server failed to honor the START_ON_INSTALL environment variable to stop and start Tomcat services.
- Version 4.1.0.44. Resolved an issue where the installer accepted an SSL certificate only in the PEM file format during the installation of the server for a cluster environment. The installer now accepts the SSL certificate (root certificate) in PEM, DER, or base64 encodings for a cluster installation.
- Version 4.1.0.44. Resolved an issue where an SSL certificate was required for a cluster installation. An SSL certificate is no longer required for a cluster installation.
- Version 4.1.0.44. Resolved an issue that prevented the installer from supporting a number of upgrade scenarios.
JDBC driver
- Version 4.1.0.7. Resolved an issue where the JDBC driver was not connecting to the Hybrid Data Pipeline server by default when running on a UNIX/Linux system.
4.1.0 Release Notes
Security
OpenSSL
SSL Enabled Data Stores
- The default value for Crypto Protocol Version has been updated to TLSv1, TLSv1.1, TLSv1.2 for data stores that support the option. This change improves the security of the connectivity service by employing only the most secure cryptographic protocols as the default behavior. At connection, the connectivity service attempts the most secure protocol first, TLS 1.2, and then falls back to TLS 1.1 and then TLS 1.0.
On-Premises Connector
- The On-Premises Connector has been enhanced to resolve a security vulnerability. We strongly recommend upgrading to the latest version to take advantage of this fix.
Apache Hive Data Store
- Hybrid Data Pipeline now supports SSL for Apache Hive data stores running Apache Hive 0.13.0 or higher.
SQL Server Data Store
- Support for NTLMv2 authentication has been added for the SQL Server data store. NTLMv2 authentication can be specified in the Authentication Method field under the Security tab.
Enhancements
Hybrid Data Pipeline server
- Hybrid Data Pipeline Cluster. To support scalability, the Hybrid Data Pipeline service can be deployed on multiple nodes behind a load balancer. Incoming requests can be evenly distributed across cluster nodes. SSL communication is supported if the load balancer supports SSL termination. Session affinity is supported to bind a client query to a single node for improved performance. (Session affinity must be enabled in the load balancer to support the Web UI and ODBC and JDBC clients.) HTTP health checks are supported via the Health Check API.
- MySQL Community Edition Data Store. Support for MySQL Community Edition has been added to Hybrid Data Pipeline. During installation of the Hybrid Data Pipeline server and the On-Premises Connector, you provide the location of the MySQL Connector/J driver. After installation, you may then configure data sources that connect to a MySQL Community Edition data store and execute queries with ODBC, JDBC, and OData applications.
- MySQL Community Edition System Database. Support for MySQL Community Edition as an external system database has been added. During the installation process, you are prompted to select either an internal database or an external database to store system information necessary for the operation of Hybrid Data Pipeline. With this enhancement, you can choose either Oracle or MySQL Community Edition as an external database.
- Installation Procedures and Response File. The installation procedures have been modified with the introduction of support for the Hybrid Data Pipeline cluster, the MySQL Community Edition data store, and the MySQL Community Edition system database. New prompts have been added to the installation process. Several of these prompts have corresponding settings that must be used in the response file for silent installation of the server. If you are performing silent installations of the server, your response file must be modified accordingly. The following list provides the new settings. The settings may differ depending on whether you generate the response file with a GUI or console installation.
Note: Values for the SKIP_HOSTNAME_VALIDATION and SKIP_PORT_VALIDATION options have been changed from false | true to 0 | 1. These options have the same name in GUI-generated and console-generated response files.
Note: Values for the SKIP_LB_HOSTNAME_VALIDATION option are currently 0 for disable and true for enable. In a future release, the values will be 0 for disable and 1 for enable. This option has the same name in GUI-generated and console-generated response files.
New response file options. The first name in each pair is the name of the response file option generated by the GUI installer. The second name is the name generated by the console mode installer. (If only one name is listed, there is no corresponding console mode option.) A sample response file fragment using these settings follows the list.
- USING_LOAD_BALANCING_YES | D2C_USING_LOAD_BALANCING_CONSOLE - Specifies whether you are installing the service on a node behind a load balancer.
- LOAD_BALANCING_HOST_NAME | LOAD_BALANCING_HOST_NAME_CONSOLE - Specifies the hostname of the load balancer appliance or the machine hosting the load balancer service.
- USING_LOAD_BALANCING_NO - Specifies whether you are installing the service on a node behind a load balancer. For console installation, only D2C_USING_LOAD_BALANCING_CONSOLE is used.
- SKIP_LB_HOSTNAME_VALIDATION | SKIP_LB_HOSTNAME_VALIDATION - Specifies whether the installer should validate the load balancer hostname during the installation of a node.
- D2C_CERT_FILE | D2C_CERT_FILE_CONSOLE - Specifies the fully qualified path of the Certificate Authority certificate that signed the load balancer server certificate. This certificate is used to create the trust store used by ODBC and JDBC clients.
- D2C_DB_MYSQL_COMMUNITY_SUPPORT_YES | D2C_DB_MYSQL_COMMUNITY_SUPPORT_CONSOLE - Specifies whether the service will support the MySQL Community Edition data store.
- D2C_DB_MYSQL_JAR_PATH | D2C_DB_MYSQL_JAR_PATH_CONSOLE - Specifies the fully qualified path of the MySQL Connector/J jar file used to support a MySQL Community Edition data store.
- D2C_DB_MYSQL_COMMUNITY_SUPPORT_NO - Specifies whether the service will support the MySQL Community Edition data store. For console installation, only D2C_DB_MYSQL_COMMUNITY_SUPPORT_CONSOLE is used.
- D2C_DB_VENDOR_MYSQL - Specifies whether a MySQL Community Edition database will be used as the external system database. For console mode installations, D2C_DB_VENDOR_CONSOLE is used to specify an Oracle or MySQL Community Edition external system database.
- D2C_DB_PORT_MYSQL - Specifies the port number of the MySQL Community Edition external database. For console mode installations, D2C_DB_PORT_CONSOLE is used to specify the port of either an Oracle or MySQL Community Edition external system database.
- USER_INPUT_KEY_LOCATION | USER_INPUT_KEY_LOCATION_CONSOLE - Specifies the fully qualified path of the encryption key to be shared by the nodes in a cluster environment.
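The fragment below is a minimal, illustrative sketch of how several of these settings might appear in a GUI-generated response file for a silent installation of a cluster node with MySQL Community Edition support. The hostnames, file paths, and flag values shown are placeholder assumptions, not values taken from an actual installation; generate a response file with a GUI or console installation and use it as the authoritative template.
USING_LOAD_BALANCING_YES=1
LOAD_BALANCING_HOST_NAME=lb.example.com
SKIP_LB_HOSTNAME_VALIDATION=0
D2C_CERT_FILE=/opt/certs/lb_root_ca.pem
D2C_DB_MYSQL_COMMUNITY_SUPPORT_YES=1
D2C_DB_MYSQL_JAR_PATH=/opt/mysql/mysql-connector-java.jar
USER_INPUT_KEY_LOCATION=/mnt/shared/hdp/encryption.key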
- Throttling (Limits API). Support for throttling to prevent a user or group of users from adversely impacting the performance of the connectivity service has been added. The Limits API allows administrators to set limits on how many rows can be returned for ODBC, JDBC, and OData requests. An error is returned if an application fetches rows beyond the specified limit.
- Refresh Map. A new refresh map button has been added to the Mapping tab. This button allows you to refresh the map without connecting to the data store. This feature is useful when you are developing your application and have made changes to the objects in your backend data store. Pressing this button forces the data store to rebuild the map, allowing the new objects to appear in the relational map the next time your application connects to the data source. (The map can also be refreshed with a Management API call or when establishing a connection.)
- SQL Editor. The SQL editor in the SQL Testing view has been upgraded. The functionality of the new editor is similar to that of the previous editor. However, the history panel is not currently supported with the new editor.
- OpenAccess Server. The OpenAccess server component has been deprecated. The OpenAccess server is no longer required to connect with Oracle Eloqua.
On-Premises Connector
- Upgraded to use Tomcat 8.0.41
- Upgraded to use Java SE 8
- Support for Windows Server 2003 has been deprecated
Hybrid Data Pipeline ODBC Driver
- Certified with CentOS Linux 4.x, 5.x, 6.x, and 7.x
- Certified with Debian Linux 7.11, 8.5
- Certified with Oracle Linux 4.x, 5.x, 6.x, and 7.x
- Certified with Ubuntu Linux 14.04, 16.04
- Support for Windows Server 2003 has been deprecated
Apache Hive
- Added SSL support for Apache Hive 0.13.0 and higher
- Certified with Apache Hive 0.13, 0.14, 1.0, 1.1, 1.2
- Certified with Amazon (AMI) 3.2, 3.3.1, 3.7
- Certified with Cloudera (CDH) 5.0, 5.1, 5.2, 5.3, 5.4, 5.6, 5.7
- Certified with Hortonworks (HDP) 2.1, 2.2
- Certified with IBM BigInsights 4.1
- Certified with Pivotal HD (PHD) 2.1
Greenplum
- Made generally available
- Certified with Greenplum 4.3
- Certified with Pivotal HAWQ 1.2, 2.0
IBM DB2
- Certified with IBM DB2 V11.1 for LUW
- Certified with DB2 for i 7.2
Informix
- Made generally available
- Certified with Informix 12.10
- Certified with Informix 11.7, 11.5, 11.0
- Certified with Informix 10.0
- Certified with Informix 9.4, 9.3, 9.2
Oracle Marketing Cloud (Oracle Eloqua)
The Oracle Marketing Cloud data store provides access to Oracle Eloqua. Improved features and functionality for this data store are available with this Hybrid Data Pipeline release.
- Write Access
- Support for INSERT/UPDATE/DELETE operations on CONTACT, ACCOUNT and CustomObjects_XXX
- Bulk Calls
- Performance improvement for bulk calls
- Supports fetching more than 5 million records
- Supports fetching up to 250 columns for bulk calls
- Supports pushing OR operators for bulk calls (This does not apply to Activities)
- REST Calls
- Some queries with OR and AND operators have been optimized.
- Metadata
- The data store now uses null as the catalog name. Previously, ECATALOG was used as the catalog name.
- The current version of the data store maps columns with integer data to type INTEGER. The previous version mapped the integer type to string.
- In contrast to the previous version, the current version of the data store cannot split OR queries and push them separately to Oracle Eloqua APIs. Therefore, compared to the previous version, the current version may take longer to return results involving OR queries.
- The previous version of the data store used the ActivityID field as the primary key for Activity_EmailXXX objects, such as Activity_EmailOpen, Activity_EmailClickthrough, and Activity_EmailSend. In contrast, the current version of the data store uses the ExternalID field as the primary key instead of ActivityID.
PostgreSQL
- Certified with PostgreSQL 9.3, 9.4, 9.5, 9.6
Progress OpenEdge
- Certified with Progress OpenEdge 11.4, 11.5, 11.6
Salesforce
- Certified with Salesforce API 38
SAP Sybase ASE
- Made generally available
- Certified with SAP Adaptive Server Enterprise 16.0
SQL Server
- Added support for NTLMv2 authentication. NTLMv2 authentication can be specified in the Authentication Method field under the Security tab.
- Certified with Microsoft SQL Server 2016
Resolved Issues
Web UI
- Resolved an issue where the SQL editor in the SQL Testing view returned errors when executing SQL commands against Google Analytics data sources
OData
- Resolved an issue where OData requests were timing out before the application could finish retrieving the results
Hybrid Data Pipeline Management API
- Resolved an issue where a 201 was returned when adding members to a group data source through the Management API
- Resolved an issue where a normal user would receive a 400 instead of a 404 error when using the user query parameter with Management API calls
- Resolved an issue where the user creation API allowed invalid values for the status field
DB2
- Resolved an issue where the error "Numeric value out of range" occurred when calling SQLStatistics against DB2 with the ODBC driver
Google Analytics
- Resolved an issue where the SQL editor in the SQL Testing view returned errors when executing SQL commands against Google Analytics data sources
Known Issues
Hybrid Data Pipeline server installation
- When installing the server in a load balancing environment, a .pem file with a private key and trusted CA certificate must be specified even though a private key is not required for a load balancing environment.
- Silent installation of the server for an On-Premises Connector implementation is not currently supported. You must perform a GUI or console mode installation of the server to install the server for an On-Premises Connector implementation.
- A silent installation in console mode with the server configured to use MySQL Community Edition as an external system database is not currently supported. However, a silent installation with this configuration can be performed using the installer in GUI mode.
JDBC driver installation
- If you attempt to install the JDBC driver in GUI mode to the default installation directory but do not have appropriate permissions for the default directory, the installer indicates that the installation has succeeded when in fact the driver has not been installed. When attempting an installation under the same circumstances but in console mode, the proper error message is displayed.
- On UNIX/Linux, the JDBC driver does not accept the values specified in the redistribution files generated during the installation of the Hybrid Data Pipeline server. In turn, the driver does not connect to the Hybrid Data Pipeline server by default.
Web UI
- If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may face issues when trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
- When an administrator tries to add new users using the Add Users window, the Password and Confirm Password fields occasionally do not appear properly in the popup window.
- COPY DETAILS functionality is not currently working in Internet Explorer 11 due to a limitation with the third party plugin Clipboard.js on bootstrap modals. More details on this can be found at https://github.com/zenorocha/clipboard.js/wiki/Known-Issues.
Management API
- When the Limits API (throttling) is used to set a row limit and createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE) is used, a row-limit-exceeded error is returned at the row limit instead of one row beyond the limit. For example, if the row limit is set to 45 rows and a scrollable, insensitive result set exceeds that limit, the connectivity service returns the following error on the 45th row instead of the expected 46th row: The limit on the number of rows that can be returned from a query -- 45 -- has been exceeded. (A sketch illustrating this behavior follows this list.)
- If a Hybrid Data Pipeline administrator creates a user with a password that contains a percentage mark (%), the new user may face issues when trying to log in. In addition, Hybrid Data Pipeline functionality may not work as expected.
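The following is a minimal JDBC sketch of the row-limit behavior described above, assuming a row limit of 45 has been set with the Limits API. The connection URL, credentials, and table name are placeholder assumptions; the point is only that the row-limit error is raised while fetching the 45th row of a TYPE_SCROLL_INSENSITIVE result set rather than the 46th.
import java.sql.*;

public class RowLimitExample {
    public static void main(String[] args) {
        // Placeholder URL, credentials, and query -- substitute your own server,
        // port, data source name, and table.
        String url = "jdbc:datadirect:ddhybrid://hdp.example.com:8080;"
                   + "hybridDataPipelineDataSource=MyDataSource";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement stmt = con.createStatement(
                     ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = stmt.executeQuery("SELECT id FROM accounts")) {
            int rows = 0;
            // With a row limit of 45, the error below is raised while fetching
            // the 45th row rather than the expected 46th.
            while (rs.next()) {
                rows++;
            }
            System.out.println("Fetched " + rows + " rows");
        } catch (SQLException e) {
            // "The limit on the number of rows that can be returned from a query
            // -- 45 -- has been exceeded."
            System.err.println("Row limit error: " + e.getMessage());
        }
    }
}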
OData
- $expand supports only one level of expansion. For example, with the entity hierarchy:
Customers
|-- Orders
| |-- OrderItems
|-- Contacts
The following queries are supported:
Customers?$expand=Orders
Customers?$expand=Contacts
Customers?$expand=Orders,Contacts
However, this query is not supported:
Customers?$expand=Orders,OrderItems
OrderItems is a second-level entity with respect to Customers. To query Orders and OrderItems, the query must be rooted at Orders. For example:
Orders?$expand=OrderItems
Orders(id)?$expand=OrderItems
- When manually editing the ODataSchemaMap value, the table names and column names specified in the value are case-sensitive. The case of the table and column names must match the case of the tables and column names reported by the data source.
Note: It is highly recommended that you use the OData Schema Editor to generate the value for the ODataSchemaMap data source option. The Schema Editor takes care of table and column name casing and other syntactic details.
- When using the substring function on properties that map to a CHAR column in the data source, whether the substring function treats trailing spaces as significant depends on the data source. Against Oracle, trailing spaces are preserved. Against other data sources, trailing spaces are discarded.
- The $expand clause is not supported with OpenEdge data sources.
- The day scalar function is not working when specified in a $filter clause when querying a DB2 data source.
On-Premises Connector
- If User Account Control is enabled on your Windows machine and you installed the On-Premises Connector in a system folder (such as Windows or Program Files), you must run the On-Premises Connector Configuration Tool in administrator mode.
- When using Kerberos with Microsoft Dynamics, the JRE installed with the On-Premises Connector must be configured to run with Kerberos. Take the following steps to configure the JRE.
- Download a zip file containing a new version of the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for JDK/JRE 8 at http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html.
- Unzip the file into the \jre\lib\security directory to update the Java security policy files to support 256-bit encryption:
- C:\<installdir>\jre\lib\security\local_policy.jar
- C:\<installdir>\jre\lib\security\US_export_policy.jar
- Uninstalling and re-installing the On-Premises Connector causes the Connector ID of the On-Premises Connector to change. Any Hybrid Data Pipeline data sources using the old Connector ID must be updated to use the new Connector ID. Installing to a new directory allows both the old and new On-Premises Connector to exist side-by-side. However, you must update the Connector ID option in previously defined Hybrid Data Pipeline data sources to point to the new On-Premises Connector. In addition, you must update the Connector ID wherever it was used, such as the definitions of Group Connectors and Authorized Users.
JDBC Driver
- If you attempt to install the JDBC driver in GUI mode to the default installation directory but do not have appropriate permissions for the default directory, the installer indicates that the installation has succeeded when in fact the driver has not been installed. When attempting an installation under the same circumstance but in console mode, the proper error message is displayed.
- On UNIX/Linux, the JDBC driver does not accept the values specified in the redistribution files generated during the installation of the Hybrid Data Pipeline server. In turn, the driver does not connect to the Hybrid Data Pipeline server by default.
- The default value for the Service connection property does not connect to the Hybrid Data Pipeline server. To connect, set Service to the Hybrid Data Pipeline server in your connection URL (see the connection URL sketch following this list).
- The JDBC 32-bit silent installation fails on Windows 10. Use the standard installation instead.
- Executing certain queries against MS Dynamics CRM may result in a "Communication failure. Protocol error."
- When using JNDI data sources, encryptionMethod must be configured through setExtendedOptions (see the sketch following this list).
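The sketches below illustrate the two notes above. First, a minimal sketch of a connection URL that names the Hybrid Data Pipeline server explicitly; the host, port, data source name, and exact URL syntax are assumptions, so confirm them against the driver documentation for your installation.
import java.sql.Connection;
import java.sql.DriverManager;

public class ServiceUrlExample {
    public static void main(String[] args) throws Exception {
        // Placeholder host, port, and data source name.
        String url = "jdbc:datadirect:ddhybrid://hdp.example.com:8080;"
                   + "hybridDataPipelineDataSource=MyDataSource";
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected to " + con.getMetaData().getURL());
        }
    }
}
Second, a hedged sketch of configuring encryptionMethod through setExtendedOptions before binding a data source into JNDI. The data source class name and setter shown are assumptions rather than confirmed API; verify the actual class and option string in the driver documentation.
import javax.naming.InitialContext;

public class JndiEncryptionExample {
    public static void main(String[] args) throws Exception {
        // Assumed data source class name -- check the driver documentation.
        com.ddtek.jdbcx.ddhybrid.DDHybridDataSource ds =
                new com.ddtek.jdbcx.ddhybrid.DDHybridDataSource();
        // Set encryptionMethod through the extended options string,
        // not as an individual JNDI bean property.
        ds.setExtendedOptions("EncryptionMethod=SSL");
        new InitialContext().bind("jdbc/HybridDataPipeline", ds);
    }
}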
Hybrid Data Pipeline ODBC Driver
- The default ODBC.INI generated by the installer is missing required entries for Service=, PortNumber=, and HybridDataPipelineDataSource= (see the sample entries following this list).
- Console mode installation is supported only on UNIX.
- When you first install a driver, you are given the option to install a default data source for that driver. We recommend that you install default data sources when you first install the drivers. If you do not install the default data source at this time, you will be unable to install a default data source for this driver later. To install a default data source for a driver after the initial installation, you must uninstall the driver and then reinstall it.
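Returning to the missing ODBC.INI entries noted above, the fragment below is a minimal sketch of the three entries added to a data source definition. The section name, host, port, and data source name are placeholder assumptions; leave any entries the installer did generate as they are.
[Hybrid Data Pipeline]
...
Service=hdp.example.com
PortNumber=8080
HybridDataPipelineDataSource=MyDataSource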
All Data Sources
- It is recommended that Login Timeout be enabled (set to 0) for a Data Source.
- Using setByte to set parameter values fails when the data source does not support the TINYINT SQL type. Use setShort or setInt to set the parameter value instead of setByte.
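A minimal sketch of the setShort workaround for the setByte limitation described above; the connection URL, table, and column are placeholder assumptions.
import java.sql.*;

public class TinyIntWorkaround {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL and table -- substitute your own.
        String url = "jdbc:datadirect:ddhybrid://hdp.example.com:8080;"
                   + "hybridDataPipelineDataSource=MyDataSource";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO ratings (score) VALUES (?)")) {
            byte score = 5;
            // ps.setByte(1, score) can fail when the backend data source has no
            // TINYINT type; widen the value and use setShort (or setInt) instead.
            ps.setShort(1, score);  // byte widens implicitly to short
            ps.executeUpdate();
        }
    }
}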
Google Analytics
- A validation message is not displayed when a user enters a Start Date value less than the End Date value on the Create/Update Google Analytics page.
- Once a Google Analytics OAuth profile is created for a specific Google account, changing the Google account associated with the profile results in the error "the configuration options used to open the database do not match the options used to create the database" being returned for any existing data sources.
Microsoft Dynamics CRM
- Executing certain queries against MS Dynamics CRM with the JDBC driver may result in a "Communication failure. Protocol error."
- Testing has shown the following two errors from Microsoft Dynamics CRM Online when executing queries against the ImportData and TeamTemplate tables:
- Attribute errortype on Entity ImportData is of type picklist but has Child Attributes Count 0
- Attribute issystem on Entity TeamTemplate is of type bit but has Child Attributes Count 0
Note: We have filed a case with Microsoft and are waiting to hear back about the cause of the issue.
- The initial on-premises connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
OpenEdge 10.2b
- Setting the MaxPooledStatements data source option in an OpenEdge data store to a value other than zero can cause statement not prepared errors to be returned in some situations.
Oracle Marketing Cloud (Oracle Eloqua)
- Data store issues
- There are known issues with Batch Operations.
- The Update/Delete implementation can update only one record at a time. Because of this, the number of API calls executed depends on the number of records updated or deleted by the query plus the number of API calls required to fetch the IDs for those records.
- Lengths of certain text fields are reported as higher than the actual lengths supported in Oracle Eloqua.
- We are currently working with Oracle to resolve the following issues with the Oracle Eloqua REST API.
- AND operators that involve different columns are optimized. In other cases, the queries are only partially optimized.
- OR operators on the same column are optimized. In other cases, the queries are completely post-processed.
- The data store cannot explicitly insert or update the NULL value to any field.
- The data store is unable to update a few fields. They are always reported as NULL after an update.
- Oracle Eloqua uses a double colon (::) as an internal delimiter for multivalued Select fields. Hence, when a value containing the semicolon character (;) is inserted or updated into a multivalued Select field, the semicolon character gets converted into the double colon character.
- The query SELECT count(*) FROM template returns incorrect results.
- Oracle Eloqua APIs do not populate the correct values in the CreatedBy and UpdatedBy fields. Instead of user names, they contain a Timestamp value.
- Only equality filters on ID fields are optimized. All other filter conditions are not working correctly with Oracle Eloqua APIs, and the data store post-processes such filters.
- Filters on non-ID integer fields and Boolean fields are not working correctly. Hence, the driver needs to post-process all these queries.
- The data store does not distinguish between NULL and empty string. Therefore, null fields are often reported back as empty strings.
- Values with special characters such as curly braces ({, }), backslash (\), colon (:), slash star (/*), and star slash (*/) are not supported in a WHERE clause filter value.
Oracle Sales Cloud
- Currently, passing filter conditions to Oracle Sales Cloud works only for simple, single column conditions. If there are multiple filters with 'AND' and 'OR', only partial or no filters are passed to Oracle Sales Cloud.
- Oracle Sales Cloud reports the data type of String and Date fields as String. Therefore, when such fields are filtered or ordered in Hybrid Data Pipeline, they are treated as String values. However, when filter conditions are passed to Oracle Sales Cloud, Oracle Sales Cloud can distinguish between the actual data types and apply Date specific comparisons to Date fields. Therefore, query results can differ depending on whether filters have been passed down to Oracle Sales Cloud or processed by Hybrid Data Pipeline.
- There appears to be a limitation with the Oracle Sales Cloud REST API concerning the >=, <=, and != comparison operators when querying String fields. Therefore, Hybrid Data Pipeline has not been optimized to pass these comparison operators to Oracle Sales Cloud. We are working with Oracle on this issue.
- There appears to be a limitation with the Oracle Sales Cloud REST API concerning queries with filter operations on Boolean fields. Therefore, Hybrid Data Pipeline has not been optimized to pass filter operations on Boolean fields to Oracle Sales Cloud. We are working with Oracle on this issue.
- The drivers currently report ATTACHMENT type fields in the metadata but do not support retrieving data for these fields. These fields are set to NULL.
- Join queries between parent and child tables are not supported.
- Queries on child tables whose parent has a composite primary key are not supported. For example, the children of ACTIVITIES_ACTIVITYCONTACT and LEADS_PRODUCTS are not accessible.
- Queries on the children of relationship objects are not supported. For example, the children of ACCOUNTS_RELATIONSHIP, CONTACTS_RELATIONSHIP, and HOUSEHOLDS_RELATIONSHIP are not accessible.
- Queries on grandchildren with multiple sets of Parent IDs and Grand Parent IDs used in an OR clause are not supported. For example, the following query is not supported.
Select * From ACCOUNTS_ADDRESS_ADDRESSPURPOSE
Where (ACCOUNTS_PARTYNUMBER = 'OSC_12343' AND
ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2XZKGH')
OR (ACCOUNTS_PARTYNUMBER = 'OSC_12344' AND
ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2YZKGH')
- When querying documented objects like "CATALOGPRODUCTITEMS" and "CATEGORYPRODUCTITEMS", no more than 500 records are returned, even when more records may be present. This behavior is also seen with some custom objects. We are currently working with Oracle support to resolve this issue.
- A query on OPPORTUNITIES_CHILDREVENUE_PRODUCTS or LEADS_PRODUCTGROUPS with a filter on the primary key column returns 0 records even when more records are present. We are currently working with Oracle support to resolve this issue.
- Queries that contain subqueries returning more than 100 records are not supported. For example, the following query is not supported.
Select * From ACCOUNTS_ADDRESS
Where ACCOUNTS_PARTYNUMBER
In (Select Top 101 PARTYNUMBER From ACCOUNTS)
- When you create custom objects, your Oracle Sales Cloud administrator must enable these objects for REST API access through Application Composer. Otherwise, you will not be able to query against these custom objects.
Oracle Service Cloud
- When you create a custom object, your Oracle Service Cloud administrator must enable all four columns of the Object Fields tab of the Object Designer, or you cannot query against the custom objects.
- The initial connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background such that subsequent connection attempts are successful and have full access to the relational map.
SugarCRM
- Data sources that are using the deprecated enableExportMode option will still see a problem until they are migrated to the new data source configuration.
- Data source connections now use Export Mode by default to communicate with the SugarCRM server, providing increased performance when querying large sets of data. Bulk export mode causes NULL values for currency columns to be returned as the value 0, so there is no way to differentiate between a NULL value and 0 when operating in export mode. This can be a problem when using currency columns in SQL statements, because Hybrid Data Pipeline must satisfy filter conditions on queries, such as the operations =, <>, >, >=, <, <=, IS NULL, and IS NOT NULL. For example, suppose a currency column in a table in SugarCRM has 3 NULL values and 5 values that are 0. When a query is executed to return all NULL values (SELECT * FROM <table> WHERE <currency column> IS NULL), 3 rows are returned. However, if a query is executed so that the column undergoes an arithmetic operation (SELECT * FROM <table> WHERE <currency column> + 1 = 1), then all 8 records are returned because the 3 NULL values are treated as 0.