What could go wrong connecting a massively distributed data processing system to your core business data? Learn how to use Apache Sqoop in this free webinar.
Apache Sqoop is the standard tool for loading and exporting data between Hadoop and traditional data stores, such as relational databases or SaaS applications, through a standard JDBC interface. Sqoop serves as the data access layer that connects the Hadoop ecosystem to external structured data.
According to Hortonworks, “Apache Sqoop efficiently transfers bulk data between Apache Hadoop and structured datastores such as relational databases. Sqoop helps offload certain tasks (such as ETL processing) from the EDW to Hadoop for efficient execution at a much lower cost. Sqoop can also be used to extract data from Hadoop and export it into external structured datastores. Sqoop works with relational databases such as Teradata, Netezza, Oracle, MySQL, Postgres, and HSQLDB.”
Apache Sqoop handles bulk data movement between Hadoop and structured datastores in both directions: it imports external data into HDFS and exports Hadoop data back out to structured stores. (The original post includes diagrams from Hortonworks illustrating these flows.)
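As a rough sketch of what those two flows look like in practice, the commands below show a Sqoop import from a relational table into HDFS and an export back out. The hostname, database, table names, user, and HDFS paths are placeholders, not values from this post:

```shell
# Hypothetical example: import a MySQL table into HDFS over JDBC.
# The connection string, credentials, and table are placeholders.
sqoop import \
  --connect jdbc:mysql://db.example.com:3306/sales \
  --username sqoop_user -P \
  --table orders \
  --target-dir /data/sales/orders \
  --num-mappers 4

# Export processed results from HDFS back into a relational table
# (the target table must already exist in the database).
sqoop export \
  --connect jdbc:mysql://db.example.com:3306/sales \
  --username sqoop_user -P \
  --table order_summaries \
  --export-dir /data/sales/order_summaries
```

The `-P` flag prompts for the password at runtime rather than placing it on the command line, and `--num-mappers` controls how many parallel map tasks split the transfer.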
But what are the best practices for using Sqoop? What about interoperability with JDBC data sources from relational to cloud? Discover all this and more in our webinar!
Title: The Inside Scoop on Apache Sqoop
Date: August 25, 2016
Time: 11:00 am ET
Speakers: Idaliz Baez (Progress DataDirect) and Alex Silva (PluralSight)
Suzanne is passionate about promoting the Progress Data Connectivity and Integration business and corporate initiatives through social media and other marketing channels using extraordinary and compelling content and effective metrics. She is also team lead for DCI content developers, new hires and interns.