170x performance boost for Salesforce web services API applications using JDBC real time cache

August 14, 2012

Shops have been very pleased with data access performance using DataDirect Connect XE ODBC and JDBC drivers for Salesforce.com, Force.com, and Database.com. This is due in part to Salesforce's commitment to maintaining a high level of transaction performance. To see how much a real-time cache can add on top of that, I first ran a test query against my Force.com developer account without caching.

The query execution took 1,028 ms.

Next, I enabled the cache by executing the following statements through the SQL-92 interface (the complete grammar is available in the User's Guide and Reference):


Then I executed the query again to fill the cache, and a subsequent time to leverage it.

Now, the query is taking 6 ms (down from 1,028 ms)!

If you want to test my Heroku web application for yourself, just leave a blog comment with your e-mail address and I will send you the URL.

Over 170x faster: how did I come up with that number? Simple arithmetic: 1,028 ms ÷ 6 ms ≈ 171, hence "over 170x."

Disclaimer: Transaction times vary by time and day on Salesforce.com, and I have observed performance increases of up to 700x. The improvement above is from a single test run, so I highly recommend testing in your own application environment. Further, DataDirect has a dedicated performance testing lab and an R&D team that run weekly benchmarks across various test scenarios measuring throughput, scalability, and resource efficiency. Clearly, I did not consult with them on this test :)

1. Install Progress DataDirect Connect XE for JDBC using the instructions in the Progress DataDirect Connect for JDBC Installation Guide. Click here to download a free 15-day trial for JDBC or ODBC.

2. Populate some data into your Force.com developer account. I did this using the following JDBC code sample and inserted a bunch of dummy opportunities for dummy accounts using Starbucks locations across the US. Note: make sure to add the EnableBulkLoad=true connection property to codelessly leverage the Salesforce.com bulk load API for improved performance and efficient use of web service calls.
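The connection setup for step 2 might look like the sketch below. This is a minimal sketch, not the article's exact code: the jdbc:datadirect:sforce URL prefix follows the DataDirect documentation, while the host, user, and password are placeholders you would replace with your own developer account details.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SalesforceConnect {

    // Builds a DataDirect Salesforce JDBC URL with the bulk load API enabled.
    // The URL prefix is taken from the DataDirect docs; verify against your
    // driver version. EnableBulkLoad=true is the property named in the post.
    static String buildUrl(String host) {
        return "jdbc:datadirect:sforce://" + host + ";EnableBulkLoad=true";
    }

    // Opens a connection using placeholder credentials.
    static Connection connect(String host, String user, String password)
            throws Exception {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        return DriverManager.getConnection(buildUrl(host), props);
    }

    public static void main(String[] args) {
        // Print the URL so you can sanity-check it before connecting.
        System.out.println(buildUrl("login.salesforce.com"));
    }
}
```

With bulk load enabled this way, batched inserts like the listing below are routed through the Salesforce bulk load API without any code changes.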

public void loadOpportunities(Connection connection, int month, int year) throws SQLException {
    ResultSet rs = null;
    Statement stmt = connection.createStatement();
    PreparedStatement pstmt = connection.prepareStatement(
        "INSERT INTO OPPORTUNITY (ACCOUNTID, SYS_NAME, AMOUNT, STAGENAME, CLOSEDATE) VALUES (?,?,?,?,?)");

    int count = 0;
    java.util.Random rand = new java.util.Random();

    // STAGENAME field values
    String[] stage = new String[2];
    stage[0] = "Closed Won";
    stage[1] = "Closed Lost";

    // Get rowids from the ACCOUNT table
    rs = stmt.executeQuery("SELECT ROWID, SYS_NAME FROM ACCOUNT");
    while (rs.next()) {
        count++;

        pstmt.setString(1, rs.getString(1));
        pstmt.setString(2, rs.getString(2));
        pstmt.setInt(3, rand.nextInt(101));
        pstmt.setString(4, stage[rand.nextInt(2)]);
        pstmt.setString(5, getClosedDate(month, year));
        pstmt.addBatch(); // queue the row; this call was missing from the original listing

        if ((count % BATCH_SIZE) == 0) {
            pstmt.executeBatch();
            System.out.println("Loaded " + BATCH_SIZE + " opportunity records");
        } // end if
    } // end while
    pstmt.executeBatch(); // flush any remaining partial batch
    rs.close();
}
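The listing above calls a getClosedDate(month, year) helper that is not shown in the post. A hypothetical minimal version, assuming the goal is a random day in the given month formatted as a yyyy-MM-dd date literal, might look like:

```java
import java.util.Calendar;
import java.util.Random;

public class CloseDates {
    private static final Random rand = new Random();

    // Hypothetical helper: picks a random day within the given month/year
    // and formats it as yyyy-MM-dd.
    static String getClosedDate(int month, int year) {
        Calendar cal = Calendar.getInstance();
        cal.set(year, month - 1, 1); // Calendar months are 0-based
        int maxDay = cal.getActualMaximum(Calendar.DAY_OF_MONTH);
        int day = 1 + rand.nextInt(maxDay);
        return String.format("%04d-%02d-%02d", year, month, day);
    }

    public static void main(String[] args) {
        System.out.println(getClosedDate(8, 2012)); // e.g. a date in August 2012
    }
}
```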

3. Run the SQL-92 prompt servlet on Heroku (a hosted development platform from Salesforce) by following the tutorials:

(Thanks to James Ward, developer evangelist at Salesforce, for helping me deploy the JDBC driver to Heroku using the above tutorials.)

4. Below is the servlet code being measured:

long startTime = System.currentTimeMillis();

// CREATE STATEMENT
Statement stmt = connection.createStatement();

// EXECUTE SQL ("field" holds the SQL text submitted through the prompt;
// "out" is the servlet writer and "br" a line-break string)
ResultSet results = null;
int updateCount, i = 0;

boolean ret = stmt.execute(field);
if (ret) {
    results = stmt.getResultSet();
} else {
    updateCount = stmt.getUpdateCount();
    out.println("Update Count: " + updateCount + br);
}

if (ret) {
    // GET ALL RESULTS
    StringBuffer buf = new StringBuffer();
    ResultSetMetaData rsmd = results.getMetaData();
    int numCols = rsmd.getColumnCount();
    int rowcount = 0;

    // Get column header info
    for (i = 1; i <= numCols; i++) {
        if (i > 1) buf.append(" | ");
        buf.append(rsmd.getColumnLabel(i));
    }
    buf.append(br);

    // Break it off at 100 rows max
    while (results.next() && rowcount < 100) {
        // Loop through each column, getting the column data and displaying
        for (i = 1; i <= numCols; i++) {
            if (i > 1) buf.append(" | ");
            buf.append(results.getString(i));
        }
        buf.append(br);
        rowcount++;
    }
    results.close();
}

long endTime = System.currentTimeMillis();

How do other shops leverage DataDirect data access caching technology? The cache can be tuned to refresh at any given interval, including during maintenance windows, to keep data moving 24/7. You can view specific maintenance windows, listed by instance, at trust.salesforce.com/trust/status/#maint. And, being a good tenant, shops can reduce the number of web service calls made to their Salesforce instances through real-time caching of SQL access.

Post a blog comment to share how the built-in caching works for you!


Sumit Sarkar

Sumit Sarkar is a Chief Data Evangelist at Progress, with over 10 years of experience working in the data connectivity field. A leading consultant on open data standards connectivity with cloud data, Sumit's interests include performance tuning of the data access layer, for which he has developed a patent-pending analysis technology; business intelligence and data warehousing for SaaS platforms; and data connectivity for aPaaS environments, with a focus on standards such as ODBC, JDBC, ADO.NET, and OData. He is an IBM Certified Consultant for IBM Cognos Business Intelligence and a TDWI member. He has presented sessions on data connectivity at various conferences including Dreamforce, Oracle OpenWorld, Strata Hadoop, MongoDB World, and the SAP Analytics and Business Objects Conference, among many others.
