The Big Data and Cloud “movements” have acted as facilitators for tremendous growth in fit-for-purpose databases. However, this does not come without a new set of challenges in how we access the data through our business-critical applications. I’ve written an in-depth piece for both Cloud Computing Journal and Big Data Journal that looks at the evolution of these data access methods in order to better understand why we are in the mess we are in today. Check out the excerpt below, and click on the images to read more.
From Cloud Computing Journal:
The Evolution of Data Sources
Back in the '80s, the development of relational databases brought with it a standardized SQL protocol that could be easily implemented within mainframe applications to query and manipulate data. These relational database systems supported transactions very reliably through what was called "ACID" compliance (Atomicity, Consistency, Isolation, and Durability). These databases provided a very structured method of dealing with data and were very dependable. But ACID compliance also brought along a lot of overhead processing, and with it a downside: these systems were not optimized to handle large transaction requests, nor could they handle huge volumes of transactions. To counteract this, we've made some significant performance and throughput enhancements within data connectivity drivers that lit a fire under SQL speeds and connectivity efficiencies.
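To make the ACID idea concrete, here is a minimal sketch of atomicity using Python's built-in `sqlite3` module (the table, account names, and simulated failure are illustrative, not from the article): a transfer that fails midway is rolled back as a unit, so no partial update ever becomes visible.

```python
import sqlite3

# Illustrative in-memory database with a simple accounts table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 100)")
conn.commit()

try:
    with conn:  # transaction: commits on success, rolls back on exception
        conn.execute(
            "UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'"
        )
        # Simulated crash between the debit and the credit.
        raise RuntimeError("simulated failure mid-transfer")
        conn.execute(
            "UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'"
        )
except RuntimeError:
    pass

# Atomicity: the debit was undone along with the rest of the
# transaction, so both balances are unchanged.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)
```

The same rollback-on-failure guarantee is exactly the overhead the paragraph above refers to: logging and locking enough state to undo a partial transaction is what made early ACID systems reliable but comparatively slow under heavy load.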
View all posts from Jeff Reser on the Progress blog. Connect with us about all things application development and deployment, data integration and digital business.