I’ve been posting comments from an email discussion I had with Luis Ramos, one of our object database experts here at Progress (previous posts: Part 1, Part 2). In our last exchange, Luis commented on the reasons to switch from an RDBMS to an ODBMS, the market, and where to get more information.
In our final post for this special series, we cover the question: are ODBMS coming back? I put the question to our team of object database experts: Luis Ramos, Jeff Wagner and Adrian Marriott.
Me: Are we experiencing a new renaissance for ODBMS?
Adrian Marriott, Principal Consultant Progress Software: There are many technical reasons why OODBMS are attractive, and that's why they never went away. They've been used continuously by a huge number of programs worldwide for over two decades. There's no "renaissance."
In terms of the next 'big thing' that will drive adoption and faster growth for object databases, I think it's the arrival of solid-state drives. Hard discs with spinning platters use archaic technology, several orders of magnitude slower than memory access, and this encourages developers to think of fetching data 'out there' from disc. As solid-state drives become faster, particularly for writes, the idea of a seamless persistent object model that survives program invocations and can be used directly and transparently from disc will become more compelling.
Jeff Wagner, Product Manager ObjectStore: Adrian's right; Progress ObjectStore revenues have continued to beat our expectations year in and year out. We continue to see adoption of object database technologies as people search for ways to increase performance, lower their TCO and shorten their time-to-market. Object databases can outperform RDBMSs because most products offer an in-memory cache component, which can yield a performance increase of three or more orders of magnitude. TCO is much lower because developers are not burdened with object-to-relational mapping code, which can account for over 60% of the total code in an application that persists data in an RDBMS. This also means development time is reduced, lowering overall development costs and getting products to market faster. And maintenance costs drop once an object database is deployed, since it typically requires little or no administration.
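Jeff's point about mapping code can be sketched with a toy example. The snippet below is not ObjectStore code: it uses Python's standard-library `shelve` module as a stand-in object store, with hypothetical `Customer` and `Order` classes, to contrast persisting an object graph directly against the hand-written mapping an RDBMS-backed application would have to maintain:

```python
import os
import shelve
import tempfile

class Order:
    def __init__(self, sku, qty):
        self.sku = sku
        self.qty = qty

class Customer:
    """Plain domain object; no mapping code is needed to persist it."""
    def __init__(self, name, orders):
        self.name = name
        self.orders = orders  # direct references to Order objects

# With an object store, the object graph is saved and loaded as-is.
path = os.path.join(tempfile.mkdtemp(), "store")
with shelve.open(path) as db:
    db["alice"] = Customer("Alice", [Order("A-100", 2), Order("B-200", 1)])

with shelve.open(path) as db:
    alice = db["alice"]          # the graph comes back intact...
    first_sku = alice.orders[0].sku  # ...and references navigate directly

# The relational alternative needs mapping code in both directions; here is
# one direction, flattening the graph into rows linked by a foreign key:
def customer_to_rows(c):
    """Hand-written O-R mapping an RDBMS-backed app would maintain."""
    customer_row = (c.name,)
    order_rows = [(c.name, o.sku, o.qty) for o in c.orders]
    return customer_row, order_rows
```

The inverse mapping (rows back into an object graph), plus keeping both in sync as the schema evolves, is the code burden Jeff describes.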
Luis Ramos, Principal Consultant Progress Software: I concur with Adrian's and Jeff's points; these are excellent arguments. I would add that object databases have a fundamentally more scalable architecture because queries and processing are done on the client side, where the caches reside, rather than on the server side. To scale an RDBMS, you need a more powerful and expensive box. To scale an OODBMS, you can add as much off-the-shelf hardware as you need to host the clients and their caches.
Regarding solid-state drives, I am sure proponents of relational databases would argue that solid-state drives would boost the performance of their servers significantly as well. However, relational joins will still be expensive: a join is fundamentally an "n squared" operation, whereas following an object reference takes constant time. Add to that the cost of O-R mapping if the data needs to be cached at the client.
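Luis's complexity claim can be illustrated with a toy comparison on hypothetical data. (His "n squared" figure assumes a naive nested-loop join; real RDBMSs use indexed, hash or merge joins that do better, but they still perform key matching, whereas a stored object reference is followed in constant time.)

```python
# Relational representation: flat rows linked by a foreign key.
customers = [{"id": i, "name": f"c{i}"} for i in range(200)]
orders = [{"customer_id": i, "sku": f"sku{i}"} for i in range(200)]

def nested_loop_join(customers, orders):
    """Naive relational join: O(n * m) key comparisons."""
    return [(c["name"], o["sku"])
            for c in customers
            for o in orders
            if o["customer_id"] == c["id"]]

# Object representation: each order holds a direct reference to its customer.
class Customer:
    def __init__(self, name):
        self.name = name

class Order:
    def __init__(self, customer, sku):
        self.customer = customer  # followed in O(1), no key matching
        self.sku = sku

obj_customers = [Customer(f"c{i}") for i in range(200)]
obj_orders = [Order(obj_customers[i], f"sku{i}") for i in range(200)]

# One pass over the orders, O(m) total: just follow the stored reference.
pairs = [(o.customer.name, o.sku) for o in obj_orders]
```

Both approaches produce the same customer/order pairs; the difference is that the join pays a comparison cost per candidate row pair, while the object model pays nothing beyond dereferencing.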
Do you agree with Adrian, Jeff and Luis? Do you disagree? Feel free to comment in our comments section!
View all posts from Conrad Chuang on the Progress blog.
Copyright © 2018 Progress Software Corporation and/or its subsidiaries or affiliates.
All Rights Reserved.
Progress, Telerik, and certain product names used herein are trademarks or registered trademarks of Progress Software Corporation and/or one of its subsidiaries or affiliates in the U.S. and/or other countries. See Trademarks for appropriate markings.