Wired Magazine continues a fascinating thread, which I first picked up on Jonathan Schwartz's blog, on what may quickly become the next frontier for enterprise computing.
The problem can be framed quite succinctly: how do you transfer vast quantities of data from one place to another? Schwartz poses the problem: if you have a petabyte of data (that's a million gigabytes), what would be the most efficient way of transferring it from, say, San Francisco to Hong Kong? He goes on to paint a rather bleak picture:
"So if you had a half megabit per second internet connection, which is relatively high in the US (relatively low compared to residential bandwidth available in, say, Korea), it'd take you 16 billion seconds, or 266 million minutes, or 507 years to transmit the data."
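Schwartz's figures are easy to verify. A rough sketch, assuming decimal units (a petabyte as a million gigabytes, as he states):

```python
# Check Schwartz's arithmetic: pushing a petabyte over a 0.5 Mbit/s link.
PETABYTE_BITS = 1_000_000 * 1e9 * 8   # a million gigabytes, in bits
LINK_BPS = 0.5e6                      # half a megabit per second

seconds = PETABYTE_BITS / LINK_BPS
minutes = seconds / 60
years = seconds / (60 * 60 * 24 * 365)

print(f"{seconds:.1e} s, {minutes / 1e6:.0f} million min, {years:.0f} years")
# → 1.6e+10 s, 267 million min, 507 years
```

The numbers line up with the quote: 16 billion seconds, roughly 266 million minutes, just over 507 years.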
In fact, by his calculation you could record this amount of information on a set of hard disks with equivalent storage capacity, leisurely sail across the Pacific Ocean, and still deliver the information faster. Ridiculous as it sounds, this is a reasonable solution, until you run into the problem facing the Hubble telescope.
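The sailing comparison holds up under even conservative assumptions. A quick sketch (the three-week crossing time is my assumption, not a figure from the post):

```python
# Effective bandwidth of "sailing the disks": one petabyte carried on a
# hypothetical three-week Pacific crossing.
PETABYTE_BITS = 1_000_000 * 1e9 * 8   # a million gigabytes, in bits
CROSSING_SECONDS = 21 * 24 * 60 * 60  # assumed ~3-week voyage

effective_bps = PETABYTE_BITS / CROSSING_SECONDS
print(f"{effective_bps / 1e9:.1f} Gbit/s")  # vs. 0.0005 Gbit/s for the link
```

Even at a sailboat's pace, the effective throughput works out to several gigabits per second, thousands of times the half-megabit link.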
Google's Chris DiBona is reported to have met with NASA to determine an effective way of solving this problem, and I wonder whether it was with the knowledge that Schwartz's solution is not a practical approach. The solution: FedExNet. It works something like this: Google packages dedicated machines, which are then shipped to teams of scientists across the globe. Each team transfers its portion of the Hubble telescope data and returns the machines to Google, where the consolidation takes place and the archive grows. Should a team want the data back, the process can simply be reversed.
You can read more on what Google intends to do with all this data in the Wired article, but it's interesting that resorting to physical media remains the optimal solution. Given the infrastructure projections I've read, this approach seems likely to remain the best option for some time, but data consolidation at such massive scale will likely give rise to a new set of data access patterns.
Seismic shifts in query strategies, techniques and technologies are likely to follow, to ensure applications can extract discrete but sufficiently useful amounts of information from these mega-databases. Perhaps a community-driven query engine will emerge that leverages Web 2.0 tagging to splice together more efficient queries? Given the scale of the data, that may not turn out to be such a crazy idea...
View all posts from Jonathan Bruce on the Progress blog. Connect with us about all things application development and deployment, data integration and digital business.
Copyright © 2018 Progress Software Corporation and/or its subsidiaries or affiliates.
All Rights Reserved.
Progress, Telerik, and certain product names used herein are trademarks or registered trademarks of Progress Software Corporation and/or one of its subsidiaries or affiliates in the U.S. and/or other countries. See Trademarks for appropriate markings.