What steps should you take to be ready to capitalize on Machine Learning, and why is now the time to do so? Learn the answers in part two of our series on demystifying Machine Learning. You can find the first part here.
Machine Learning is poised to take a massive leap in adoption in the near future, as most businesses have begun to develop Machine Learning strategies, with an increasing number in advanced stages. In our last post, we talked about what Machine Learning means, how machines really learn and how it can help you. But Machine Learning is not a new concept—it’s been around since the late 1950s—so why is now the time to adopt? And once you decide to adopt, how do you develop your strategy? These are crucial questions that need demystifying before a Machine Learning plan can be put in place.
In the previous post in this series, we mentioned that processing the flood of data from modern devices can readily be automated by machines, which is the promise of Machine Learning. However, it’s only in recent years that this idea has moved from theory into practice. Two key advances in technology have enabled this change.
Deriving patterns from data has always been possible, but most problems had a severe lack of data associated with them. If you didn’t know exactly how each cog in your industrial machine was being used, not just once but at all times, how accurate could any predictions about the whole device really be? Today there is a growing recognition of the value of the Industrial Internet of Things (IIoT)—for example, oil rigs are already packed with as many as 40,000 data tags.
This data is the raw material needed for Machine Learning, but by itself its value is limited. There’s too much data for a human to really comprehend. To make it actionable, it needs to be refined further.
The growth in computational power has been exponential. This summer, the world’s top supercomputer had a peak speed of about 125,500 Teraflops. To put that in context, that’s almost 54 times faster than the leader in 2010, over 39,000 times faster than at the turn of the millennium and nearly a million times more powerful than in 1993. It’s safe to say things are possible today that simply weren’t realistic even a few years ago, and enormous computational power is now available at a scale that is accessible to the SMB market as well as the major multinational enterprises.
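As a rough illustration, the speedup ratios above can be reproduced with simple division. The per-year teraflop figures below (other than the 2018 number quoted in this post) are assumptions drawn from publicly available TOP500 peak-speed data, not from this article:

```python
# Approximate peak speeds (in teraflops) of the top-ranked supercomputer
# in selected years. The pre-2018 values are assumed from public TOP500
# lists; the 2018 value is the figure cited in this post.
peak_tflops = {
    1993: 0.131,     # Thinking Machines CM-5 (assumed)
    2000: 3.2,       # ASCI Red (assumed)
    2010: 2331.0,    # Jaguar (assumed)
    2018: 125500.0,  # leader cited above
}

current = peak_tflops[2018]
for year in (2010, 2000, 1993):
    ratio = current / peak_tflops[year]
    print(f"The 2018 leader is roughly {ratio:,.0f}x faster than in {year}")
```

Running this yields ratios of about 54x, 39,000x and nearly a million, matching the comparisons above.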
In parallel with raw computational power, complex new algorithms have been developed to allow data scientists to run models using all available data. Previously models had to be generalized to simplify the analytics process, but machine comprehension can now ingest 100% of the data generated by every asset or person. The result is a far higher degree of accuracy than could be achieved with human analysis alone.
While it’s obvious that increasing prediction accuracy is a generally good goal, the impact of even small gains in accuracy can be deceptively powerful. It could revolutionize manufacturing, not only by predicting machinery failure and avoiding costly downtime, but also by informing warranty claims, risk mitigation, part harmonization and cost-benefit analyses. Armed with this predictive knowledge, manufacturers can better manage recalls, which can cost automakers over a million dollars a day. According to McKinsey, manufacturing alone can save $630 billion a year by 2025 with predictive maintenance.
We’ve now covered the basics of how machine learning works, and why the time is right for adoption. However, a crucial question remains—how do you actually implement these changes? While there will be changes to the data scientist lifecycle, from a business perspective, you do not need to be an expert in math to determine the best way to leverage Machine Learning for your organization.
Currently data scientists are tasked with manually creating models used to predict problems. They attempt to identify patterns in historical data that indicate past failures, and then apply this model to current machine data looking for matching patterns. Unfortunately, our research shows that in many scenarios, only 20% of failures repeat a known pattern, while the remaining 80% appear random. A model that catches only 20% of failures is a recipe for future problems.
That is why our Cognitive Predictive Maintenance approach models the normal state. Instead of identifying past failure states, we model normal operating behavior and then compare current machine data against it to identify anomalies. This allows organizations to identify a much higher share of potential problems, leading to highly accurate predictions. The platform constantly tunes the model with the latest data to ensure that changes to the operating environment don’t degrade the quality of the predictions. This process is automated so that data scientists are freed from tedious steps and can spend more time making informed decisions.
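To make the normal-state idea concrete, here is a minimal sketch, not the actual Cognitive Predictive Maintenance platform: it estimates the mean and spread of a sensor reading under known-healthy operation, then flags current readings that fall far outside that normal band. The sensor values and the three-standard-deviation threshold are illustrative assumptions:

```python
import statistics

def fit_normal_state(readings):
    """Model 'normal' as the mean and standard deviation of healthy data."""
    return statistics.mean(readings), statistics.stdev(readings)

def find_anomalies(readings, mean, stdev, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    return [x for x in readings if abs(x - mean) / stdev > threshold]

# Temperature data collected while the machine was known to be healthy
normal_temps = [70.1, 69.8, 70.4, 70.0, 69.9, 70.2, 70.3, 69.7]
mean, stdev = fit_normal_state(normal_temps)

# Current readings: one value drifts far outside the normal band
current_temps = [70.0, 70.2, 84.5, 69.9]
print(find_anomalies(current_temps, mean, stdev))  # -> [84.5]
```

In practice the normal-state model would be refit continuously as fresh healthy data arrives, mirroring the automatic tuning described above, and a real system would combine many sensors rather than a single temperature stream.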
As a business leader, it’s not necessary to understand every algorithmic nuance to see how predictive analytics can be applied to your organization. What you need to understand is how your business objectives can be met with analytics.
There are several concrete steps you can take to make sure you’re prepared. If you’re reading this, you’re already doing the first one, which is to educate yourself on the basics of the technology and its business value. Once you have that baseline of understanding, it’s time to implement it in your organization.
At Progress, we pride ourselves on delivering everything you need to develop the business applications of the future. Our cognitive-first solution makes it easy to build your app at every level, from UX to data connectivity to cognitive intelligence as a service, with first-class support at every step. Whether you need a full stack for an end-to-end application, or have an existing product to enhance, we’re here to help.
Mark Troester is the Vice President of Strategy at Progress. He guides the strategic go-to-market efforts for the Progress cognitive-first strategy. Mark has extensive experience in bringing application development and big data products to market. Previously, he led product marketing efforts at Sonatype, SAS and Progress DataDirect. Before these positions, Mark worked as a developer and developer manager for start-ups and enterprises alike. You can find him on LinkedIn or @mtroester on Twitter.
Copyright © 2018 Progress Software Corporation and/or its subsidiaries or affiliates.
All Rights Reserved.
Progress, Telerik, and certain product names used herein are trademarks or registered trademarks of Progress Software Corporation and/or one of its subsidiaries or affiliates in the U.S. and/or other countries. See Trademarks for appropriate markings.