Is Industrial Data Piling Up Faster Than Dirty Laundry?

December 08, 2017

Are you making use of all your valuable data? Learn about anomaly detection and prediction and how it addresses modern-day industrial challenges.

Leaner Times, Fatter Objectives

Many manufacturers face the predicament of rapidly depleting budgets, spiraling demand, and unpredictable machine downtime.

Left with little choice, plants are forced to run round the clock to meet objectives. Few realize that the key to addressing the seemingly unrealistic challenges now characteristic of any production cycle lies in tightening up operations, not stretching them to the breaking point. The problem with stretching is twofold: operations pushed to their limits are bound to suffer fatigue and failure over time, and once the inevitable failure happens, it's much harder to get back on track, so the consequences are far more catastrophic. Let's look at a few numbers. According to an ITIC survey, 98% of organizations say a single hour of downtime costs them over US$100,000, 81% put the cost above US$300,000, and 33% report costs of US$1–5 million per hour.

So the question is: how does one ensure tighter, smarter operations? In asset-intensive industries especially, machine failure and downtime are frustratingly common occurrences, and it's been proven time and again that planning for uninterrupted operations is practically impossible. Or is it?

Technology to the Rescue

Enter technology. What man can't do, technology can. Right? And there have been more than enough technological developments to pick from over the past few years. In particular, the Industrial Internet of Things (IIoT) caused quite a stir when it first entered the scene, promising a way to connect and monitor everything on the production floor at once. In essence, IIoT offered a much-needed opportunity to transform industrial processes and take them to the next level. Here was the key to unlocking every potential opportunity on the shop floor, whether by detecting maintenance issues before they spiraled out of control and halted processes, or by identifying additional production capacity.

Yet industrial operations today are largely the same as they were a decade ago. Aside from technological advancements in the form of better machines, little has changed in how those machines are actually worked. Why? The truth is, many stakeholders don't realize that unlike other cases of technology adoption, where all you need to do is buy, install, connect and use a new machine, IIoT isn't about using a machine to generate output. Many industries that have adopted IIoT have done just that: they've installed the sensors, networks and other paraphernalia required to run the IIoT ecosystem. And run it does, gathering data about every movement and every click since the day it was installed.

However, adopting IIoT means using this data to generate insights that can be leveraged to effect smart decisions, and thus smarter operations. Without this awareness, industries continue to operate largely blind, never tapping the goldmines of information right under their noses. In the meantime, production challenges continue to mount, morph and multiply.

Too Much Information

In fact, this partial implementation of IIoT has only added to existing industrial challenges. Industries are now left with massive data lakes and little idea of what to do with them.

In essence, organizations are struggling to cope with not just increasingly sophisticated machines, but also the data they’re generating – which is running into exabytes. Let’s look at just how much data we’re talking about, and how quickly it’s growing:

  • 90% of the world’s data was created in just the past two years at a speed of 2.5 exabytes/day
  • 52% of all current organizational data is classified as “dark data,” or data of unknown value, with another 33% being redundant/obsolete
  • Merely 15% of all data is business-critical; left unmanaged, the rest could cost organizations up to US$3.3 trillion to manage by 2020
  • An average medium-sized enterprise housing 1,000 TB of data spends an estimated US$650,000 per year storing non-critical information
  • An autonomous vehicle generates about 40 TB of data per eight hours of driving
  • Data analytics platforms are anticipated to constitute the fastest growing manufacturing segment by 2022, growing at a CAGR of 55.4%

Anomaly Detection and Prediction – Making Sense of the Noise

So what does this tell us? Yes, there's a lot of machine noise for sure. But it wouldn't be noise if it were filtered to identify the right signals – the blips on the radar. And why is this step missing? Largely because of deployment challenges, which can stem from various factors: standing up the appropriate data infrastructure, ensuring data quality and availability, and selecting appropriate data models, to name a few. Though the data gathered and analyzed varies significantly across industries, these challenges are largely similar. In fact, they are so common that Gartner found that although 48% of large companies had invested in big data by 2016, investment growth was declining due to deployment-related challenges. And though approximately 75% of respondents indicated that their organizations had invested in or were planning to invest in big data, many remained stranded in the pilot stage. According to Nick Heudecker, Research Director at Gartner, "The big issue is not so much big data itself, but rather how it is used."

What's needed, therefore, is a system to gather, analyze and synthesize data to identify variations from the normal – in other words, anomalies. Consider, for instance, a multi-billion-dollar and increasingly complex challenge facing every industry today: unpredictable asset downtime. Its repercussions are far-reaching and catastrophic, ranging from machine damage and downtime to expensive repair and replacement bills, not to mention the opportunity cost of halted production. Anomaly detection and prediction enables a proactive approach to asset failure, minimizing downtime and its associated risks. It gathers asset data (both real-time data from machine sensors and historical maintenance data), analyzes operational, failure and maintenance patterns, and identifies potential failures across asset types and stages, which in turn facilitates predictive maintenance. Recurring failures identified this way can be factored into risk-mitigation strategies, and the optimal strategy chosen to minimize or eliminate unplanned machine downtime.
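
To make this concrete, here is a minimal sketch of what such a detection step could look like, assuming Python with pandas and scikit-learn; the file name, sensor columns and contamination rate are illustrative assumptions, not part of any specific product:

```python
# Minimal anomaly-detection sketch: train on historical sensor data,
# then score live readings. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Historical sensor archive: one row per timestamped reading.
history = pd.read_csv("asset_sensor_history.csv")
features = history[["temperature", "vibration", "pressure"]]

# 'contamination' is the assumed fraction of anomalous readings in history.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(features)

# Score an incoming real-time reading: -1 flags a potential anomaly.
live = pd.DataFrame([{"temperature": 92.4, "vibration": 0.81, "pressure": 31.0}])
if model.predict(live)[0] == -1:
    print("Potential failure pattern detected - schedule maintenance")
```

In a real deployment, the scoring step would run continuously against streaming sensor data, and flagged readings would feed the risk-mitigation strategies described above.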

Who Doesn’t Want Operations Perfectly Aligned to Their Business Goals?

The adoption, integration and deployment of anomaly detection and prediction may vary across industries, business applications and asset types, but the common thread in every case is alignment with pre-determined business objectives. In manufacturing, for instance, it may be used to enhance productivity, maximize efficiency and minimize downtime; in automotive, to identify the optimal route to a chosen destination; and in utilities, to optimize power grids.

Whatever the objective, the success of anomaly detection and prediction lies in the gathering and synthesis of real-time data, as well as its correlation with historical data to produce actionable insights. This, exactly, is what you call smart operations. Only when you can leverage the latent insights in big data and effect appropriate process changes to yield effective results can you say that operations are truly smart.
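
As a simple illustration of that correlation step, the sketch below (plain Python, with made-up vibration values and an assumed three-sigma threshold) flags a live reading that deviates sharply from its historical baseline:

```python
# Correlating a real-time reading with its historical baseline via a
# z-score check; the threshold and sample values are illustrative.
import numpy as np

def is_anomalous(live_value, history, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations
    away from the historical mean for this sensor."""
    mean, std = history.mean(), history.std()
    if std == 0:
        return False  # flat history: nothing meaningful to compare against
    return abs(live_value - mean) / std > threshold

# Vibration readings (mm/s) pulled from an asset's maintenance archive.
vibration_history = np.array([0.42, 0.39, 0.45, 0.41, 0.44, 0.40, 0.43])

print(is_anomalous(0.44, vibration_history))  # False: within normal range
print(is_anomalous(0.95, vibration_history))  # True: far outside baseline
```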

In essence, anomaly detection and prediction is exactly what you need to derive actual value from your existing information. It helps minimize disruptions, make better-informed business decisions and enhance the bottom line – which one of these would you say no to?

To learn more about anomaly detection and prediction and how it addresses modern-day industrial challenges, download our ebook.

Read the ebook

Anita Rajasekaran

Anita Raj is a Product Marketing and Growth Hacking Strategist on the Progress DataRPM team, with over 10 years of experience in big data, cloud and machine learning. She brings deep expertise in growth marketing, with leadership experience at multi-billion-dollar enterprise companies and high-growth start-ups.
