How predictive maintenance is finally coming of age.
“Please write me a sonnet on the subject of the Forth Bridge.” This probing question was famously part of computing pioneer Alan Turing’s test to determine whether computers are capable of thinking.
If AI could pass this test, Turing suggested, then the computer could finally be said to ‘think’.
As it happens, Scotland’s Forth Bridge is also famous for its permanent maintenance team, which was forever repainting the steel structure to protect it from the wind and rain. Australian actor Paul Hogan did the same job on the Sydney Harbour Bridge before he got his big break as Crocodile Dundee. Painting a bridge by starting at one end and simply repeating the process when you reach the other is a perfect example of a reactive maintenance strategy.
Nearly 70 years on from the Turing test, AI still can’t write great sonnets. But in the face of changeable external factors like weather, it can certainly improve the performance of physical assets by informing predictive maintenance. The insight that AI delivers enables people and machines to make better decisions, and it’s those better decisions that yield valuable efficiencies.
Predictive insight comes in many forms. By monitoring equipment for vibration or noise, sensors can detect a developing fault before it causes a failure, while infrared monitoring can do the same job for electrical equipment. Whatever the method, predictive insight is fundamentally about improving productivity, reliability and efficiency while reducing risk.
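As a concrete illustration, vibration monitoring of this kind can be as simple as flagging readings that drift far from their recent baseline. The sketch below is a minimal, illustrative example — the window size, threshold and synthetic data are assumptions, not a production condition-monitoring system.

```python
# Minimal sketch: flag vibration readings that sit far above the
# rolling baseline of recent samples. Parameters are illustrative.
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations above
    the rolling baseline of the previous `window` samples."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (readings[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# A healthy bearing gives steady readings; a developing fault
# shows up as a spike well above the baseline.
signal = [1.0 + 0.01 * (i % 5) for i in range(40)] + [2.5]
print(detect_anomalies(signal))  # the spike at index 40 is flagged
```

In practice the baseline model would be richer (frequency-domain features, seasonality, per-machine calibration), but the shape of the problem — compare live readings against a learned notion of normal — is the same.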
Real-World Data
If you’re managing a large asset, you might have the asset’s internal data, process and knowledge under control, but what about the external real-world variables that are also at play? Inevitably, weather, climate and many other external factors are already having an impact on your maintenance regimes, and with it your ability to deliver productivity objectives.
Adding Sensors to Your Super Model
No physical asset lives in a bubble, and it’s worth thinking about how to connect it to the data sources that represent the real world outside the physical bounds of the facility. By attuning your Super Model to the external environment via sensors, you can begin to generate the predictive insights that make for better decision-making, which will, in turn, allow for continuous improvement.
A recent example of putting real-world data to good use is how Google used predictive analytics powered by DeepMind AI to optimise energy use in their data centres. Thousands of sensors were already capturing valuable data about temperature, power usage and cooling pump speeds. This historical data was used to train the neural networks that succeeded in reducing the energy consumption of their cooling systems by 40%. Importantly, the DeepMind team recognised that each data centre was unique in terms of its architecture and external environment. By designing the predictive model so that it could react immediately to local temperature and pressure variables, the recommended actions were finely tuned to the unique conditions of each data centre.
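On a toy scale, the underlying pattern is: fit a model to historical sensor readings, then use it to score candidate operating setpoints before applying them. The sketch below is a deliberately simplified stand-in — single-feature least squares on invented numbers — and not DeepMind’s actual neural-network approach.

```python
# Toy version of the pattern: learn energy-vs-setpoint from history,
# then predict the cost of a candidate setpoint before applying it.
def fit_linear(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Synthetic history: cooling pump speed (%) vs cooling energy (kW).
speeds = [40, 50, 60, 70, 80]
energy = [210, 255, 300, 345, 390]
a, b = fit_linear(speeds, energy)

def predict(speed):
    return a * speed + b

print(predict(55))  # expected energy at an untried setpoint
```

The real system layered many correlated inputs (temperature, pressure, pump speeds) into neural networks and retrained per site — which is exactly the point the article makes about each data centre’s unique conditions.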
In the field of supply chain logistics, Glasgow’s Streamba are working on sophisticated predictive analytics that take account of the effects of weather when scheduling a support vessel’s voyage to drilling platforms out at sea. By understanding the effects of wave height on the vessel’s ability to service the platform, Streamba’s VOR technology is able to align forward demand prediction with weather conditions. This allows the operator to optimise the voyage and minimise non-productive time on the journey.
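The scheduling idea can be sketched very simply: find stretches of the forecast where wave height stays below the vessel’s operating limit for long enough to complete the job. The limit, window length and forecast numbers below are invented for illustration; Streamba’s actual VOR algorithms are not public.

```python
# Hedged sketch: find sailing windows where forecast wave height
# stays under the vessel's operational limit long enough to work.
def sailing_windows(forecast, limit_m=2.5, min_hours=6):
    """Return (start, end) hour-index pairs where wave height stays
    below `limit_m` for at least `min_hours` consecutive hours."""
    windows, start = [], None
    for hour, height in enumerate(forecast):
        if height < limit_m:
            if start is None:
                start = hour
        else:
            if start is not None and hour - start >= min_hours:
                windows.append((start, hour))
            start = None
    if start is not None and len(forecast) - start >= min_hours:
        windows.append((start, len(forecast)))
    return windows

# Hourly wave-height forecast in metres (illustrative numbers).
forecast = [3.1, 2.0, 1.8, 1.5, 1.4, 1.6, 1.9, 2.2, 3.5, 3.0, 2.1, 2.0]
print(sailing_windows(forecast))  # one workable window, hours 1-8
```

Aligning those windows with forward demand is then a scheduling problem: the operator picks the window that minimises non-productive time across the whole voyage.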
Real-world data encompasses communities as well as the natural elements. Before a hospital is built, predictive analytics can analyse data on the surrounding population’s disease patterns, demographics and longevity. From this data it can glean useful insights such as likely medical outcomes and projected readmission rates, all of which will influence the size and scope of the hospital’s facilities. At a plant or factory, data on the fly-in fly-out lifestyle of contractors can potentially shed light on safety incidents.
Sensors also have a role to play in reducing the impact of extreme weather events on an asset’s operations, especially as climate change shifts those external parameters. With extreme events becoming more frequent and more intense, you need to anticipate the effects they might have.
When an unexpected cold weather event hits, for instance, you might be hundreds of miles away. With the roads buried under snow there’s a good chance you won’t be able to send anyone out to check on the integrity of your facility.
With sensors in place, extreme weather can be tracked as it approaches. Engineers monitoring that data remotely can start modelling different potential outcomes on the asset’s digital twin. This will enable them to identify where problems are likely to occur on the physical asset, triggering preventative measures on the ground.
When the cold weather event actually hits, those sensors can measure the performance stresses on your asset in real-time. That gives you a much better picture of the ongoing impact on operations, and can enable some remote process tuning responses to get the best production from the plant without shutting it down. Once teams on the ground are able to move, they can then prioritise the most important repair and maintenance interventions, guided by the data that you send to them on mobile tools.
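One simple way to turn that stress data into a prioritised work list is to rank components by how close their measured load sits to its design limit. The component names and figures below are illustrative assumptions, not data from any real facility.

```python
# Sketch: rank components by utilisation (measured load / design
# limit) so field teams tackle the closest-to-failure items first.
def prioritise(readings):
    """Sort components by utilisation ratio, worst first."""
    return sorted(readings,
                  key=lambda r: r["measured"] / r["limit"],
                  reverse=True)

# Hypothetical post-storm stress readings against design limits.
readings = [
    {"component": "pipe rack A", "measured": 180.0, "limit": 200.0},
    {"component": "valve house", "measured": 40.0, "limit": 160.0},
    {"component": "cooling line", "measured": 95.0, "limit": 100.0},
]
for r in prioritise(readings):
    print(r["component"], round(r["measured"] / r["limit"], 2))
```

Pushed to a technician’s mobile device, a ranked list like this is exactly the kind of data-guided intervention the paragraph above describes.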
Communication breakdowns and delays are not uncommon in the aftermath of an extreme weather event, so blockchain technologies can provide a failsafe here, giving all stakeholders access to payment and delivery data, for example, even when legacy systems are out of action or destroyed. That way secure and efficient transactions can continue to flow, enabling logistical operations to swing back into action as soon as possible.
In the aftermath of the Fukushima nuclear meltdown, sensors harnessed to the power of open data have been used to map contamination levels for the affected population. On top of the 500 external sensors that feed into a web-hosted database, empowered citizens voluntarily upload radiation measurements of their own, helping everyone to avoid contamination hotspots when planning a journey.
External data is available from all kinds of sources, including open source climate data. So decide what’s going to be useful, and hook it up to your Super Model. This will provide a much broader picture of what’s going on across all aspects of your asset, which will allow you to deliver better decisions.
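Hooking up an external feed can start very modestly — for example, deriving a freeze-risk feature from an open temperature dataset and feeding it to your predictive models alongside internal sensor data. The CSV layout, column names and threshold below are assumptions for illustration, not any specific open-data API.

```python
# Sketch: derive a simple freeze-risk feature from an open climate
# feed delivered as CSV (layout and threshold are illustrative).
import csv
import io

OPEN_DATA = """timestamp,temp_c
2024-01-01T00:00,-8.5
2024-01-01T01:00,-9.1
2024-01-01T02:00,-3.0
"""

def cold_hours(feed, threshold_c=-5.0):
    """Count hours below a freeze-risk threshold -- a crude external
    feature a predictive model could use alongside internal sensors."""
    reader = csv.DictReader(io.StringIO(feed))
    return sum(1 for row in reader if float(row["temp_c"]) < threshold_c)

print(cold_hours(OPEN_DATA))  # 2 hours below -5 degC
```

Even a feature this crude broadens the model’s view of the asset; richer feeds (wind, precipitation, river levels) slot in the same way.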
With better real-world information flowing into your predictive analytics models, continuous improvement has just got a whole lot easier.
Meanwhile, all that historical data, together with the lessons learned, is captured and stored in the knowledge repository of your Super Model. This improves overall risk assessment and can help mould the designs of future assets so that they are more resilient in the face of climate events.
There’s no better testament to lessons learned than the Forth Bridge, which is still in use 128 years after it was first opened to railway traffic. Its elegant but strong cantilevered architecture was designed reactively in the wake of the Tay Bridge disaster, whose collapse had plunged 75 train passengers to their deaths in 1879.
In the nineteenth century, risk assessment for large infrastructure projects of this kind was the domain of the Astronomer Royal, who unfortunately lacked the insights afforded by AI-powered predictive analytics. As a result, he advised the Tay Bridge’s designer, Sir Thomas Bouch, that wind load could safely be ignored.
It only took a winter gale to prove him wrong.