Mining data with new software and analysis.
When it comes to reliability, Murray Wiseman isn’t selling software, he’s selling “ideas.” The president of OMDEC (Optimal Maintenance Decision) Inc., a Toronto company that develops and applies leading-edge maintenance solutions, outlined some of these ideas and how achieving reliability is changing during a maintenance conference workshop.
Wiseman says today’s knowledge-based data collection systems handle the complexities of decisions, help to pinpoint elusive data items, and put them into a form that optimizes decision-making to achieve maximum equipment performance and reliability at minimum cost.
The volume of data will continue to increase and so will the power of computers to filter, sort and analyze data. The quest to maximize the use of this information about equipment repair and replacement has only just begun. Capitalizing on the powers and options that computer systems and software provide will keep us on the track to continuous improvement in maintenance practices, says Wiseman.
Despite new ideas, the real story hasn’t changed. Maintenance professionals still say that better-quality data would help them do a better job. More sophisticated and capable software, improved analysis techniques and emerging maintenance concepts will deliver that data.
One of these concepts is life cycle costing or LCC. The methodology – a process of achieving reliability from data – begins as an exercise in solving problems, making decisions among competing options, using data to determine the optimum approach, and then making the change. The decision itself is the result of thorough analysis, often handled by engineers and finance people working together.
Field-level inputs can be limited to data and opinions as evidence is gathered to feed the analysis. Once those decisions are made, they must be put into place. Some changes are straightforward and simple: replacing “like” with “like.” Others are far more complicated: replacing “x” with “y.”
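The LCC decision among competing options can be sketched as a simple net-present-value comparison. The figures and discount rate below are illustrative assumptions, not numbers from the article:

```python
# Hypothetical life-cycle-cost comparison of two replacement options.
# All costs, the horizon and the discount rate are invented for illustration.

def life_cycle_cost(acquisition, annual_cost, years, discount_rate):
    """Net present value of owning an asset over its service life."""
    npv = acquisition
    for year in range(1, years + 1):
        npv += annual_cost / (1 + discount_rate) ** year
    return npv

# "Like for like": cheaper to buy, costlier to run.
like_for_like = life_cycle_cost(acquisition=50_000, annual_cost=12_000,
                                years=10, discount_rate=0.08)
# "Replace x with y": higher capital cost, lower operating cost.
upgrade = life_cycle_cost(acquisition=80_000, annual_cost=6_000,
                          years=10, discount_rate=0.08)

best = "upgrade" if upgrade < like_for_like else "like for like"
print(f"{like_for_like:,.0f} vs {upgrade:,.0f} -> choose {best}")
```

With these assumed numbers the higher-capital option wins on total ownership cost, which is exactly the kind of result engineers and finance people would weigh together.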
Another concept is living RCM (LRCM), pioneered and developed by OMDEC over four years under Wiseman, who is often referred to as the father of LRCM. It extends basic reliability-centred maintenance into the day-to-day work order process: it updates the RCM knowledge base, synchronizes CMMS failure codes with changes to that knowledge base, and ensures that “perfect” data is transcribed onto the work order for reliability analysis.
In that way, LRCM provides an audit trail of evolving knowledge that records the progress in an organization’s understanding of each failure mode and its effects and consequences. Continuous knowledge refinement, as required by the LRCM process, improves the effectiveness of maintenance.
But which data will support optimal decision making, how should one transform the relevant data into optimal maintenance policies, and how should one verify the performance of those policies to improve them continuously?
Living RCM methodology addresses these challenging questions by integrating with the natural maintenance environment regardless of which technology platforms are in use, by linking significant work orders to the RCM knowledge base, and by generating unbiased samples for reliability analysis.
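The linking of work orders to the RCM knowledge base can be sketched in a few lines. The failure codes, descriptions and work order IDs below are hypothetical, and this is only a minimal illustration of the audit-trail idea, not OMDEC’s implementation:

```python
# Minimal sketch of linking significant work orders to an RCM knowledge
# base, building the sample used for reliability analysis.
from dataclasses import dataclass, field

@dataclass
class FailureMode:
    code: str
    description: str
    work_orders: list = field(default_factory=list)  # audit trail of evidence

# Hypothetical knowledge base keyed by CMMS failure code.
rcm_knowledge_base = {
    "BRG-01": FailureMode("BRG-01", "Pump bearing wear"),
    "SEAL-02": FailureMode("SEAL-02", "Mechanical seal leak"),
}

def link_work_order(wo_id: str, failure_code: str) -> FailureMode:
    """Attach a completed work order to its failure mode, so every repair
    event is traceable to the organization's understanding of that mode."""
    mode = rcm_knowledge_base.get(failure_code)
    if mode is None:
        raise KeyError(f"Unknown failure code: {failure_code}")
    mode.work_orders.append(wo_id)
    return mode

link_work_order("WO-1001", "BRG-01")
link_work_order("WO-1002", "BRG-01")
print(rcm_knowledge_base["BRG-01"].work_orders)  # ['WO-1001', 'WO-1002']
```

Because every work order is tied to a specific failure mode, the records form an unbiased sample of that mode’s history, regardless of which CMMS platform produced them.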
LRCM focuses on condition-based maintenance and on ways to improve confidence in predictive maintenance, which it achieves through a five-step process.
Condition-based maintenance, or CBM, has gained traction in the last few years and is the core of LRCM. It’s defined as the gathering, processing and analyzing of relevant data and observations to make good and timely decisions on whether to intervene immediately, to plan for maintenance at a specific time, or to continue operating the equipment until the next CBM inspection interval.
The criteria for the decision are the probability and the consequences of the failure. CBM is now the preferred method for proactive maintenance and a prerequisite for LRCM.
Modelling and simulation achieve availability improvements, cost reductions, optimized repair parts and maintenance resources, and extended equipment life. This is accomplished by evaluating multiple scenarios, producing detailed recommendations, and analyzing availability, parts needs and timing, and maintenance capacity.
DESIGN Pro (or DES) is a model development platform based on discrete event simulation, an extension of the Monte Carlo method, which was originally developed to describe the motion of atomic particles in a nuclear explosion. The technique is by no means confined to nuclear applications, and DESIGN Pro has spawned a more generally useful, maintenance-oriented methodology called DEMAND Pro.
It too is based on a model (a simplified representation of a system at a particular point in time that aids understanding of the real system) and on simulation, the manipulation of that model to compress system operations over time and reveal interactions and behaviours that would not otherwise be apparent.
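A toy Monte Carlo sketch shows the simulation idea: sample random failure and repair times over many runs and estimate equipment availability. The exponential distributions and parameters are assumptions for illustration, not DESIGN Pro or DEMAND Pro internals:

```python
# Toy Monte Carlo availability model: alternate random run times and
# repair times over a horizon, averaged across many simulated runs.
import random

def simulate_availability(mtbf: float, mttr: float, horizon: float,
                          runs: int = 2000, seed: int = 42) -> float:
    rng = random.Random(seed)          # fixed seed for repeatability
    uptime_total = 0.0
    for _ in range(runs):
        t, up = 0.0, 0.0
        while t < horizon:
            run_time = rng.expovariate(1.0 / mtbf)   # time to next failure
            up += min(run_time, horizon - t)
            t += run_time
            if t >= horizon:
                break
            t += rng.expovariate(1.0 / mttr)         # repair downtime
        uptime_total += up
    return uptime_total / (runs * horizon)

# Steady-state availability should land near MTBF / (MTBF + MTTR) = 0.9
print(round(simulate_availability(mtbf=90.0, mttr=10.0, horizon=1000.0), 3))
```

Compressing thousands of operating years into seconds of computation is what lets such models compare scenarios for spares, repair capacity and equipment life.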
All of these improved knowledge-based information systems are a great aid to better maintenance. Use them to keep continuous improvement in focus.
Steve Gahbauer is an engineer and Toronto-based freelance writer, the former engineering editor of PLANT and a regular contributing editor. E-mail firstname.lastname@example.org.
This article appears in the Nov/Dec. 2013 issue of PLANT.