Time to Repair – KPI, Wicked Problem, or Both?
Time to repair equipment or restore capability in response to a trouble ticket is appealing as a KPI in a service contract. But is this number too variable to pin down and agree on? Contractor circumstances, contingencies beyond the control of either the contractor or the FM, and the urgency the FM’s parent organization attaches to correcting the problem can combine to make setting a standard time to complete unscheduled field service almost a wicked problem. Wicked problems, a recognized genre for over half a century (background), are at some level insoluble because, among other onerous characteristics, they resist yielding a single, reliable answer. Time to repair by a contractor is admittedly not on the scale of social policy or international diplomacy, where wicked problems are most often identified and studied. But in outsourced, wide-scope, on-call maintenance, human behavior operates within sometimes incomplete, contradictory, and changing circumstances that can resist benchmarking.
A Key Performance Indicator must be, as Stacey Barr shows, clearly understood, convenient to obtain, and practical to use to track and guide performance and measure improvement. But not all similar trouble tickets are equal in the work they cause. Contractor staff available on a given day and time, parts, weather, travel time, and sometimes permits, approvals, and engineering can all be factors. Still, we want to know: is the contractor’s typical time to restore justifiable, or could the contractor do better?
Take a simple repair: replacing a failed GFI outlet. If the outlet is in a busy conference room in the executive suite, say, discovered on a Monday morning with meetings and presentations coming up, that is a different problem from the failure of one of five GFI outlets along a bathroom counter. The first warrants an electrician pronto; the second, repair when convenient, possibly coincident with another ticket to save travel and time. This one is easy enough to resolve, of course: the maintenance planner and contractor set the appropriate urgency in the CMMS or over the phone. One repair ticket closes the same day, the other in possibly as much as ten days, for the same satisfactory work. Unplanned maintenance is rich in contingencies. Until plenty of data accumulates to analyze, an FM is setting a benchmark into loose mud.
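For measurement purposes, the difference between those two tickets is a matter of agreed priority, not workmanship. A minimal sketch of how a planner’s urgency setting might translate into a target restore window; the priority names and hour figures here are illustrative assumptions, not terms from any actual contract:

```python
from datetime import datetime, timedelta

# Illustrative priority-to-target mapping (hours to restore).
# Actual windows are contract-specific assumptions, not standards.
TARGET_HOURS = {"urgent": 8, "routine": 240}  # same business day vs. ~10 days

def due_by(opened: datetime, priority: str) -> datetime:
    """Target restore deadline for a ticket opened at `opened`."""
    return opened + timedelta(hours=TARGET_HOURS[priority])

def on_time(opened: datetime, closed: datetime, priority: str) -> bool:
    """Did the ticket close within its agreed window?"""
    return closed <= due_by(opened, priority)

opened = datetime(2024, 3, 4, 8, 0)  # a Monday morning
# Executive-suite GFI, flagged urgent; closed mid-afternoon the same day.
print(on_time(opened, datetime(2024, 3, 4, 15, 0), "urgent"))    # True
# Bathroom GFI, flagged routine; closed eight days later.
print(on_time(opened, datetime(2024, 3, 12, 10, 0), "routine"))  # True
```

Both tickets count as satisfactory because each is judged against its own agreed window, not against a single raw time-to-close.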
In an ongoing IFMA Engage discussion, Kamran Wahab in Dubai asked whether there is an internationally recognized benchmark for HVAC reactive maintenance, say, the number of reactive call-outs per month in a given facility. Gordon Rogers responded that, while he knew of no practical single number that didn’t take in the particulars of the system(s), “the best benchmark data I have seen is published by CBRE Whitestone in their ‘Costlab’ product” (which collects and analyzes cost data). Gordon continued with guidelines for approximating a benchmark as a practical estimate based on multiple factors. I followed Gordon in response, noting that, when properties are spread out geographically and trouble calls are few for each property, bracketing in on a benchmark could be slow. Furthermore, as another colleague points out, if a contractor proposes a performance benchmark without presenting records and data, how valid is that performance number?
Fortunately, FMs can break a maintenance case down into times to acknowledge a ticket, mobilize, travel, perform the work, report, and obtain owner/occupant confirmation. FMs and the contractor can agree on time to acknowledge a ticket, plan and mobilize a response, and, after restoration, time for record updating and reporting. Valid onsite maintenance working times are published in, for example, RSMeans. Travel times are addressable too, by agreeing on a typical trip and time of day, using Google. That leaves parts, but contractors and their suppliers can shrink parts delivery times, often to overnight. The other contingencies mentioned also have flexibility. A contractor can keep an eye on the weather. Relationships can have sway in getting permits and inspections, and in obtaining rental equipment. Go with your estimates to start. Write down the processes and fill in the times. If you want to check the variation to expect, feasible options are available. Palisade @Risk, for example, can take an Excel worksheet and simulate hundreds of maintenance events in wide variety, using ranges that you specify for each condition, all in a few minutes. You can then change and recombine conditions and factors any way you like, to learn whether the uncertainties in view really have much influence on cost and performance.
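The same idea can be tried without specialized software. Below is a minimal Monte Carlo sketch in Python: each simulated ticket sums component times drawn from triangular (min, most-likely, max) ranges, the same kind of ranges an FM and contractor would agree on per step. The component names and hour figures are illustrative assumptions, not published benchmarks:

```python
import random

# Hypothetical component time ranges in hours: (min, most likely, max).
# These figures are illustrative assumptions, not industry data.
COMPONENTS = {
    "acknowledge":    (0.1, 0.5, 2.0),
    "plan_mobilize":  (0.5, 2.0, 8.0),
    "travel":         (0.3, 1.0, 3.0),
    "parts":          (0.0, 4.0, 24.0),  # zero if on the truck, up to overnight
    "onsite_work":    (0.5, 1.5, 4.0),
    "report_confirm": (0.2, 0.5, 2.0),
}

def simulate_ticket(rng: random.Random) -> float:
    """One simulated ticket: sum of triangularly distributed component times."""
    return sum(rng.triangular(lo, hi, mode) for lo, mode, hi in COMPONENTS.values())

def simulate(n: int = 10_000, seed: int = 42) -> dict:
    """Simulate n tickets and summarize the time-to-restore distribution."""
    rng = random.Random(seed)
    times = sorted(simulate_ticket(rng) for _ in range(n))
    return {"median": times[n // 2], "p90": times[int(n * 0.9)], "max": times[-1]}

if __name__ == "__main__":
    stats = simulate()
    print(f"median {stats['median']:.1f} h, 90th percentile {stats['p90']:.1f} h")
```

Even this toy version makes the benchmarking point: a single “time to repair” number hides a wide, skewed distribution, and the spread between the median and the 90th percentile shows how much of it is driven by a few slow components such as parts.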
So much for wickedness: DOA. So, can we reasonably expect that individual or average times to close tickets can serve as a KPI? One purpose of KPIs is to know whether a work process is performing reliably. But just as important is to verify improvements or spot problems. Subject to a simple but rigorous analysis (Stacey Barr), KPIs can show whether improvement really takes place when a process changes, or whether genuine problems turn up at some point. Do you and your contractor expect performance to improve in ways that your FM customers think important? If not, why not? Time to repair and return the equipment or system to service not only affects occupant satisfaction but is linked, directly or indirectly, to economy, reliability, asset management, and the risk of severe outages affecting business goals.
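One such simple but rigorous analysis, and the one Stacey Barr recommends for KPIs, is the XmR (individuals and moving range) chart: natural process limits are set at the mean plus or minus 2.66 times the average moving range, and points outside the limits signal a real change rather than routine variation. A minimal sketch, with made-up daily close times in hours for illustration:

```python
def xmr_limits(values):
    """Natural process limits for an XmR chart: mean ± 2.66 × average moving range."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    lower = max(0.0, mean - 2.66 * avg_mr)   # a close time cannot be negative
    upper = mean + 2.66 * avg_mr
    return mean, lower, upper

# Hypothetical average daily ticket close times in hours (made-up data).
close_times = [6.5, 8.0, 5.5, 7.2, 9.0, 6.8, 24.0, 7.5, 6.1, 8.4]

mean, lower, upper = xmr_limits(close_times)
signals = [(i, v) for i, v in enumerate(close_times) if v < lower or v > upper]
print(f"limits: {lower:.1f} to {upper:.1f} h; signals: {signals}")
```

On this toy series the 24-hour day falls outside the upper limit and gets flagged, while ordinary day-to-day variation does not; that is exactly the distinction an FM needs when deciding whether a process change really improved time to repair.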
Something old and something new come to mind about service performance. The “something old” is to build and maintain communication and trust. Flexibility and cooperation flow from staying in touch. People remember how you make them feel, and fairness carries weight. Good will, cooperation, and communication never wear out their welcome when performance is the objective.
The “something new” is the capability maturity of the contractor and of your FM organization. What is that? Simply this: an organization that strives to improve performance, and expects to succeed, isn’t born that way. That organization, big or small, learns to meet problems and contingencies with steadily more refined processes: merely reactive at first, and eventually self-organized, with the competence to measure, innovate, and improve. The CMMI Institute develops capability maturity levels for service organizations.
There are plentiful, inexpensive and uncomplicated ways to solve problems and tune reactive maintenance. That is a subject for a later blog.
Your comments and discussion are always welcome on Engage, by e-mail, in a group, or on social media.