Who remembers a failed technology project in healthcare? I’m guessing all hands went up, since an estimated (though not formally verified) 80 percent of such projects fail. Why?
Until recently, the answer was “we don’t know”, since research on new technologies was largely limited to randomised controlled trials that compared a technology-on arm with a technology-off arm. Such studies could answer only generic, decontextualised questions such as “Does the technology work on a small scale and under controlled conditions?” Some researchers studied the process of technology adoption by individuals – but they rarely studied non-adoption or abandonment, because studying a non-event, or an event at some point in the future, is much harder than studying a one-off event in the present or recent past. Very few studies considered the organisational dimension. When researchers studied real-world implementation at all, they focused on small-scale demonstration projects – and didn’t ask why such projects failed to extend locally (scale-up), extend more distantly (spread) or continue long-term (sustainability).
Fortunately, in the past 15 years or so, research has begun to fill these important knowledge gaps, and there is now enough empirical evidence in the literature to support a new framework for studying five things: the non-adoption and abandonment of technologies by individuals (staff and patients) and problems with scale-up, spread and sustainability. The diagram shows the findings from our systematic review and multi-site empirical case study on this topic: I’ve called it the NASSS framework.
NASSS has seven domains – the condition or illness, the technology, the value proposition (that is, the initial assessment of whether the technology is worth developing), the actual or intended adopters (staff, patients, caregivers), the organisation, the wider system (especially the policy, legal and regulatory context), and the process of adaptation over time.
Each of these domains can be simple (that is, few components and predictable – as in making a sandwich), complicated (many components but in a stable relationship to one another, hence ultimately predictable and resolvable – as in building a rocket) or complex (many components that are dynamically related and unpredictable – as in raising a child).
Take domain 1 (the illness), for example. A broken ankle is “simple” – but so is a heart attack (in that it is relatively straightforward to diagnose and has a clear treatment pathway). An example of a complicated illness is cancer, because it requires coordination of chemotherapy, surgery and radiotherapy (along with management of multiple and potentially serious side effects) – all dictated by an evidence-based care pathway. Now take a complex case – say an IV drug user who is also an alcoholic with psychosis and hepatitis C. And let’s say the person is also from an immigrant group, has uncertain citizenship status and speaks limited English. How much of this person’s trajectory can you predict with confidence? Is there a ‘pathway’ at all?
Now, take the technology. A simple technology – the telephone for example, or the kind of defibrillator you find on the walls in public places – is dependable, freestanding, cheap and substitutable (meaning that if the manufacturer withdrew from the market, you could easily get another one that would do the same job). A complicated technology is less dependable (it might be at risk of ‘crashing’ for example), less freestanding (e.g. it is designed to be ‘tethered’ to the patient’s medical record) and less substitutable (e.g. because of a block contract with the supplier). And a complex technology is one that is intended to be widely interoperable across multiple organisations and sectors and which almost certainly does not yet exist (tip: beware the ‘vapourware’ of politicians and salespeople).
And so on. The paper is open access, so you can read about how complexity in these and other domains (beware, for example, complexity in the relationships between organisations that are supposed to agree a shared budget to fund the technology and its implementation) can make a well-intentioned technology programme run into the sand.
The NASSS framework is new. We’ve tested it so far on video outpatient consultations, pendant alarms, GPS tracking devices for people with dementia, telehealth kit for heart failure, integrated data warehouses for risk assessment that span primary, secondary and social care, and various patient- and carer-held apps. The framework undoubtedly needs further testing. But on the basis of research to date, the NASSS hypothesis states that if all domains are in the ‘simple’ zone, the technology programme has a high chance of success. If some are complicated, there is still a good chance of success – but things will take a lot longer and be more expensive. But if the programme is characterised by multiple domains that are not just complicated but complex, it has little chance of ever being successfully implemented.
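To make the hypothesis concrete, here is a toy sketch in Python. This is my own illustration, not part of the framework or the paper: the domain names and the three-level rating come from NASSS, but the function, its name and the prognosis wording are invented for this example, and the handling of exactly one complex domain is an interpolation, since the hypothesis as stated doesn’t specify that case.

```python
# Toy illustration of the NASSS hypothesis (not a validated scoring system).
# Each of the seven domains is rated 'simple', 'complicated' or 'complex';
# the rule of thumb below paraphrases the hypothesis in the text above.

DOMAINS = [
    "condition", "technology", "value proposition",
    "adopters", "organisation", "wider system", "adaptation over time",
]

def nasss_prognosis(ratings):
    """ratings: dict mapping each domain to 'simple', 'complicated' or 'complex'."""
    complex_count = sum(1 for r in ratings.values() if r == "complex")
    if complex_count >= 2:
        # Multiple complex domains: limited chance of ever succeeding.
        return "limited chance of successful implementation"
    if complex_count == 1 or any(r == "complicated" for r in ratings.values()):
        # Some complicated (or, by my interpolation, a single complex) domain:
        # still feasible, but slower and more expensive.
        return "good chance of success, but slower and more expensive"
    # All domains simple.
    return "high chance of success"

# Example: a programme where every domain is simple.
all_simple = {d: "simple" for d in DOMAINS}
print(nasss_prognosis(all_simple))  # prints: high chance of success
```

The point of the sketch is only that the hypothesis is a graded judgement across all seven domains, not a checklist for any single one.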
By Professor Trish Greenhalgh, University of Oxford