Tag Archives: Technology

Factors Associated with De-Adoption

CLAHRC WM News Blog readers know about factors associated with adoption of new technology. Where the treatment is within the gift of a single clinician, the following barriers / facilitators determine the probability of adoption:

  1. The strength of the evidence.
  2. Prior beliefs – when a person has no strong opinion, evidence of a given strength will be more influential than when it must compete with strong prior beliefs.[1] For example, I would take some convincing that homeopathy is effective.
  3. Psychological approach – when the new evidence requires practitioners to give up something they are accustomed to doing, change is harder to achieve. (X-rays came into routine use within four years of Röntgen’s discovery, while antisepsis took over a generation.)
  4. Psychological predisposition – according to Rogers, some people are psychologically predisposed to be early adopters or laggards (but this can be specific to the technology concerned).
  5. Role models and other forms of influence from the social environment.
  6. The presence of subconscious ‘cues’ in the environment – nudge theory.[2]
  7. Financial incentives at the personal level – but watch out for perverse effects.

When adoption is not in the gift of individual clinicians, the organisation as a whole has to respond. Many barriers / facilitators can be encountered.

  1. Changing supply chains so that the appropriate technology is available and can be maintained. This is a large barrier in low-income countries.
  2. Arranging for training / education when a new technology supplants an existing technology.
  3. Support across the organisational hierarchy to send out the right social ‘signals’ (see also above).
  4. Co-ordination across boundaries – between different professions and across organisations. We have discussed barriers and facilitators to such cross-boundary working in previous blogs.[3]
  5. Financial incentives at the organisational level,[4] although again these can have negative side-effects.[5] [6]
  6. Fit with established workflows and the immediate demands of a situation – a particular problem with IT, as described in previous blogs.[7] [8] Put simply, the more disruptive the technology, the harder change is to achieve and the greater the danger that adoption will introduce new risks.

All of the above require an organisation to have the time and people to solve problems – the concept of absorptive capacity, which has been explored in our CLAHRC.[9]

But what about de-adoption; does that have different features? This topic was studied in a recent BMJ paper.[10] The authors looked at individual-level features associated with de-adoption of carotid revascularisation – procedures that are falling out of vogue, but which are still indicated in some cases. Here clinicians should ‘exnovate’ by scaling back rather than eschewing the procedure completely. More experienced physicians and smaller practices were associated with faster exnovation but, strangely, patient factors were not. The authors suggest that early adopters tend to be early de-adopters. Far from convincing me that there is something special about de-adoption / exnovation, the evidence presented did not suggest that the factors are qualitatively different from those associated with adoption in the first place.

— Richard Lilford, CLAHRC WM Director


  1. Johnson SR, Tomlinson GA, Hawker GA, Granton JT, Feldman BM. Methods to elicit beliefs for Bayesian priors: a systematic review. J Clin Epidemiol. 2010; 63(4): 355-69.
  2. Lilford RJ. Demystifying Theory. NIHR CLAHRC West Midlands News Blog. 10 April 2015.
  3. Lilford RJ. Evaluating Interventions to Improve the Integration of Care (Among Multiple Providers and Across Multiple Sites). NIHR CLAHRC West Midlands News Blog. 10 February 2017.
  4. Combes G, Allen K, Sein K, Girling A, Lilford R. Taking hospital treatments home: a mixed methods case study looking at the barriers and success factors for home dialysis treatment and the influence of a target on uptake rates. Implement Sci. 2015; 10: 148.
  5. Lilford RJ. Financial Incentives for Providers of Health Care: The Baggage Handler and the Intensive Care Physician. NIHR CLAHRC West Midlands News Blog. 25 July 2014.
  6. Lilford RJ. Two Things to Remember About Human Nature When Designing Incentives. NIHR CLAHRC West Midlands News Blog. 27 January 2017.
  7. Lilford RJ. Introducing Hospital IT Systems – Two Cautionary Tales. NIHR CLAHRC West Midlands News Blog. 4 August 2017.
  8. Lilford RJ. New Framework to Guide the Evaluation of Technology-Supported Services. NIHR CLAHRC West Midlands News Blog. 12 January 2018.
  9. Currie G, Croft C. Enhancing absorptive capacity of healthcare organizations: The case of commissioning service interventions to avoid undesirable older people’s admissions to hospitals. In: Swan J, Newell S, Nicolini D. Mobilizing Knowledge in Healthcare. Oxford: Oxford University Press; 2016. p.65-81.
  10. Bekelis K, Skinner J, Gottlieb D, Goodney P. De-adoption and exnovation in the use of carotid revascularization: retrospective cohort study. BMJ. 2017; 359: j4695.

New Framework to Guide the Evaluation of Technology-Supported Services

Health and care providers are looking to digital technologies to enhance care provision and fill gaps where resource is limited. There is a very large body of research on their use, brought together in reviews that, among many other things, establish effectiveness in behaviour change for smoking cessation and in encouraging adherence to ART,[1] demonstrate improved utilisation of maternal and child health services in low- and middle-income countries,[2] and delineate the potential for improved access to health care for marginalised groups.[3] Frameworks to guide health and care providers when considering the use of digital technologies are also numerous. Mehl and Labrique’s framework aims to help a low- or middle-income country consider how it can use digital mobile health innovation to succeed in the ambition of achieving universal health coverage.[4] The framework tells us what is somewhat obvious, but by bringing it together it provides a powerful tool for thinking, planning, and countering pressure from interest groups with other ambitions. The ARCHIE framework developed by Greenhalgh, et al.[5] is a similar tool, but for people with the ambition of using telehealth and telecare to improve the daily lives of individuals living with health problems. It sets out principles for people developing, implementing, and supporting telehealth and telecare systems so they are more likely to work. It is a framework that, again, can be used to counter pressure from interest groups more interested in the product than in its impact on people and on the health and care service. Greenhalgh and team have now produced a further framework that is very timely, as it provides us with a tool for thinking through the potential for scale-up and sustainability of health and care technologies.[6]

Greenhalgh, et al. reviewed 28 previously published technology implementation frameworks in order to develop their framework, and used their own studies of digital assistive technologies to test it. Like the other frameworks, this provides health and care providers with a powerful tool for thinking, planning and resisting. The Domains in the Framework include, among others, the health condition, the technology, the adopter system (staff, patients, carers), the organisation, and the Domain of time – how the technology embeds and is adapted over time. For each Domain in the Framework the question is asked whether it is simple, complicated or complex in relation to scale-up and sustainability of the technology. For example, the nature of the condition: is it well understood and predictable (simple), or poorly understood and unpredictable (complex)? Asking this question for each Domain allows us to avoid the pitfall of thinking something is simple when it is in reality complex. For example, there may be a lot of variability in the health condition between patients, but the technology may have been designed with a simplified textbook notion of the condition in mind. I suggest that even where clinicians are involved in the design of interventions, it is easy for them to forget how often they see patients who are not like the textbook, as they, almost without thinking, deploy their skills to adapt treatment and management to the particular patient. Greenhalgh, et al. cautiously conclude that “it is complexity in multiple domains that poses the greatest challenge to scale-up, spread and sustainability”. They provide examples where unrecognised complexity stops the use of a technology in its tracks.
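To make the idea concrete, here is a minimal sketch (in Python) of how a team might record a simple / complicated / complex judgement for each domain and see where complexity accumulates. The domain names paraphrase the published framework; the technology and the ratings are entirely invented for illustration.

```python
# Toy illustration of applying the framework: rate each domain as
# "simple", "complicated" or "complex", then see where complexity accumulates.
# The ratings below are invented for a hypothetical tele-monitoring service.

DOMAINS = [
    "condition", "technology", "value proposition",
    "adopter system", "organisation", "wider system", "embedding over time",
]

ratings = {
    "condition": "complex",        # multi-morbidity, unpredictable course
    "technology": "complicated",   # needs interfacing with existing records
    "value proposition": "simple",
    "adopter system": "complex",   # staff, patients and carers must all change roles
    "organisation": "complicated",
    "wider system": "simple",
    "embedding over time": "complex",
}

def complex_domains(ratings):
    """Domains rated 'complex' - the ones the authors suggest pose the
    greatest challenge to scale-up, spread and sustainability."""
    return [d for d in DOMAINS if ratings.get(d) == "complex"]

flagged = complex_domains(ratings)
print(f"Complex in {len(flagged)} domain(s): {', '.join(flagged)}")
```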

— Frances Griffiths, Professor of Medicine in Society


  1. Free C, Phillips G, Galli L. The effectiveness of mobile-health technology-based health behaviour change or disease management interventions for health care consumers: a systematic review. PLoS Med. 2013;10:e1001362.
  2. Sondaal SFV, Browne JL, Amoakoh-Coleman M, Borgstein A, Miltenburg AS, Verwijs M, et al. Assessing the Effect of mHealth Interventions in Improving Maternal and Neonatal Care in Low- and Middle-Income Countries: A Systematic Review. PLoS One. 2016;11(5):e0154664.
  3. Huxley CJ, Atherton H, Watkins JA, Griffiths F. Digital communication between clinician and patient and the impact on marginalised groups: a realist review in general practice. Br J Gen Pract. 2015;65(641):e813-21.
  4. Mehl G, Labrique A. Prioritising integrated mHealth strategies for universal health coverage. Science. 2014;345:1284.
  5. Greenhalgh T, Procter R, Wherton J, Sugarhood P, Hinder S, Rouncefield M. What is quality in assisted living technology? The ARCHIE framework for effective telehealth and telecare services. BMC Medicine. 2015;13(1):91.
  6. Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A’Court C, et al. Beyond Adoption: A New Framework for Theorizing and Evaluating Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability of Health and Care Technologies. J Med Internet Res. 2017;19(11):e367.

Risks of Children Using Technology Before Bed

We live in an increasingly technologically connected society, which even extends to children – for example, 74% of children (9-16 years old) in the UK use a mobile phone, with most receiving their first phone at the age of 10;[1] while around half have a television in their bedroom at age 7.[2] For many it can be difficult to switch off at the end of the day – the allure of one more video, or another scan of social media, can be strong. As such, many children use technology at bedtime, which may impact on their sleep, as the light emitted by these devices has a higher concentration of ‘blue light’, which affects the levels of melatonin, a sleep-inducing hormone.[3] Previous research has shown the importance of sleep for children’s health and behaviour, and so Fuller and colleagues conducted a study looking at use of technology at bedtime and its effects on various health outcomes.[4] They surveyed 207 parents of 8-17 year olds and found that children who watched television at bedtime were significantly more likely to be overweight or obese than those who did not (odds ratio 2.4, 95% CI 1.35-4.18). Similar results were found for children who used a phone at bedtime (OR=2.3, 95% CI 1.31-4.05). There were no significant differences for computer or video game use. The authors also looked at sleeping behaviour and found a significant relationship between average hours of sleep and bedtime use of television (P=0.025), phone (P<0.001), computer (P<0.001), and video games (P=0.02). Further analysis showed that children who used these technologies were also more likely to be tired in the morning, less likely to eat breakfast, and more likely to text during the middle of the night. The authors recommend setting up ‘tech-free’ zones and making sure that devices are charged outside of the child’s bedroom.
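For readers unfamiliar with how such figures are derived, the sketch below (Python) shows how an odds ratio and an approximate 95% confidence interval are computed from a 2×2 table. The counts are invented for illustration – they are not the study’s data, merely chosen to give an odds ratio close to the one reported.

```python
import math

# Invented 2x2 table (NOT the study's data):
# exposure = television at bedtime, outcome = overweight/obese.
#                 overweight   not overweight
# TV at bedtime       40             60
# no TV               25             90
a, b = 40, 60   # exposed: outcome yes / no
c, d = 25, 90   # unexposed: outcome yes / no

odds_ratio = (a * d) / (b * c)

# Approximate 95% CI on the log-odds scale (Wald interval).
log_or = math.log(odds_ratio)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
# OR = 2.40, 95% CI 1.32-4.36 for these invented counts
```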

Of course, this study only shows an association – it may be that some children have difficulty getting to sleep and so turn to technology in order to help them drift off. Meanwhile, the study is subject to reporting bias from the self-reported surveys of the parents, and so further studies are needed.

— Peter Chilton, Research Fellow


  1. GSMA report. https://www.gsma.com/publicpolicy/wp-content/uploads/2012/03/GSMA_Childrens_use_of_mobile_phones_2014.pdf. 2014.
  2. Heilmann A, Rouxel P, Fitzsimons E, Kelly Y, Watt RG. Longitudinal associations between television in the bedroom and body fatness in a UK cohort study. Int J Obes. 2017; 41: 1503-9.
  3. Schmerler J. Q&A: Why Is Blue Light before Bedtime Bad for Sleep? Scientific American. 01 September 2015.
  4. Fuller C, Lehman E, Hicks S, Novick MB. Bedtime Use of Technology and Associated Sleep Problems in Children. Glob Pediatr Health. 2017.

Introducing Hospital IT Systems – Two Cautionary Tales

The beneficial effects of mature IT systems, such as those at the Brigham and Women’s Hospital,[1] Intermountain Health Care,[2] and University Hospitals Birmingham NHS Foundation Trust,[3] have been well documented. But what happens when a commercial system is popped into a busy NHS general hospital? Lots of problems, according to two detailed qualitative studies from Edinburgh.[4] [5] Cresswell and colleagues document problems with both stand-alone ePrescribing systems and with multi-modular systems.[4] The former drive staff crazy with multiple log-ins and duplicate data entry. Nor does their frustration lessen with time. Neither type of system (stand-alone or multi-modular) presented a comprehensive overview of the patient record. This has obvious implications for patient safety. How is a doctor expected to detect a pattern in the data if the data are not presented in a coherent format? In their second paper the authors examine how staff cope with the above problems.[5] To enable them to complete their tasks, ‘workarounds’ were deployed. These workarounds frequently involved recourse to paper intermediaries. Staff often became overloaded with work and often did not have the necessary clinical information at their fingertips. Some workarounds were sanctioned by the organisation, others not. What do I make of these disturbing, but thorough, pieces of research? I would say four things:

  1. Move slowly and carefully when introducing IT and never, never go for heroic ‘big bang’ solutions.
  2. Employ lots of IT specialists who can adapt systems to people – do not try to go the other way round. Eschew ‘business process engineering’, the risks of which are too high – be incremental.
  3. If you do not put the doctors in charge, make sure that they feel as if they are. More seriously – take your people with you.
  4. Forget integrating primary and secondary care, and social care and community nurses, and meals on wheels and whatever else. Leave that hubristic task to your hapless successor and introduce a patient held booklet made of paper – that’s WISDAM.[6]

— Richard Lilford, CLAHRC WM Director


  1. Weissman JS, Vogeli C, Fischer M, Ferris T, Kaushal R, Blumenthal B. E-prescribing Impact on Patient Safety, Use and Cost. Rockville, MD: Agency for Healthcare Research and Quality. 2007.
  2. Bohmer RMJ, Edmondson AC, Feldman L. Intermountain Health Care. Harvard Business School Case 603-066. 2002
  3. Coleman JJ, Hodson J, Brooks HL, Rosser D. Missed medication doses in hospitalised patients: a descriptive account of quality improvement measures and time series analysis. Int J Qual Health Care. 2013; 25(5): 564-72.
  4. Cresswell KM, Mozaffar H, Lee L, Williams R, Sheikh A. Safety risks associated with the lack of integration and interfacing of hospital health information technologies: a qualitative study of hospital electronic prescribing systems in England. BMJ Qual Saf. 2017; 26: 530-41.
  5. Cresswell KM, Mozaffar H, Lee L, Williams R, Sheikh A. Workarounds to hospital electronic prescribing systems: a qualitative study in English hospitals. BMJ Qual Saf. 2017; 26: 542-51.
  6. Lilford RJ. The WISDAM* of Rupert Fawdry. NIHR CLAHRC West Midlands News Blog. 5 September 2014.

Machine Learning

The CLAHRC WM Director has mused about machine learning before.[1] Obermeyer and Emanuel discuss this topic in the hallowed pages of the New England Journal of Medicine.[2] They point out that machine learning is already replacing radiologists, and will soon encroach on pathology. They have used machine learning in their own work to predict death in patients with metastatic cancer. They claim that machine learning will soon be used in diagnosis, but identify two of the reasons why this will take longer than for the other uses mentioned above. First, diagnosis does not present neat outcomes (dead or alive; malignant or benign). Second, the predictive variables are unstructured in terms of availability and where they are located in a record. A third problem, not mentioned by the authors, is that data may be collected because (and only because) the clinician has suspected the diagnosis. The playing field is then tilted in favour of the machine in any comparative study. One other problem the CLAHRC WM Director has with the machine learning literature is that studies pit the in silico neural network head-to-head against a human; in none of this work do the authors compare the accuracy of ‘machine learning’ against standard statistical methods, such as logistic regression.
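Such a comparison is straightforward to set up, at least in principle. The sketch below (Python with scikit-learn) trains a plain logistic regression and a small neural network on the same binary prediction task and reports the discrimination (AUC) of each. It uses a bundled toy dataset as a stand-in for real clinical data, so the specific numbers mean nothing – the point is simply that the baseline comparison is easy to run.

```python
# Compare logistic regression with a small neural network on the same task,
# using discrimination (area under the ROC curve) as the metric.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # stand-in for clinical data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "logistic regression": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)),
    "neural network": make_pipeline(
        StandardScaler(), MLPClassifier(hidden_layer_sizes=(32, 16),
                                        max_iter=2000, random_state=0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```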

— Richard Lilford, CLAHRC WM Director


  1. Lilford RJ. Digital Future of Systematic Reviews. NIHR CLAHRC West Midlands. 16 September 2016.
  2. Obermeyer Z, Emanuel EJ. Predicting the Future – Big Data, Machine Learning, and Clinical Medicine. N Engl J Med. 2016; 375(13): 1216-7.

Improving Diabetes Care

Diabetes is one of the most demanding chronic diseases in terms of control. If the blood glucose levels are not adequately controlled, numerous complications ensue – gangrene of the feet, kidney failure, blindness, heart attack, and stroke. To achieve good control the insulin dose must be titrated, adapting to changes in diet and exercise, so that glucose levels are neither too high nor too low. It is also important to keep blood pressure and blood lipids under control. One day this will all become much easier because ‘artificial pancreases’, which automatically carry out the required titration, are coming into service.[1] In the meantime, diabetes remains the ultimate quality control challenge. Individual quality control methods such as patient education, peer support, and provider support have been evaluated in over 150 RCTs.[2] There have also been a number of trials of multi-component interventions. The most recent of these [3] is interesting because the intervention was targeted at both the provider (decision support) and the patient (care co-ordinators). It was carried out in South Asia and patients were followed up for two and a half years. Only patients with poor control were eligible. The intervention was highly successful; the risk of ‘poor control’ (defined in terms of HbA1c, a long-term marker of glucose control) was halved in the intervention group. The positive effect was present with and without imputation for missing data.

— Richard Lilford, CLAHRC WM Director


  1. Sheng E. The Artificial Pancreas is Here. Scientific American. November 2016.
  2. Tricco AC, Ivers NM, Grimshaw JM, et al. Effectiveness of quality improvement strategies on the management of diabetes: a systematic review and meta-analysis. Lancet. 2012; 379: 2252-61.
  3. Ali MK, Singh K, Kondal D, et al. Effectiveness of a Multicomponent Quality Improvement Strategy to Improve Achievement of Diabetes Care Goals. Ann Intern Med. 2016; 165: 399-408.

Computer Beats Champion Player at Go – What Does This Mean for Medical Diagnosis?

A computer program has recently beaten one of the top players of the Chinese board game Go.[1] The reason that a computer’s success in Go is so important lies in the nature of the game. Draughts (or checkers) can be solved completely by pre-specified algorithms. Similarly, chess can be mastered by a computer using a pre-specified algorithm overlaid on a number of rules. But Go is different – while experienced players are better than novices, they cannot specify an algorithm for success that can be uploaded into a computer. This is for two reasons. First, it is not possible to compute all possible combinations of moves in order to select the most propitious – there are far too many, many more than in chess. Second, experts cannot explicate the knowledge that makes them expert. But a computer program can learn by accumulating experience. As it learns, it increases its ability to select moves that increase the probability of success – the neural network gradually comes to recognise the most advantageous moves in response to the pattern of pieces on the board. So, in theory, a computer program could learn which patterns of symptoms, signs, and blood tests are most predictive of which diseases.

Why does the CLAHRC WM Director think this is a long way off? Well, it has nothing to do with the complexity of diagnosis, or intractability of the topic. No, it is a practical problem. For the computer program to become an expert Go player, it required access to hundreds of thousands of games, each with a clear win/lose outcome. In comparison, clinical diagnosis evolves over a long period in different places; the ‘diagnosis’ can be ephemeral (a person’s diagnosis may change as doctors struggle to pin it down); initial diagnosis is often wrong; and a person can have multiple diagnoses. Creating a self-learning program to make diagnoses is unlikely to succeed for the foreseeable future. The logistics of providing sufficient patterns of symptoms and signs over different time-scales, and the lack of clear outcomes, are serious barriers to success. However, a program to suggest possible diagnoses on the basis of current codifiable knowledge is a different matter altogether. It could be built using current rules, e.g. to consider malaria in someone returning from Africa, or giant-cell arteritis in an elderly person with sudden loss of vision.
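A minimal sketch of what such a rule-based prompt might look like is given below (Python). The rules and the patient record are invented purely for illustration – this is a toy, not clinical guidance – but it shows the structure: each rule is a predicate over recorded features paired with a diagnosis worth considering when it fires.

```python
# Toy rule-based diagnostic prompt: each rule pairs a predicate over the
# recorded features with a diagnosis to consider when the rule fires.
# Rules and patient data are invented for illustration only.
RULES = [
    (lambda p: p.get("fever") and p.get("recent_travel") == "sub-Saharan Africa",
     "consider malaria"),
    (lambda p: p.get("age", 0) >= 60 and p.get("sudden_visual_loss"),
     "consider giant-cell arteritis"),
    (lambda p: p.get("chest_pain") and p.get("breathless"),
     "consider pulmonary embolism"),
]

def suggestions(patient):
    """Return the prompts whose rules fire for this patient record."""
    return [advice for rule, advice in RULES if rule(patient)]

patient = {"age": 74, "sudden_visual_loss": True, "fever": False}
print(suggestions(patient))   # ['consider giant-cell arteritis']
```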

— Richard Lilford, CLAHRC WM Director


  1. BBC News. Artificial intelligence: Google’s AlphaGo beats Go master Lee Se-dol. 12 March 2016.

Medical Technology – Separating the Wheat from the Chaff

Scientists often come up with elegant inventions that they wish to exploit commercially. However, an elegant invention may not be cost-effective. The world is experiencing a period of massive growth in investments in high-tech start-up companies.[1] Many of these achieve high values in short time periods, despite not generating any sales – Silicon Valley is home to 142 ‘unicorns’ (unlisted start-up companies valued at more than $1 billion).[1] It is no secret that investments of this type are highly speculative, driven more by sentiment than analysis. Currently, a ‘bubble’ appears to be forming.[2] A company’s market value is theoretically equal to the net present value of future cash flows. Problem – if there are no present cash flows on which to base future projections, then the company’s value is a wild guess. But is it so wild, or could it be tamed?

A substantial proportion of start-ups are based on medical technology – for example, the fabled ‘unicorn’, Theranos, which makes (or rather is attempting to make) revolutionary blood spot testing equipment. Medical innovations are increasingly procured on the rational grounds that they are cost-effective.[3] [4] Health economics provides a set of techniques to calculate the cost-effectiveness of new treatments to help decide whether a treatment or diagnostic method should be supported in a service [5]; in England, the National Institute for Health and Care Excellence (NICE) makes procurement decisions on this basis. A recent article from our CLAHRC [6] provides a synopsis of techniques to inform investment decisions by calculating the cost-effectiveness of a technology while it is still at the idea or design stage.[7] [8] This, in turn, can be used to determine the optimal price.[9] Then future cash flows can be calculated, taking into account repayment of the initial investment. Health economics at the supply side (i.e. to inform investment decisions) has two fundamental differences from health economics at the demand side (i.e. to inform procurement decisions):

  1. Uncertainties are greater.
  2. Uncertainties can be resolved or reduced during development of the technology. This means that the option to develop the technology can be kept alive until more information has been collected.

The corollaries of those two fundamental points are that:

  1. Parameter estimates for supply-side economic models are ‘Bayesian’ in the sense that they are prior probability estimates derived from experts (rather than observed frequencies); and
  2. The calculations must include the present value of holding an option that may, or may not, be pursued at some future date.

Economic models cannot give a definitive answer to investment decisions. However, human judgement is clouded by all sorts of faulty heuristics (mental processes),[10] [11] and models direct decision-makers to look closely at assumptions and provide at least a partial antidote to ‘optimism bias’.[12] They are a guide for the savvy investor and should strengthen the supply side of the medical technology industry, thereby mitigating the risk of boom and bust.
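To illustrate the two supply-side features listed above, here is a small Monte Carlo sketch (Python). Every figure in it is an invented assumption, not a real appraisal: parameter uncertainty is expressed as a prior distribution (as if elicited from experts), and the value of the option is captured by allowing the developer to abandon the launch once the uncertainty has resolved, rather than committing regardless.

```python
import random

# Toy supply-side valuation: an expert prior over annual cash flow, plus the
# option to abandon after development if the resolved cash flow is unattractive.
# All figures are invented assumptions, for illustration only.
random.seed(0)
DISCOUNT = 0.10      # annual discount rate
DEV_COST = 2.0       # upfront development cost (£m)
LAUNCH_COST = 1.0    # cost incurred only if the product is launched (£m)
YEARS = 10           # sales horizon
N = 100_000          # Monte Carlo draws

def launch_npv(annual_cash_flow):
    """Present value of a constant annual cash flow over YEARS, less launch cost."""
    pv = sum(annual_cash_flow / (1 + DISCOUNT) ** t for t in range(1, YEARS + 1))
    return pv - LAUNCH_COST

committed, with_option = 0.0, 0.0
for _ in range(N):
    cash_flow = random.gauss(0.5, 0.4)   # expert prior on annual cash flow (£m)
    v = launch_npv(cash_flow)
    committed += v                        # launch whatever the uncertainty resolves to
    with_option += max(v, 0.0)            # abandon (value 0) if launch looks unattractive

print(f"Expected value, committed launch:   £{committed / N - DEV_COST:.2f}m")
print(f"Expected value with abandon option: £{with_option / N - DEV_COST:.2f}m")
```

The difference between the two figures is the value of keeping the option alive – a quantity that a demand-side appraisal never needs to compute.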

— Richard Lilford, Professor of Public Health, University of Warwick


  1. The Economist. Theranos: The fable of the unicorn. The Economist. 31 October 2015. 69-70.
  2. Mahmood T. The Tech Industry is in Denial, but the Bubble is About to Burst. Tech Crunch. 26 June 2015.
  3. Cleemput I, Neyt M, Thiry N, De Laet C, Leys M. Using threshold values for cost per quality-adjusted life-year gained in healthcare decisions. Int J Technol Assess Health Care. 2011; 27(1): 71-6.
  4. Schwarzer R, Rochau U, Saverno K, et al. Systematic overview of cost-effectiveness thresholds in ten countries across four continents. J Comp Eff Res. 2015; 4(5): 485-504.
  5. Drummond MF. Methods for the economic evaluation of health care programmes. Oxford: Oxford University Press. 2005.
  6. Girling A, Young T, Chapman A, Lilford R. Economic assessment in the commercial development cycle for medical devices. Int J Technol Assess Health Care. 2015. [ePub].
  7. Girling A, Young T, Brown C, Lilford R. Early-Stage Valuation of Medical Devices: The Role of Developmental Uncertainty. Value Health. 2010;13(5):585-91.
  8. Vallejo-Torres L, Steuten LMG, Buxton MJ, Girling AJ, Lilford RJ, Young T. Integrating health economics modelling in the product development cycle of medical devices: A Bayesian approach. Int J Technol Assess Health Care. 2008; 24(4): 459-64.
  9. Girling A, Lilford R, Young T. Pricing of medical devices under coverage uncertainty – a modelling approach. Health Econ. 2012. 21(12): 1502-7.
  10. Kahneman D. Thinking, Fast and Slow. London: Penguin Group. 2012.
  11. Kahneman D, Slovic P, Tversky A. Judgement under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press. 1982.
  12. Sharot T. The Optimism Bias. Curr Biol. 2011; 21(23): R941-5.

Robotic Hotels Today – Nursing Homes Tomorrow?

Readers of our news blog will have seen recent posts on the dependency ratio – the notion that people in the middle of the age range produce the economic surplus needed to rear the next generation and look after the preceding generation over ever-lengthening timescales. Nowhere is the dependency ratio more adverse than in Japan – the country with the world’s greatest longevity has a fertility rate of 1.4 and a dependency ratio of 62:100.[1] One way to tackle the problem is to follow pro-natalist policies. Another is to boost productivity in the middle. What better way to do this than to substitute human labour with machines? The trouble has always been that service industries, on which advanced countries largely depend, resist mechanisation. But not anymore; the first hotel staffed by robots has opened and, unsurprisingly, it is in Nagasaki, Japan.[2]

Looking after old people is also very time-intensive, but it would appear that technology to ameliorate the problem is at hand. For example, battery-powered suits have been developed in Japan that function as an exoskeleton, sensing and amplifying the wearer’s muscle action, and helping carers lift patients into a bath or out of a bed. The suit can also be worn by patients themselves, to help them move around and do things without support.[3] Furthermore, a project by Sheffield Health and Social Care NHS Foundation Trust, in partnership with the University of Sheffield, is evaluating the effectiveness of a robotic baby seal, which reacts to touch and sound, to manage distress and anxiety in dementia patients.[4]

Would the CLAHRC WM Director prefer a warm-hearted human being over a robot when his time comes? Certainly, but he would prefer the robot over nothing at all.

— Richard Lilford, CLAHRC WM Director


  1. Index Mundi. Japan Demographics Profile 2014. July 2014.
  2. Cuthbertson A. Hotel staffed by humanoid robots set to open in Japan this summer. International Business Times. 9 February 2015.
  3. The Economist. Difference Engine: The Caring Robot. The Economist. 14 May 2013.
  4. Griffiths A. How Paro the robot seal is being used to help UK dementia patients. The Guardian. 8 July 2014.

Social Change

Social change may have social origins, for example the increasing emphasis on involving service users in the design and quality assurance of services. However, social change may result from technical advances; think of the printing press in the 15th century, or social networking in this century. CLAHRCs have a duty to promote adoption of safe and cost-effective new technologies. Sometimes, the technology is a “bolt on” – it can simply be added to the existing repertoire of services. MRI scanning is an example of such a non-disruptive technology. Yes, it can improve diagnostic sensitivity and improve care, but it can be slotted into existing patient pathways and service schedules in a largely unproblematic way.

Contrast this with the molecular techniques to identify microbes. Out will go laborious processes of plating bacteria on a succession of nutrient media to make a diagnosis. What about conventional histological staining and examination? Molecular techniques, particularly those based on genetic signatures, will sweep away much previous technology. Radical changes can also be anticipated in radiology as imaging techniques increasingly follow Moore’s law, becoming progressively smaller and less expensive.

All these advances will enable some low-income countries to bypass existing technology and leapfrog into the new era, as has happened with mobile phone technology. In richer countries, however, disruptive change will ensue as large numbers of relatively skilled jobs are replaced by a smaller cadre of highly-skilled technical and managerial workers. These advances will also “take diagnosis out of the cupboard,” making it directly accessible to clinicians on the ward and in the clinic. Imaging, for example, will become an extension of, and to some extent a replacement for, normal “bedside” clinical skills. All these advances in microbiology, pathology and diagnostic imaging are truly disruptive.

It is time to broaden our gaze from the technological and scientific aspects, intriguing and important as they are, and consider broader societal implications.[1] By anticipating these changes, the workforce can be gradually re-deployed and/or re-trained so that upheaval and disruption of the service are kept to a minimum. We do not want the workforce to be ambushed in the way dock-workers and printers seem to have been in the 1980s.

These changes also have massive educational implications as technology moves from the laboratory to the ward. Portable microchips and ultrasound machines the size of a mobile phone can do harm in poorly-educated hands; a point well understood by Health Education England, which is promoting a campaign on education in the new genetics.[2] Yet patients and the public also need to understand the technology and its limitations – a project for Public Involvement in Science. Imaging specialists, microbiologists and pathology staff should become educators, quality assurers and problem solvers, rather than guilds holding custody of their art.

CLAHRCs have a part to play in defining the role of new technology (especially determining cost-effectiveness [3]), in helping to design and implement new services, and in evaluation. Our CLAHRC is collaborating with regional partners in the development and evaluation of new ways of working, and is a partner in an application to NHS England’s 100,000 Genomes Project. Likewise, there is fascinating and important work to be done on the cost-effectiveness of new technologies in low- and middle-income countries – we will give an example in the next News Blog.

— Richard Lilford, CLAHRC WM Director
— Tim Jones, Executive Director of Delivery, University Hospitals Birmingham NHS Foundation Trust


  1. Christensen CM, Grossman J, Hwang J. The Innovator’s Prescription. New York, NY: McGraw-Hill. 2009.
  2. NHS Health Education England. Genomics Education. 2014. [Online]
  3. Girling A, Young T, Brown C, Lilford R. Early-Stage Valuation of Medical Devices: The Role of Developmental Uncertainty. Value Health. 2010; 13(5): 585-91.