Tag Archives: Computer

Computer Interpretation of Foetal Heart Rates Does Not Help Distinguish Babies That Need a Caesarean from Those That Do Not

In an earlier life I was involved in obtaining treatment costs for a pilot trial of computerised foetal heart monitoring versus standard foetal heart monitoring (CTG). The full trial, funded by NIHR, has now been published in the Lancet,[1] featuring Sara Kenyon from our CLAHRC WM theme 1. With over 46,000 participants the trial found no difference in a composite measure of foetal outcome or in intervention rates. Perinatal mortality was only 3 per 10,000 women across both arms, and the incidence of hypoxic encephalopathy was less than 1 per 1,000. Of course, the possibility of an educational effect from the computer decision support (‘contamination’) may have reduced the observed effect, but this could only be tested by a cluster trial. However, such a design would create its own set of problems, such as loss of precision and bias through interaction between the method used and baseline risk across intervention and control sites. Also, the control group was not care as usual, but the visual display IT system shorn of its decision support (artificial intelligence) module.[2] Some support for the idea that the control condition affected care in a positive direction, making any marginal effect of decision support hard to detect, comes from the low event rate across both study arms. These lower than expected baseline event rates also mean that any improvement in outcome will be hard to detect in future studies. So here is another topic that, like vitamin D given routinely to elderly people,[3] now sits below the “horizon of science” – the combination of low event rates and low plausible effect sizes means that we can move on from this subject – at least in a high-income context. If you want to use the computerised method, and its costs are immaterial, then there is no reason not to; economics aside, there appear to be no trade-offs here, since both benefits and harms were null.

— Richard Lilford, CLAHRC WM Director

References:

  1. The INFANT Collaborative Group. Computerised interpretation of fetal heart rate during labour (INFANT): a randomised controlled trial. Lancet. 2017.
  2. Keith R. The INFANT study – a flawed design foreseen. Lancet. 2017.
  3. Lilford RJ. Effects of Vitamin D Supplements. NIHR CLAHRC West Midlands News Blog. 24 March 2017.

Digital Future of Systematic Reviews

A good friend and colleague, Kaveh Shojania, recently shared an article about bitcoin (a form of digital currency) that predicts the end of the finance industry as we know it.[1] The article argues that commercial banks, in particular, will no longer be needed. But what about our own industry of clinical epidemiology? Two thoughts occur:

  1. The current endeavour might not be sustainable.
  2. There might be another way to study prognosis, diagnosis and treatment.

We have argued in a previous post that traditional systematic reviews might soon become victims of their own success. News blog readers will remember our argument that the size of the literature will soon become just too large to review in the normal way. In addition, we have posited the twin issues of “question inflation” and “effect size deflation”. That is to say, the number of potential comparisons is already becoming unwieldy (some network meta-analyses include over 100 individual comparators [2]), and plausible effect sizes are getting smaller as the headroom for further improvements gets used up. Our colleague Norman Waugh tells us that his latest Cochrane review concerning glucagon-like peptides in diabetes runs to over 800 pages. Many have written about the role of automation in searching and screening the relevant literature,[3-5] including ourselves in a previous post, but the task of analysing the shedload of retrieved articles will itself become almost insurmountable. At the rate things are going, this may happen sooner than you think![6]

What is to be done? One possibility is that the whole of clinical epidemiology will be largely automated. We have written before about electronic patient records as a potential source of data for clinical research. This ‘rich’ data will be available for analysis by standard statistical methods. However, machine learning is being taken increasingly seriously, and so it is possible to imagine a world in which the bulk of clinical epidemiological studies is automated under programme control. That is to say, machine learning algorithms will sit behind rapidly accumulating clinical databases, searching for signals and conducting replication studies autonomously, perhaps even across national borders. In previous posts we have waxed lukewarm about IT systems, which have the potential to disrupt doctor-patient relationships, and where greater precision may be achieved at the cost of increasing inaccuracy. However, it is also possible that these problems can be mitigated by collecting and adjusting for ever larger amounts of information, and perhaps by finding instrumental variables, including those afforded by Mendelian randomisation.

Will all this mean that the CLAHRC WM director will soon retire, while his young colleagues find themselves being made redundant? Almost certainly not. For as long as can be envisaged, human agency will be required to write and monitor computer algorithms, to apply judgement to the outputs, to work out what it all means, and to design and implement subsidiary studies. If anything, epidemiologists of the future will require deeper epistemological understanding, statistical ability and technical knowhow.

— Richard Lilford, CLAHRC WM Director
— Yen-Fu Chen, Senior Research Fellow

References:

  1. Lanchester J. When bitcoin grows up. London Rev Books. 2016; 38(8): 3-12.
  2. Zintzaras E, Doxani C, Mprotsis T, Schmid CH, Hadjigeorgiou GM. Network analysis of randomized controlled trials in multiple sclerosis. Clin Ther. 2012; 34(4): 857-69.
  3. O’Mara-Eves A, Thomas J, McNaught J, Miwa M, Ananiadou S. Using text mining for study identification in systematic reviews: a systematic review of current approaches. Syst Rev. 2015; 4: 5.
  4. Tsafnat G, Glasziou P, Choong MK, Dunn A, Galgani F, Coiera E. Systematic review automation technologies. Syst Rev. 2014; 3: 74.
  5. Choong MK, Galgani F, Dunn AG, Tsafnat G. Automatic evidence retrieval for systematic reviews. J Med Internet Res. 2014; 16(10): e223.
  6. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7(9): e1000326.

Going Digital – the Electronic Patient Record

Everyone wants to go digital; it’s good, it’s modern, we must all be paperless. Welcome, then, the Electronic Patient Record. Great moves are underway to help hospitals go paperless in England, the USA and elsewhere.

Well, if you think it’s such a great idea, read the recent Lancet paper by Martin and Sinsky.[1] They provide a thoughtful and well-referenced account of the shortcomings of electronic records in hospital care. You will find it hard to believe that clinical care is improved by such systems once you have read the article. On the contrary, the evidence points the other way – these systems actually impede good quality clinical care. One (perhaps the) reason is that they have become subverted. Instead of providing an information system for clinical care in real time, they have been heavily adapted to serve another master – the quality control industry.

The problem arises when clinical records (the patient’s history, physical exam and progress) are digitised along with the easy stuff (electronic prescribing, laboratory results, scheduling) to create the all-singing, all-dancing electronic health record. There is a big difference between isolated systems performing particular tasks, such as digitising x-ray images, and going wholesale paperless. The critical point here concerns the ‘cognitive space’ where clinicians can show how their thought processes unfold. I guess that having the notes built around medical reasoning, rather than the tick-box appetite of quality control procedures, helps in two ways. First, recording thoughts assists cognitive processes, as in writing down a list of differential diagnoses. Second, it helps others latch onto the story so far. These needs are brought out beautifully in the article, which chronicles the near unmitigated disaster that current electronic notes have become. The authors cite studies documenting the harm that modern electronic records do, and back these up with powerful anecdotes.

We health care professionals promote evidence-based decision-making, yet we are allowing ourselves to sleepwalk into a poorly evaluated but massive intervention. Such evidence as the article can reference suggests that, far from assisting good care, electronic records (in their current form anyway) are inimical to it. We do enormous and expensive trials to find out whether we can extend life by a few months in an uncommon disease, but we let this potential monster intrude in a near vacuum of evaluation. Maybe the question is how, rather than whether, electronic records should be used. In that case our target should be to find out, since we clearly do not yet know.

We need much more development and evaluation work on the design of electronic notes, the configuration of services, and the interaction between them. A way must be found to resolve the tension between all the other (‘secondary’) functions the notes perform and the real-time clinical care functions that current electronic systems have been shown to subvert. The suggestions made in the article are all extremely sensible. They privilege the clinical, and that is congenial to the clinical heart that beats inside my breast. But the constituencies who want to use notes for various organisational, quality control, and research purposes have not gone away. It seems that we cannot redesign the notes without at least considering these other putative needs. Our task is not complete if we just define what is needed for good clinical care in real time, because social pressures to monitor health care providers in general, and doctors in particular, are not going away any time soon. There are three broad possibilities:

  1. Design the IT system so that it can both serve as a seamless record for real-time clinical care and capture information for secondary purposes. This is unlikely to succeed given the evidence presented in the article; a trade-off between clinical prerogatives and wider organisational and social needs is all but inevitable.
  2. Jettison the wider functions of audit and so on, and privilege the real-time clinical care need. But we are not going to get away with this unless, at the very least, it can be shown that the ‘costs’ of collecting ancillary information exceed its expected benefit.
  3. Change not just the structure of the electronic notes, but also work patterns and the personnel who enter different types of data. That is to say, it may be cost-effective to re-engineer human resource and computer systems to separate, to a degree, the entry and presentation of data for real-time clinical care from the data needed for ‘secondary’ purposes.

A great deal of research and development will be needed to achieve a near optimal system, and the industry would need to be incentivised to engage in such a process. That said, I suspect that much development and evaluation could be done off-line under simulation conditions. What is absolutely clear is that, in digitising health care records, we are embarking on one of the greatest socio-technical innovations ever undertaken. Information technology must interact with an extremely complex, subtle, and only partially understood healthcare environment. There is a clear role for CLAHRCs in this exercise, and our particular CLAHRC is collaborating with Prof Aziz Sheikh and colleagues in NIHR-sponsored work on the introduction of IT systems in the NHS. In the meantime, people who implement IT systems should tread very gently – there is no place here for macho types who think they know it all. Careful, deliberate and patient R&D has produced, in the end, unimagined advances in medical care. Let the same sense of modesty guide our fledgling understanding of the information requirements of health care.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Martin SA & Sinsky CA. The map is not the territory: medical records and 21st century practice. Lancet. 2016; [ePub].

Systematic Reviewing in the Digital Era

In the field of systematic reviewing it is easy (and often necessary) to immerse yourself deep in the sea of the literature and forget about everything going on in the outside world. Reflecting on this, I realised that I hadn’t actually attended a proper Cochrane meeting even though I’ve been doing reviews for more than a decade. Before rendering myself truly obsolete, I decided to seize the opportunity when the Cochrane UK and Ireland Symposium came to Birmingham earlier in March to catch up with the latest developments in the field. And I wasn’t disappointed.

A major challenge for people undertaking systematic reviews is dealing with the sheer number of potentially relevant papers within the timeline beyond which a review would be considered out of date. Indeed the issue is so prominent that we (colleagues in Warwick and Ottawa) recently wrote and published a commentary discussing ‘how to do a systematic review expeditiously’.[1] One of the most arduous processes in doing a systematic review is screening the large number of records retrieved from searches of bibliographic databases. Two years ago the bravest attempt that I heard of at a Campbell Collaboration Colloquium was sifting through over 40,000 records in a review. Two years on, the number has gone up to over 70,000. While there is little sign that the number of published research papers is going to plateau, I wonder how long reviewers’ stamina and patience can keep pace – even if they have the luxury of time. Here the clever computer comes to the rescue. If Google’s AlphaGo can beat the human champion of Go,[2] why can’t artificial intelligence save reviewers from the humble but tedious task of screening articles?

Back at the symposium, there was no shortage of signs of this digital revolution on the agenda. To begin with, the conference had no brochure or abstract book to pick up or print. All you got was a mobile phone app telling you what the sessions were and where to go. Several plenary and workshop sessions were related to automation, which I was eager to attend and from which I learned of a growing literature on the use of automation throughout the review process,[3] including article sifting,[4] data extraction,[5] quality assessment [6] and report generation. Although most attempts were still exploratory, the use of text mining, classification algorithms and machine learning to assist with citation screening appears to have matured sufficiently to be considered for practical application. Abstrackr, funded by AHRQ, is an example that is currently freely available (registration required) and has been subject to independent evaluation.[7] Overall, existing studies suggest such software may reduce reviewers’ screening workload by 30-70% (by ruling out references that are unlikely to be relevant and hence do not need to be screened by hand) while maintaining a fairly high level of recall (missing 5% or fewer of eligible articles).[4] However, this is likely to be subject-dependent, and more empirical evidence will be required to demonstrate the practicality and limitations of these tools.
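
For readers curious about what sits inside these tools, the sketch below shows the general shape of machine-assisted screening: records already screened by hand are used to train a text classifier, which then ranks the remaining records so that only those above a deliberately conservative threshold go forward for human screening. It is a minimal illustration in Python (using scikit-learn), not the actual algorithm of Abstrackr or any other named tool, and the records, labels and threshold are all invented.

```python
# Minimal sketch of machine-assisted citation screening. This is NOT the
# algorithm of Abstrackr or any other named tool; the records, labels and
# threshold below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Titles/abstracts already screened by hand (1 = include, 0 = exclude),
# plus a pile of records not yet screened - all hypothetical.
screened_texts = [
    "Randomised trial of drug A versus placebo in type 2 diabetes",
    "Qualitative study of nurses' views on ward handover",
    "Cohort study of drug A and cardiovascular outcomes",
]
screened_labels = [1, 0, 1]
unscreened_texts = [
    "Pragmatic trial of drug A in adolescents with diabetes",
    "Editorial on hospital car parking charges",
]

# Represent each record as TF-IDF weighted word counts and fit a classifier.
vectoriser = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
model = LogisticRegression(max_iter=1000)
model.fit(vectoriser.fit_transform(screened_texts), screened_labels)

# Rank the unscreened records by predicted probability of relevance.
probs = model.predict_proba(vectoriser.transform(unscreened_texts))[:, 1]

# A low cut-off (0.1 here, purely illustrative) keeps recall high: records
# below it are set aside, everything else still goes to a human reviewer.
THRESHOLD = 0.1
for text, p in sorted(zip(unscreened_texts, probs), key=lambda pair: -pair[1]):
    decision = "screen by hand" if p >= THRESHOLD else "set aside"
    print(f"{p:.2f}  {decision}:  {text}")
```

Tools of this kind typically retrain as the reviewer screens more records (so-called active learning), and the cut-off would be calibrated on a held-out sample to keep estimated recall at 95% or above.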

It is important to understand a little of what goes on inside the “black box” when using such software, and so we were introduced to some online text mining and analysis tools during the workshop sessions. One example is “TerMine”, which allows you to paste in plain text or specify a text file or a URL. Within a few seconds it returns the text with the most relevant terms highlighted (these can also be viewed as a table ranked by relevance). I did a quick experimental analysis of the CLAHRC WM Director and Co-Director’s Blog, and the results seem a fair reflection of its themes: community health workers, public health, organisational failure, Cochrane reviews and service delivery were among the highest-ranking terms (besides frequently occurring terms such as CLAHRC WM and the Director’s name). The real challenge in using such tools, however, is how to organise the identified terms in a sensible way (although there is other software around capable of semantic or cluster analysis), and perhaps more importantly, how to spot important terms that are under-represented or absent.
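
The flavour of this kind of term ranking can be conveyed in a few lines of code. TerMine itself rests on more sophisticated linguistic and statistical methods for recognising multi-word terms, whereas the sketch below simply counts and ranks two- and three-word phrases in a snippet of invented blog text.

```python
# Crude illustration of term ranking: count two- and three-word phrases and
# sort by frequency. Real tools such as TerMine use more sophisticated
# term-recognition methods; the input text here is invented.
from sklearn.feature_extraction.text import CountVectorizer

blog_text = """Community health workers deliver public health programmes.
Cochrane reviews summarise evidence on service delivery. Community health
workers also support service delivery and public health."""

vectoriser = CountVectorizer(ngram_range=(2, 3), stop_words="english")
counts = vectoriser.fit_transform([blog_text]).toarray()[0]

# Rank candidate phrases by how often they occur and show the top ten.
ranked = sorted(zip(vectoriser.get_feature_names_out(), counts),
                key=lambda pair: -pair[1])
for phrase, n in ranked[:10]:
    print(n, phrase)
```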

Moving beyond systematic reviews, there are more ambitious developments such as ContentMine, which is trying to “liberate 100 million facts from the scientific literature” using data mining techniques. Pending more permissive copyright regulations and wider open access practice in scientific publishing, the software will be capable of automatically extracting data from virtually all available literature and then re-organising and presenting the contents (including text, figures, etc.) in a format specified by the user.

Finally, with all this exciting progress around the world, Cochrane itself is certainly not sitting idle. You might have seen its re-branded websites, but there is a lot more going on behind the scenes: people who have used Review Manager (RevMan) can expect to see a “RevMan Web” version in the near future; the Cochrane Central Register of Controlled Trials (CENTRAL) is being enhanced by the aforementioned automation techniques and will be complemented by a Cochrane Register of Study Data (CRS-D), which will make retrieval and use of data across reviews much easier (and thus facilitate further exploration of existing knowledge, such as the ‘multiple indication reviews’ advocated by the CLAHRC WM Director) [8]; there will also be a further enhanced Cochrane website with a “PICO Annotator” and “PICOfinder” to help people locate relevant evidence more easily; and the Cochrane Colloquium will be replaced by an even larger conference bringing together key players in systematic reviewing, both within and beyond health care, from around the world. So watch this space!

— Yen-Fu Chen, Senior Research Fellow

References:

  1. Tsertsvadze A, Chen Y-F, Moher D, Sutcliffe P, McCarthy N. How to conduct systematic reviews more expeditiously? Syst Rev. 2015; 4(1):1-6.
  2. Gibney E. What Google’s Winning Go Algorithm Will Do Next. Nature. 2016; 531: 284-5.
  3. Tsafnat G, Glasziou P, Choong MK, Dunn A, Galgani F, Coiera E. Systematic review automation technologies. Syst Rev. 2014; 3:74.
  4. O’Mara-Eves A, Thomas J, McNaught J, Miwa M, Ananiadou S. Using text mining for study identification in systematic reviews: a systematic review of current approaches. Syst Rev. 2015; 4: 5.
  5. Jonnalagadda SR, Goyal P, Huffman MD. Automating data extraction in systematic reviews: a systematic review. Syst Rev. 2015; 4: 78.
  6. Millard LA, Flach PA, Higgins JP. Machine learning to assist risk-of-bias assessments in systematic reviews. Int J Epidemiol. 2016; 45(1): 266-77.
  7. Rathbone J, Hoffmann T, Glasziou P. Faster title and abstract screening? Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers. Syst Rev. 2015; 4: 80.
  8. Chen Y-F, Hemming K, Chilton PJ, Gupta KK, Altman DG, Lilford RJ. Scientific hypotheses can be tested by comparing the effects of one treatment over many diseases in a systematic review. J Clin Epidemiol. 2014; 67: 1309-19.

Computer Beats Champion Player at Go – What Does This Mean for Medical Diagnosis?

A computer program has recently beaten one of the top players of the Chinese board game Go.[1] The reason a computer’s success in Go is so important lies in the nature of the game. Draughts (or checkers) has been solved completely by pre-specified algorithms, and chess can be played at champion level by brute-force search overlaid on a number of hand-crafted rules. But Go is different – while experienced players are better than novices, they cannot specify an algorithm for success that can be uploaded into a computer. This is for two reasons. First, there are too many possible combinations of moves to compute them all and select the most propitious – far more than in chess. Second, experts cannot explicate the knowledge that makes them expert. But the computer program can learn by accumulating experience. As it learns, it increases its ability to select moves that increase the probability of success – the neural network gradually recognises the most advantageous moves in response to the pattern of pieces on the board. So, in theory, a computer program could learn which patterns of symptoms, signs, and blood tests are most predictive of which diseases.
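
To make the analogy concrete, the toy sketch below shows the shape of such pattern learning: a small neural network is trained on labelled cases and then asked about a new one. Every feature, case and label is invented for illustration, and the practical barriers set out in the next paragraph are exactly what a real system of this kind would run into.

```python
# Toy illustration of 'learning' diagnostic patterns from labelled cases.
# All features, cases and labels are invented; this shows the shape of the
# idea, not a usable diagnostic tool.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each case: [fever, headache, recent travel to endemic area,
#             age over 70, sudden visual loss]
X = np.array([
    [1, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
])
y = ["malaria", "malaria", "giant-cell arteritis",
     "giant-cell arteritis", "influenza", "well"]

# The network gradually adjusts its weights so that recurring patterns of
# features map onto the diagnoses seen during training.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)

new_case = np.array([[1, 1, 1, 0, 0]])   # febrile traveller with headache
print(net.predict(new_case))             # e.g. ['malaria']
```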

Why does the CLAHRC WM Director think this is a long way off? Well, it has nothing to do with the complexity of diagnosis, or intractability of the topic. No, it is a practical problem. For the computer program to become an expert Go player, it required access to hundreds of thousands of games, each with a clear win/lose outcome. In comparison, clinical diagnosis evolves over a long period in different places; the ‘diagnosis’ can be ephemeral (a person’s diagnosis may change as doctors struggle to pin it down); initial diagnosis is often wrong; and a person can have multiple diagnoses. Creating a self-learning program to make diagnoses is unlikely to succeed for the foreseeable future. The logistics of providing sufficient patterns of symptoms and signs over different time-scales, and the lack of clear outcomes, are serious barriers to success. However, a program to suggest possible diagnoses on the basis of current codifiable knowledge is a different matter altogether. It could be built using current rules, e.g. to consider malaria in someone returning from Africa, or giant-cell arteritis in an elderly person with sudden loss of vision.
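
Such a rule-based prompt is easy to sketch. The snippet below is a minimal illustration only: the two rules mirror the examples given above, the field names are invented, and nothing here should be read as clinical guidance.

```python
# Minimal sketch of a rule-based diagnostic prompt. The rules deliberately
# mirror the two examples in the text; they are illustrations, not clinical
# guidance, and the patient fields are invented.

RULES = [
    (lambda p: p["fever"] and p["recent_travel_to_malaria_area"],
     "Consider malaria (febrile patient returning from an endemic area)"),
    (lambda p: p["age"] >= 60 and p["sudden_visual_loss"],
     "Consider giant-cell arteritis (older patient with sudden visual loss)"),
]

def suggest(patient):
    """Return the prompts whose rule fires for this patient."""
    return [message for rule, message in RULES if rule(patient)]

patient = {"age": 74, "fever": False,
           "recent_travel_to_malaria_area": False,
           "sudden_visual_loss": True}
for prompt in suggest(patient):
    print(prompt)
```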

— Richard Lilford, CLAHRC WM Director

Reference:

  1. BBC News. Artificial intelligence: Google’s AlphaGo beats Go master Lee Se-dol. 12 March 2016.

Diagnostic Errors – Extremely Important but How Can They be Measured?

Previous posts have emphasised the importance of diagnostic error.[1] [2] Prescient perhaps, since the influential Institute of Medicine has published a report on diagnostic error in association with the National Academies of Sciences, Engineering, and Medicine.[3] Subsequently, McGlynn et al. highlighted the importance of measuring diagnostic errors.[4] There is no single, encompassing method of measurement, and the three most propitious methods (autopsy reports, malpractice cases, and record review) all have strengths and weaknesses. A particular issue with record review, not brought out in the paper, is that one of the most promising interventions for tackling diagnostic error, computerised decision support, is likely also to affect the accuracy with which diagnostic errors are measured. So we are left with a big problem that is hard to quantify in a way that is unbiased with respect to the most promising remedy. Either we have to measure IT-based interventions using simulations (arguably not generalisable to real practice), or through changes in rates among post-mortems or malpractice claims (arguably insensitive). There is yet another idea: design computer support systems so that the doctor must give a provisional diagnosis before the decision support is activated, and then see how often the clinician alters their behaviour in a way that can be traced back to an additional diagnosis suggested by the computer.
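
That last idea lends itself to a very simple measurement structure: record the provisional diagnosis before the decision support is displayed, then check whether the final diagnosis can be traced to a computer suggestion the clinician had not already made. The sketch below is illustrative only; the field names and records are invented.

```python
# Sketch of the measurement idea above: capture the provisional diagnosis
# before decision support is shown, then flag encounters in which the final
# diagnosis came from the computer rather than the clinician. All field
# names and records are hypothetical.

encounters = [
    {"provisional": {"migraine"},
     "computer_suggestions": {"migraine", "giant-cell arteritis"},
     "final": "giant-cell arteritis"},
    {"provisional": {"viral illness"},
     "computer_suggestions": {"viral illness"},
     "final": "viral illness"},
]

def influenced(encounter):
    """True if the final diagnosis was suggested by the computer but absent
    from the clinician's provisional list."""
    return (encounter["final"] in encounter["computer_suggestions"]
            and encounter["final"] not in encounter["provisional"])

rate = sum(influenced(e) for e in encounters) / len(encounters)
print(f"Decision support changed the diagnosis in {rate:.0%} of encounters")
```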

— Richard Lilford, CLAHRC WM Director

References:

  1. Lilford RJ. Bad Apples vs. Bad Systems. NIHR CLAHRC West Midlands News Blog. 20 February 2015.
  2. Lilford RJ. Bring Back the Ward Round. NIHR CLAHRC West Midlands News Blog. 20 March 2015.
  3. Balogh EP, Miller BT, Ball JR (eds.). Improving Diagnosis in Health Care. National Academies of Sciences, Engineering, and Medicine. 2015.
  4. McGlynn EA, McDonald KM, Cassel CK. Measurement Is Essential for Improving Diagnosis and Reducing Diagnostic Error. JAMA. 2015; 314(23): 2501-2.

How Many Doctors Do We Really Need?

In a previous post we blogged about the changing nature of medical practice: the influences of regulation, guidelines, sub-specialisation, and patient expectations. We mentioned skills substitution, whereby less experienced staff take on tasks previously carried out by doctors. We also mentioned the role of Information Technology, but shied away from discussing the implications for medical manpower. However, it seems important to ask whether Information Technology could reduce the need for medical input by increasing the scope for skill substitution. Some patients have complex needs or vague symptoms, and we assume such patients will need to be seen by someone with deep medical knowledge to underpin professional judgements, and to provide them with an informed account of the probable causes of their illness and the risks and benefits of viable options. But much of medicine is rather algorithmic. A patient presents with back pain – follow the guidelines and refer the patient if any ‘red flags’ appear, for example (see the sketch below). Many of the criteria for referral and treatment are specified in guidelines. Meanwhile, computers increasingly find abnormal patterns in a patient’s data that the doctor has overlooked. Work in CLAHRC WM shows that many patients do not receive indicated medicines.[1] Health promotion can be delivered by nurses, and routine follow-up cases triaged by Physician Assistants. A technician can be trained to perform many surgical operations, such as hernia repairs and varicose vein removals, and Physician Assistants already administer anaesthetics safely in many parts of the world.[2] Surely we should re-define medicine to cover the cognitively demanding aspects of care, and those where judgements must be made under considerable uncertainty.
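
The back pain example shows how algorithmic such referral decisions can be. The toy check below lists a handful of widely quoted red flags; it is a simplified illustration, not a complete or authoritative guideline.

```python
# Toy illustration of guideline-style 'red flag' triage for back pain. The
# flags listed are a simplified illustration of widely quoted examples, not
# a complete or authoritative guideline.

RED_FLAGS = {
    "history_of_cancer",
    "unexplained_weight_loss",
    "fever",
    "bladder_or_bowel_dysfunction",
    "progressive_neurological_deficit",
    "significant_trauma",
}

def triage_back_pain(findings):
    """Refer if any red flag is present; otherwise manage conservatively."""
    flags_present = RED_FLAGS & set(findings)
    if flags_present:
        return "Refer: red flag(s) present - " + ", ".join(sorted(flags_present))
    return "Conservative management with safety-netting advice"

print(triage_back_pain({"fever", "localised_pain"}))
print(triage_back_pain({"localised_pain"}))
```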

In the USA they talk about people “working up to their license”. What they mean is that it is inefficient for people to work for extended periods at cognitive or skill levels well below those they have attained by virtue of their intellect and education. Working well below that level is not only inefficient, but deeply frustrating for the clinician involved, predisposing them to burnout. Use doctors to doctor, not to fill in forms and perform routine surgical operations.

We conclude by suggesting that there is a case for re-engineering medical care, or at least articulating a forward vision. The next step is some careful modelling, informed by experts, to map patterns of practice, assign tasks to cognitive categories, and calculate manpower configurations that are both safe and economical. Such a process would likely identify a more specific, cognitively elite role for expensive personnel who have trained for 15 years to obtain their license. In turn, this may suggest that fewer people of this type will be needed in the future.

While high-income countries should address the question “how much should we reduce the medical workforce, if at all?”, low-income countries face the reciprocal question, “by how much should we increase the medical workforce?” Countries such as Kenya have only two doctors per 10,000 population, compared to 28 in the UK and 25 in the United States.[3] Much of the shortfall is covered by other cadres, especially medical officers (who work independently) and nurses. Health personnel are strongly buttressed by community health workers, a type of health worker that we have discussed in previous posts.[4] [5] Information Technology is, unsurprisingly, very under-developed in low-income countries, although telemedicine is increasingly used. It is particularly difficult to attract doctors to work in rural areas, and there is the perennial issue of the medical brain drain. The time is thus propitious to consider carefully the human resource needs not just of high-, but also of low- and middle-income countries, and to consider how these may be affected by improving Information Technology infrastructure.

— Richard Lilford, CLAHRC WM Director

References:

  1. Wu J, Yao GL, Zhu S, Mohammed MA, Marshall T. Patient factors influencing the prescribing of lipid lowering drugs for primary prevention of cardiovascular disease in UK general practice: a national retrospective cohort study. PLoS One. 2013; 8(7): e67611.
  2. Mullan F & Frehywot S. Non-Physician Clinicians in 47 Sub-Saharan African Countries. Lancet. 2007; 370: 2158-63.
  3. World Health Organization. Health Workforce: Density of Physicians (total number per 1000 population): Latest available year. 2015.
  4. Lilford RJ. Lay Community Health Workers. NIHR CLAHRC West Midlands News Blog. 10 April 2015.
  5. Lilford RJ. An Intervention So Big You Can see it From Space. NIHR CLAHRC West Midlands News Blog. 4 December 2015.

Walking after Paraplegia

For those with paraplegia following spinal cord injury (SCI), a wheelchair is their primary means of mobility. However, this can often lead to medical co-morbidities that contribute significantly to SCI-related medical care costs. According to surveys these patients highly prioritise restoration of walking as a way to improve their quality of life.

A recent paper by King et al. looked at the feasibility of using a brain-computer interface to give paraplegic patients the chance to walk again.[1] The procedure involved linking an electroencephalogram-based system to a functional electrical stimulation system on the leg muscles, which could then be controlled by thought. The study used a physically active 26-year-old male who underwent virtual reality training to reactivate the areas of the brain responsible for gait, and reconditioning of his leg muscles using electro-stimulation. Over 19 weeks the patient successfully completed 30 over-ground walking tests with no adverse events.

The authors concluded that these results provide proof-of-concept for using direct brain control to restore basic walking. Although the current system is likely to be too cumbersome for full-scale adoption, it may represent a precursor to a future, fully implantable system.

— Peter Chilton, Research Fellow

Reference:

  1. King CE, Wang PT, McCrimmon CM, Chou CCY, Do AH, Nenadic Z. The feasibility of a brain-computer interface functional electrical stimulation system for the restoration of overground walking after paraplegia. J Neuroeng Rehab. 2015; 12: 80.