Tag Archives: Artificial intelligence

More on why AI Cannot Displace Your Doctor Anytime Soon

News blog readers will be familiar with my profound scepticism about the role of artificial intelligence (AI) in medicine.[1] I have consistently made the point that much of medical practice has no clear outcome. This is quite different to a game of Go where, in the end, you either win or lose. Moreover, AI can simply replicate human error by replicating faulty parts of human processes; I previously cited racial bias in police work as an example.[2] Also, when you take a history, the questions you ask are informed by medical logic or intuition, and eliciting the correct answer is partly a matter of a good empathetic approach, as pointed out beautifully in a recent article by Alastair Denniston and colleagues.[3] So comparing AI with a physician is really comparing a physician with a physician plus AI.

A further important article on the limitations of AI has recently come out in the journal Science.[4] The article explains how AI can outperform human operators at the game of Space Invaders; but if the game is suddenly altered so that all but one alien is removed, the AI's performance deteriorates. A human player can immediately spot the change, whereas the AI system is flummoxed for many iterations. The article explains how AI is coming full circle. First, computer scientists tried to mimic expert performance at a task. Then AI bypassed the expert altogether by means of a self-learning neural network, and they declared victory when ‘AlphaGo’ beat Go champion Ke Jie. That was the high-water mark for AI, and although a few enthusiasts declared the problem solved,[5] serious AI scientists have since turned back to human intelligence to inform their algorithms. They are even starting to study how children learn and to apply this knowledge in AI systems.

— Richard Lilford, CLAHRC WM Director

References:

  1. Lilford RJ. Update on AI. NIHR CLAHRC West Midlands News Blog. 1 June 2018.
  2. Lilford RJ. How Accurate Are Computer Algorithms Really? NIHR CLAHRC West Midlands News Blog. 26 January 2018.
  3. Liu X, Keane PA, Denniston AK. Time to regenerate: the doctor in the age of artificial intelligence. J Roy Soc Med. 2018; 111(4): 113-6.
  4. Hutson M. How researchers are teaching AI to learn like a child. Science. 24 May 2018.
  5. Lilford RJ. Computer Beats Champion Player at Go – What Does This Mean for Medical Diagnosis? NIHR CLAHRC West Midlands News Blog. 8 April 2016.

Update on AI

A recent article in Science [1] pointed out that scientists have to tweak their AI systems to get them to give the correct answer. But I have a different problem with AI – how do you know that the supposed right answer is actually right? In a game of Go this issue does not arise: you either win or you lose. But medicine is not like that. The machine may diagnose thyroid cancer; you take a biopsy and find thyroid cancer. But this is not necessarily the same thing as a case of thyroid cancer found in clinical practice – the machine may be unmasking indolent cases that would never have come to light.[2] We have also pointed out in a previous News Blog that machine learning can replicate human bias – for instance, if police are more likely to charge black male youths than equally offending elderly white women, then the machine will learn precisely the wrong lesson.[3]
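The point about learned bias can be made concrete. Below is a minimal sketch using invented data and the standard library only – the groups, rates, and frequency-based "model" are all hypothetical, chosen to mirror the policing example: true offending is equal across groups, but charging is not, so anything trained on charges learns the bias.

```python
import random

random.seed(0)

# Hypothetical data: two demographic groups offend at the SAME rate,
# but group "A" is charged far more often when offending.
def simulate(n=10000):
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        offended = random.random() < 0.10           # equal true offence rate
        charge_rate = 0.9 if group == "A" else 0.1  # biased policing
        charged = offended and random.random() < charge_rate
        rows.append((group, offended, charged))
    return rows

data = simulate()

# A naive "model" that simply learns P(charged | group) from the data,
# as any classifier given only group membership effectively would.
def learned_rate(group):
    rows = [r for r in data if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

rate_a, rate_b = learned_rate("A"), learned_rate("B")
print(f"P(charged | A) = {rate_a:.3f}, P(charged | B) = {rate_b:.3f}")
# The model rates group A as far riskier, although true offending is equal.
```

The labels (charges), not the ground truth (offences), are what the system sees – precisely the wrong lesson.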

— Richard Lilford, CLAHRC WM Director

References:

  1. Hutson M. Has artificial intelligence become alchemy? Science. 2018; 360: 478.
  2. Lilford RJ. Thyroid Cancer: Another Indolent Tumour Prone to Massive Over Diagnosis. NIHR CLAHRC West Midlands News Blog. 24 March 2017.
  3. Lilford RJ. How Accurate are Computer Algorithms Really? NIHR CLAHRC West Midlands News Blog. 26 January 2018.

Machine Learning

The CLAHRC WM Director has mused about machine learning before.[1] Obermeyer and Emanuel discuss this topic in the hallowed pages of the New England Journal of Medicine.[2] They point out that machine learning is already replacing radiologists, and will soon encroach on pathology. They have used machine learning in their own work to predict death in patients with metastatic cancer. They claim that machine learning will soon be used in diagnosis, but identify two reasons why this will take longer than for the other uses mentioned above. First, diagnosis does not present neat outcomes (dead or alive; malignant or benign). Second, the predictive variables are unstructured, both in their availability and in where they are located in a record. A third problem, not mentioned by the authors, is that data may be collected because (and only because) the clinician has suspected the diagnosis; the playing field is then tilted in favour of the machine in any comparative study. One other problem the CLAHRC WM Director has with machine learning is that, in studies, the in silico neural network goes head-to-head with a human, yet in none of this work do the authors compare the accuracy of ‘machine learning’ against standard statistical methods, such as logistic regression.
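The standard statistical baseline in question is easy to state. As a minimal sketch – toy data and standard library only, with an invented one-predictor model – this is the kind of plain logistic regression, fitted by gradient descent, that the Director argues any machine-learning study should report alongside its neural network:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: one predictor x; outcome drawn from a known logistic model
# with intercept -0.5 and slope 1.5, so we can check the fit recovers them.
X = [random.uniform(-3, 3) for _ in range(2000)]
y = [1 if random.random() < sigmoid(1.5 * x - 0.5) else 0 for x in X]

# Fit intercept b0 and slope b1 by gradient descent on the log-likelihood.
b0, b1, lr = 0.0, 0.0, 0.1
for _ in range(500):
    g0 = sum(sigmoid(b0 + b1 * x) - t for x, t in zip(X, y)) / len(X)
    g1 = sum((sigmoid(b0 + b1 * x) - t) * x for x, t in zip(X, y)) / len(X)
    b0 -= lr * g0
    b1 -= lr * g1

print(f"fitted intercept {b0:.2f}, slope {b1:.2f}")  # should approach -0.5 and 1.5
```

A two-parameter model like this is transparent and cheap; if a neural network cannot beat it on the same data, the extra machinery has bought nothing.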

— Richard Lilford, CLAHRC WM Director

References:

  1. Lilford RJ. Digital Future of Systematic Reviews. NIHR CLAHRC West Midlands. 16 September 2016.
  2. Obermeyer Z, Emanuel EJ. Predicting the Future – Big Data, Machine Learning, and Clinical Medicine. New Engl J Med. 2016; 375(13): 1216-7.

Computer Beats Champion Player at Go – What Does This Mean for Medical Diagnosis?

A computer program has recently beaten one of the top players of the Chinese board game Go.[1] The reason that a computer’s success at Go is so important lies in the nature of the game. Draughts (or checkers) has been solved completely by pre-specified algorithms, and chess can be played to superhuman standard by a pre-specified algorithm overlaid on a number of rules. But Go is different – while experienced players are better than novices, they cannot specify an algorithm for success that can be uploaded into a computer. This is for two reasons. First, it is not possible to compute all possible combinations of moves in order to select the most propitious; there are far too many – many more than in chess. Second, experts cannot explicate the knowledge that makes them expert. But the computer program can learn by accumulating experience. As it learns, it increases its ability to select moves that raise the probability of success – the neural network gradually recognises the most advantageous moves in response to the pattern of pieces on the board. So, in theory, a computer program could learn which patterns of symptoms, signs, and blood tests are most predictive of which diseases.
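Learning by accumulating experience can be illustrated in miniature. The sketch below uses an invented two-move "game" and the standard library only – nothing about real Go engines is implied. The program is told only whether it won or lost; from that signal alone it gradually shifts towards the move with the higher success rate, without ever being given a rule:

```python
import random

random.seed(2)

# Hypothetical game: two candidate moves with unknown win probabilities.
TRUE_WIN_PROB = {"move_a": 0.6, "move_b": 0.4}

wins = {"move_a": 0, "move_b": 0}
plays = {"move_a": 0, "move_b": 0}

def choose(epsilon=0.1):
    # Mostly pick the move with the best observed win rate ("exploit"),
    # but occasionally try the other to keep learning ("explore").
    if (random.random() < epsilon
            or plays["move_a"] == 0 or plays["move_b"] == 0):
        return random.choice(list(TRUE_WIN_PROB))
    return max(plays, key=lambda m: wins[m] / plays[m])

for _ in range(5000):
    m = choose()
    plays[m] += 1
    if random.random() < TRUE_WIN_PROB[m]:  # play out the game: win or lose
        wins[m] += 1

best = max(plays, key=lambda m: wins[m] / plays[m])
print(best, {m: round(wins[m] / plays[m], 2) for m in plays})
```

The crucial ingredient is the clear win/lose outcome at the end of every game – exactly what, as argued below, clinical diagnosis usually lacks.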

Why does the CLAHRC WM Director think this is a long way off? Well, it has nothing to do with the complexity of diagnosis, or the intractability of the topic. No, it is a practical problem. For the computer program to become an expert Go player, it required access to hundreds of thousands of games, each with a clear win/lose outcome. In comparison, clinical diagnosis evolves over a long period and in different places; the ‘diagnosis’ can be ephemeral (a person’s diagnosis may change as doctors struggle to pin it down); the initial diagnosis is often wrong; and a person can have multiple diagnoses. Creating a self-learning program to make diagnoses is therefore unlikely to succeed in the foreseeable future: the logistics of providing sufficient patterns of symptoms and signs over different time-scales, and the lack of clear outcomes, are serious barriers to success. However, a program that suggests possible diagnoses on the basis of current codifiable knowledge is a different matter altogether. It could be built using current rules, e.g. to consider malaria in someone returning from Africa, or giant-cell arteritis in an elderly person with sudden loss of vision.
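A rule-based suggester of that kind might be sketched as follows. The rules, field names, and structure here are hypothetical, chosen to match the two examples in the text – this is an illustration of the idea, not clinical software:

```python
# Each rule pairs a predicate over patient features with a diagnosis to consider.
RULES = [
    (lambda p: p.get("fever") and p.get("recent_travel") == "Africa",
     "consider malaria"),
    (lambda p: p.get("age", 0) >= 60 and p.get("sudden_vision_loss"),
     "consider giant-cell arteritis"),
]

def suggest(patient):
    """Return the list of diagnoses whose rules fire for this patient."""
    return [dx for rule, dx in RULES if rule(patient)]

print(suggest({"age": 74, "sudden_vision_loss": True}))
# ['consider giant-cell arteritis']
```

Unlike a self-learning system, every suggestion here can be traced back to an explicit, auditable rule – which is exactly why this kind of program is feasible now.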

— Richard Lilford, CLAHRC WM Director

Reference:

  1. BBC News. Artificial intelligence: Google’s AlphaGo beats Go master Lee Se-dol. 12 March 2016.