The CLAHRC WM Director has mused about machine learning before. Obermeyer and Emanuel discuss this topic in the hallowed pages of the New England Journal of Medicine. They point out that machine learning is already replacing radiologists, and will soon encroach on pathology. They have used machine learning in their own work to predict death in patients with metastatic cancer. They claim that machine learning will soon be used in diagnosis, but identify two reasons why this will take longer than for the other uses mentioned above. First, diagnosis does not present neat outcomes (dead or alive; malignant or benign). Second, the predictive variables are unstructured, both in their availability and in where they are located in a record. A third problem, not mentioned by the authors, is that data may be collected because (and only because) the clinician has suspected the diagnosis; the playing field is then tilted in favour of the machine in any comparative study. A further problem the CLAHRC WM Director has with machine learning is that, in these studies, the neural network in silico goes head-to-head with a human. In none of this work do the authors compare the accuracy of ‘machine learning’ against standard statistical methods, such as logistic regression.
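The comparison the Director asks for is not hard to set up in principle. Here is a toy sketch – synthetic data, a hand-rolled logistic regression, and nearest-neighbour classification standing in for ‘machine learning’; none of it is drawn from the studies discussed above:

```python
import math
import random

random.seed(42)

# Toy "patients": a single risk score x; the outcome truly follows a
# logistic model, so logistic regression is a fair baseline here.
def make_data(n):
    data = []
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        p = 1.0 / (1.0 + math.exp(-2.0 * x))
        data.append((x, 1 if random.random() < p else 0))
    return data

train, test = make_data(400), make_data(200)

# Standard statistical method: logistic regression fitted by gradient descent.
def fit_logistic(data, steps=2000, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += p - y
        w -= lr * gw / len(data)
        b -= lr * gb / len(data)
    return w, b

w, b = fit_logistic(train)

def predict_logistic(x):
    return 1 if w * x + b > 0 else 0

# Stand-in for 'machine learning': 1-nearest-neighbour on the training set.
def predict_nn(x):
    return min(train, key=lambda d: abs(d[0] - x))[1]

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

acc_lr = accuracy(predict_logistic, test)
acc_nn = accuracy(predict_nn, test)
print(f"logistic regression: {acc_lr:.2f}  nearest neighbour: {acc_nn:.2f}")
```

The point of such an exercise is simply that any claimed advantage for a learning algorithm should be benchmarked against the simpler statistical model on the same held-out data, not only against human performance.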
— Richard Lilford, CLAHRC WM Director
- Lilford RJ. Digital Future of Systematic Reviews. NIHR CLAHRC West Midlands. 16 September 2016.
- Obermeyer Z & Emanuel EJ. Predicting the Future – Big Data, Machine Learning, and Clinical Medicine. New Engl J Med. 2016; 375(13): 1216-9.
A computer program has recently beaten one of the top players of the Chinese board game Go. The reason that a computer’s success at Go is so important lies in the nature of the game. Draughts (or checkers) can be solved completely by pre-specified algorithms. Similarly, chess can be solved by a pre-specified algorithm overlaid on a number of rules. But Go is different – while experienced players are better than novices, they cannot specify an algorithm for success that can be uploaded into a computer. This is for two reasons. First, it is not possible to compute all possible combinations of moves in order to select the most propitious – there are far more possible combinations than in chess. Second, experts cannot explicate the knowledge that makes them expert. But a computer program can learn by accumulating experience. As it learns, it becomes better at selecting moves that increase the probability of success – the neural network gradually recognises the most advantageous moves in response to the pattern of pieces on the board. So, in theory, a computer program could learn which patterns of symptoms, signs, and blood tests are most predictive of which diseases.
Why does the CLAHRC WM Director think this is a long way off? Well, it has nothing to do with the complexity of diagnosis or the intractability of the topic. No, it is a practical problem. To become an expert Go player, the computer program required access to hundreds of thousands of games, each with a clear win/lose outcome. By contrast, a clinical diagnosis evolves over a long period in different places; the ‘diagnosis’ can be ephemeral (a person’s diagnosis may change as doctors struggle to pin it down); the initial diagnosis is often wrong; and a person can have multiple diagnoses. Creating a self-learning program to make diagnoses is therefore unlikely to succeed for the foreseeable future: the logistics of providing sufficient patterns of symptoms and signs over different time-scales, and the lack of clear outcomes, are serious barriers to success. However, a program that suggests possible diagnoses on the basis of current codifiable knowledge is a different matter altogether. It could be built from existing rules – e.g. consider malaria in someone returning from Africa, or giant-cell arteritis in an elderly person with sudden loss of vision.
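A minimal sketch of what such a rule-based suggester might look like – the feature names and the two rules below are purely illustrative, taken from the examples just given, not from any real decision-support system:

```python
# Each rule pairs a predicate over patient features with a diagnosis to
# consider. Feature names (e.g. 'recent_travel') are illustrative only.
RULES = [
    (lambda p: p.get("fever") and p.get("recent_travel") == "Africa",
     "consider malaria"),
    (lambda p: p.get("age", 0) >= 65 and p.get("sudden_vision_loss"),
     "consider giant-cell arteritis"),
]

def suggest(patient):
    """Return every diagnosis whose rule fires for this patient."""
    return [dx for rule, dx in RULES if rule(patient)]

print(suggest({"fever": True, "recent_travel": "Africa"}))
# ['consider malaria']
print(suggest({"age": 78, "sudden_vision_loss": True}))
# ['consider giant-cell arteritis']
```

Unlike a self-learning system, every suggestion here traces back to an explicit, auditable rule – which is precisely why this approach does not face the outcome-labelling problem described above.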
— Richard Lilford, CLAHRC WM Director
- BBC News. Artificial intelligence: Google’s AlphaGo beats Go master Lee Se-dol. 12 March 2016.