How might the interface of biological and artificial vision be better adapted for future diagnostic imaging practices?
Medical imaging is an essential component in diagnosing and treating ocular pathologies. With substantial technological advances over recent decades, image resolution has improved dramatically (with some ophthalmologists even heralding smartphone photography as a useful diagnostic tool). But the anatomical and functional changes depicted in these high-resolution images can be complex and difficult to interpret.
And that’s where AI – with its ability to detect subtle features in high-resolution images and reduce the risk of diagnostic error – comes into play. Machine recognition systems – in particular, deep neural networks, which process data through layers of mathematical models – are advancing rapidly. In ophthalmology, deep learning has already been applied to fundus photography and OCT to aid in the detection of a range of eye diseases, including diabetic retinopathy, glaucoma, macular oedema, and AMD.
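For readers curious about what “applying deep learning to fundus photography” looks like under the hood, here is a minimal transfer-learning sketch in PyTorch. Everything in it is an assumption for illustration – the folder name fundus_data/, the hypothetical labels, and the choice of a pretrained ResNet backbone – and it does not reproduce the models used in the studies cited here.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing for colour fundus photographs (ImageNet statistics).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset layout: fundus_data/<diagnosis>/<image>.jpg
dataset = datasets.ImageFolder("fundus_data", transform=preprocess)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Start from a pretrained ResNet and replace its final layer so that it
# outputs one score per diagnosis present in the dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One brief training pass, for brevity; real systems train for many epochs
# and are validated against held-out, expert-graded images.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()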
But AI cannot be considered the gold standard for diagnostic imaging just yet. Various studies have noted, for example, that deep learning machines have “a tendency to commit bizarre misclassifications on inputs specifically selected to fool them” (1, 2). Such inputs – crafted with the intention of tricking the computer – are known as “adversarial attacks,” and they are a serious concern for anyone who relies on deep learning and AI for visual processing.
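To make “inputs specifically selected to fool them” concrete, the sketch below implements one widely used recipe, the fast gradient sign method: nudge every pixel slightly in the direction that increases the classifier’s loss, which can change the prediction while leaving the image looking unchanged to a human. The model, image, and label variables are assumed to come from a setup like the one sketched above; this is an illustration, not the procedure used in the cited papers.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    # Return a slightly perturbed copy of `image` intended to change the
    # model's predicted category.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    return (image + epsilon * image.grad.sign()).detach()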
In response to these image misclassifications, a recent Journal of Vision study (1) asks what “ordinary people” – that is, people who are not professionally involved in image interpretation – know about the nature and prevalence of such classification errors. Across five experiments, the researchers examined whether humans can predict when and how machines will misclassify natural images.
The study finds strong evidence that “naive observers” can anticipate machine misclassifications, suggesting that “at least some machine failures are intuitive to naive human subjects.” For example, participants correctly identified which images were misclassified by machines on 79.8 percent of trials – “well above the chance-level accuracy of 50 percent.”
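To see why 79.8 percent counts as “well above” the 50 percent chance level, a quick back-of-the-envelope check helps; the trial count below is an assumption for illustration, not a figure taken from the paper.

from scipy.stats import binomtest

n_trials = 500                       # hypothetical number of trials, for illustration only
n_correct = round(0.798 * n_trials)  # 79.8 percent of trials answered correctly
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(result.pvalue)                 # vanishingly small: far beyond what guessing would produce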
If untrained human observers have some ability to predict which images are easy or difficult for machines to classify – and also understand the kinds of errors machines are likely to make – then, the study suggests, hybrid human–machine teams might coordinate their efforts to improve the diagnosis of ocular pathologies with the assistance of deep learning tools. The results may also have implications for other real-world applications of automated visual classification systems (for example, in vehicles with autopilot), the researchers say.
Despite participants’ ability to recognize the overlap between natural adversarial examples and machines’ (mis)chosen categories, it is not known whether humans and machines “appreciate this overlap for the same reason.” The authors conclude that more research is needed to further understand where and how human and machine visual processing converge – and diverge.
1. M Nartker et al., “When will AI misclassify? Intuiting failures on natural images,” Journal of Vision, 23, 4 (2023). PMID: 37022698.
2. S Finlayson et al., “Adversarial attacks on medical machine learning,” Science, 363, 1287 (2019). PMID: 30898923.