How useful is artificial intelligence (AI) in medical research?
Artificial intelligence is one of the key technologies spearheading the Fourth Industrial Revolution. As its presence in medical research expands, so too does science’s scope for tackling the most pressing medical challenges of our time.
Though still in the early days of application, the advent of artificial intelligence (AI) technology is causing something of a paradigm shift in the field of clinical research. By integrating machine learning (ML), a data-driven branch of AI that automates analytical model building, into clinical workflows, researchers can carry out tasks more accurately and at greater speed, enabling front-line healthcare workers to deliver more effective treatment to patients.
Clinical decision-making has always relied on statistical insights. In the past, patterns within data were characterised as mathematical equations. Today, however, AI gives scientists the ability to uncover complex associations within datasets that traditional, equation-based statistical analysis cannot detect.
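To make that contrast concrete, the short Python sketch below (our own toy example, not drawn from any study cited here) trains a traditional linear model and a machine-learning model on synthetic, non-linearly separable data using scikit-learn; the dataset, model choices and parameters are illustrative assumptions.

```python
# Toy comparison (illustrative only): a linear, equation-based model versus a
# machine-learning model on synthetic data with a non-linear class boundary.
# Dataset and parameters are arbitrary; requires scikit-learn.
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a complex clinical dataset.
X, y = make_moons(n_samples=2000, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Traditional approach: a single equation with a linear decision boundary.
linear_model = LogisticRegression().fit(X_train, y_train)

# Machine-learning approach: learns non-linear associations from the data.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print(f"Logistic regression accuracy: {linear_model.score(X_test, y_test):.2f}")
print(f"Random forest accuracy:       {forest.score(X_test, y_test):.2f}")
```

On data like this, the machine-learning model typically recovers the curved boundary that the single-equation model cannot, which is the kind of gain the preceding paragraph describes.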
Unlike a human clinician, machine-learning algorithms can observe and process vast amounts of information with a high degree of accuracy. In one Stanford study, for example, AI was able to classify images of skin lesions as benign or malignant with the same level of accuracy as board-certified dermatologists. AI can even bring clarity to areas that clinicians disagree on, such as identifying tuberculosis on chest radiographs.
Aside from improving clinical decision-making and optimizing innovation, AI can make the research process far more cost-effective — and the key figures within healthcare are standing up and taking notice.
However, while AI is certainly causing a stir in the life sciences sector, the excitement is matched by an air of skepticism. Data aggregation remains a major challenge, for example, while algorithms also need to be more transparent in order to meet rigorous regulations for drug discovery. In this article, we look at the extent to which AI integration is driving innovation and accelerating the research process.
Current applications of AI in medical research
While there’s no universally agreed-on definition for “artificial intelligence”, the term broadly refers to computing technologies that can perform tasks which normally require human intelligence — such as visual perception, speech recognition, decision-making, and translation between languages. At present, most applications of AI are fairly limited in scope, performing specific tasks related to pre-defined problems.
In medical research, AI is most commonly employed to analyze and identify patterns in large, complex datasets. Importantly, this data can be analyzed significantly faster, more precisely and more cost-effectively than with traditional analytical methods, reducing spend and improving outcomes.
Similarly, AI can be used to trawl through vast troves of scientific literature to find relevant studies, as well as to combine different datasets. At the Institute of Cancer Research, for example, researchers have developed the canSAR database, which is able to combine patients' clinical and genetic data with independent chemistry, biology, patient and disease information. Once the AI system has collated and "translated" this vast haul of data into a common language, it can then employ machine-learning algorithms to make useful cancer drug-discovery predictions.
Big data has the potential to revolutionize research and development (R&D) in the life sciences sector — and is already making waves. Here are some ongoing applications of AI and machine learning within specific fields of clinical research:
Diagnostics
Medical diagnostics is a category of medical testing designed to detect infections, conditions and diseases. According to recent data from Emerj, one-third of all healthcare AI SaaS companies are focusing partly or exclusively on diagnostics — making it a major area for AI application.
Unsurprisingly, many of the tech giants have been quick to get involved. In 2016, IBM Watson Health launched a partnership initiative with Quest Diagnostics: IBM Watson Genomics, which aims to enable highly personalized cancer treatment by combining cognitive computing with state-of-the-art genomic tumor sequencing.
Meanwhile, Google’s AI subsidiary, DeepMind, recently announced several partnerships in the UK, most notably with Moorfields Eye Hospital in London where technology is being developed to address macular degeneration in aging eyes. In the related field of radiology, DeepMind is also working in conjunction with University College London Hospital (UCLH) to develop machine-learning algorithms that can detect differences in healthy and cancerous tissues — helping to improve radiation treatments.
Personalized/precision medicine
Precision medicine, an approach to patient care that is based on targeted therapies, is another area ripe for AI innovation. Also known as personalized medicine, this medical model incorporates genetics, behaviour, and environment with the aim of tailoring treatment intervention towards a specific patient or group — offering an alternative to the one-size-fits-all approach of traditional medicine.
Doctors are increasingly using AI to develop precision treatments for complex diseases, giving them the capability to analyze huge datasets that were previously too convoluted to gain valuable insights from. Armed with new insights into what makes patients healthy at the individual level, researchers can facilitate the development of new drugs, find new uses for existing drugs, suggest personalized combinations, and even predict disease risk.
In the coming years, we can expect to see increased use of micro-biosensors and devices, as well as mobile apps with more sophisticated remote monitoring capabilities, giving clinicians yet more goldmines of data to work with. Aside from giving patients more autonomy and improving the efficacy of their treatment, the application of AI technologies can galvanize R&D while simultaneously cutting costs.
Drug discovery
Given that it costs US$2.6 billion (£2.14 billion) to develop just one drug and nine out of ten candidate therapies will fail between phase I trials and regulatory approval, the introduction of machine learning to drug discovery is timely.
With this in mind, researchers at the Universities of Cambridge and Manchester developed an AI “robot scientist” dubbed “Eve” to help optimize this time-consuming and costly process. In 2018, Eve discovered that a compound commonly found in soap and toothpaste could prove an effective new weapon in the fight against drug-resistant malaria.
Not only do intelligent technologies such as Eve enable researchers to speed up the drug discovery process, but they also reduce the cost of new drug discovery by cutting down on personnel and hours spent working towards a breakthrough.
Clinical trials
There are a number of ways that machine-learning algorithms can optimize clinical trial research. For one, advanced predictive analytics can enable researchers to identify candidates for clinical trials, drawing on a wide range of data encompassing social media presence, interactions with GPs, and how a patient's genetic information compares to a specific target population. The costs of recruiting for clinical trials can be huge, and AI solutions can significantly reduce them.
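As a rough illustration of how such candidate scoring might look, the sketch below (a toy example with synthetic data, not any vendor's actual system) trains a classifier on hypothetical historical screening outcomes and ranks a new pool of candidates by predicted eligibility.

```python
# Toy candidate-ranking sketch (illustrative only): train a model on synthetic
# historical screening data, then rank a new pool of candidates by predicted
# eligibility so recruiters can review a prioritised shortlist.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for historical screening records: rows are patients,
# columns are hypothetical features (demographics, GP interactions, genetic
# markers), and the label records whether the patient proved eligible.
X, eligible = make_classification(n_samples=5000, n_features=12,
                                  n_informative=6, random_state=1)

model = GradientBoostingClassifier(random_state=1).fit(X, eligible)

# Score a new pool of candidates and surface the 20 most promising first.
candidate_pool = np.random.default_rng(1).normal(size=(200, 12))
scores = model.predict_proba(candidate_pool)[:, 1]
shortlist = np.argsort(scores)[::-1][:20]
print("Top candidate indices:", shortlist)
```

The cost saving comes from the prioritisation step: recruiters review a short, ranked list rather than screening the entire pool by hand.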
Another burgeoning application of AI is to improve the safety of trials. Real-time data access and remote monitoring of participants allow researchers to keep more accurate tabs on biological changes, as well as to identify whether a participant is responding adversely to treatment.
Epidemiology
In the field of epidemiology, statistical modelling using artificial intelligence is being used to predict future disease outbreaks. The epidemiology technology company AIME, for example, has developed a tool that provides real-time predictions of the locations and timing of dengue outbreaks. The result of two-and-a-half years of research, it worked with up to 84% accuracy during trials in Malaysia.
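The sketch below gives a flavour of how such forecasting can work; it is our own toy example using synthetic weekly data and a generic regression model, and does not reflect AIME's actual methodology.

```python
# Toy outbreak-forecasting sketch (illustrative only; not AIME's methodology):
# predict this week's case count from the previous four weeks of cases and
# rainfall, using entirely synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
weeks = 300
rainfall = rng.gamma(shape=2.0, scale=10.0, size=weeks)
# Toy epidemic signal: cases loosely track rainfall with a two-week lag.
cases = 50 + 3 * np.roll(rainfall, 2) + rng.normal(0, 5, size=weeks)

# Lagged features: the previous 4 weeks of cases and rainfall for each week.
lags = 4
features = np.column_stack(
    [np.roll(cases, k) for k in range(1, lags + 1)]
    + [np.roll(rainfall, k) for k in range(1, lags + 1)]
)[lags:]
targets = cases[lags:]

# Train on the earlier weeks, evaluate on the most recent 20 weeks.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features[:-20], targets[:-20])
print("Held-out R^2:", round(model.score(features[-20:], targets[-20:]), 2))
```

Real systems add richer inputs, such as climate, mobility and historical outbreak records, but the underlying pattern of lagged features feeding a predictive model is the same.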
Similarly, researchers from the University of Southern California Viterbi School of Engineering have developed an algorithm capable of slowing the spread of communicable disease while also accounting for limited resources and population dynamics over time. In the study, the algorithm even proved more effective at reducing disease incidence than existing health outreach policies.
A multi-billion-dollar industry
The transformative power of AI in healthcare has piqued the interest of both the private and public sectors — triggering a spate of investment in the development of AI. Between 2011 and 2017, 121 health AI and machine-learning companies raised an estimated $2.7 billion (£2.23 billion) in 206 deals. In Silicon Valley, major players such as Google, Microsoft, and IBM continue to make significant investments in AI healthcare solutions, while the number of health-focused AI startups continues to steadily increase. According to McKinsey, big data and machine learning in medical research could generate up to US$100 billion (£82.48 billion) a year.
In May 2018, the then British Prime Minister Theresa May pledged to invest millions of pounds in a new AI strategy for early-stage cancer and chronic disease diagnosis, aiming to cut the number of deaths from prostate, ovarian, lung, and bowel cancer by 10% within 15 years. In August 2019, meanwhile, Health Secretary Matt Hancock announced a £250 million ($346 million) investment to make the NHS a world leader in AI technology.
On the back of these investment drives, several AI-based companies have collaborated with UK universities and hospitals in pioneering schemes to optimize medical research and healthcare provision, including the aforementioned initiative involving DeepMind and Moorfields Eye Hospital.
The challenges of AI integration in medical research
Of course, the adoption of artificial intelligence in clinical research is still in its infancy. While many AI and machine learning applications in medical research have been successfully implemented and continue to yield positive results, a series of obstacles still need to be overcome.
Privacy
In the age of big data, privacy is one of the most pressing issues facing any industry. Because medical data is highly personal and tightly controlled, MedTech startups often struggle to access the patient data they need to develop products or build business cases. Many hospitals and research institutions are also wary of cloud platforms and prefer to use their own servers in order to prevent data leaks.
Regulation
Getting AI technology approved by regulators can be challenging. In the European Economic Area (EEA), all algorithms intended for use in healthcare must obtain CE marking, a certification mark that indicates conformity with health, safety, and environmental protection standards. More specifically, the product has to meet the requirements of the EU Medical Device Regulation 2017/745 (MDR).
As this Towards Data Science guide to AI regulation highlights, product owners need to define the algorithm's 'intended use' carefully and clearly, a step that often raises questions about how much transparency is required.
Transparency
Transparency is one of the biggest issues facing AI developers in healthcare, both in terms of stringent regulation and in terms of front-line health provision. Because some algorithms recommend certain procedures and treatments to doctors, it's imperative that doctors have enough transparency to understand and explain why a recommendation was made.
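One simple, widely used way to surface that information is to report which inputs most influenced a model's output. The sketch below (a generic illustration with synthetic data and hypothetical feature names, not a regulatory-grade explanation method) uses permutation importance from scikit-learn to rank features by how much the model relies on them.

```python
# Toy transparency sketch (illustrative only; not a regulatory-grade
# explanation method): rank features by permutation importance so a clinician
# can see which inputs the model leaned on when making a recommendation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for patient features and a recommendation label.
X, y = make_classification(n_samples=1000, n_features=6,
                           n_informative=3, random_state=0)
feature_names = [f"feature_{i}" for i in range(6)]  # hypothetical names

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Features whose shuffling hurts performance most are listed first.
for i in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")
```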
For more on transparency and algorithmic decision-making, the research paper Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR offers insights from a lawyer, a computer scientist and an ethicist.
Culture
For front-line health workers such as doctors, particularly those with limited exposure to AI, receiving important medical suggestions from an automated system may feel like an insult to their years of training and accumulated knowledge. To overcome any discord between physician and technology, it’s vital that AI adoption is framed as something that empowers, aids and complements the expertise of human staff. Introducing AI literacy into medical curricula can also help to reduce scepticism or hesitancy.
Recruitment
As new, ever-improving technologies are adopted by healthcare and research institutions, ensuring that people have the necessary technical expertise and competence to deal with large medical datasets is paramount. The life sciences sector, therefore, needs to build a robust skills pipeline to find the talent to meet these evolving challenges head-on.
Conclusion
In the near future, AI will have an increasingly prominent hand in various aspects of clinical research, from drug discovery and diagnosis to treatment. Other areas set to benefit from further AI integration include medical imaging, echocardiography, neurological screening, and surgery. With such increasing automation across practically every area of research, many in the healthcare sector have voiced concerns that AI will one day take their jobs.
But is such caution warranted? Not according to Yoshua Bengio, one of the pioneers of deep learning. In an interview with New Scientist, Bengio said, "so many overestimate the intelligence of these systems. AIs are really dumb. They don't understand the world. They don't understand humans. They don't even have the intelligence of a six-month-old."
Indeed, despite concerns that AI will replace doctors and clinicians, this has not been the case. The opposite has happened: the accurate insights provided by AI are helping to free up doctors' valuable time. Besides, before they can be introduced to medical research, new technologies need to integrate with existing workflows and gain regulatory approval. Instead of ushering in a dystopian age of machines dominating humans, early applications of AI have complemented and optimized the vital work of human researchers and clinicians.
For more industry insights into the ever-changing world of the life sciences sector, stay tuned to the SRG Blog.