Discussions seem to be popping up everywhere, from industry events to articles in mainstream business magazines, about the future of medicine and whether artificial intelligence (AI) and machine learning will displace the work being done by researchers and doctors. A recent interview in The New Yorker even suggested that radiologist training should be halted, since deep learning will be doing a better job than professionals within the next five years. While it’s true that artificial intelligence and computer-based algorithms are making their way into both the lab and clinical practice, the adoption of these new technologies will not replace the work of the researchers themselves. On the contrary: it will enable them to become more effective than ever before.

Big data is getting bigger by the minute. Preclinical researchers are focused on developing new hypotheses and ideas that can eventually translate into testing and deployment, which includes gathering an enormous amount of data, understanding and connecting the dots of different pathways, and coming to meaningful conclusions. There is also increasing demand, particularly in oncology, for better prognostic tests and companion diagnostics to inform treatment. Researchers are thus tasked with analyzing overwhelming amounts of data to identify biomarkers and develop robust assays. This is essentially like searching for a needle in a haystack, and is an incredibly time-intensive and challenging process.

Simply put, today’s machines are capable of crunching vast amounts of data and identifying patterns that humans cannot. AI and machine learning thus provide significant opportunities for life sciences research. Essentially, if you take enormous computing power and feed it tremendous amounts of data (e.g., from published research in scientific journals, patient records or other data sets), you get an artificial intelligence network that researchers can interact with in their daily work in a useful way. Researchers can use the network for data testing, plausibility assessments or to come up with new pathway interactions. They can detect biomarkers faster – those needles in the haystack – because the AI network makes it possible to understand what they look like and where to look for them.
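To make the "needle in the haystack" idea concrete, here is a minimal, self-contained sketch in Python. It uses synthetic data and a hypothetical marker index (nothing here refers to a real assay or product): many candidate markers are measured per sample, only one truly differs between cases and controls, and a simple group-separation score surfaces it from the noise.

```python
import random

# Illustrative sketch with synthetic data: find the one informative
# "biomarker" among many noisy candidates by scoring group separation.
random.seed(0)

N_MARKERS = 200     # candidate biomarkers measured per sample
INFORMATIVE = 42    # hypothetical index of the one marker that truly differs

def make_sample(is_case):
    """Simulate one sample: Gaussian noise everywhere, plus a shift on
    the informative marker for cases."""
    values = [random.gauss(0.0, 1.0) for _ in range(N_MARKERS)]
    if is_case:
        values[INFORMATIVE] += 2.0
    return values

cases = [make_sample(True) for _ in range(50)]
controls = [make_sample(False) for _ in range(50)]

def separation_score(idx):
    """Absolute difference of group means for marker idx (a crude score;
    real pipelines would use proper statistics and multiple-testing control)."""
    mean_case = sum(s[idx] for s in cases) / len(cases)
    mean_ctrl = sum(s[idx] for s in controls) / len(controls)
    return abs(mean_case - mean_ctrl)

# Rank every candidate and surface the strongest signal.
best = max(range(N_MARKERS), key=separation_score)
print(best)
```

The point of the sketch is the workflow, not the statistics: a machine can score every candidate exhaustively and rank them in seconds, which is exactly the kind of search that is prohibitively tedious for a human to do by hand.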

AI also supports quality control, enabling researchers to better determine if a discovery is a rare event or whether it has real meaning and can be validated and reproduced. For example, in preclinical drug or test development, AI allows the researcher or the community to bring different information or cohorts together for big data sharing and collaboration, and more effective data mining. AI and machine learning programs can also train computers to decrease their error rates over time, based on information gathered and mistakes made, while human error rates essentially stay the same.

For years the industry has been accumulating immense amounts of data, and while advances in cloud technologies have provided a way to store all of that data, with AI and machine learning we now finally have a way to effectively utilize all the data that has been, and is being, collected.

Machines need people

Despite all of the potential upsides, there is a lot of anxiety that AI will make jobs across the industry obsolete. However, AI is still in the early stages of development, and in many ways it still can’t match, and may never match, the intuitive intelligence, also known as experience, of researchers.

While AI enables better mining of big data, it still requires humans to set up, train and use the system in an intelligent way. First, the curation of data that are fed into the system should still be guided by researchers, as should the generation of new hypotheses. AI programs can certainly run without any hypotheses and randomly search for data correlations, and this can provide new opportunities. However, the more hypotheses that are entered (for example, whether a drug should have a high safety profile, kill cancer cells, or promote cell growth in regenerative medicine), the more robust the output will be, since the researchers are coming up with the rationale and guiding the machine to answer the right questions.

To add to this, the same New Yorker article mentioned above discussed how machines can help answer research questions quickly, efficiently and accurately, but they cannot explain why something happens, which is perhaps the most important part of research. The explanation of why is what powers medical advances, and human experts are needed for that level of analysis.

AI is also prone to overfitting, which can result in inconclusive or wrong information. For example, last year Nature wrote about AI algorithms that can be easy to fool, such as the Google Deep Dream program that was designed to recognize birds. The program did recognize birds, but would also see any picture of a flower, face or building and incorrectly interpret it as a kind of bird. Computer-aided detection programs have no built-in mechanism to learn, and machine-learning programs need to continuously learn in order to be useful, so researchers need to be cautious about the information the programs provide. They need to take the same approach they would to reading something on the Internet: just because something’s online doesn’t mean it’s true. AI can provide a lot more information than human interpretation alone, but a critical view is still required of anything suggested by a machine learning system.
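Overfitting can be shown in miniature with a toy sketch in Python (synthetic data only; the "models" are deliberately simplistic stand-ins, not real learning algorithms). A memorizing model scores perfectly on its own noisy training data yet passes that noise along to new cases, while a simple rule generalizes:

```python
import random

# Toy demonstration of overfitting with synthetic data.
random.seed(1)

# Hypothetical ground truth: values above 0.5 are "positive".
def true_label(x):
    return x > 0.5

# Training set of 20 points, with roughly 10% of labels flipped (noise).
train = []
for _ in range(20):
    x = random.random()
    noisy = random.random() < 0.1
    train.append((x, (not true_label(x)) if noisy else true_label(x)))

# Held-out test set with clean labels.
test = [(x, true_label(x)) for x in (random.random() for _ in range(1000))]

def memorizer(x):
    """Overfit model: memorize every training point and copy the label of
    the nearest one, noise and all."""
    nearest = min(train, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

def simple(x):
    """Simple model: a single fixed threshold."""
    return x > 0.5

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print(accuracy(memorizer, train))  # 1.0 by construction: it memorized everything
```

The memorizer cannot do worse than perfect on the data it stored, which is exactly why training performance alone proves nothing; on the fresh test set it inherits the training noise, while the simple rule holds up. This is the quantitative version of the article's caution: impressive-looking results from a learning system still demand a critical, held-out check.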

The good news is that, while AI and machine learning are likely to become an integrated part of life science research, they won’t require a major shift in the way researchers work. These programs will likely come in the form of an interface running in the background of researchers’ current systems, similar to a computer operating system — users will be aware that it’s there, but we won’t all need to be experts in it. It will be used in a very natural, unobtrusive way, becoming as ingrained in daily work as Microsoft Excel or any other software tool.

If big data and machine learning reach the level of maturity for routine application, researchers will have more time and be able to obtain answers to their research questions more quickly. AI will thus facilitate faster, more informed and more accurate research than ever before, but it will complement and assist, not replace, researchers. While it will change the way research is conducted, AI will make good researchers even better.