It may not feel like it less than a year after the public launch of OpenAI’s ChatGPT and the non-stop chatter around which jobs will be replaced by machines in the near future – but science is slowing down.
At least, that’s the conclusion reached by a paper published in the journal Nature early this year.
The paper, a collaboration between the social scientists Michael Park, Erin Leahey and Russell Funk, analysed 60 years of data on disruptive papers and patents. It counter-intuitively concluded that the pace of scientific and technological breakthroughs has decelerated over that period.
It is less likely now than in the mid-twentieth century that any one paper or patent will be ‘highly disruptive’ — defined as changing the direction of an entire scientific field.
The dominant theory as to why scientific innovation might be slowing down is that as more has been discovered, what remains is further out of reach. In other words, we have already figured out a lot of the “easy” stuff, and what is left is increasingly challenging.
There is a reasonable argument that this is simply because the foundational principles of more disciplines are now well established, meaning it is natural that further progress is more incremental. And that as long as the incremental gains are regular, the pace of progress can remain significant.
It is also the case that while the growing number of papers and patents published and registered over the period of analysis meant the percentage recognised as highly disruptive dropped, the absolute number remained constant.
That should mean the absolute pace of scientific innovation also remains constant. But it could be about to be supercharged by one of its own most recent products: AI based on large language models (LLMs) like ChatGPT.
How is AI expected to power a new leap forward in scientific innovation?
There are two main ways AI is expected to make an increasing impact on the R&D that leads to disruptive scientific discoveries:
- Exponentially multiplying the speed and quantity of the research that human scientists would have done anyway.
- Powering new approaches to scientific discovery.
AI automation will transform inefficient manual R&D
A big part of achieving scientific breakthroughs is tedious, repetitive lab work – for example, recreating almost the same conditions hundreds or thousands of times, with each run representing a tiny variation on the others.
These processes often require the painstaking manual labour of tens to hundreds of research assistants over significant stretches of time, which can become very expensive.
There are other research projects that would not rely on groundbreaking new techniques or science to uncover new, potentially transformative knowledge – but that haven’t been practically possible until now due to their prohibitive scale.
Ross King, an AI researcher at the University of Cambridge, created Adam, the first machine to autonomously discover novel scientific knowledge, when in 2009 it ran experiments on the relationship between genes and enzymes in yeast metabolism.
Adam was succeeded by Eve, another machine with, naturally, more sophisticated software. Eve is able to plan, perform and analyse drug-discovery experiments. Eve’s machine learning uses mathematical models to predict the biological effects of chemical structures. Her robot arms then allow hypotheses generated by the AI to be tested.
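To get a feel for the general technique Eve’s machine learning represents, here is a minimal sketch: learn a mapping from chemical descriptors to measured activity on compounds that have already been assayed, then rank untested compounds by predicted activity. The descriptors, data and model choice below are invented placeholders, not Eve’s actual system.

```python
# Minimal sketch of structure-to-activity screening, assuming toy
# descriptor data. This illustrates the general technique, not Eve's
# actual model or data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Each row: simple numeric descriptors of a compound
# (e.g. molecular weight, logP, polar surface area).
X_tested = rng.random((200, 3))
y_tested = rng.random(200)  # measured activity from past assays

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tested, y_tested)

# Score an untested compound library and pick the most promising
# candidates to send to the robot arms for the next round of assays.
X_untested = rng.random((1000, 3))
scores = model.predict(X_untested)
top_candidates = np.argsort(scores)[::-1][:10]
print("Compounds to assay next:", top_candidates)
```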
Scientific American described Eve back in 2015, when the system discovered that triclosan, an antimicrobial compound used in toothpaste, can inhibit an essential mechanism in malaria-causing parasites:
“Its computer server controls two robot arms that dance amidst equipment for dispensing liquids into plastic plates containing large numbers of wells. The plates are used in screening tests for potentially useful drug compounds.”
Both AI and robotics have advanced since 2015, making robot scientists increasingly viable as a means for research labs to multiply the number of experiments they can carry out.
Over the coming years, lab automation powered by AI and robotics will become standard, at least for well-funded labs. That will exponentially increase the amount of data analysis and experiment-based science that can be done. That increase in sheer quantity alone is expected to re-accelerate the rate at which ‘disruptive’ new breakthroughs are achieved.
A recent article by The Economist explains:
“AI-driven systems could help by doing laboratory work more quickly, cheaply and accurately than humans. Unlike people, robots can work around the clock. And just as computers and robots have enabled large-scale projects in astronomy (such as huge sky surveys, or automated searching for exoplanets), robot scientists could tackle big problems in systems biology, say, that would otherwise be impractical because of their scale.”
As Dr King points out:
“We don’t need radically new science to do that, we just need to do lots of science.”
AI will allow labs to do lots of science, quickly.
AI will unlock new approaches to R&D
The age of AI will not only allow scientists to command (comparatively) cheap armies of robot lab assistants and analysts, exponentially multiplying the quantity of research that can be done and the size of data sets analysed. It will also facilitate entirely new approaches to scientific discovery and innovation.
Literature-based discovery (LBD) is one obvious use case for the new generation of large language models represented by ChatGPT. LBD systems use AI to surface previously unnoticed connections and correlations between findings reported in unrelated research papers and other scientific literature.
The first LBD systems were developed as far back as the 1980s, when Don Swanson of the University of Chicago built one and fed it the MEDLINE database of medical journals. It was able to connect two separate observations from unconnected papers: that fish oil reduces blood viscosity, and that the circulatory disorder Raynaud’s disease is related to blood viscosity.
That resulted in a hypothesis, subsequently verified, that fish oil might be a treatment for the condition.
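Swanson’s method is often summarised as the ‘ABC’ model: if term A co-occurs with term B in one body of literature, and B with C in another, but A and C are never mentioned together, then A-C is a candidate hidden connection. A toy Python sketch, with a handful of invented paper keyword sets standing in for MEDLINE:

```python
# Toy sketch of Swanson's ABC pattern for literature-based discovery.
# The "papers" are invented keyword sets, not real MEDLINE records.
from collections import defaultdict
from itertools import combinations

papers = [
    {"fish oil", "blood viscosity"},           # A-B literature
    {"blood viscosity", "raynaud's disease"},  # B-C literature
    {"fish oil", "platelet aggregation"},
]

# Build a symmetric co-occurrence map of terms mentioned together.
cooccur = defaultdict(set)
for terms in papers:
    for a, b in combinations(terms, 2):
        cooccur[a].add(b)
        cooccur[b].add(a)

def hidden_links(a):
    """Terms reachable from `a` through a shared intermediate term,
    but never directly co-mentioned with `a` in any paper."""
    direct = cooccur[a]
    candidates = set()
    for b in direct:
        candidates |= cooccur[b]
    return candidates - direct - {a}

print(hidden_links("fish oil"))  # -> {"raynaud's disease"}
```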
Today’s LLMs are hugely better at natural-language processing and can also be trained on huge bodies of scientific literature and data. Sophisticated AI models are already being used in areas such as materials science to identify, for example, materials their “chemical intuition” suggests should exhibit desired properties.
Researchers are also using AI models to identify potential blind spots in the direction and scope of research. One approach is for scientists to ask AI systems trained on scientific papers and data to suggest discoveries that are scientifically plausible but unlikely to be made in the coming years. Modern LLMs are getting good enough to spot research directions that are worth pursuing – but which aren’t being pursued.
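In practice, this can start as nothing more elaborate than a carefully worded prompt. A hedged illustration using the OpenAI Python client as one possible interface – the model name and prompt wording below are assumptions for illustration, not a documented research workflow:

```python
# Illustrative prompt for surfacing plausible-but-unpursued research
# directions. Model name and wording are assumptions, not a recipe.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Based on the published literature on malaria drug discovery, "
    "list three hypotheses that are scientifically plausible given "
    "existing results but that no research group appears to be pursuing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```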
Another direction being explored is for AI to suggest collaborations between scientists who do not know of each other’s work. LLMs can spot potentially complementary research, which might sit in a different field or discipline but contains common threads.
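One plausible mechanism is sketched below: embed paper abstracts as vectors and flag high-similarity pairs whose authors sit in different disciplines. This assumes the open-source sentence-transformers library; the model name, threshold and abstracts are illustrative choices, not a specific production system.

```python
# Sketch: suggest cross-discipline collaborations by comparing
# abstract embeddings. All names, texts and thresholds are invented.
from sentence_transformers import SentenceTransformer

abstracts = {
    ("Dr A", "oncology"): "Modelling cell population dynamics under drug pressure.",
    ("Dr B", "ecology"): "Population dynamics of competing species under environmental stress.",
    ("Dr C", "optics"): "Wavefront shaping techniques for deep-tissue imaging.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
keys = list(abstracts)
emb = model.encode([abstracts[k] for k in keys], normalize_embeddings=True)

# With unit-normalised embeddings, cosine similarity is a dot product.
sims = emb @ emb.T
THRESHOLD = 0.4  # illustrative cut-off
for i in range(len(keys)):
    for j in range(i + 1, len(keys)):
        if keys[i][1] != keys[j][1] and sims[i, j] > THRESHOLD:
            print(f"Suggest introducing {keys[i][0]} ({keys[i][1]}) "
                  f"to {keys[j][0]} ({keys[j][1]}): {sims[i, j]:.2f}")
```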
Challenges remain
While the use of AI in R&D is becoming increasingly commonplace, there are still bottlenecks to it realising its potential.
One of these bottlenecks is a lack of standardisation in the format in which scientific data is recorded and stored, which is often not machine-readable. Standardisation of lab equipment would also allow for greater automation and the use of robotics.
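To make the data problem concrete, here is what a machine-readable experiment record might look like. The schema and field names below are invented for illustration; real standardisation efforts would converge on community-agreed formats rather than ad-hoc fields like these.

```python
# Invented example of a structured, machine-readable assay record.
import json

record = {
    "experiment_id": "assay-0042",       # hypothetical identifier
    "compound": "triclosan",
    "concentration_um": 10.0,            # micromolar
    "readout": "growth_inhibition",
    "value": 0.87,
    "instrument": "plate-reader-3",      # hypothetical instrument ID
}

print(json.dumps(record, indent=2))
```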
Scientists across fields and disciplines will also have to gain more understanding of how AI tools and models can turbocharge their work and develop a culture of using the technology. Most initiatives to leverage AI in scientific research still come from AI researchers – who report resistance, sometimes even hostility, from peers as a common reaction to attempts at collaboration.
AI’s R&D potential is clear
The emergence of scientific journals led to an acceleration of scientific innovation and discovery by making it much easier for scientists to access information and build on each other’s work. Modern research laboratories made experimentation at scale a possibility.
AI, and specifically LLMs, means these two previous catalysts of scientific innovation can be both exponentially scaled and combined in new ways. If the pace of scientific discovery has slowed in recent years, it looks like we can expect it to move up a few AI-powered gears over the years ahead.