AI can be an engine for innovation, providing adaptable tools that accelerate progress across science and engineering. ELISE researchers are at the forefront of developing next-generation AI tools to support research and innovation, and applying those tools in critical domains.
Scientific progress is vital to develop the innovative solutions that policymakers need in response to today’s grand challenges in health, climate, environment, and more. The deployment of AI in science has already demonstrated its potential as a tool for accelerating discovery and innovation. Successes in areas like protein folding, climate modelling, and astrophysics signal the innovation that AI could help deliver, but are also the foothills of the larger transformation of science that AI could unlock.
The laboratory has long provided a physical hub for research. These physical spaces are increasingly digitised, integrating automated devices to support research. Computational tools are also becoming important enablers of scientific practice, allowing researchers to interrogate large datasets and extract novel insights. Extending these practices opens the possibility of new virtual laboratories that leverage AI to accelerate discovery by enhancing researchers’ ability to progress their ideas. This vision of AI-enabled innovation through virtual laboratories is the focus of work by Samuel Kaski, ELISE Coordinator and co-Director of the ELISE Robust Machine Learning Programme, and his teams at the Finnish Center for Artificial Intelligence and University of Manchester.
Achieving this vision requires AI that can facilitate the scientific method. AI techniques already underpin analytical tools such as simulation, emulation, and digital twins that are embedded in today’s scientific practices. More sophisticated AI methods could increase the power of these simulations, by combining different data sources, integrating pre-existing domain knowledge to increase the scientific relevance of results, and returning information to human users as actionable, explainable insights.
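To make the emulation idea concrete, the sketch below shows one common pattern: a Gaussian-process emulator fitted to a handful of expensive simulator runs, with a prior-mean hook through which pre-existing domain knowledge can enter, returning predictions together with uncertainty estimates that a researcher can act on. This is a minimal illustration, not any specific ELISE tool; all function names and parameters are invented for the example.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_emulator(x_train, y_train, x_test, noise=1e-6,
                prior_mean=lambda x: np.zeros_like(x)):
    """Predict the mean and standard deviation of an expensive simulator
    at x_test, using a Gaussian process fitted to a few simulator runs.
    The prior_mean function is where domain knowledge can be injected."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    K_ss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train - prior_mean(x_train))
    mean = prior_mean(x_test) + K_s @ alpha
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

# Stand-in "simulator" (just sin), emulated from six runs.
x_train = np.linspace(0, np.pi, 6)
y_train = np.sin(x_train)
mean, std = gp_emulator(x_train, y_train, np.array([np.pi / 4]))
```

The uncertainty estimate (`std`) is what makes the emulator's output actionable: it tells the researcher where the surrogate can be trusted and where more simulator runs are needed.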
Domains such as structural engineering, drug discovery, and graphic design have adopted AI methods to improve decision-making in design. Further progress could extend such tools, creating AI assistants that function as a bridge between virtual analytical environments and the physical lab or research processes. These AI assistants would collaborate with human researchers, providing insights or recommendations that help them accelerate their research. They would combine the functionality of AI for science tools with a human-in-the-loop approach that helps researchers better leverage these analytical capabilities.
To create these AI research assistants, AI agents need to be able to navigate the uncertainty associated with working alongside human researchers. Because human researchers often operate in contexts where desired outcomes, research questions, and current knowledge are in flux, virtual AI assistants must be designed to take their cue from user behaviours, learning from their users how best to help tackle a research question. They should be able to respond appropriately to problems they have never encountered before, identify relevant actions when the desired end-result is unspecified or unclear, and generalise these behaviours from one type of research challenge to another. Recent progress in AI research points to a pathway for developing these assistants.
Researchers and designers are often tasked with creating a solution to a challenge they have not encountered before.
They typically know how to approach a task in principle, but lack a specific solution. Many of today’s AI tools are developed by training on large datasets containing many examples of how to solve a task, but advances in few-shot and zero-shot learning are creating AI agents that can learn to perform a task from only a handful of examples, or without having seen any data about the task at all. A prototype, published in 2022, leveraged these methods to create an AI assistant that could support a human user in making a series of interrelated decisions, demonstrating the effectiveness of combining AI advice and human decision-making.
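The core intuition behind few-shot learning can be shown in a few lines. The sketch below is a nearest-prototype classifier: each class is summarised by the mean of its few labelled examples, and a new instance is assigned to the closest class summary. This is an illustrative toy (in the spirit of prototypical few-shot methods), not the method used in the 2022 prototype; the `embed` hook stands in for a learned embedding.

```python
import numpy as np

def few_shot_classify(support_x, support_y, query, embed=lambda x: x):
    """Nearest-prototype few-shot classification: each class is summarised
    by the mean embedding of its few support examples; a query is assigned
    to the class whose prototype is closest."""
    classes = sorted(set(support_y))
    protos = np.array([embed(support_x[np.array(support_y) == c]).mean(axis=0)
                       for c in classes])
    dists = np.linalg.norm(protos - embed(query), axis=1)
    return classes[int(np.argmin(dists))]

# Two classes, only three examples each (2-D features).
support_x = np.array([[0.0, 0.1], [0.1, 0.0], [0.0, 0.0],
                      [1.0, 1.1], [0.9, 1.0], [1.0, 1.0]])
support_y = ["A", "A", "A", "B", "B", "B"]
print(few_shot_classify(support_x, support_y, np.array([0.9, 0.9])))  # prints B
```

With a good embedding, the same three-examples-per-class recipe transfers across tasks, which is what lets an assistant help with problems it has barely seen before.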
The goals or focus of research projects often change as new knowledge emerges, new problems arise, or scientific interests shift.
To keep pace with these changes – without disrupting research activity by retraining an AI system – AI assistants need to be capable of identifying user goals even when they are not clearly specified, and of adapting to those goals as they change. This implies a form of social intelligence through which AI agents communicate with their users to better understand their intent and interests, and it calls for a type of inverse reinforcement learning through which AI agents observe a user’s activities and use those observations to infer what the user is trying to achieve. A new framework for generative user models creates a basis for such agents, providing a mechanism to infer a user’s likely goals from their behaviour and to plan how best to assist them in response.
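A minimal sketch of this goal-inference idea follows. It assumes a simple generative user model of the kind used in inverse reinforcement learning: the user is modelled as choosing actions with probability proportional to `exp(beta * reward(action, goal))`, and observing their actions lets the assistant maintain a Bayesian posterior over which goal they are pursuing. This is a toy illustration of the principle, not the published framework; all names and the reward function are invented for the example.

```python
import numpy as np

def infer_goal(observed, action_space, goals, reward, beta=2.0):
    """Bayesian goal inference with a generative user model: the user is
    assumed to pick actions with probability proportional to
    exp(beta * reward(action, goal)).  Each observed action updates a
    posterior over the candidate goals."""
    log_post = np.zeros(len(goals))               # uniform prior over goals
    for a in observed:
        for g, goal in enumerate(goals):
            utilities = np.array([beta * reward(b, goal) for b in action_space])
            # log-likelihood of the observed action under this goal
            log_post[g] += beta * reward(a, goal) - np.log(np.exp(utilities).sum())
    post = np.exp(log_post - log_post.max())      # stabilise, then normalise
    return post / post.sum()

# Toy example: the user steps left (-1) or right (+1); the candidate goals
# are locations -3 and +3, and stepping closer to the goal is rewarded.
reward = lambda action, goal: -abs(goal - action)
posterior = infer_goal(observed=[+1, +1, +1], action_space=[-1, +1],
                       goals=[-3, +3], reward=reward)
# After three rightward steps, the posterior strongly favours goal +3.
```

Because the posterior is recomputed from observations rather than baked in at training time, the assistant can track a goal that shifts mid-project without being retrained.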
For many researchers or designers, human creativity and autonomy are important aspects of their work.
These priorities can be in tension with approaches to AI development that focus on automating (currently human-performed) tasks. To respond to this desire for an active role in the design process, AI systems need to be able to work with human users as active collaborators, keeping the designer in the loop and providing interfaces that empower the designer in decision-making.
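One simple pattern for keeping the designer in the loop is propose-choose-refine: the assistant suggests a few candidate designs, the human picks one, and the assistant only re-centres and narrows its next suggestions around that choice. The sketch below is a deliberately minimal, hypothetical illustration of this interaction pattern, not a description of any ELISE interface; the one-dimensional "design" and the shrinking search radius are invented for the example.

```python
import random

def propose(center, spread, k=3, rng=random.Random(0)):
    """Assistant suggests k candidate designs around its current focus."""
    return [center + rng.uniform(-spread, spread) for _ in range(k)]

def human_in_the_loop(choose, center=0.0, spread=4.0, rounds=5):
    """Each round the assistant proposes and the human chooses.  The AI
    never decides: it only re-centres and narrows its suggestions around
    the human's choice, so creative control stays with the designer."""
    for _ in range(rounds):
        candidates = propose(center, spread)
        center = choose(candidates)     # the decision stays with the designer
        spread *= 0.5                   # focus the next round of suggestions
    return center

# Stand-in for the designer: always picks the candidate closest to 2.0.
designer = lambda cands: min(cands, key=lambda c: abs(c - 2.0))
final = human_in_the_loop(designer)
```

The division of labour is the point: the AI narrows the search space, while every actual design decision is made by the human.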
Building on the proof of concept demonstrated by the methodological advances described above, these ideas are being trialled in practice. 2022 saw the first pilot virtual laboratories established in drug design, sustainable mobility, and atmospheric science. Further progress will benefit from broader-purpose AI tools that can be developed in one domain and applied in others, alongside further demonstrator projects that show the value of these methods in practice.