An evolving research-policy agenda

ELISE’s Strategic Research Agenda and trends in AI

ELISE’s 2021 Strategic Research Agenda set out the research challenges that need to be addressed to strengthen the technical capabilities of AI; improve its performance in deployment; and align AI development with societal interests. This Agenda sought to bridge between the frontiers of technology development and the EU’s AI policy agendas, recognising that the success of those policy agendas would depend on Europe’s ability to pursue excellent research that both advances foundational AI technologies and applies those technologies to areas of critical social and scientific need.

The research themes and areas of research interest explored in 2021’s Strategic Research Agenda are considered in the following sections. As technical capabilities continue to shift, the intention here is not to provide a comprehensive review of the state of the art under each theme, but to convey a sense of key issues and how the field has progressed in recent years. To illustrate how these research topics relate to practical applications, a selection of use cases shows how ELISE industrial collaborators have deployed machine learning to enhance their work.

Over the two years since the publication of the initial ELISE Strategic Research Agenda, research, practice, and policy have changed at pace. Headline-grabbing advances in technologies such as Large Language Models have re-ignited debates about the opportunities and risks associated with AI, highlighting the potential for rapid advances in technical capabilities. Understandings of how to deploy both known and state-of-the-art technologies have continued to evolve. Legislative and regulatory proposals have grappled with the challenge of stewarding a technology that is dynamic, pervasive across sectors, and associated with both beneficial and harmful uses. From this shifting landscape emerge ten trends in technology and regulation that are shaping the development of AI today.

Progress in ELISE’s cross-cutting themes

The research pursued across the ELISE network is at the forefront of these trends. ELISE’s 14 Research Programmes continue to advance the capabilities of AI technologies – across theory, methods, and application – in turn creating new questions and research challenges. Progress can be seen in the shifting contours of research and practice across the five cross-cutting themes set out in the first ELISE Strategic Research Agenda.

Trustworthiness & Certification

The development of trustworthy AI that can be safely and effectively deployed to deliver real-world benefits continues to be a cornerstone of AI research and AI policy. Delivering the EU’s seven characteristics of trustworthy AI – human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability – requires theoretical, methodological, and practical advances in AI technologies. Current policy debates highlight the importance of bolstering these technical characteristics with standards, certifications, and guarantees that ensure trustworthiness is not only a principle for development, but is also delivered in practice. Both the technical capabilities to underpin trustworthy AI and the certifications that can prove whether an AI system is trustworthy have been areas of focus for ELISE research.

A variety of methods to deliver on each of the seven characteristics of trustworthy AI now exist, including methods for working with real-world data, explainability, robustness, privacy preservation, and bias-reduction. AI systems can perform well where they are trained on large datasets, where there is a defined task, and where there is a clear understanding of how to validate performance at that task. There remains, however, a gap between these controlled conditions and the real-world environments in which many AI systems are developed and deployed.

The size of the models underpinning Large Language Models and their equivalents poses a further challenge to the implementation of methods for trustworthiness by design. Bridging the gap between controlled conditions and real-world deployment requires progress in AI robustness, and new understandings of verification and validation. This in turn requires close collaboration between technologists, practitioners, and policymakers to identify appropriate benchmarks, certifications, and validation procedures that connect technical capabilities to effective risk evaluation in deployment.

Certification of AI systems

FuVex: Automating power grid inspection with AI and drones to prevent wildfires

Areas of Research Interest

Certifying or guaranteeing the performance of AI systems

Verifying and validating machine learning

Improving the robustness of machine learning in deployment

Approach

  • Advance the technical sophistication of core machine learning methods, including deep learning, computer vision, natural language understanding and generation, and semantic, symbolic, and interpretable machine learning.
  • Improve understanding of the principles and techniques that can make machine learning robust, from theory to their application in practice.
  • Create robotic systems that can interact intelligently with the world around them by combining robot learning approaches with machine learning methods, such as reinforcement learning.
  • Explore the role of causal modelling in bridging the gap between observational and interventional learning and understand the principles underlying interactive learning systems.
  • Design models that respond appropriately to situations that were not well-represented in their training data by accurately identifying instances of ‘domain shift’ and advance the use of transfer learning or AutoML techniques to address such scenarios.
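
The final point above names detection of ‘domain shift’ as a concrete capability. As a minimal illustration only, the sketch below flags features whose deployment-time distribution differs from the training distribution, using a per-feature two-sample Kolmogorov–Smirnov test; the data, feature count, and significance threshold are illustrative assumptions rather than part of the ELISE research programmes.

    # Minimal sketch: flag potential domain shift by comparing each feature's
    # deployment-time distribution with its training distribution using a
    # two-sample Kolmogorov-Smirnov test. The threshold is an illustrative choice.
    import numpy as np
    from scipy.stats import ks_2samp

    def detect_domain_shift(train_data, live_data, alpha=0.01):
        """Return indices of features whose live distribution differs
        significantly from the training distribution."""
        shifted = []
        for j in range(train_data.shape[1]):
            result = ks_2samp(train_data[:, j], live_data[:, j])
            if result.pvalue < alpha:
                shifted.append(j)
        return shifted

    # Toy example: one feature drifts upwards after deployment.
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=(1000, 3))
    live = rng.normal(0.0, 1.0, size=(200, 3))
    live[:, 1] += 0.8                        # simulated shift in feature 1
    print(detect_domain_shift(train, live))  # expected to report index 1

In practice, such a check would sit alongside the transfer learning or AutoML techniques mentioned above, which determine how a model should respond once a shift has been flagged.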

Security & Privacy

AI has been both a catalyst for fresh concerns about security and privacy and a source of new tools to tackle these concerns. ELISE’s 2021 Strategic Research Agenda highlighted the importance of further progress in the development of principled methods for achieving privacy and security by design, and of integrating user needs and societal expectations around the safety and reliability of AI systems into the research and practice that underpins their deployment. Research progress has yielded a collection of methods for enhancing the security of AI systems and ensuring they respect fundamental rights in relation to privacy. These include privacy-enhancing technologies, such as differential privacy and federated learning, and advanced methods for attack detection and mitigation, such as adversarial learning.
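
Differential privacy, one of the privacy-enhancing technologies named above, can be illustrated with a textbook sketch: the Laplace mechanism for releasing the mean of bounded values with calibrated noise. This is a minimal, generic example for orientation, not an ELISE implementation, and the epsilon value is arbitrary.

    # Minimal sketch of the Laplace mechanism from differential privacy:
    # release the mean of values known to lie in [0, 1] with calibrated noise.
    import numpy as np

    def private_mean(values, epsilon):
        """Epsilon-differentially-private mean of values clipped to [0, 1].
        The sensitivity of the mean over n bounded records is 1/n, so Laplace
        noise with scale 1/(n * epsilon) satisfies epsilon-DP."""
        values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
        sensitivity = 1.0 / len(values)
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return values.mean() + noise

    # Example: the released statistic stays close to the true mean.
    data = np.random.default_rng(1).uniform(0, 1, size=5000)
    print(data.mean(), private_mean(data, epsilon=0.5))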

With new threat vectors emerging and amidst continuing concerns about data privacy – for example, as users submit new data types to Large Language Models – the challenge for the next wave of AI progress is to make these state-of-the-art methods workable in practice and to establish best practices in data stewardship. This requires further work to test that the assumptions on which theoretical developments are based are valid in practice; to resolve design trade-offs that affect overall system performance, such as the ability to deploy advanced methods in differential privacy and cryptography in systems that are computationally efficient; and to better understand what types of security threat are likely to emerge from advances in AI. Across these areas, continued dialogue is needed to understand society’s expectations in relation to data use, privacy, and security.

Synamic Technologies: Using AI to automate cyber security

Areas of Research Interest

Privacy and security by design

Working with practitioners to translate state-of-the-art technology to practice

Better understanding future threats

Approach

  • Advance the theoretical underpinnings and algorithmic capabilities of machine learning, creating more reliable, efficient and usable machine learning systems.
  • Design novel machine learning methods, including methods for differential privacy and adversarial machine learning, to help manage concerns about security and privacy by design, and resolve trade-offs in their implementation.
  • Develop principled methods that demonstrate machine learning systems are robust in deployment (where distributions may shift), and that are robust to adverse circumstances and/or adversarial manipulations, making use of software verification, machine learning verification methods, and causal modelling to help secure these advances.
  • Work with practitioner communities – for example in healthcare – to help develop machine learning systems that manage concerns about security and privacy in practice.

Explainability & Transparency

Concerns about AI as a ‘black box’ in decision-making – with technical and non-technical communities alike unable to understand its workings – have received widespread attention. In the research and policy debates that have followed, competing ideas collide about what features AI needs in order to be trustworthy.

The term ‘explainable AI’ is used variously to signal:

  • Whether the use of data or the workings of a model are transparent or interpretable by either system designers or users;
  • Whether it is possible to explain how or why a model or AI system has produced a particular output;
  • Whether an output – or an AI system itself – is reasonable or justifiable as part of a decision-making process;
  • Whether AI is being used in ways that facilitate or diminish accountability for the results of a decision-making process.

Attempts to respond to these questions through innovations in technology and governance have resulted in explainability becoming a popular research topic. Many methods for explainable AI have been developed, in particular methods for increasing model interpretability and for creating counterfactual explanations. These innovations are valuable. However, they become more difficult to deploy in complex systems, and they do not necessarily align with user expectations around explainability, or how humans might approach the task of data interpretation.
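
Counterfactual explanations, mentioned above, can be illustrated in their simplest form: search for a small change to an input that flips a classifier’s decision, and present that change as the explanation (‘had this feature been higher, the outcome would have differed’). The model, data, and single-feature greedy search below are toy assumptions chosen for brevity; practical counterfactual methods optimise over all features under plausibility constraints.

    # Minimal sketch of a counterfactual explanation: find the smallest increase
    # to one feature that flips a classifier's prediction for a given instance.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy decision rule
    model = LogisticRegression().fit(X, y)

    def counterfactual(x, feature, step=0.05, max_steps=200):
        """Greedily increase one feature until the predicted class changes."""
        original = model.predict(x.reshape(1, -1))[0]
        candidate = x.copy()
        for _ in range(max_steps):
            candidate[feature] += step
            if model.predict(candidate.reshape(1, -1))[0] != original:
                return candidate   # the 'what would need to change' answer
        return None                # no counterfactual found in this direction

    x = np.array([-1.0, -0.2])     # an instance currently predicted as class 0
    print(counterfactual(x, feature=0))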

In response, further technical developments are needed to ground these methods in theory – both in terms of modelling and of human understanding – and to connect them to user needs. Aligned advances in fields such as causal AI could also help, by allowing users to interrogate the relationships or causal factors that produce a particular output or decision. The result should be explainable AI methods that function well at scale; that are human-centric, based on an understanding of what sort of explanation is needed, for whom, and in what context; and that deliver interpretability-by-design, by aligning user perspectives with technical functionality.

Siemens: Knowledge graphs for industrial applications of machine learning

Areas of Research Interest

Advancing explainable AI tools and methods

Grounding new methods in theory and practice

Supporting practitioners to implement explainable AI methods that meet stakeholder needs

Approach

  • Develop inherently (or ‘by design’) explainable machine learning methods, including deep learning, and approaches that increase the explainability of machine learning systems, through advances in surrogate modelling methods, visualisation tools, and approaches to encoding existing knowledge (a simple surrogate-modelling sketch follows this list).
  • Combine symbolic and data-driven AI methods to develop AI systems that are inherently explainable.
  • Foster collaborations at the interface of machine learning and human-computer interaction to understand how human and algorithmic decision-making interact.
  • Engage with policymakers and legal specialists to explore how machine learning system design can ensure that AI use aligns with the rule of law.
  • Engage with practitioners, users and affected communities to translate new methods to beneficial real-world practice.
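
As a minimal illustration of the surrogate modelling named in the first point of this list, the sketch below approximates a more complex ‘black box’ model with a small decision tree trained on the black box’s own predictions, so that the overall decision logic can be inspected. The models, data, and tree depth are toy assumptions, and a real surrogate analysis would also examine where the approximation is unfaithful.

    # Minimal sketch of a global surrogate explanation: mimic a complex model
    # with a shallow decision tree fitted to the complex model's predictions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(3)
    X = rng.normal(size=(1000, 4))
    y = ((X[:, 0] > 0) & (X[:, 2] < 0.5)).astype(int)

    black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # The surrogate is trained to mimic the black box, not the original labels.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())
    print(export_text(surrogate, feature_names=[f"x{i}" for i in range(4)]))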

Integration Into Existing Systems

Recognising that the difficulties of integrating innovative technologies within existing systems can cause promising AI tools to fail in practice, the field of MLOps focuses on the work required to deploy and maintain machine learning models in production. Effective deployment requires careful consideration of the technical, organisational, and regulatory aspects of AI integration, and best practices are emerging across each of these areas. As technologies like Large Language Models promise to make AI tools more accessible to organisations across sectors, the need for effective integration of AI into existing systems – taking into account concerns about safety, explainability, security, and other aspects of trustworthy AI – will become more pressing.

Research in robustness has created methods and approaches that seek to increase the safety and reliability of AI in deployment and that are better able to respond to real-world challenges. These challenges often fall outside the scope of the data on which a model was trained, and require the ability to adapt to different domains, to detect and respond to dataset shift, or to transfer learned knowledge between domains. A collection of approaches to increasing robustness has been developed, and these now need translating into practice. A continuing challenge is how to scale these methods across complex AI systems comprised of interacting components that mix automated and human elements, taking into account human needs and maintaining overall system functioning. AutoAI offers a route to enhancing the safe and effective deployment of such AI systems through AI-enabled performance monitoring and management.
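
The AI-enabled performance monitoring mentioned above can be sketched very simply: track a deployed model’s accuracy over a rolling window of recent labelled feedback and raise an alert when it falls below an agreed bound. The window size and threshold below are arbitrary illustrative values, not recommended settings, and production monitoring would typically also track input drift, latency, and fairness metrics.

    # Minimal sketch of deployment monitoring: rolling accuracy on recent
    # labelled feedback, with an alert when it drops below a threshold.
    from collections import deque

    class RollingAccuracyMonitor:
        def __init__(self, window=200, threshold=0.9):
            self.outcomes = deque(maxlen=window)
            self.threshold = threshold

        def record(self, prediction, actual):
            """Log one prediction/ground-truth pair; return an alert or None."""
            self.outcomes.append(prediction == actual)
            if len(self.outcomes) == self.outcomes.maxlen:
                accuracy = sum(self.outcomes) / len(self.outcomes)
                if accuracy < self.threshold:
                    return f"ALERT: rolling accuracy {accuracy:.2f} below {self.threshold}"
            return None

    # Usage: call monitor.record(prediction, truth) as feedback arrives.
    monitor = RollingAccuracyMonitor(window=100, threshold=0.85)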

Alongside these technical considerations, understanding of the organisational and regulatory elements of AI integration is also evolving. Organisations deploying AI need to consider the skills needed by those working alongside AI systems, the experiences of users affected by those systems, and the regulatory requirements associated with AI in their sector. With the field developing fast, dynamic ways of responding to issues in deployment are needed, allowing organisations to trial new technologies without causing harm. Proposals for controlled testing environments such as regulatory sandboxes have already been created as part of current European policy developments and offer a way of stewarding new AI tools into deployment.

Saidot: Helping start-ups master AI governance

Areas of Research Interest

Improving performance in deployment through more robust AI methods and AI-enhanced monitoring

Connecting research and practice

Designing effective simulators and emulators

Understanding interactions with human users

Combining data-driven and structural insights

Increasing data availability

Approach

  • Design simulators and emulators that can help explore the consequences of different interventions or model designs, and that can extract insights from the analysis of complex systems, such as those found in earth sciences.
  • Integrate emerging methods for ensuring the robustness of machine learning systems into real-world use cases.
  • Advance methods for embedding knowledge about the physical world in the design of machine learning systems.
  • Develop strategies for testing methods in practice or sandboxing new approaches.
  • Develop new learning strategies to operate in low data-resource environments, advancing research in areas such as one- or few-shot learning (the ability to learn from a small number of data points or examples); transfer learning (using knowledge learned from one task as the basis for performing another); interactive learning (designing agents that learn through their interactions with their environment); reinforcement learning; and the study of the intelligence of living systems (for example, of the role social reasoning plays in influencing decision-making).
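
The final point above names several low-data learning strategies. As a deliberately simple sketch of the idea behind transfer learning, the example below reuses a representation learned from plentiful source-domain data (here just a PCA projection, standing in for a pretrained feature extractor) so that a classifier for a related target task can be trained from only a handful of labelled examples. The datasets, dimensions, and choice of PCA are toy assumptions.

    # Toy sketch of representation transfer for a low-data target task:
    # learn a projection from abundant source data, then fit a classifier
    # on a few labelled target examples in that learned space.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)

    # Abundant (unlabelled) source-domain data used to learn a representation.
    source = rng.normal(size=(5000, 20))
    representation = PCA(n_components=5).fit(source)

    # Few-shot target task: only 10 labelled examples are available.
    target_y = np.array([0, 1] * 5)
    target_X = rng.normal(size=(10, 20))
    target_X[:, 0] += 2.0 * target_y          # label-correlated feature

    classifier = LogisticRegression().fit(representation.transform(target_X), target_y)

    # New target-domain instances are classified via the transferred representation.
    new_X = rng.normal(size=(3, 20))
    print(classifier.predict(representation.transform(new_X)))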

Ethical & Societal Interests

AI is a disruptive technology, bringing with it both opportunities to enhance societal wellbeing and potential harms. As interest in the field of AI ethics continues to increase, progress in the technical capabilities of AI systems has created AI tools that can help reduce the risks of those harms emerging. The field of human-centric AI has driven research advances across a variety of areas of societal concern, including algorithmic fairness, explainability and transparency, privacy preservation, and the detection of AI-generated ‘deepfakes’.

The wider challenge associated with AI ethics is how to connect technology development to societal interests, leveraging AI’s capabilities to help tackle areas of critical need while embedding human interests and concerns in its development.

With technology moving at pace, new applications emerging across sectors, and understanding of the risks and benefits of technology use in flux, continued dialogue between research and practice is needed to influence technology development towards beneficial outcomes.

DeepMammo: Using AI to screen breast cancer

Areas of Research Interest

Advancing foundational research to create human-centric AI

Putting ethical AI principles into practice

Designing governance frameworks for trustworthy AI

Developing AI applications in areas of societal interest

Approach

  • Pursue research collaborations that create AI-enabled solutions to challenges in areas of social need, including healthcare and climate policy.
  • Create human-centric AI methods and tools, which can be deployed in alignment with fundamental rights or social expectations around privacy, transparency, safety, and fairness.
  • Advance the foundations and application of explainable AI methods.
  • Build collaborations with policymakers, legal experts, and social scientists to understand the ethical implications of advances in AI.
  • Bring together methods from quantum computing and machine learning to design more energy-efficient AI methods and hardware.

A responsive research agenda

Progress in AI has historically come in waves. Over the last ten years, this has yielded AI technologies that are capable of delivering impressive performance when trained on tightly defined tasks. A new wave of technical advances, based on a trend towards large models, is extending these capabilities, creating AI systems that are more general-purpose. These advances can deliver plausible results across a wider range of tasks than previously possible, and innovations in the coming years could extend their capabilities further.

Connecting the research agenda to recent progress in Large Language Models and generative AI

The challenge that faces the field is how to translate this performance into real-world benefits for individuals, organisations, and society. Rapid progress in trustworthiness, security and privacy, explainability, AI integration, and AI ethics has provided a constellation of theoretical, methodological, and operational tools that can help deliver safe and effective AI systems. Continued support to drive further advances in these areas – advancing the technical capabilities of AI, overcoming current limitations, and understanding what works in practice – can help translate this progress into economic and social benefits.

ELISE AI roadmap

ELISE is driving a new wave of research and development to deliver AI ‘made in Europe’. Together, its 14 research programmes create AI methods, techniques, and toolkits that are technically innovative, safe, and effective in deployment, while being aligned with social needs. By combining our research agenda with initiatives to attract top talent to Europe, train the next generation of AI researchers, and enhance local start-up and innovation networks, ELISE is creating a European AI ecosystem of excellence and trust.

AI technologies that are technically advanced

  • Methods and tools to analyse real-world, multi-modal data.
  • Strengthen core machine learning capabilities, through methodological and theoretical advances, such as techniques to bridge between data-driven and domain knowledge.
  • Interrogate workings of complex systems through advances in simulation, emulation, and causality.

AI technologies that are robust in deployment

  • AI that is robust under dynamic or uncertain conditions.
  • Human-centric tools that are effective as decision-support.
  • Methods to enhance explainability in decision-making.

AI technologies that align with societal interests

  • Techniques for trustworthy AI.
  • Deployed AI that is integrated into areas of critical need.
  • AI research and development that engages stakeholder perspectives.

  • Advance the science of artificial intelligence by better understanding the intelligent behaviour of living systems and how this emerges.
  • Strengthen the theoretical underpinnings and algorithmic capabilities of machine learning, creating more reliable, efficient and usable machine learning systems.
  • Design new, energy-efficient machine learning algorithms and hardware implementations, drawing from concepts in quantum physics and statistical physics to develop more powerful machine learning systems.
  • Build bridges between classical AI methods and machine learning to advance further progress in computer vision.
  • Explore the role of causal modelling as a bridge between observational and interventional learning, identifying the principles for interactive learning systems.
  • Push forward the foundations of multimodal learning systems and expand their application.
  • Improve the performance of deep learning systems.
  • Understand the principles for robustness in deployment and develop techniques for machine learning that reliably performs well.
  • Build systems for general-purpose natural language understanding and generation.
  • Improve core machine learning functions, for example through enhanced methods for deep learning, computer vision, natural language understanding and generation, and semantic, symbolic, and interpretable machine learning.
  • Create robotic systems that can interact intelligently with the world around them by combining robot learning approaches with machine learning methods, such as reinforcement learning; and information systems that can better understand human behaviour.
  • Create AI systems to support the delivery of effective public services, for example creating AI systems for healthcare that can monitor patient health, using complex datasets to develop decision-support systems and to foster breakthrough applications in healthcare and biomedicine.
  • Develop AI tools that can contribute to humanity’s response to the climate crisis, increasing understanding of climate extremes, changes to earth systems and potential areas for intervention.
  • Design novel machine learning algorithms that are better aligned with human needs and societal interests, for example taking into account concerns around fairness, privacy, accountability, transparency and autonomy.