The sophistication of the fake text, images, audio, and video that AI can generate is increasing; one consequence is a growing demand for AI-enabled tools to detect these fakes.

While manipulation of images or text is not a new phenomenon in the information environment, AI has increased both the sophistication and the availability of tools that can generate convincing fake images, audio, text, or video. These capabilities compound the challenges policymakers face in building a trustworthy online information environment, with implications for science and society. In response, technical, regulatory, and societal interventions are needed to enable better detection of fake content, prevent harm arising from such content, and support users in interrogating the trustworthiness of different information sources. AI must play a role in supporting these interventions, for example through services that help detect online fakes. The scale of distribution and ease of creation of fake media demand automated tools in response, much as in cybersecurity, where only automated, intelligent responses are rapid enough to counter automated security threats.