Large models are delivering impressive results and will continue to improve; the next wave of progress will come from combining different types of foundation models.

Impressive outputs from Large Language Models, which can generate convincingly human-like text in response to questions from users, have sparked new conversations about progress in AI and its implications for society. Progress in this area has been driven by the creation of very large models. Large foundation models are also being developed for other core AI functions, such as computer vision and image generation. These models potentially unlock broader applications of AI tools, but they also face a variety of deployability issues that are shared across AI technologies. Building on the successes of these models, further progress will come from integrating different types of foundation models, enabling broader problem-solving.

Large models also intrinsically suffer from limits in transparency: when very large datasets are used to create a system, it is almost impossible to manage, or even assess, bias, for example. The associated risks become increasingly clear as progress in these models continues.