Rapid progress in AI has been facilitated by the availability of well-curated data that can be used to train machine learning models. While highly effective, these data-intensive methods depend on access to such data, or synthetic equivalents, which may be difficult or undesirable to obtain in some domains, for example where privacy is an important concern. To overcome this limitation, researchers have developed a suite of approaches that allow AI systems to learn from less data. These include transfer learning, which bootstraps from models trained on similar datasets, and learning strategies such as zero- or one-shot learning, which generalise from a small number of data points. Introducing structure or domain knowledge into an AI system can drive further progress: encoding domain knowledge in the form of laws or principles, developing causal AI methods, or eliciting knowledge from expert users. The resulting systems would combine the ability to derive insights from data with pre-existing knowledge about the system and interactions with its users. These low-data approaches bring with them their own tensions: prior knowledge substituted for data can embed prejudice, and strong prior assumptions can cost the model precision on the problem at hand.
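As an illustration of how one-shot learning can generalise from a single data point per class, the sketch below classifies a query by its nearest class "prototype" in an embedding space, in the spirit of prototype-based few-shot methods. The embeddings here are hypothetical placeholders; in practice they would come from an encoder pretrained on a related dataset (the bootstrapping step described above).

```python
import numpy as np

def one_shot_classify(prototypes: dict, query: np.ndarray) -> str:
    """Assign the query to the class whose single labelled
    example (prototype) is nearest in embedding space."""
    return min(prototypes,
               key=lambda label: np.linalg.norm(prototypes[label] - query))

# One labelled example per class (illustrative 3-d embeddings,
# not drawn from any real encoder).
prototypes = {
    "cat": np.array([1.0, 0.0, 0.0]),
    "dog": np.array([0.0, 1.0, 0.0]),
}

# Embedding of an unseen input; it lies closest to the "cat" prototype.
query = np.array([0.9, 0.2, 0.1])
print(one_shot_classify(prototypes, query))  # → cat
```

The same nearest-prototype rule extends to few-shot settings by averaging the handful of labelled embeddings per class; the quality of the pretrained embedding, not the amount of task-specific data, does most of the work.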