Recent AI agents have demonstrated a remarkable ability to generate compelling text and images and to play complex strategy and racing games at a competitive level. Advances in AI have also given researchers powerful new tools for investigating long-standing questions in biology. In this talk I will discuss a study in which we applied advances in deep reinforcement learning to better understand biological patch foraging. We show that deep reinforcement learning agents can learn to patch-forage adaptively in patterns similar to biological foragers, and that they approach optimal patch-foraging behavior once temporal discounting is accounted for. We also show that these agents develop emergent internal dynamics resembling single-cell recordings from foraging non-human primates, which complements experimental and theoretical work on the neural mechanisms of biological foraging. To conclude, I will discuss how ecological pressures may be important for building AI, and ways in which AI might help us further understand foraging in biological systems.
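For context, the optimality benchmark in patch foraging is typically Charnov's Marginal Value Theorem (MVT); the sketch below is an illustration in our own notation (g, tau, gamma, and r_k are not taken from the abstract) of the undiscounted leave rule and the discounted objective that "accounting for temporal discounting" refers to.

    % MVT: leave a patch at time t* when the instantaneous intake rate
    % drops to the long-run average intake rate of the environment.
    %   g(t) : cumulative food gained after t time units in a patch
    %   \tau : average travel time between patches
    g'(t^{\ast}) = \frac{g(t^{\ast})}{\tau + t^{\ast}}

    % A temporally discounting agent instead maximizes the discounted
    % return (discount factor \gamma, per-step reward r_k):
    G = \sum_{k=0}^{\infty} \gamma^{k} r_{k}, \qquad 0 < \gamma < 1

    % Because the payoff of leaving is delayed by travel time, discounting
    % favors staying in a patch longer than the undiscounted rule predicts.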
Beyond its importance in research on animal behavior, foraging plays a significant role in Artificial Intelligence as a testbed for computational models of autonomous agents. Developing believable agents that learn to mimic behavior on complex, real-world tasks is an important way to demonstrate a model's capabilities. Over the last decades, progress in Machine Learning has led to significant breakthroughs in several areas. Notably, Deep Reinforcement Learning (DRL) has emerged as a subfield that combines the power of Deep Neural Networks with the flexibility of trial-and-error learning. While DRL methods are best known for producing remarkable results on games, and more recently for improving the performance of Large Language Models such as ChatGPT, foraging simulation serves as a convenient benchmark of agents' behavior. In this talk, we will show how DRL and foraging combine to form a powerful framework for training skillful autonomous agents. We first introduce (Deep) Reinforcement Learning as a learning paradigm based on reward-punishment feedback. Then, we describe traditional ways of representing foraging when simulating agents computationally. To highlight the importance of these methods, we present several works on DRL and foraging, which chart the direction current techniques are taking and open new routes for future research. Finally, we present simulation environments that allow not only researchers but also practitioners and enthusiasts to explore foraging scenarios in an accessible way. With this talk, we hope to demonstrate that foraging simulation provides valuable and challenging scenarios for the development of computational agents.
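To make the reward-punishment feedback loop concrete, here is a minimal sketch pairing a toy patch-foraging environment (exponentially depleting patches, a fixed travel cost, and a binary stay/leave action) with tabular Q-learning. The environment, reward schedule, and every parameter value are illustrative assumptions, not setups from the works covered in the talk.

    # A minimal patch-foraging environment plus tabular Q-learning.
    # Illustrative sketch only: the depleting-patch reward schedule and
    # all parameter values are assumptions chosen for clarity.
    import random

    GAMMA = 0.95        # temporal discount factor
    ALPHA = 0.1         # learning rate
    EPSILON = 0.1       # exploration rate
    TRAVEL_COST = 2.0   # penalty for travelling to a fresh patch
    MAX_T = 20          # longest stay tracked in the Q table

    def patch_reward(t):
        """Diminishing returns: each extra step in a patch yields less food."""
        return 10.0 * (0.8 ** t)

    # State: time already spent in the current patch. Actions: 0 = stay, 1 = leave.
    q = [[0.0, 0.0] for _ in range(MAX_T + 1)]

    for episode in range(5000):
        t = 0  # time in the current patch
        for _ in range(200):
            # epsilon-greedy action selection (trial-and-error feedback)
            if random.random() < EPSILON:
                a = random.randrange(2)
            else:
                a = 0 if q[t][0] >= q[t][1] else 1
            if a == 0 and t < MAX_T:
                r, t_next = patch_reward(t), t + 1
            else:
                a = 1  # forced to leave once the patch is exhausted
                r, t_next = -TRAVEL_COST, 0
            # Q-learning update: reward (or punishment) adjusts the value estimate
            q[t][a] += ALPHA * (r + GAMMA * max(q[t_next]) - q[t][a])
            t = t_next

    # The first state where "leave" is valued above "stay" approximates
    # the learned patch-leaving time.
    leave_time = next((s for s in range(MAX_T + 1) if q[s][1] > q[s][0]), MAX_T)
    print("learned patch-leaving time:", leave_time)

Deep RL replaces the Q table with a neural network over richer observations, but the feedback structure is the same: act, observe reward or punishment, and update value estimates.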