While AI models have achieved enormous gains in performance, how learning actually happens inside these models remains poorly understood. Understanding how AI learns could yield new insights across science, technology, and other fields, and reveal previously unobserved causal relationships in many types of data. Interpreting the internal workings of AI models can also shed light on how the human mind works, and on how we are similar to and different from machines. The answers to these questions carry consequential implications across disciplines, which is why scholars from a variety of fields must come together and collaborate.
On March 6, 2024, Social Science Matrix hosted a symposium on understanding and interpreting AI, an important new frontier in AI research. Speakers identified immediate challenges in AI interpretability and explored how the humanities, social sciences, and the tech industry can join forces in this highly consequential research. The event was organized by Gašper Beguš, a 2022-2023 Social Science Matrix Faculty Fellow.
Panelists
- Joshua Batson of Anthropic
- Gašper Beguš, Assistant Professor of Computational Linguistics at UC Berkeley and linguistics lead at Project CETI, studying whale communication
- Benjamin Bratton, Professor of Philosophy of Technology and Speculative Design at UC San Diego, Director of the Berggruen Institute’s Antikythera program
- Dawn Song, Professor of Electrical Engineering and Computer Sciences (EECS) at UC Berkeley
- Claire Webb, Director of the Berggruen Institute’s Future Humans program