AI: Notions + Narratives
Blue, networked brains. Human and machine hands, touching. What stories are we telling about AI, and how do these narratives shape our expectations of an AI world? We invited Google DeepMind to join us for an evening of talks, exploring the work we’ve each been doing to create more diverse, accessible representations.
We’ve long used the same, limited range of narratives for artificial intelligence — shorthands that, while seemingly convenient, communicate very little about the nuances and capabilities of today's AI. Meanwhile, these capabilities are expanding. Novel applications are making the previously abstract tangible. New possibility spaces, and new horizons, are coming into view.
As the technology outpaces the languages we have to describe it, we are left wanting a richer, more diverse lexicon, one that can hold a variety of concepts and mental models, expand our imagination space and help us better grasp the many possible futures. This event grew out of a desire to reconsider the role of visual language, metaphor and narrative in the field of AI, and explore the possibility of new articulations.
The evening was divided into three parts: first, a talk by Iris Cuppen and Amelie Dinh, researchers and strategists at BB; second, a talk by Gaby Pearl and Ross West, designers on the Visualising AI team at Google DeepMind; and lastly, a panel discussion. Below, you’ll find a summary of each talk, together with a downloadable transcript.
Bakken & Bæck: Once Upon a Time in AI
As a studio, we have long worked in AI — building prototypes (both commercial and experimental), developing and designing products, and crafting product narratives. As we've navigated this wave of project work, we’ve also thought more and more about how we communicate about and around this technology and its applications.
In this talk, Iris and Amelie trace a journey through some of the myths and stories, past and present, that colour today’s AI universe, in ways that are more or less perceptible — from Pandora and her jar filled with misfortune and evil, to Karel Čapek’s drama R.U.R. (in which he first coined the term “robot”), the 1956 Dartmouth Summer Research Project on AI, and through to the masked Shoggoth, Roko’s basilisk… and Microsoft’s Clippy.
With these references, they show how myths and metaphors have made an already opaque technology even harder to grasp, before talking through BB’s approach to communicating AI as a studio. Drawing on the example of three projects — Machine Windows, Sierra and our recent contribution to Visualising AI — BB aims to stay close to the technology, so that we’re able to identify the tangible opportunities, necessary workarounds and genuine challenges of working with AI.
Google DeepMind: Visualising AI
Since 2016, the Visualising AI team at Google DeepMind has set the stage for a new stock of images, commissioning artists from around the world to interpret AI-specific themes, from data labelling to large language models, as well as AI’s connection to other research areas, like neuroscience or biodiversity. Grounded in conversations with Google DeepMind specialists, the images ultimately reflect the artists’ own creative vision, adding new layers and new entry points to the conversation around AI.
Gaby and Ross begin by outlining the problem space: as AI and its applications continue to challenge established systems and processes, the speed and scale of possible change can exclude entire communities from its trajectory, as other emerging technologies have done in the past.
As a counterpoint, Visualising AI aims to foster inclusion, understanding and accessibility by developing new (visual) languages to communicate ideas about AI. The team does this through a four-step process: selection (of themes and collaborators), direction, education and distribution. Their goal is to open up space for more people to talk about this technology in ways they’re comfortable with — and which don’t fall back into established tropes that create a biased understanding of who the technology is built for.
Panel
How do we navigate the technical morass of AI as a non-technical participant? Who will benefit from a more diversified visual language around AI, and how can this success be measured? Could generative AI generate its own visuals to explain its inner workings? All four speakers are onstage to answer these questions (and more), as part of a moderated panel to end the evening.
Talk transcripts