Listen to this episode on:
Spotify | Apple Podcasts | Amazon
Few things feel worse than being misunderstood. That’s especially true when the misunderstanding is persistent—when, try as you might to convince someone of something you know to be true, your words fall on deaf ears because their mind is already made up to the contrary.
Now, imagine that the future of your reality was being constructed atop this misunderstanding, and that it wasn’t just you who was misunderstood but a vast population of people like you.
Years ago, the artist and professor Stephanie Dinkins realized this nightmare scenario was in fact playing out in AI: foundation models were being trained on data that replicates the same pernicious biases—across race, gender, and other points of difference—that make our society unequal in the first place.
So, she resolved to do something about it.
This week, I am very pleased to talk with Stephanie Dinkins about how her formative relationship with the humanoid robot Bina48 inspired her to build her own homebrew AI system, the powerful work she is doing today through Eric Schmidt and James Manyika’s AI2050 fellowship, and why it’s so important to stay intellectually nimble and play the long game when it comes to increasingly powerful artificial intelligence.