“I call this the emotional alignment design policy. So the idea is that corporations, if they create sentient machines, should create them so that it's obvious to users that they're sentient, and so they evoke appropriate emotional reactions in users. So you don't create a sentient machine and then put it in a bland box that no one will have emotional reactions to. And conversely, don't create a non-sentient machine that people will attach to so much and think it's sentient that they...
Oscar Horta of the University of Santiago de Compostela on why we should help wild animals
The Sentience Institute Podcast
1 hour 28 minutes
5 years ago
“We want there to be animals like elephants, who on average have very good lives, rather than animals who tend to have very bad lives… If you have, say, a population of animals who reproduce by laying a million eggs, on average only two of them will survive… Due to how the life history of animals is in many cases, we are not really speaking here about exceptions but rather about the norm. It's very common for animals to have lives that contain more suffering — sometimes much more suffering ...