“If the computer gets smarter than me… is that a problem?” Joe Rockey and Father Boniface Hicks tackle the fear narrative around AI by comparing it to other powerful tools (cars, planes) that required strong safeguards—not panic. They explore why AI lacks moral intuition, how optimization without ethics can harm, and why deepfakes and spoofing demand new habits of verification. The conversation lands on the three lenses: honesty about our fears (self), charity through wiser trust and presence...
Father and Joe E436: Smarter Than Us? AI Fear, Safeguards, and What’s Real
Father and Joe
20 minutes
6 days ago