Every day, we read something new about Artificial Intelligence - it'll take our jobs, it'll teach our kids, it knows more about us than we do ourselves... but how much of that is hype, and how much is, or will be, reality? Part of our problem with AI is that it feels impenetrable and mysterious, especially when even those building it aren't entirely sure how it works. In a new series, Aleks Krotoski (The Digital Human, Radio 4) and Kevin Fong (13 Minutes to the Moon, BBC World Service) set out to 'solve' AI. Or at the very least, to answer our questions on all things artificial intelligence-related. These are the questions that really matter to us - is AI smarter than me? Could AI make me money? Will AI save my life or make me its slave? These questions predate the current frenzy created by the likes of ChatGPT, Bard and LLaMA. They've been in our collective psyche ever since the very first thinking machines. Now those fears and that excitement are a reality. This series arrives at a critical moment.
When a Norwegian man idly asked ChatGPT to tell him something about himself, he was appalled to read that, according to the chatbot, he'd been convicted of murdering two of his children and had attempted to kill a third. Outraged, he contacted OpenAI to have the information corrected, only to discover that because of how these large language models work, it's difficult, if not impossible, to change it. He's now taking legal action with the help of a digital civil rights advocate.
It's an extreme example of large language models' propensity to hallucinate and confabulate, i.e. to make things up based on what their training data suggests is the most likely combination of words, however far from reality that might be.
Aleks Krotoski and Kevin Fong find out exactly what your rights are and whether GDPR (the General Data Protection Regulation) is really fit for purpose in the age of generative AI.
Presenters: Aleks Krotoski & Kevin Fong Producer: Peter McManus Researcher: Jac Phillimore Sound: Gav Murchie