How will AGI measure the plausibility of uncertain statements in real-world scenarios of incomplete information so that, among other things, we too can know what it is thinking?
A charge often laid at the door of large language models (LLMs) is that they rely on probabilistic generation, the assumption being that this is somehow a bad thing and that more deterministic behaviour would be a better foundation for a future artificial general intelligence (AGI). Before the advent of LLMs, almost all practical computer systems followed deterministic logic encoded in their software, but as I will argue in this episode, the future AGI will be neither deterministic nor deductively logical.
This episode is broadly based on a chapter from E. T. Jaynes (2003), Probability Theory: The Logic of Science. Cambridge, UK: Cambridge University Press.
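For a concrete anchor, Jaynes treats plausibility as a number between 0 and 1 that an agent revises as evidence arrives, using the ordinary rules of probability. The sketch below is my own minimal illustration of a single Bayesian update, not material from the episode or the book; the prior, the likelihoods, and the update_plausibility helper are hypothetical.

```python
# Minimal sketch of Jaynes-style plausibility updating via Bayes' rule:
# P(H|E) = P(E|H) * P(H) / P(E)

def update_plausibility(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Posterior plausibility of hypothesis H after observing evidence E."""
    return likelihood * prior / evidence_prob

# Hypothetical numbers: the agent initially finds a statement 30% plausible,
# then observes evidence that is 4x more likely if the statement is true.
prior = 0.30
p_evidence_given_true = 0.8
p_evidence_given_false = 0.2

# Total probability of the evidence under both possibilities (sum rule).
p_evidence = p_evidence_given_true * prior + p_evidence_given_false * (1 - prior)

posterior = update_plausibility(prior, p_evidence_given_true, p_evidence)
print(f"plausibility after evidence: {posterior:.2f}")  # ~0.63
```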
Will artificial intelligence discover God?
We take objects, classes, and relationships for granted, but they are just conventions we’ve agreed to use, not truths carved into reality. In this episode, we explore why these human-centric conventions matter, and why guiding AGI to adopt them may be the difference between an intelligible world model and one we can’t understand at all.
There is no question that the proliferation of AI throughout the global economy will result in a massive reformat of the job market. A lot of jobs will be lost. A lot of completely new careers will emerge. What can you do to make sure you end up on the right side of the equation?
A Chinese government-sponsored cyberattack leveraged American AI technology and infrastructure against American government agencies and corporations. Things are about to get worse.
Before humans act, they imagine outcomes. AI doesn’t. Yann LeCun thinks that must change, and his JEPA architecture could be the missing link between powerful neural networks and real-world intelligence driven by internal models of reality.
The advent of large language models (LLMs) fundamentally changed the behaviour of the computer systems we learned to trust over the last several decades of computing.
In this special Wtf? episode on world models, I will give you a breakdown of what this whole hype is about, where it is misplaced, and why it is a good kind of hype to have for those who are interested in artificial general intelligence (AGI).
In this special Wtf? episode we unpack the concept of a latent space. You will see why it is so important for understanding both LLMs and the future systems that will exhibit the capabilities of artificial general intelligence (AGI).
Will artificial superintelligence force humans into submission? Throughout human history, a nation that commanded a superior learning capability ultimately prevailed over its opponents. The ability to learn, the key component in our definition of artificial general intelligence (AGI), will, by that very definition, carry AGI through superintelligence to superpower. Will we be able to coexist peacefully?
Can we cure cancer with artificial intelligence? In this episode, we start unpacking the capabilities that an agent with artificial general intelligence (AGI) must possess in order to find the cure for cancer and transform medicine as we know it.
You have to give it to Sam Altman. He can make even the great and powerful Wizard of Oz blush.
Altman can say something like: “You can choose to deploy five gigawatts of compute to cure cancer or you can choose to offer free education to everybody on Earth.” He then uses the fact that he himself cannot make that moral choice as a justification for getting his hands on ten gigawatts of compute, while leaving himself under no obligation to either cure cancer or provide free education to anybody. But can we actually cure cancer with AI?
In our next episode on Monday, we will unpack the capabilities that an agent with artificial general intelligence, or AGI, must possess in order to find the cure for cancer and transform medicine as we know it.
Thank you for listening and subscribing. I am Alex Chadyuk and This is AGI.
Listen every Monday morning on your favourite podcast platform.
Hallucinating LLMs are a critical step towards artificial general intelligence (AGI). We should not try to fix them but instead build more complex agents that will channel the LLMs’ runaway creativity into self-perpetuating cycles of knowledge discovery. 'This Is AGI' is a podcast about the path to artificial general intelligence. Listen every Monday morning on your favourite podcast platform.
In this episode of 'This is AGI', we unpack Adrian de Wynter’s large-scale study on how LLMs learn from examples, their limits in generalization, and what this means for the path toward artificial general intelligence.
What is AGI, really? In this episode of This is AGI, we cut through the hype to unpack the elusive definition of artificial general intelligence. We explore how transferability of skill across contexts and goals, a focus on capability rather than process, and rule exploitation rather than rule following come together to define what makes intelligence truly general.
In this episode of This is AGI, we grade modern AI on five markers of rationality—precision, consistency, scientific method, empirical evidence, and bias. The mixed scorecard shows impressive strengths but troubling gaps, raising the question: can today’s AI really be called rational, and what does that mean for the road to AGI?