A friendly, intuitive tour of Hidden Markov Models (HMMs). Using the relatable 'full trash bin means he's home' metaphor, we explore how to infer unseen states from noisy observations, learn the model parameters with Baum–Welch, and decode the most likely state sequence with the Viterbi algorithm. You’ll see how forward–backward smoothing combines evidence from past and future, and how these ideas power real-world AI—from speech recognition to gene finding and beyond. Note: This podcast was ...
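As a quick illustration of the decoding step the episode describes, here is a minimal Viterbi sketch built on the same trash-bin metaphor (hidden states "home"/"away", observations "full"/"empty"). All probabilities are illustrative placeholders of our own, not numbers from the episode.

```python
# Minimal Viterbi decoder for the "full trash bin means he's home" example.
# All probabilities below are illustrative placeholders, not from the episode.

states = ["home", "away"]
observations = ["full", "empty", "full"]  # what we actually see each day

start_p = {"home": 0.6, "away": 0.4}      # initial guess about the hidden state
trans_p = {                               # hidden-state transition model
    "home": {"home": 0.7, "away": 0.3},
    "away": {"home": 0.4, "away": 0.6},
}
emit_p = {                                # noisy observation (emission) model
    "home": {"full": 0.8, "empty": 0.2},
    "away": {"full": 0.1, "empty": 0.9},
}

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for an observation sequence."""
    # V[t][s] = probability of the best path that ends in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]  # back-pointers to recover the best path

    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # pick the best predecessor state for s at time t
            prev, p = max(
                ((r, V[t - 1][r] * trans_p[r][s]) for r in states),
                key=lambda x: x[1],
            )
            V[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev

    # trace back from the most probable final state
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

print(viterbi(observations, states, start_p, trans_p, emit_p))
# -> ['home', 'away', 'home']: the empty bin on day 2 pulls that state to 'away'
```

Under these made-up numbers, a single empty-bin observation is enough to flip the middle day to "away", which is exactly the kind of noisy-evidence reasoning the episode walks through; forward-backward smoothing would instead return per-day probabilities rather than a single best path.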
Fusion's Midas Touch: Transmuting Mercury into Gold in the Nuclear Age
Intellectually Curious
4 minutes
4 days ago
We explore a provocative claim that next‑generation fusion plants could use 14.1 MeV neutrons to transmute mercury-198 into gold while breeding tritium and funding clean energy. This episode breaks down the physics of neutron-induced transmutation, the engineering hurdles of isotope separation and materials compatibility, and the economics of a multi‑product fusion platform that could couple energy with resource cleanup and industrial element synthesis. Note: This podcast was AI-generated, a...
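For readers who want the nuclear bookkeeping behind the headline, the sketch below shows the reaction chain commonly cited for this kind of proposal. This is our addition under that assumption; the episode's exact pathway is not quoted here.

```latex
% Assumed nuclear bookkeeping (the commonly cited pathway for fusion-driven
% gold production; not quoted from the episode itself):
% 1) A 14.1 MeV D-T fusion neutron ejects a second neutron from mercury-198:
{}^{198}\mathrm{Hg}(n,2n)\,{}^{197}\mathrm{Hg}
% 2) Mercury-197 decays to stable gold-197 by electron capture (EC),
%    with a half-life of roughly 64 hours:
{}^{197}\mathrm{Hg} \xrightarrow{\ \mathrm{EC},\ t_{1/2}\approx 64\,\mathrm{h}\ } {}^{197}\mathrm{Au}
% 3) The standard tritium-breeding reaction in fusion blankets,
%    which is where the "breeding tritium" side of the claim comes in:
{}^{6}\mathrm{Li} + n \rightarrow {}^{4}\mathrm{He} + {}^{3}\mathrm{H}
```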