A friendly, intuitive tour of Hidden Markov Models (HMMs). Using the relatable 'full trash bin means he's home' metaphor, we explore how to infer unseen states from noisy observations, learn the model parameters with Baum–Welch, and decode the most likely state sequence with the Viterbi algorithm. You’ll see how forward–backward smoothing combines evidence from past and future, and how these ideas power real-world AI—from speech recognition to gene finding and beyond. Note: This podcast was ...
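For listeners who want to see the decoding step concretely, here is a minimal Viterbi sketch built on the episode's 'full trash bin means he's home' metaphor. All probabilities (transition, emission, and start) are illustrative assumptions chosen for this sketch, not figures from the episode.

```python
# Viterbi decoding on the 'trash bin' metaphor:
# hidden states = home/away, observations = bin full/empty.
# All probabilities below are illustrative assumptions.

states = ["home", "away"]
start = {"home": 0.5, "away": 0.5}
trans = {"home": {"home": 0.7, "away": 0.3},
         "away": {"home": 0.3, "away": 0.7}}
emit = {"home": {"full": 0.8, "empty": 0.2},   # home -> bin usually full
        "away": {"full": 0.1, "empty": 0.9}}   # away -> bin usually empty

def viterbi(obs):
    # V[t][s]: probability of the best state path ending in state s at time t
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = []  # back[t][s]: best predecessor of s at time t+1
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] * trans[p][s])
            row[s] = V[-1][prev] * trans[prev][s] * emit[s][o]
            ptr[s] = prev
        V.append(row)
        back.append(ptr)
    # Trace back from the most likely final state.
    best = max(states, key=lambda s: V[-1][s])
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["full", "full", "empty"]))  # -> ['home', 'home', 'away']
```

With these toy numbers, two full-bin days followed by an empty-bin day decode to home, home, away: the single most likely hidden sequence, which is exactly what Viterbi computes (as opposed to forward-backward, which smooths per-timestep probabilities).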
All content for Intellectually Curious is the property of Mike Breault and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Dive into non-orientable surfaces with the Klein bottle. We explain what it means for a surface to have no inside or outside, why physical models require self-intersections (immersions) in 3D, and how a true Klein bottle must live in four dimensions (R^4) to embed without self-crossings. We'll connect to the Möbius strip, discuss boundary vs. no boundary, and reveal a striking fact: slicing a Klein bottle yields two mirror Möbius strips. Along the way we touch on cosmology ideas like the Alice ...
Intellectually Curious