A friendly, intuitive tour of Hidden Markov Models (HMMs). Using the relatable 'full trash bin means he's home' metaphor, we explore how to infer unseen states from noisy observations, learn the model parameters with Baum–Welch, and decode the most likely state sequence with the Viterbi algorithm. You’ll see how forward–backward smoothing combines evidence from past and future, and how these ideas power real-world AI—from speech recognition to gene finding and beyond. Note: This podcast was ...
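To make the decoding step concrete, here is a minimal Viterbi sketch for the trash-bin example. Everything in it is an assumption for illustration: the state names, the transition and emission probabilities, and the viterbi helper are invented, not taken from the episode.

    # Minimal Viterbi decoding for a two-state HMM (illustrative numbers only).
    import numpy as np

    states = ["home", "away"]                # hidden states
    pi = np.array([0.5, 0.5])                # assumed initial distribution
    A = np.array([[0.7, 0.3],                # assumed P(next state | state)
                  [0.4, 0.6]])
    B = np.array([[0.8, 0.2],                # assumed P(obs | state); obs 0 = bin full,
                  [0.1, 0.9]])               # obs 1 = bin empty

    def viterbi(obs):
        """Most likely hidden-state sequence for a list of observation indices."""
        T, N = len(obs), len(states)
        delta = np.zeros((T, N))             # best path probability ending in each state
        psi = np.zeros((T, N), dtype=int)    # backpointers
        delta[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            for j in range(N):
                scores = delta[t - 1] * A[:, j]
                psi[t, j] = np.argmax(scores)
                delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
        path = [int(np.argmax(delta[-1]))]   # backtrack from the best final state
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t, path[-1]]))
        return [states[s] for s in reversed(path)]

    print(viterbi([0, 0, 1]))  # full, full, empty -> ['home', 'home', 'away']

With these made-up numbers, two full-bin days followed by an empty-bin day decode to home, home, away; in practice the probabilities would be learned with Baum–Welch rather than hand-picked.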
Street Fighting Mathematics: Courageous Problem Solving with Rough Answers
Intellectually Curious
5 minutes
5 days ago
Join us as we unpack Sanjoy Mahajan's Street Fighting Mathematics: The Art of Educated Guessing and Opportunistic Problem Solving. We spotlight the first tools—dimensions, easy cases, and lumping—and explain how rough, low-entropy answers can unlock real-world progress far faster than perfect rigor. Through concrete examples like GDP versus market value and the ellipse area test, we show how to think with units, test assumptions on extreme cases, and build robust intuition that fuels action a...
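As a taste of the "easy cases" tool, here is a small sketch (ours, not the book's) that tests candidate ellipse-area formulas against limits whose answers are already known: the circle a = b, where the area must be πr², and the degenerate ellipse b = 0, where the area must vanish. The area_wrong candidate is invented purely to show how a cheap limiting case can reject it.

    # "Easy cases" sanity tests for ellipse-area candidates (illustrative).
    import math

    def area_correct(a, b):
        return math.pi * a * b                  # the true formula

    def area_wrong(a, b):
        return math.pi * (a**2 + b**2) / 2      # plausible-looking impostor

    r = 3.0
    # Easy case 1: a circle (a == b == r) must give pi * r**2. Both candidates
    # pass here, so this test alone cannot tell them apart.
    assert math.isclose(area_correct(r, r), math.pi * r**2)
    assert math.isclose(area_wrong(r, r), math.pi * r**2)

    # Easy case 2: a degenerate ellipse (b == 0) must have zero area.
    assert area_correct(4.0, 0.0) == 0.0        # passes
    print(area_wrong(4.0, 0.0))                 # 8*pi, not 0 -- reject this candidate

Note that the impostor has the right dimensions (length squared) and survives the circle test; only the second easy case exposes it, which is exactly why the book recommends trying several extremes.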