How I AI
Claire Vo
44 episodes
4 days ago
How I AI, hosted by Claire Vo, is for anyone wondering how to actually use these magical new tools to improve the quality and efficiency of their work. In each episode, guests will share a specific, practical, and impactful way they’ve learned to use AI in their work or life. Expect 30-minute episodes, live screen sharing, and tips/tricks/workflows you can copy immediately. If you want to demystify AI and learn the skills you need to thrive in this new world, this podcast is for you.
Technology
Evals, error analysis, and better prompts: A systematic approach to improving your AI products | Hamel Husain (ML engineer)
54 minutes 48 seconds
2 months ago

Hamel Husain, an AI consultant and educator, shares his systematic approach to improving AI product quality through error analysis, evaluation frameworks, and prompt engineering. In this episode, he demonstrates how product teams can move beyond “vibe checking” their AI systems to implement data-driven quality improvement processes that identify and fix the most common errors. Using real examples from client work with Nurture Boss (an AI assistant for property managers), Hamel walks through practical techniques that product managers can implement immediately to dramatically improve their AI products.
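
A rough sketch may help make that loop concrete. The Python below is illustrative only, with hypothetical names and toy data (none of it from the episode): review real user traces, record a binary pass/fail plus a short free-form note, then count the failure categories so the most frequent error mode gets fixed first.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical records; none of these names come from the episode.
@dataclass
class Trace:
    user_message: str
    model_response: str

@dataclass
class Annotation:
    trace: Trace
    passed: bool      # binary pass/fail, not an arbitrary 1-10 score
    note: str = ""    # free-form note written while reviewing the trace

def prioritize(annotations: list[Annotation]) -> list[tuple[str, int]]:
    """Bucket failure notes into categories and count them.
    The most frequent failure mode is what you fix first."""
    counts = Counter(a.note for a in annotations if not a.passed)
    return counts.most_common()

# Toy annotations standing in for a manual review pass over real traces
# (in practice produced via a small custom annotation UI).
annotations = [
    Annotation(Trace("Is unit 4B free?", "Yes, available June 1."), True),
    Annotation(Trace("Can I tour today?", "We have no tours, ever."),
               False, "wrong scheduling info"),
    Annotation(Trace("What's the rent?", "Rent is $12."),
               False, "hallucinated pricing"),
    Annotation(Trace("Pet policy?", "Tours run hourly."),
               False, "wrong scheduling info"),
]

for category, n in prioritize(annotations):
    print(f"{n:>3}  {category}")
```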


What you’ll learn:

1. A step-by-step error analysis framework that helps identify and categorize the most common AI failures in your product

2. How to create custom annotation systems that make reviewing AI conversations faster and more insightful

3. Why binary evaluations (pass/fail) are more useful than arbitrary quality scores for measuring AI performance

4. Techniques for validating your LLM judges to ensure they align with human quality expectations (a rough code sketch follows this list)

5. A practical approach to prioritizing fixes based on frequency counting rather than intuition

6. Why looking at real user conversations (not just ideal test cases) is critical for understanding AI product failures

7. How to build a comprehensive quality system that spans from manual review to automated evaluation
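
Item 4 above lends itself to a short sketch in the same spirit. The snippet below is a toy illustration under assumed names, not Hamel's actual method: it compares an LLM judge's binary verdicts to human pass/fail labels on the same responses, reporting agreement per class, since a judge that passes everything can still score well on overall agreement. The keyword heuristic in llm_judge is a placeholder for a real model call.

```python
# Minimal sketch: validate a binary LLM-as-a-judge against human labels.
# All names and data here are hypothetical scaffolding.

def llm_judge(response: str) -> bool:
    """Stand-in for an LLM judge; here, a trivial keyword heuristic.
    In a real system this would prompt a model and parse pass/fail."""
    return "i don't know" not in response.lower()

def validate_judge(responses: list[str], human_labels: list[bool]) -> dict:
    """Compare judge verdicts to human labels, split by class."""
    judge_labels = [llm_judge(r) for r in responses]
    agree = sum(j == h for j, h in zip(judge_labels, human_labels))
    on_pass = [j for j, h in zip(judge_labels, human_labels) if h]
    on_fail = [j for j, h in zip(judge_labels, human_labels) if not h]
    return {
        "overall_agreement": agree / len(responses),
        "agreement_on_human_passes": sum(on_pass) / len(on_pass) if on_pass else None,
        "agreement_on_human_fails": (len(on_fail) - sum(on_fail)) / len(on_fail) if on_fail else None,
    }

if __name__ == "__main__":
    responses = ["Unit 4B is available June 1.", "I don't know, sorry."]
    human = [True, False]
    print(validate_judge(responses, human))
```

Only once these per-class agreement numbers track human judgment closely would you trust the automated judge to stand in for manual review.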

—

Brought to you by:

GoFundMe Giving Funds—One account. Zero hassle: https://gofundme.com/howiai

Persona—Trusted identity verification for any use case: https://withpersona.com/lp/howiai

—

Where to find Hamel Husain:

Website: https://hamel.dev/

Twitter: https://twitter.com/HamelHusain

Course: https://maven.com/parlance-labs/evals

GitHub: https://github.com/hamelsmu

—

Where to find Claire Vo:

ChatPRD: https://www.chatprd.ai/

Website: https://clairevo.com/

LinkedIn: https://www.linkedin.com/in/clairevo/

X: https://x.com/clairevo

—

In this episode, we cover:

(00:00) Introduction to Hamel Husain

(03:05) The fundamentals: why data analysis is critical for AI products

(06:58) Understanding traces and examining real user interactions

(13:35) Error analysis: a systematic approach to finding AI failures

(17:40) Creating custom annotation systems for faster review

(22:23) The impact of this process

(25:15) Different types of evaluations

(29:30) LLM-as-a-Judge

(33:58) Improving prompts and system instructions

(38:15) Analyzing agent workflows

(40:38) Hamel’s personal AI tools and workflows

(48:02) Lightning round and final thoughts

—

Tools referenced:

• Claude: https://claude.ai/

• Braintrust: https://www.braintrust.dev/docs/start

• Phoenix: https://phoenix.arize.com/

• AI Studio: https://aistudio.google.com/

• ChatGPT: https://chat.openai.com/

• Gemini: https://gemini.google.com/

—

Other references:

• Who Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences: https://dl.acm.org/doi/10.1145/3654777.3676450

• Nurture Boss: https://nurtureboss.io

• Rechat: https://rechat.com/

• Your AI Product Needs Evals: https://hamel.dev/blog/posts/evals/

• A Field Guide to Rapidly Improving AI Products: https://hamel.dev/blog/posts/field-guide/

• Creating a LLM-as-a-Judge That Drives Business Results: https://hamel.dev/blog/posts/llm-judge/

• Lenny’s List on Maven: https://maven.com/lenny

—

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email jordan@penname.co.
