In this special crossover episode, Luca brings together his two podcasting worlds: the Agile Embedded Podcast with Jeff Gable and the Embedded AI Podcast with Ryan Torvik. What starts as Jeff admitting he's still a "noob" with LLMs turns into a practical deep-dive on how to actually use AI tools without coding yourself off a cliff.
The three explore the real challenges of working with LLMs: managing context windows that behave more like human memory than computer memory, the critical importance of test-driven development (even more so with AI), and why you absolutely cannot let go of the reins. Ryan and Luca share hard-won lessons about prompt engineering, the value of small iterations, and why your static analysis tools aren't going anywhere. They also tackle team-level questions: how code reviews change (or don't), why prototyping becomes a superpower, and what happens to junior engineers learning their craft in an AI-assisted world.
This isn't hype about AI replacing developers - it's three engineers figuring out how to use a powerful but unpredictable tool to write better code faster, while keeping their engineering judgment firmly in the driver's seat.
Key Topics:
Notable Quotes:
"All of us are new to this experience. There's not somebody that went to school back in the 80s and like, I've been doing this for 40 years. Like nobody has that level of experience. So we're all just running around, bumping into things and seeing what works for us." — Ryan
"An LLM is just a token generator. You stick an input in, and it returns an output, and it has no way of judging whether this is correct or valid or useful. It's just whatever it generated. So it's up to you to give it input data that will very likely result in useful output data." — Luca
"The LLM is like the happiest developer that I've ever worked with. Just so excited and happy to do more work than you could ever possibly imagine. Like, oh, my gosh." — Ryan
"Don't ever let go of the reins. They will just sort of slowly slip out of your hands and all of a sudden you find yourself sitting there like a fool with nothing in your hands." — Luca
"I can use LLMs to jumpstart me or bootstrap me from zero to one. And once there's something on the screen that kind of works, I can usually then apply my general programming skill, my general engineering taste to improve it." — Jeff
Resources Mentioned:
In this episode, Ryan and Luca dive into their real-world AI coding workflows, sharing the tricks, tools, and hard-learned lessons from their daily development work. They compare their approaches to using AI agents like Claude Code and discuss everything from prompt management to context hygiene. Luca reveals his meticulous TDD approach with multiple AI instances running in parallel, while Ryan shares his more streamlined VS Code-based workflow.
The conversation covers practical topics like managing AI forgetfulness, avoiding the pitfalls of over-mocking in tests, and the importance of being strict with AI-generated code. They also explore the addictive, game-like nature of AI-assisted coding and why it feels like playing Civilization - always "just one more turn" until the sun comes up. This is an honest look at what actually works (and what doesn't) when coding with AI assistants.
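The test-first discipline described above can be sketched in a few lines. This is a hedged illustration, not code from the episode: the idea is that the human writes the test against a known-good reference value first, and only then hands the small, well-specified unit (here a hypothetical `crc8` helper) to the AI to implement.

```python
# Sketch of the test-first loop discussed in the episode: write the test
# yourself, with a real known-answer vector, before letting an AI generate
# the implementation. `crc8` is a hypothetical example unit, not from the show.
import unittest


def crc8(data: bytes, poly: int = 0x07) -> int:
    """CRC-8 (poly 0x07, init 0x00, no reflection) - a small, testable unit."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc


class TestCrc8(unittest.TestCase):
    def test_empty_input_yields_zero(self):
        self.assertEqual(crc8(b""), 0x00)

    def test_standard_check_vector(self):
        # CRC-8/SMBus check value for "123456789" is 0xF4
        self.assertEqual(crc8(b"123456789"), 0xF4)
```

Because the test pins down the exact expected output, any AI-generated rewrite of `crc8` either passes or fails immediately, which is what keeps the "10,000 lines in a week" problem from turning into three weeks of cleanup.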
Key Topics:
Notable Quotes:
"I've learned the hard way that you must not do that. I was like, oh, this is really nice. I wrote like 10,000 lines of code this week. You know I'm fantastically productive and then I paid for it by going over those same 10,000 lines for the next three weeks and cleaning up the mess that it had made." — Luca Ingianni
"I must use TDD if I use AI coding. Otherwise it's so easy to get off the rails." — Luca Ingianni
"I don't have to code with the shift key ever again." — Ryan Torvik
"Coding with AI assist just feels exactly the same way for me [as Civilization]. It just sort of sucks you in." — Luca Ingianni
"Make sure that your AI coding agent doesn't tie your shoelaces together. Because it will." — Ryan Torvik
Resources Mentioned:
Connect With Us:
Ryan and Luca explore Retrieval Augmented Generation (RAG) and its practical applications in embedded development. After Ryan's recent discussions at the Embedded Systems Summit, we dive into what RAG actually is: a system that chunks documents, stores the chunks as embeddings in a vector database, and retrieves the relevant passages on demand so the AI can answer from source material instead of hallucinating. While it sounds perfect for handling massive datasheets and documentation, the reality is more complex.
We discuss the critical challenge of chunking - breaking documents into the right-sized pieces for effective retrieval. Too big and searches become useless; too small and you lose context. Luca shares his hands-on experience trying to make RAG work with datasheets, revealing the gap between theory and practice. With modern LLMs offering larger context windows and better document parsing capabilities, we question whether RAG has missed its window of usefulness for most development tasks. The conversation covers when RAG still makes sense (legal contexts, parts catalogs, private LLMs) and explores alternatives like having LLMs use grep and other Unix tools to search documents directly.
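The chunk-size trade-off discussed above can be made concrete with a toy sketch. This is an illustration only: a real RAG pipeline uses an embedding model and a vector database, whereas here plain word overlap stands in for vector similarity, and the datasheet text is invented for the example.

```python
# Minimal sketch of the chunk-and-retrieve half of RAG. Word overlap is a
# stand-in for embedding similarity; the "datasheet" is a made-up example.

def chunk(text, size=40, overlap=10):
    """Split text into overlapping word-window chunks."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def retrieve(query, chunks, k=1):
    """Rank chunks by naive word overlap with the query."""
    q = set(query.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q & set(c.lower().split())),
                  reverse=True)[:k]

datasheet = (
    "The watchdog timer must be refreshed every 500 ms. "
    "Writing 0xAAAA to the WDT_KEY register refreshes the counter. "
    "The UART peripheral supports baud rates up to 115200."
)
chunks = chunk(datasheet, size=12, overlap=4)
best = retrieve("how do I refresh the watchdog timer", chunks)
```

Shrinking `size` makes each chunk easier to match but strips away the surrounding context the LLM needs; growing it does the reverse, which is exactly the tuning problem that makes RAG on datasheets harder than it looks.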
Key Topics:
Notable Quotes:
"Data sheets are inaccurate. You still have to engineer this. You cannot just go and let it go." — Ryan
"It's so difficult to get the chunking right. If you make it too big, that's not useful. If you make it too small, then again, it becomes difficult to search for because you're losing too much context." — Luca
"These days, LLMs are good enough at just ad hoc-ing this. You can do away with all of the complexity of vector stores and chunking." — Luca
"We have the hardware. We can actually prove it one way or another. If it doesn't work on hardware, then it's not right." — Ryan
"RAG is quite tempting and quite interesting, but it's deceptively simple unless you have good reason to believe that you can get it working." — Luca
Resources Mentioned:
Connect With Us:
Welcome to the inaugural episode of the Embedded AI Podcast! Ryan and Luca kick things off by exploring the fascinating intersection of AI and embedded systems. From fog computing (AI happening everywhere around us - in coffee shops, traffic lights, and cars) to vibe coding (using AI to generate code through natural language), we dive into what this new landscape looks like for embedded engineers.
We share our real-world experiences with AI in development, including Ryan's struggles getting an STM32 to print "Hello World" using AI-generated code, and Luca's insights from the German Aerospace Conference where traditional AI dominated over LLMs. The conversation reveals both the promise and pitfalls of AI in embedded development, touching on everything from sound-based machine monitoring to the security implications of learning algorithms in the field. Whether you're curious about AI or skeptical about its place in embedded systems, this episode sets the stage for our ongoing exploration of how AI is reshaping our industry.
Key Topics:
* [02:30] Fog computing explained - AI processing happening everywhere around us
* [05:15] Vibe coding - using natural language to generate code with AI
* [08:45] Luca's insights from German Aerospace Conference - traditional AI vs LLMs
* [15:20] Sound-based machine monitoring using AI for predictive maintenance
* [22:10] Ryan's STM32 Hello World struggles with AI-generated code
* [28:30] Security implications of learning algorithms in embedded systems
* [35:45] How AI is changing developer roles - from coding to systems engineering
Notable Quotes:
> "Fog computing is taking that and flipping it over again. Instead of doing the mass of the computing in a data center, you're doing it on the outside edge on these tiny devices in the forest." — Ryan
> "AI is here to stay, and it is really, really useful and really, really helpful if you apply it well and if you apply it to the right things." — Luca
> "I had a friend yesterday ask me if I thought that I was losing my brain because I've been vibe coding so much. I just don't have to use the shift key anymore." — Ryan
> "How can this thing be so smart and so stupid at the same time?" — Luca
> "Good, bad, or otherwise, we need to get involved in this, or you're going to get swept up and left behind." — Ryan