Mike on LinkedIn
Mike's Blog
Show on Discord
Dreamcast assorted references:
Dreamcast overview https://sega.fandom.com/wiki/Dreamcast
History of Dreamcast development https://segaretro.org/History_of_the_Sega_Dreamcast/Development
The Rise and Fall of the Dreamcast: A Legend Gone Too Soon (Simon Jenner) https://sabukaru.online/articles/he-rise-and-fall-of-the-dreamcast-a-legend-gone-too-soon
The Legacy of the Sega Dreamcast | 20 Years Later https://medium.com/@Amerinofu/the-legacy-of-the-sega-dreamcast-20-years-later-d6f3d2f7351c
Socials & Plugs
The R Podcast https://r-podcast.org/
R Weekly Highlights https://serve.podhome.fm/r-weekly-highlights
Shiny Developer Series https://shinydevseries.com/
Eric on Bluesky https://bsky.app/profile/rpodcast.bsky.social
Eric on Mastodon https://podcastindex.social/@rpodcast
Eric on LinkedIn https://www.linkedin.com/in/eric-nantz-6621617/
Links
James on LinkedIn
Mike on LinkedIn
Mike's Blog
Show on Discord
Trust and Stability: RHEL provides the mission-critical foundation needed for workloads where security and reliability cannot be compromised.
Predictive vs. Generative: Acknowledging the hype of GenAI while maintaining support for traditional machine learning algorithms.
Determinism: The challenge of bringing consistency and security to emerging AI technologies in production environments.
Developer Simplicity: RamaLama helps developers run local LLMs easily without being "locked in" to specific engines; it supports Podman, Docker, and various inference engines like llama.cpp and whisper.cpp.
Production Path: The tool is designed to "fade away" after helping package the model and stack into a container that can be deployed directly to Kubernetes.
Behind the Firewall: Addressing the needs of industries (like aircraft maintenance) that require AI to stay strictly on-premises.
Red Hat AI: A commercial product offering tools for model customization, including pre-training, fine-tuning, and RAG (Retrieval-Augmented Generation).
Inference Engines: James highlights the difference between llama.cpp (suited to smaller/edge hardware) and vLLM, which has become the enterprise standard for multi-GPU data center inferencing; see the sketch below.
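For context on the vLLM side of that comparison, here is a minimal sketch of offline inferencing with vLLM's Python API. It assumes vLLM is installed and a CUDA-capable GPU is available; the prompt and model name are illustrative placeholders, not anything from the episode.

```python
# Minimal sketch: offline inferencing with vLLM's Python API.
# The model below is a small placeholder; swap in any Hugging Face
# model you have access to (and enough GPU memory for).
from vllm import LLM, SamplingParams

prompts = ["Summarize why on-premises inferencing matters for regulated industries."]
sampling = SamplingParams(temperature=0.7, max_tokens=128)

# vLLM handles continuous batching and, via tensor_parallel_size,
# multi-GPU serving, which is why it is the data-center counterpart
# to llama.cpp in this discussion.
llm = LLM(model="facebook/opt-125m")  # placeholder model
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```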
Mike sits down with Tom Totenberg to discuss disastrous Friday night deployments, selective feature flags, LaunchDarkly, and more general development goodness.
Mike on X
Mike on BlueSky
Coder on X
Adam's Socials
Adam on LinkedIn
Event Modeling
Understanding Event Modeling
Adaptech
Coder's Socials
Mike on X
Mike on BlueSky
Mike's Blog
Coder on X
Coder on BlueSky
Mike reads your feedback for the month and answers your questions. There's a lot in here, in particular some juicy AI stuff.
Joey DeVilla, of Tampa Tech fame and accordion-playing glory, joins Mike to discuss the Tampa Tech scene, some Python goodness, a little Rust, and much more.
Mike sits down with the venerable Linux guru Jay LaCroix to talk transitioning to Linux, the state of desktop Linux, and a little bit of retro-gaming.
Warp promo code: coderradio
Warp
Zach on X
Mike on X
Mike on BlueSky
Coder on X
Coder on BlueSky
Mike breaks down his highlights from WWDC.
Coder's Socials
Mike on X
Mike on BlueSky
Mike's Blog
Coder on X
Coder on BlueSky
Coder's Socials
Mike on X
Mike on BlueSky
Mike's Blog
Coder on X
Coder on BlueSky
Paul's Links
Coder's Socials
Mike on X
Mike on BlueSky
Mike's Blog
Coder on X
Coder on BlueSky
Mike sits down with GitHub Product Manager Tim to talk AI, vibe coding, and dev in general.
Copilot
Tim on GitHub
Tim's Blog
Coder's Socials
Mike on X
Mike on BlueSky
Mike's Blog
Coder on X
Coder on BlueSky