© 2024 PodJoint
Two Voice Devs
Mark and Allen
262 episodes
1 week ago
Mark and Allen talk about the latest news in the VoiceFirst world from a developer point of view.
Technology
RSS
All content for Two Voice Devs is the property of Mark and Allen and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (20/262)
Episode 261 - The Great Holid-AI Rebus Battle

Get ready for the ULTIMATE SHOWDOWN of holiday cheer and artificial intelligence! In this SPECIAL HOLIDAY EPISODE, Mark and Allen aren't just exchanging pleasantries—they're exchanging MIND-BENDING REBUS PUZZLES generated by the AI models themselves!


It's a battle of wits, a clash of code, and a festive face-off as Microsoft Copilot takes on Google's Gemini (and the famous "Nano Banana Pro") to solve visual riddles that will have you shouting at your screen. Can our hosts decipher the scribbles of silicon brains? Or will the AI stump the humans once and for all?


Grab your eggnog, put on your thinking cap, and play along! It's Two Voice Devs like you've never seen (or puzzled) them before! HAPPY HOLIDAYS!


[00:00:00] Intro: The Rules of Engagement

[00:02:48] Puzzle 1: A Nipping Chill

[00:03:33] Puzzle 2: Going for Gold

[00:07:00] Puzzle 3: Escaping the Cage

[00:10:00] Puzzle 4: The Silent Mouse

[00:11:00] Puzzle 5: A Knightly Gesture

[00:12:15] Puzzle 6: Sweet Ballerina

[00:13:30] Puzzle 7: Listen Closely to the Animal

[00:15:30] Puzzle 8: A Holiday Wish

[00:16:15] Puzzle 9: The Grand Finale Challenge

[00:19:30] Happy Holidays from Two Voice Devs!


#TwoVoiceDevs #HolidaySpecial #RebusPuzzles #LLMBattle #AIShowdown #Copilot #Gemini #NanoBananaPro #ChatGPT #ArtificialIntelligence #MachineLearning #JackFrost #WinterWonderland #Freeze #SugarPlumFairy #Nutcracker #NewYears #Gnu #MerryChristmas #FestivalOfLights #Hanukkah #Kwanzaa #Unity #HolidayFun #Games #Puzzles #TechHumor #DevLife #HappyHolidays #SeasonsGreetings #Fun #Creative #OverTheTop #Podcast #Developers #SoftwareEngineering

2 weeks ago
20 minutes 33 seconds

Episode 260 - Turn Your AI Agent into a Voice Agent With Microsoft Foundry

Mark Tucker explores Microsoft Azure's "Voice Live" feature within the newly rebranded Microsoft Foundry. He demonstrates how to take a standard text-based AI agent—in this case, one that talks like a pirate—and instantly give it a voice using WebSockets to bridge speech-to-text and text-to-speech. Mark walks through the differences between the "Old" Foundry (V1) and the "New" Foundry (V2), shows the configuration steps, and dives into a Python code example to connect it all together.


Learn more:

* https://github.com/rmtuckerphx/voicelive-agents-quickstart
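The WebSocket bridge Mark demonstrates can be reduced to a short sketch. Everything below is a hypothetical shape for illustration only: the message types ("session.update", "input_audio"), field names, and voice name are assumptions, not the actual Voice Live wire protocol; see the quickstart repo above for the real API.

```python
# Hypothetical sketch of bridging microphone audio to a voice agent over a
# WebSocket. Message shapes and field names are invented, NOT the real
# Voice Live protocol.
import json

def build_session_config(voice="en-US-JennyNeural",
                         instructions="Talk like a pirate."):
    """Assemble a session-configuration payload (field names are illustrative)."""
    return {
        "type": "session.update",
        "session": {
            "voice": voice,
            "instructions": instructions,
            "input_audio_format": "pcm16",   # speech-to-text input
            "output_audio_format": "pcm16",  # text-to-speech output
        },
    }

async def bridge(send, recv, mic_chunks):
    """Pump microphone chunks up the socket; collect synthesized audio back."""
    await send(json.dumps(build_session_config()))
    for chunk in mic_chunks:
        await send(json.dumps({"type": "input_audio", "audio": chunk}))
    audio_out = []
    async for raw in recv:                   # events streamed back by the service
        event = json.loads(raw)
        if event.get("type") == "output_audio":
            audio_out.append(event["audio"])
        elif event.get("type") == "done":
            break
    return audio_out
```

In the real quickstart the socket is an Azure endpoint; here `send` and `recv` are injected so the bridging logic itself stays visible and testable.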


[00:00:00] Intro and Holiday Plans

[00:00:46] Introducing Microsoft Azure Voice Live

[00:02:00] Microsoft Foundry Overview

[00:02:46] Creating a Pirate Agent in Foundry

[00:04:46] Enabling Voice Live in the Playground

[00:07:46] Demo: Speaking with the Pirate Agent

[00:08:46] Comparing Old and New Foundry

[00:13:46] Code Walkthrough: Voice Live Quick Start

[00:15:46] Connecting Version 2 Agents

[00:18:46] Conclusion


#MicrosoftAzure #VoiceLive #MicrosoftFoundry #AI #VoiceFirst #GenerativeAI #SpeechToText #TextToSpeech #Python #Coding

2 weeks ago
19 minutes 5 seconds

Episode 259 - Building Better MCP Servers: Lessons from Vodo Drive

Allen and Mark discuss the architecture of Model Context Protocol (MCP) servers, using Allen's experience with "Vodo Drive" as a case study. They dive into critical considerations for building effective agents, focusing on security, managing API complexity, and enforcing business logic. The conversation explores how to move beyond simple REST API wrappers to create high-level, context-aware tools that ensure safety and efficiency.
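The contrast between a thin REST wrapper and a high-level, context-aware tool can be made concrete. `FakeDrive` and both tool names below are invented stand-ins for illustration; nothing here is Vodo Drive's actual API.

```python
# A stand-in in-memory "drive" so the contrast is runnable; not Vodo Drive's
# real API.
class FakeDrive:
    def __init__(self):
        self.files = {"f1": {"name": "report.txt", "trashed": False,
                             "owner": "alice"}}

    def patch(self, file_id, **fields):       # low-level REST-style call
        self.files[file_id].update(fields)

# Low-level wrapper: the agent must know every invariant itself. A confused
# model can trash someone else's file or write nonsense fields.
def patch_file(drive, file_id, **fields):
    drive.patch(file_id, **fields)

# High-level tool: the business logic (ownership check, allowed transition)
# lives inside the tool, so the model cannot violate it.
def trash_own_file(drive, file_id, acting_user):
    record = drive.files.get(file_id)
    if record is None:
        raise KeyError(f"no such file: {file_id}")
    if record["owner"] != acting_user:
        raise PermissionError("agents act on behalf of a user, not as the user")
    drive.patch(file_id, trashed=True)
```

The high-level tool also shrinks the surface the model has to reason about: one verb with enforced semantics instead of an open-ended patch endpoint.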


[00:00:00] Welcome and Introduction

[00:00:54] Lessons from Vodo Drive for MCP

[00:02:54] The Importance of Security in MCP Servers

[00:03:36] Managing API Complexity and Business Logic

[00:05:58] Authentication and Authorization Challenges

[00:07:37] OAuth Scopes and User-Controlled Access

[00:10:48] Handling Complex APIs like Google Workspace

[00:13:58] Designing High-Level Tools vs. Low-Level Wrappers

[00:19:35] Dynamic Tool Lists and Context Awareness

[00:24:14] Agents Acting On Behalf of Users, Not As Users


#MCP #ModelContextProtocol #AI #Agents #VodoDrive #Security #API #GoogleWorkspace #SoftwareArchitecture #TwoVoiceDevs

4 weeks ago
25 minutes 29 seconds

Episode 258 - Getting Started with GitHub Copilot

In this episode of Two Voice Devs, Mark Tucker and Allen Firstenberg dive into the world of GitHub Copilot. Mark shares his recent experience preparing for the GitHub Copilot (GH-300) certification and walks us through the various features and modes of the tool within Visual Studio Code. They discuss the differences between "Ask," "Edit," and "Agent" modes, how Copilot integrates with your workspace and terminal, and the power of using different AI models like Sonnet and Gemini. Whether you're new to AI coding assistants or looking to get more out of your current setup, this episode provides a practical overview of what GitHub Copilot can do today.


[00:00:00] Introduction and updates

[00:01:26] The GitHub Copilot (GH-300) Certification

[00:02:30] GitHub Copilot in Visual Studio Code

[00:03:44] Clarifying the different "Copilots"

[00:04:29] Inline Chat and using "Explain"

[00:05:46] Selecting different AI models

[00:07:39] The Chat Window: Ask, Edit, and Agent modes

[00:08:06] Using context variables (@workspace, @terminal, #files)

[00:11:36] Demonstrating "Ask" mode

[00:14:33] Demonstrating "Edit" mode

[00:16:24] Demonstrating "Agent" mode

[00:22:36] Custom instructions and specifications

[00:25:42] How Copilot works behind the scenes (Proxy & Safety)

[00:27:00] Conclusion


#GitHubCopilot #VSCode #AIcoding #SoftwareDevelopment #TwoVoiceDevs #GH300 #Certification #DeveloperTools #Programming #TechPodcast

1 month ago
27 minutes 28 seconds

Episode 257 - Building Enterprise Agents with Microsoft Copilot Studio

Mark Tucker and Allen Firstenberg dive into Microsoft Copilot Studio, a low-code tool for creating conversational agents with a focus on enterprise integrations. Mark demonstrates how to build a file upload agent that summarizes invoices using a Large Language Model (LLM). They explore the studio's interface, including topics, triggers, and the designer canvas, while comparing it to familiar tools like Dialogflow. The discussion also touches on the concept of autonomous agents, flows that can be triggered by events like emails, and Microsoft's strong push for enterprise adoption.


[00:00:00] Welcome and Introduction

[00:00:49] Introducing Microsoft Copilot Studio

[00:02:09] Creating a New Agent

[00:05:08] Understanding Topics and Triggers

[00:06:57] Testing the Agent: Greeting Topic

[00:10:44] Building a File Upload Agent

[00:12:37] Implementing the File Upload Logic

[00:15:29] Summarizing Invoices with LLMs

[00:17:08] Enterprise Tools and Connectors

[00:19:00] Flows and Server-Side Triggers

[00:21:33] Deployment and Channels

[00:23:10] Agents vs. Bots: Autonomous Capabilities

[00:24:44] Comparisons with Dialogflow and Google's Ecosystem


#MicrosoftCopilotStudio #CopilotStudio #LowCode #AI #ArtificialIntelligence #Chatbots #ConversationalAI #EnterpriseAI #Dialogflow #CCAI #LLM #GenerativeAI #TwoVoiceDevs #Developer #TechPodcast

1 month ago
27 minutes 30 seconds

Episode 256 - Gratitude, Growth, and Human Connection: A Thanksgiving Special

Allen and Mark return after a hiatus for their annual Thanksgiving episode. They reflect on a busy year, expressing deep gratitude for the community's concern and support. The conversation explores the vital importance of human connection in a tech-centric world, the impact of mentorship, and finding balance between passion projects and life outside of work.


[00:00:00] Welcome back & addressing the hiatus

[00:01:38] Professional gratitude & new projects

[00:02:22] The kindness of the community

[00:03:58] The importance of human connection in tech

[00:05:40] Reflecting on family, friends, and blessings

[00:07:48] The lasting impact of mentorship

[00:09:12] Balancing technology and life


#Thanksgiving #Gratitude #TechCommunity #Mentorship #WorkLifeBalance #HumanConnection #TwoVoiceDevs #VoiceTech #DeveloperLife

1 month ago
10 minutes 58 seconds

Episode 255 - Agonizing About Agent-to-Agent

Join Allen Firstenberg and Noble Ackerson in a deep dive into the evolving world of AI agent protocols. In this episode of Two Voice Devs, they unpack the Agent-to-Agent (A2A) protocol, comparing it with the Model Context Protocol (MCP). They explore the fundamental differences, from A2A's conversational, stateful nature to MCP's function-call-like structure. The discussion also touches on the new Agent Payment Protocol (AP2) and its potential to revolutionize how AI agents interact and transact. Is A2A the key to unlocking a future of autonomous, ambient AI? Tune in to find out!


[00:01:00] What is the A2A protocol?

[00:04:00] A2A vs. Model Context Protocol (MCP)

[00:10:00] What does A2A bring that MCP doesn't?

[00:15:00] Ambient and Autonomous Agents

[00:19:00] A2A solves the "Tower of Babel" problem

[00:24:00] The difference between A2A and MCP: stateful vs. stateless

[00:27:00] Agent Payment Protocol (AP2)

[00:33:00] What does A2A promise for autonomous agents?

[00:38:00] Downsides and challenges of A2A

[00:44:00] Google, Gemini, and the future of A2A


#A2A #MCP #AI #ArtificialIntelligence #AgentToAgent #ModelContextProtocol #TwoVoiceDevs #TechPodcast #FutureOfAI #AutonomousAgents #AIAgents #AP2 #AgentPaymentProtocol #GoogleGemini #Anthropic

3 months ago
49 minutes 6 seconds

Episode 254 - Agent Frameworks Compared: Google's ADK vs LangChainJS

Allen and Mark are back to discuss AI agent frameworks again. This time, Allen compares Google's Agent Development Kit (ADK) with LangChainJS and LangGraphJS. He walks through building a simple agent in both frameworks, highlighting the differences in their approaches, from configuration by convention in ADK to the explicit configuration in LangGraph. They also explore the web-based testing environments for both, showing how each allows for debugging and inspecting the agent's behavior. The discussion also touches on the upcoming LangChain version 1.0 and its focus on backward compatibility.


[00:00:00] - Introduction

[00:01:09] - Comparing agent frameworks: Google's ADK and LangChainJS

[00:02:20] - A look at the ADK code

[00:06:55] - A look at the LangChainJS code

[00:13:20] - The web interface for testing

[00:19:10] - ADK's web interface

[00:22:30] - LangGraph's web interface

[00:27:20] - LangGraph's state management

[00:32:15] - Final thoughts


#AI #AgenticAI #GoogleADK #LangChain #LangGraph #JavaScript #Python #TwoVoiceDevs

3 months ago
33 minutes 21 seconds

Episode 253 - The Future of Voice? Exploring Gemini 2.5's TTS Model

In this episode of Two Voice Devs, Mark and Allen dive into the new experimental Text-to-Speech (TTS) model in Google's Gemini 2.5. They explore its capabilities, from single-speaker to multi-speaker audio generation, and discuss how it's a significant leap from the old days of SSML. They also touch on how this new technology can be integrated with LangChainJS to create more dynamic and natural-sounding voice applications. Is this the return of voice as the primary interface for AI?
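The "two-function approach" discussed in the episode, where a general-purpose model writes a multi-speaker script and a TTS-tuned model renders it, can be sketched generically. The `llm` and `tts` callables below are placeholders, not the actual Gemini 2.5 or LangChainJS APIs.

```python
def generate_script(llm, topic, speakers=("Mark", "Allen")):
    """Ask a general-purpose LLM for a dialogue, one 'Name: line' per line."""
    prompt = (f"Write a short dialogue between {speakers[0]} and {speakers[1]} "
              f"about {topic}. One line per turn, formatted 'Name: text'.")
    raw = llm(prompt)
    script = []
    for line in raw.splitlines():
        name, _, text = line.partition(":")
        if text:                              # skip blank or malformed lines
            script.append((name.strip(), text.strip()))
    return script

def render_dialogue(tts, script):
    """Feed each (speaker, line) pair to the TTS-tuned model; collect audio."""
    return [tts(speaker=name, text=text) for name, text in script]
```

Splitting the work this way keeps the conversational model doing what it is tuned for (writing) and the TTS model doing what it is tuned for (speaking), which is the core of the approach described at [00:12:10].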


[00:00:00] Introduction

[00:00:45] Google's new experimental TTS model for Gemini

[00:01:55] Demo of single-speaker TTS in Google's AI Studio

[00:03:05] Code walkthrough for single-speaker TTS

[00:04:30] Lack of fine-grained control compared to SSML

[00:05:15] Using text cues to shape the TTS output

[00:06:20] Demo of multi-speaker TTS with a script

[00:09:50] Code walkthrough for multi-speaker TTS

[00:11:30] The model is tuned for TTS, not general conversation

[00:12:10] Using a separate LLM to generate a script for the TTS model

[00:13:30] Code walkthrough of the two-function approach with LangChainJS

[00:16:15] LangChainJS integration details

[00:19:00] Is Speech Markdown still relevant?

[00:21:20] Latency issues with the current TTS model

[00:22:00] Caching strategies for TTS

[00:23:30] Voice as the natural UI for AI

[00:25:30] Outro


#Gemini #TTS #VoiceAI #VoiceFirst #AI #Google #LangChainJS #LLM #Developer #Podcast

4 months ago
25 minutes 40 seconds

Episode 252 - GPT-5 First Look: Evolution, Not Revolution

Join Allen and Mark as they take a first look at the newly released GPT-5 from OpenAI. They dive into the details of what's new, what's changed, and what's missing, frequently comparing it to other models like Google's Gemini. From the new mini and nano models to the pricing wars with competitors, they cover the landscape of the latest LLM offerings. They also discuss the new features for developers, including verbosity settings and constrained outputs with context-free grammars, and what this means for the future of AI development. Is GPT-5 the leap forward everyone was expecting, or a sign that the rapid pace of AI evolution is starting to plateau? Tune in to find out!
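Constrained output with a context-free grammar means the model may only emit strings the grammar can derive. The toy recursive-descent checker below illustrates the idea on an invented grammar for signed arithmetic; it is a conceptual sketch, not OpenAI's grammar format.

```python
# Toy acceptor for the grammar:
#   expr := term (('+' | '-') term)*
#   term := digit+
# Constrained decoding rejects any candidate the grammar cannot derive.
def accepts(s: str) -> bool:
    i = 0

    def term():
        nonlocal i
        start = i
        while i < len(s) and s[i].isdigit():
            i += 1
        return i > start                      # at least one digit consumed

    if not term():
        return False
    while i < len(s) and s[i] in "+-":
        i += 1
        if not term():
            return False
    return i == len(s)                        # grammar must cover the whole string
```

In a real constrained decoder the same grammar is applied token by token during sampling, so invalid prefixes are never generated in the first place rather than filtered afterward.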


[00:00:00] Introduction and the hype around GPT-5

[00:01:00] Overview of GPT-5, mini, and nano models

[00:02:00] The new "thinking" model and smart routing

[00:03:00] Simplifying models for developers

[00:04:00] Reasoning levels vs. Gemini's "thinking budget"

[00:06:00] Pricing wars and new models

[00:07:00] OpenAI's new open source models

[00:08:00] New verbosity setting for developers

[00:09:00] Constrained outputs and context-free grammars

[00:12:00] Using LLMs to translate to well-defined data structures

[00:14:00] Reducing hallucinations and medical applications

[00:16:00] Knowledge cutoff dates for the new models

[00:18:00] Coding with GPT-5 and IDE integration

[00:19:00] More natural conversations with ChatGPT

[00:21:00] Missing audio and image modalities vs. Gemini

[00:22:00] Community reaction to the GPT-5 release

[00:24:00] The future of LLMs: Maturing and plateauing

[00:26:00] The need for better developer tools and agentic computing


#GPT5 #OpenAI #LLM #AI #ArtificialIntelligence #Developer #TechTalk #Podcast #AIDevelopment #MachineLearning #FutureOfAI #AGI #GoogleGemini #TwoVoiceDevs

4 months ago
27 minutes 35 seconds

Episode 251 - AI Agents: Frameworks and Concepts

Join Mark and Allen in this episode of Two Voice Devs as they explore the fascinating world of AI agents. They break down what agents are, how they work, and what sets them apart from earlier AI technologies. The discussion covers key concepts like "context engineering," and the essential components of an agentic system, including prompts, RAG, memory, tools, and structured outputs.


Using a practical example of a prescription management chatbot for veterans, they demonstrate how agents can handle complex tasks. They compare various frameworks for building agents, specifically focusing on OpenAI's Agent SDK (for TypeScript) and Microsoft's Semantic Kernel (for C#). They also touch on other popular frameworks like LangGraph and Google's Agent Developer Kit.


Tune in for a detailed comparison of how OpenAI's Agent SDK and Microsoft's Semantic Kernel handle state, tools, and the overall agent lifecycle, and learn what the future holds for these intelligent systems.
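The agent lifecycle discussed here, where a controller feeds the model context, the model either answers or requests a tool, and tool results flow back into the context, can be reduced to a short loop. The `model` callable and message shapes below are placeholders, not the Agent SDK's or Semantic Kernel's actual types.

```python
def run_agent(model, tools, user_message, max_steps=5):
    """Minimal agent controller: loop until the model answers or steps run out."""
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        action = model(history)               # model sees the full context so far
        if action["type"] == "answer":
            return action["content"]
        if action["type"] == "tool_call":     # dispatch, then feed the result back
            result = tools[action["name"]](**action["args"])
            history.append({"role": "tool", "name": action["name"],
                            "content": result})
    raise RuntimeError("agent did not converge within max_steps")
```

For the prescription example, a `get_refills` tool would be one entry in `tools`, and the loop is what lets the model look the data up before answering instead of guessing.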


[00:00:00] - Introduction

[00:01:02] - What is an AI Agent?

[00:03:12] - Context Engineering and its components

[00:06:02] - The role of the Agent Controller

[00:08:01] - Agent Mode vs. Agent AI

[00:09:36] - Use Case: Prescription Management Chatbot

[00:13:42] - Handling Large Lists of Data

[00:16:15] - Tools and State Management

[00:21:05] - Filtering and Searching with Tools

[00:27:08] - Displaying Information and Iterating through lists

[00:30:10] - The power of LLMs in Agentic Systems

[00:35:18] - Sub-agents and the future of agentic systems

[00:38:25] - Comparing different Agent Frameworks

[00:39:00] - Wrap up


#AIAgents #TwoVoiceDevs #ContextEngineering #OpenAIAgentSDK #SemanticKernel #LangGraph #GoogleADK #LLMs #GenAI #AI #Developer #Podcast #TypeScript #CSharp

5 months ago
39 minutes 22 seconds

Episode 250 - Five Years Up, Up, and Away in Voice & AI

Join Mark and Allen for a very special 250th episode as they celebrate five years of Two Voice Devs! You won't want to miss the unique, AI-animated opening that takes them to new heights, or the special closing that brings it all home, both created with the help of Veo 3. In between, they take a look back at the evolution of voice and AI technology. From the early days of Alexa and Google Assistant to the rise of LLMs and generative AI, they discuss the shifts in the industry, the enduring importance of context, and what the future might hold for agentic AI, security, and the developer experience.


[00:02:45] - Where did we think the industry would be in 5 years?

[00:05:30] - How LLMs and Generative AI changed the landscape

[00:11:05] - Context Engineering is the new Prompt Engineering

[00:14:30] - The explosion of frameworks, libraries, and models

[00:18:00] - The importance of guardrails and security

[00:22:30] - Where are things going in the near term?

[00:27:30] - The future of devices and developer platforms

[00:30:00] - Right-sizing models and the cost of AI

[00:33:30] - The importance of community and having fun


#TwoVoiceDevs #VoiceAI #ArtificialIntelligence #LLMs #GenerativeAI #AIAgents #VoiceFirst #TechPodcast #ConversationalAI #AICommunity #FutureOfTech #AIEthics #AISecurity #DeveloperExperience #HotAirBalloon #Veo3

5 months ago
36 minutes 14 seconds

Episode 249 - Cracking Copilot and the Mysteries of Microsoft 365

In this episode, guest host Andrew Connell, a Microsoft MVP of 21 years, joins Allen to unravel the complexities of Microsoft's AI strategy, particularly within the enterprise. They explore the world of Microsoft 365 Copilot, distinguishing it from the broader AI landscape and consumer tools like ChatGPT. Andrew provides an insider's look at how Copilot functions within a secure, private "enclave," leveraging a "Semantic Index" of your organization's data to provide relevant, contextual answers.


The conversation then shifts to the developer experience. Discover the different ways developers can extend and customize Copilot, from low-code solutions in Copilot Studio to creating powerful "declarative agents" with JSON and even building "custom engine agents" where you can bring your own models and infrastructure. If you've ever wondered what Microsoft's AI story is for businesses and internal developers, this episode provides a comprehensive and honest overview.


Timestamps:

[00:00:01] - Introducing guest host Andrew Connell

[00:00:54] - What is a Microsoft 365 developer?

[00:01:40] - Andrew's journey into the Microsoft ecosystem

[00:05:00] - 21 years as a Microsoft MVP

[00:06:15] - Enterprise Cloud vs. Developer Cloud

[00:08:06] - Microsoft's AI focus for the enterprise

[00:10:57] - What is Microsoft 365 Copilot?

[00:13:07] - How Copilot ensures data privacy with a "secure enclave"

[00:14:58] - Understanding the Semantic Index

[00:16:31] - Is Copilot a Retrieval Augmented Generation (RAG) system?

[00:17:23] - Responsible AI in the Copilot stack

[00:19:19] - The developer story for extending Copilot

[00:22:43] - Building declarative agents with JSON and YAML

[00:25:05] - Using actions and tools with agents

[00:27:00] - How agents are deployed via Microsoft Teams

[00:32:48] - Where does Copilot actually run?

[00:36:20] - Key takeaways from Microsoft Build

[00:41:20] - The spectrum of development: low-code to full-code

[00:43:00] - Full control with Custom Engine Agents

[00:49:30] - Where to find Andrew Connell online


Hashtags:

#Microsoft #AI #Copilot #Microsoft365 #Azure #SharePoint #MicrosoftTeams #MVP #Developer #Podcast #Tech #EnterpriseSoftware #CloudComputing #ArtificialIntelligence #Agents #LowCode #NoCode #RAG

5 months ago
52 minutes 7 seconds

Episode 248 - AI Showdown: Gemini CLI vs. Claude Code CLI

Join Allen Firstenberg and guest host Isaac Johnson, a Google Developer Expert with a deep background in DevOps and SRE, as they dive into the world of command-line AI assistants. In this episode, they compare and contrast two powerful tools: Anthropic's Claude Code CLI and Google's Gemini CLI.


Isaac shares his journey from coding with Fortran in the 90s to becoming a GDE, and explains why he often prefers the focused, context-aware power of a CLI tool over crowded IDE integrations. They discuss the pros and cons of each approach, from ease of use and learning curves to the critical importance of using version control as a safety net.


The conversation then gets practical with a live demo where both Claude and Gemini are tasked with generating system architecture diagrams for a real-world project. Discover the differences in speed, cost, output, and user experience. Plus, learn how to customize Gemini's behavior with `GEMINI.md` files and explore fascinating use cases beyond just writing code, including podcast production, image generation, and more.


[00:00:30] - Introducing the topic: AI assistants in the command line.

[00:01:00] - Guest Isaac Johnson's extensive background in tech.

[00:03:00] - Why use a CLI tool instead of an IDE plugin?

[00:07:30] - Pro Tip: Always use Git with AI coding tools!

[00:09:30] - The cost of AI: Comparing Claude's and Gemini's pricing.

[00:12:15] - The benefits of Gemini CLI being open source.

[00:17:30] - Live Demo: Claude Code CLI generates a system diagram.

[00:21:30] - Live Demo: Gemini CLI tackles the same task.

[00:27:30] - Customizing your AI with system prompts (`GEMINI.md`).

[00:31:30] - Beyond Code: Using CLI tools for podcasting and media generation.

[00:40:30] - Where to find and connect with Isaac Johnson.


#AI #DeveloperTools #CLI #Gemini #Claude #GoogleCloud #Anthropic #TwoVoiceDevs #TechPodcast #SoftwareDevelopment #DevOps #SRE #AIassistant #Coding #Programming #FirebaseStudio #Imagen #Veo

5 months ago
41 minutes 31 seconds

Episode 247 - Apple's AI Gets Serious

John Gillilan, our official Apple correspondent, returns to Two Voice Devs to unpack the major announcements from Apple's latest Worldwide Developer Conference (WWDC). After failing to ship the ambitious "Apple Intelligence" features promised last year, how did Apple address the elephant in the room? We dive deep into the new "Foundation Models Framework," which gives developers unprecedented access to on-device LLMs. We explore how features like structured data output with the "Generable" macro, "Tools" for app integration, and trainable "Adapters" are changing the game for developers. We also touch on the revamped speech-to-text, "Visual Intelligence," "Swift Assist" in Xcode, and the mysterious "Private Cloud Compute." Join us as we analyze Apple's AI strategy, the internal reorgs shaping their product future, and the competitive landscape with Google and OpenAI.


[00:00:00] Welcome back, John Gillilan!

[00:01:00] What was WWDC like from an insider's perspective?

[00:06:00] Apple's big miss: What happened to last year's AI promises?

[00:12:00] The new Foundation Models Framework

[00:16:00] Structured data output with the "Generable" macro

[00:19:00] Extending the LLM with "Tools"

[00:22:00] Fine-tuning with trainable "Adapters"

[00:28:00] Modernized on-device Speech-to-Text

[00:29:00] "Visual Intelligence" and app integration

[00:32:00] The powerful "call model" block in Shortcuts

[00:36:00] Swift Assist and BYO-Model in Xcode

[00:39:00] Inside Apple's big AI reorg

[00:42:00] The Jony Ive / OpenAI hardware mystery

[00:45:00] How Apple, Google, and OpenAI will compete and collaborate


#Apple #WWDC #AI #AppleIntelligence #FoundationModels #LLM #OnDeviceAI #Swift #iOSDev #Developer #TechPodcast #TwoVoiceDevs #Siri #SwiftAssist #OpenAI #GoogleGemini #GoogleAndroid

6 months ago
48 minutes 35 seconds

Episode 246 - Reasoning About Gemini 2.5 "Thinking" Model

Join Allen Firstenberg and Mark Tucker as they dive into Google's latest Gemini 2.5 models and their much-touted "thinking" capabilities. In this episode, they explore whether these models are genuinely reasoning or just executing sophisticated pattern matching. Through live tests in Google's AI Studio, they pit the Pro, Flash, and Flash-Lite models against tricky riddles, analyzing the "thought process" behind the answers. The discussion also covers the practical implications for developers, the challenges of implementing these features in frameworks like LangChainJS, and the broader question of what this means for the future of AI.


[00:00:00] - Introduction to Gemini 2.5 "thinking" models

[00:01:00] - How "thinking" models relate to Chain of Thought prompting

[00:03:00] - Advantages of separating reasoning from the answer

[00:05:00] - Exploring the models (Pro, Flash, Flash-Lite) in AI Studio

[00:06:00] - Thinking mode and thinking budget explained

[00:09:00] - Test 1: Strawberry vs. Triangle

[00:15:00] - Test 2: The "bricks vs. feathers" riddle with a twist

[00:17:00] - Prompting the model to ask clarifying questions

[00:25:00] - Is it reasoning or just pattern matching?

[00:28:00] - Practical applications and the future of these models

[00:35:00] - Implementing reasoning models in LangChainJS

[00:40:00] - Conclusion


#AI #GoogleGemini #ReasoningModels #ThinkingModels #LLM #ArtificialIntelligence #MachineLearning #LangChain #Developer #Podcast #TechTalk #TwoVoiceDevs

6 months ago
40 minutes 47 seconds

Episode 245 - From Python to TypeScript: Coding JCrew AI to Build Better Agents

Ever find that the best way to understand a new framework is to build it yourself? In this episode of Two Voice Devs, Mark Tucker takes us on a deep dive into Crew AI, a powerful Python framework for orchestrating multi-agent AI systems.


To truly get under the hood, Mark decided to port the core functionality into TypeScript, creating "JCrew AI." This process provides a unique and insightful perspective on how these agent-based systems are designed. Join us as we deconstruct the core concepts of Crew AI, exploring how it simplifies the complex process of making AI agents collaborate effectively. We discuss everything from the fundamental building blocks—like agents, tasks, and crews—to the clever ways it implements prompt engineering best practices.


If you're a developer interested in the architecture of modern AI applications, you'll gain a clear understanding of how to define agent roles, backstories, and goals; how to chain tasks together; and how the underlying execution loop (and its similarity to the ReAct pattern) works to produce cohesive results.
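The building blocks deconstructed above, agents with a role, goal, and backstory, tasks assigned to agents, and a crew that runs tasks in order while chaining each output into the next task's context, can be sketched like this. It is a simplified reconstruction for illustration, not Crew AI's (or JCrew AI's) real classes.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    role: str
    goal: str
    backstory: str
    llm: callable                  # injected so the sketch stays testable

    def execute(self, task_description, context=""):
        # Role, goal, and backstory are folded into the prompt, mirroring how
        # the framework builds prompts from templates.
        prompt = (f"You are {self.role}. {self.backstory}\n"
                  f"Your goal: {self.goal}\n"
                  f"Context from earlier tasks: {context}\n"
                  f"Task: {task_description}")
        return self.llm(prompt)

@dataclass
class Task:
    description: str
    agent: Agent

@dataclass
class Crew:
    tasks: list

    def kickoff(self):
        """Run tasks sequentially, feeding each output into the next task."""
        context = ""
        for task in self.tasks:
            context = task.agent.execute(task.description, context)
        return context             # the final task's output
```

The sequential `kickoff` loop is the simplest orchestration; the episode also discusses how the inner agent executor resembles a ReAct loop once tools enter the picture.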


Timestamps:

[00:00:00] - Introduction

[00:01:00] - What is Crew AI and the "JCrew AI" Learning Project

[00:04:00] - Core Concepts: How Crews, Agents, and Tasks Work

[00:06:00] - Anatomy of a Crew AI Agent (Role, Goal, Backstory)

[00:10:00] - Building Prompts with Templates and "Slices"

[00:15:00] - The Execution Flow: From "Kickoff" to Final Output

[00:21:00] - Under the Hood: The Agent Executor and Core Logic Loop

[00:23:00] - How Crew AI Compares to LangChain and LangGraph

[00:28:00] - Practical Considerations: Human-in-the-Loop and Performance

[00:30:00] - Learning a Framework by Rebuilding It


#AI #ArtificialIntelligence #Developer #SoftwareEngineering #CrewAI #MultiAgentSystems #AIAgents #Python #TypeScript #PromptEngineering #LLM #Podcast

6 months ago
33 minutes 18 seconds

Episode 244 - What's New With Anthropic?

What do Anthropic's latest announcements mean for developers? In this episode, Allen is joined by freelance conversation designer Valentina Adami to break down all the major news from the recent "Code with Claude" event.

Valentina shares her hands-on experience and perspective on the new Opus 4 and Sonnet 4 models, discussing their distinct capabilities, the new "reasoning" features, and why Anthropic's transparency with its public system prompt is a game-changer. They also explore Claude Code, the new coding assistant that runs in your terminal, and how it can be used for everything from fixing bugs to learning new frameworks.

Finally, they cover the latest integrations for the Model Context Protocol (MCP) and the long-awaited addition of web searching to Claude, examining how these tools are evolving and what it means for the future of AI-assisted development.


Timestamps:

[00:41] Guest Valentina Adami's background in humanities and tech

[06:17] What's new in the Opus 4 and Sonnet 4 models?

[14:40] Are the models "thinking" or "reasoning"?

[19:27] The latest on MCP (Model Context Protocol) integrations

[25:03] Exploring the new coding assistant: Claude Code

[31:37] Claude can now search the web


#Anthropic #ClaudeAI #Opus4 #Sonnet4 #ThinkingAI #ReasoningAI #LLM #DeveloperTools #GenerativeAI #AI #Claude #CodingAssistant #MCP #ModelContextProtocol #TwoVoiceDevs

6 months ago
34 minutes 28 seconds

Episode 243 - AI Agents: Exploits, Ethics, and the Perils of Over-Permissive Tools

Join Allen Firstenberg and Michal Stanislawek in this thought-provoking episode of Two Voice Devs as they unpack two recent LinkedIn posts by Michal that reveal critical insights into the security and ethical challenges of modern AI agents.


The discussion kicks off with a deep dive into a concerning GitHub MCP server exploit, where researchers uncovered a method to access private repositories through public channels like PRs and issues. This highlights the dangers of broadly permissive AI agents and the need for robust guardrails and input sanitization, especially when vanilla language models are given wide-ranging access to sensitive data. What happens when your 'personal assistant' acts on a malicious instruction, mistaking it for a routine task?


The conversation then shifts to the ethical landscape of AI, exploring Anthropic's Claude 4 experiments which suggest that AI assistants, under certain conditions, might prioritize self-preservation or even 'snitch.' This raises profound questions for developers and users alike: How ethical do we want our agents to be? Who do they truly work for – us or the corporation? Could governments compel AI to reveal sensitive information?


Allen and Michal delve into the implications for developers, stressing the importance of building specialized agents with clear workflows, implementing principles of least privilege, and rethinking current authorization protocols like OAuth to support fine-grained permissions. They argue that we must consider the AI itself as the 'user' of our tools, necessitating a fundamental shift in how we design and secure these increasingly autonomous systems.
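The principle-of-least-privilege argument maps directly onto code: each tool declares the scope it needs, and the dispatcher refuses any call the current grant does not cover. The scope strings below are invented for illustration, not a real OAuth profile.

```python
# Each tool declares the (invented) OAuth-style scope it requires; the
# dispatcher enforces the grant, so an over-permissive agent cannot quietly
# escalate from reading issues to merging pull requests.
TOOL_SCOPES = {
    "read_issue": "repo.issues:read",
    "merge_pr":   "repo.pulls:write",
}

def dispatch(tool_name, granted_scopes, tools, **args):
    required = TOOL_SCOPES[tool_name]
    if required not in granted_scopes:
        raise PermissionError(f"{tool_name} requires scope {required!r}")
    return tools[tool_name](**args)
```

Checking at the dispatch boundary, rather than trusting the model to call only what it should, is exactly the guardrail the exploit discussion argues for.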


This episode is a must-listen for any developer building with AI, offering crucial perspectives on how to navigate the complex intersection of AI capabilities, security vulnerabilities, and ethical responsibilities.


More Info:

* https://www.linkedin.com/posts/xmstan_the-researchers-who-unveiled-claude-4s-snitching-activity-7333733889942691840-wAQ4

* https://www.linkedin.com/posts/xmstan_your-ai-assistant-may-accidentally-become-activity-7333219169888305152-2cjN


00:00 - Introduction: Unpacking AI Agent Security & Ethics

00:50 - The GitHub MCP Server Exploit: Public Access to Private Repos

02:15 - Ethical AI: Self-Preservation & The 'Snitching' Agent Dilemma

04:00 - Developer Responsibility: Building Ethical & Trustworthy AI Systems

09:20 - The Dangers of Vanilla LLM Integrations Without Guardrails

13:00 - Custom Workflows vs. Generic Autonomous Agents

17:20 - Isolation of Concerns & Principles of Least Privilege

26:00 - Rethinking OAuth: The Need for Fine-Grained AI Permissions

29:00 - The Holistic Approach to AI Security & Authorization


#AIAgents #AIethics #AIsecurity #PromptInjection #GitHub #ModelContextProtocol #MCP #MCPservers #MCPsecurity #OAuth #Authorization #Authentication #LeastPrivilege #Privacy #Security #Exploit #Hack #RedTeam #CovertChannel #Developer #TechPodcast #TwoVoiceDevs #Anthropic #ClaudeAI #LLM #LargeLanguageModel #GenerativeAI

7 months ago
30 minutes 57 seconds

Episode 242 - From the Creatives Corner at I/O 2025

Join Allen Firstenberg and Linda Lawton of Two Voice Devs as they record live from Google I/O 2025! As the conference nears its end, they dive deep into the groundbreaking announcements in generative AI, discussing the latest advancements and what they mean for developers, especially those in Conversational AI.

This episode explores the new and updated models that are set to redefine content creation:

Lyria: Google's innovative streaming audio generation API, its unique WebSocket-based approach, and the fascinating possibilities (and challenges!) of dynamic music creation, including its potential for YouTube content and the ever-present copyright questions surrounding AI-generated media.

Veo 3: The video generation powerhouse, now enhanced with synchronized audio and voice, realistic lip-sync for characters (yes, even cartoon animals!), and improvements in "world physics." They also tackle the implications of its pricing for professional and individual creators.

Imagen 4: Discover the highly anticipated improvements in text generation within images, including stylized fonts and potential for other languages.


Allen and Linda also share some early creations with these new models.

Whether you're building the next great voice app, creating dynamic content, or just curious about the cutting edge of AI, this episode offers a developer-focused perspective on the future of generative media.


00:00:00: Introduction to Two Voice Devs at I/O 2025

00:00:50: I/O 2025: New Generative AI Models Overview

00:01:20: Lyria: Streaming Audio Generation and Documentation Challenges

00:03:00: Lyria's Practical Use Cases & Generative AI Copyright Questions

00:10:00: Veo 3: Video Generation with Synchronized Audio and Voice Features

00:12:10: Veo 3 Pricing and Cost Implications for Developers

00:14:20: Imagen 4: Improved Text Generation in Images

00:17:40: Professional Use Cases for Veo and Imagen

00:19:10: Flow: The New Professional Studio System for Creators

00:22:00: Gemini Ultra Tiered Pricing and Regional Restrictions

00:24:20: Concluding Thoughts and Call to Action


#GoogleIO2025 #GenerativeAI #AIModels #Lyria #Veo3 #Imagen4 #FlowAI #TwoVoiceDevs #VoiceTech #ConversationalAI #AIDevelopment #MachineLearning #ContentCreation #YouTubeCreators #GoogleAI #VertexAI #GeminiUltra #CopyrightAI #TechPodcast

7 months ago
25 minutes 9 seconds
