Introducing Techsplainers by IBM, your new podcast for quick, powerful takes on today’s most important AI and tech topics. Each episode brings you bite-sized learning designed to fit your day, whether you’re driving, exercising, or just curious for something new.
This is just the beginning. Tune in every weekday at 6 AM ET for fresh insights, new voices, and smarter learning.
In this episode of Techsplainers, we dive into full-stack observability, a comprehensive approach that unifies telemetry across infrastructure, applications, and user experiences. Unlike siloed monitoring, full-stack observability provides a single source of truth for system health, enabling faster incident resolution, predictive optimization, and improved operational efficiency. We discuss how it works, including automated service discovery, leading factor analysis, unified dashboards, and AI-driven analytics. You will also learn about its benefits for performance, security, compliance, and business outcomes, as well as challenges like data scale, integration, and privacy. Finally, we explore how machine learning and natural language processing are shaping the future of observability. No matter your role, this episode offers a complete guide to why full-stack observability is essential in today’s complex digital environments.
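To make the unified-telemetry idea concrete, here is a minimal Python sketch (not from the episode; emit_metric and emit_span are hypothetical stand-ins for real backends) showing how a log line, a metric, and a trace span for the same request can be tied together with one trace ID, which is what lets a full-stack view correlate signals across layers.

import json
import time
import uuid
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout-service")

def emit_metric(name, value, tags):
    """Stand-in for a metrics backend (hypothetical)."""
    print(f"METRIC {name}={value:.2f} {tags}")

def emit_span(name, trace_id, duration_ms):
    """Stand-in for a tracing backend (hypothetical)."""
    print(f"SPAN {name} trace={trace_id} duration={duration_ms:.2f}ms")

def handle_request(order_id: str) -> None:
    """Handle one request while emitting correlated telemetry."""
    trace_id = uuid.uuid4().hex          # shared ID ties the three signals together
    start = time.monotonic()

    # Log: a discrete event, tagged with the trace ID
    log.info(json.dumps({"trace_id": trace_id, "event": "order_received", "order_id": order_id}))

    # ... business logic would run here ...

    # Metric: a numeric measurement of the same unit of work
    latency_ms = (time.monotonic() - start) * 1000
    emit_metric("checkout.latency_ms", latency_ms, tags={"trace_id": trace_id})

    # Trace span: timing and causality, again keyed by the trace ID
    emit_span(name="handle_request", trace_id=trace_id, duration_ms=latency_ms)

handle_request("order-123")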
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by PJ Hagerty
In this episode of Techsplainers, we dive into SRE observability, a critical practice for ensuring site reliability in today’s dynamic, cloud-native environments. Discover how SRE observability goes beyond traditional monitoring by using telemetry data—metrics, logs, and traces—to provide deep visibility into complex systems. We explain how it supports proactive issue detection, faster incident response, and data-driven decision-making. You will also learn about real-world use cases in ecommerce, finance, logistics, and healthcare, as well as emerging trends like AI-driven observability and causal AI. Whether you are an engineer, IT professional, or tech enthusiast, this episode will help you understand how SRE observability optimizes performance, enhances user experience, and drives better business outcomes.
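As a concrete illustration of the data-driven decision-making the episode describes, here is a small Python sketch (our own example, not from the episode) of an SRE error-budget calculation against a 99.9% availability SLO.

def error_budget_report(slo_target: float, total_requests: int, failed_requests: int) -> dict:
    """Compute how much of an SLO's error budget has been consumed."""
    allowed_failures = total_requests * (1 - slo_target)   # failure budget for the window
    budget_used = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "availability": 1 - failed_requests / total_requests,
        "error_budget_used_pct": round(budget_used * 100, 1),
        "budget_exhausted": failed_requests > allowed_failures,
    }

# 99.9% availability SLO over 2,000,000 requests with 1,400 failures
print(error_budget_report(0.999, 2_000_000, 1_400))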
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by PJ Hagerty
This episode of Techsplainers explains what data accuracy is, why it matters, and how organizations can achieve it. We explore its role as a core dimension of data quality, the benefits of accurate data for decision-making, compliance, AI, and customer satisfaction, and the common causes of inaccuracies—from human error to outdated information and biased data.
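As a rough illustration (not from the episode), here is a minimal Python sketch of rule-based accuracy checks that flag implausible or stale values in a customer record, the kind of automated validation that catches human error and outdated information.

import re
from datetime import date

def check_record_accuracy(record: dict) -> list[str]:
    """Return a list of accuracy problems found in one customer record."""
    problems = []
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record.get("email", "")):
        problems.append("email does not match a valid pattern")
    if not (0 <= record.get("age", -1) <= 120):
        problems.append("age is outside a plausible range")
    if record.get("last_updated", date.min) < date(2020, 1, 1):
        problems.append("record may be stale (not updated since 2020)")
    return problems

record = {"email": "ada@example.com", "age": 208, "last_updated": date(2018, 6, 1)}
print(check_record_accuracy(record))  # flags the implausible age and the stale timestamp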
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Matt Finio
This episode of Techsplainers explains what data integrity is, why it matters, and how organizations can maintain it. We cover the processes and security measures that ensure data remains accurate, complete, and consistent throughout its lifecycle. Learn why data integrity is critical for analytics, compliance, and trust, and explore the five key types of data integrity.
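A simple, widely used integrity safeguard is checksumming. Here is a minimal Python sketch (illustrative, not from the episode) that detects whether stored data has changed since it was written.

import hashlib

def checksum(data: bytes) -> str:
    """Return a SHA-256 digest used to detect unintended changes."""
    return hashlib.sha256(data).hexdigest()

original = b"patient_id=42,diagnosis=ok"
stored_digest = checksum(original)          # recorded when the data is written

# Later, re-read the data and verify it has not been altered or corrupted
retrieved = b"patient_id=42,diagnosis=ok"
if checksum(retrieved) == stored_digest:
    print("integrity check passed")
else:
    print("integrity check FAILED: data changed since it was written")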
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Matt Finio
This episode of "Techsplainers" explains the concept of multi-agent collaboration. It discusses how multi-agent systems, comprising multiple AI agents, coordinate actions in a distributed system to achieve complex tasks. These tasks, once handled only by large language models, now include customer service triage, financial analysis, technical troubleshooting, and more. The podcast details how agents communicate via established protocols to exchange information, assign responsibilities, and coordinate actions. It also highlights the benefits of multi-agent collaboration, such as scalability, fault tolerance, and emergent cooperative behavior, using examples like a fleet of drones searching a disaster site.
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Alice Gomstyn
This episode of Techsplainers introduces listeners to the concept of agentic architecture, a framework used for structuring AI agents to automate complex tasks. The podcast explains that agentic architecture is crucial for creating AI agents capable of autonomous decision-making and adapting to dynamic environments. It delves into the four core factors of agency: intentionality (planning), forethought, self-reactiveness, and self-reflectiveness. These four factors underpin AI agents' autonomy. The discussion also contrasts agentic and non-agentic architectures, highlighting the advantages of agentic architectures in supporting agentic behavior in AI agents. The podcast further breaks down different types of agentic architectures – single-agent, multi-agent, and hybrid – detailing their structures, strengths, weaknesses, and best use cases. Finally, it covers three types of agentic frameworks—reactive, deliberative, and cognitive—concluding with a detailed explanation of BDI architectures, a model for rational decision-making in intelligent agents.
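As a rough sketch of the BDI (belief-desire-intention) idea discussed at the end of the episode, here is a toy Python example (illustrative, with made-up beliefs and plans) in which an agent deliberates over its desires given its current beliefs, commits to an intention, and turns that intention into a plan.

# Beliefs: what the agent currently knows about its environment
beliefs = {"battery_pct": 18, "package_loaded": True, "at_destination": False}

# Desires: goals the agent would like to achieve
desires = ["deliver_package", "recharge"]

def deliberate(beliefs: dict, desires: list) -> str:
    """Pick one desire to commit to (the intention), given current beliefs."""
    if "recharge" in desires and beliefs["battery_pct"] < 20:
        return "recharge"                      # the safety goal wins when battery is low
    if beliefs["package_loaded"] and not beliefs["at_destination"]:
        return "deliver_package"
    return "idle"

def plan(intention: str) -> list[str]:
    """Turn the committed intention into a concrete sequence of actions."""
    plans = {
        "recharge": ["navigate_to_charger", "dock", "wait_until_charged"],
        "deliver_package": ["plan_route", "fly_to_destination", "drop_package"],
        "idle": ["hold_position"],
    }
    return plans[intention]

intention = deliberate(beliefs, desires)
print(intention, "->", plan(intention))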
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Alice Gomstyn
This episode of Techsplainers introduces vibe coding, the practice of using AI tools to generate software code through natural language prompts rather than manual coding. We explore how this approach follows a "code first, refine later" philosophy that prioritizes experimentation and rapid prototyping. The podcast walks through the four-step implementation process: choosing an AI coding assistant platform, defining requirements through clear prompts, refining the generated code, and reviewing before deployment. While highlighting vibe coding's ability to accelerate development and free human creativity, we also examine its limitations—including challenges with technical complexity, code quality, debugging, maintenance, and security concerns. The discussion concludes by examining how vibe coding is driving paradigm shifts in software development through quick prototyping, problem-first approaches, reduced risk with maximized impact, and multimodal interfaces that combine voice, visual, and text-based coding methods to create more intuitive development environments.
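To illustrate the four-step loop in miniature, here is a hedged Python sketch; generate_code is a hypothetical stand-in for whatever AI coding assistant you choose, and the review gate is deliberately simplistic.

def generate_code(prompt: str) -> str:
    """Hypothetical stand-in for an AI coding assistant's API call."""
    # A real assistant would return generated source code for the prompt.
    return "def add_totals(rows):\n    return sum(r['total'] for r in rows)\n"

def looks_acceptable(code: str) -> bool:
    """Minimal review gate applied before the generated code is kept."""
    return "def " in code and "eval(" not in code

# 1. Define the requirement as a natural-language prompt
prompt = "Write a Python function that sums the 'total' field of a list of dicts."

# 2-3. Generate, then refine the prompt until the result passes review
for attempt in range(3):
    code = generate_code(prompt)
    if looks_acceptable(code):
        break
    prompt += " Keep it simple and avoid unsafe constructs."

# 4. Review before deployment (here: print for a human to inspect)
print(code)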
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Amanda Downie
This episode of Techsplainers explores retrieval augmented generation (RAG), a powerful technique that enhances generative AI by connecting models to external knowledge bases. We examine how RAG addresses critical limitations of large language models—their finite training data and knowledge cutoffs—by allowing them to access up-to-date, domain-specific information in real-time. The podcast breaks down RAG's five-stage process: from receiving a user query to retrieving relevant information, integrating it into an augmented prompt, and generating an informed response. We dissect RAG's four core components—knowledge base, retriever, integration layer, and generator—explaining how they work together to create a more robust AI system. Special attention is given to embedding and chunking processes that transform unstructured data into searchable vector representations. The episode highlights RAG's numerous benefits, including cost efficiency compared to fine-tuning, reduced hallucinations, enhanced user trust through citations, expanded model capabilities, improved developer control, and stronger data security. Finally, we showcase diverse real-world applications across industries, from specialized chatbots and research tools to personalized recommendation engines.
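Here is a deliberately tiny Python sketch of the retrieve-and-augment pattern (illustrative only: the "embeddings" are bag-of-words counts rather than real vector embeddings, and the final LLM call is omitted), showing the knowledge base, retriever, and integration layer working together.

import math

# Toy knowledge base: documents already split into chunks
knowledge_base = {
    "doc1": "The return window for online orders is 30 days from delivery.",
    "doc2": "Premium members get free shipping on all orders.",
    "doc3": "Support is available by chat from 9am to 5pm on weekdays.",
}

def embed(text: str) -> dict:
    """Toy 'embedding': bag-of-words counts (real systems use vector models)."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Retriever: rank chunks against the query and return the top k."""
    q = embed(query)
    ranked = sorted(knowledge_base.values(), key=lambda c: similarity(q, embed(c)), reverse=True)
    return ranked[:k]

def augment(query: str, chunks: list[str]) -> str:
    """Integration layer: build the augmented prompt for the generator."""
    context = "\n".join(chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

query = "How many days do I have to return an order?"
prompt = augment(query, retrieve(query))
print(prompt)  # this augmented prompt would then be sent to the LLM (the generator)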
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Amanda Downie
This episode of Techsplainers explores vision language models (VLMs), the sophisticated AI systems that bridge computer vision and natural language processing. We examine how these multimodal models understand relationships between images and text, allowing them to generate image descriptions, answer visual questions, and even create images from text prompts. The podcast dissects the architecture of VLMs, explaining the critical components of vision encoders (which process visual information into vector embeddings) and language encoders (which interpret textual data). We delve into training strategies, including contrastive learning methods like CLIP, masking techniques, generative approaches, and transfer learning from pretrained models. The discussion highlights real-world applications—from image captioning and generation to visual search, image segmentation, and object detection—while showcasing leading models like DeepSeek-VL2, Google's Gemini 2.0, OpenAI's GPT-4o, Meta's Llama 3.2, and NVIDIA's NVLM. Finally, we address implementation challenges similar to traditional LLMs, including data bias, computational complexity, and the risk of hallucinations.
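To show the core matching idea behind contrastive training such as CLIP, here is a small Python/NumPy sketch (the embeddings are made-up toy vectors, not the outputs of a real vision or text encoder) that picks the caption whose embedding is most similar to an image embedding.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score used to match image and text embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for a vision encoder's and text encoder's outputs
image_embedding = np.array([0.9, 0.1, 0.3])          # e.g., a photo of a dog
captions = {
    "a dog playing in the park": np.array([0.8, 0.2, 0.35]),
    "a plate of pasta": np.array([0.1, 0.9, 0.2]),
}

# Contrastive training pushes matching pairs toward high similarity;
# at inference we simply pick the caption whose embedding is closest.
best = max(captions, key=lambda c: cosine_similarity(image_embedding, captions[c]))
print(best)   # -> "a dog playing in the park"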
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Amanda Downie
This episode of Techsplainers explores large language models (LLMs), the powerful AI systems revolutionizing how we interact with technology through human language. We break down how these massive statistical prediction machines are built on transformer architecture, enabling them to understand context and relationships between words far better than previous systems. The podcast walks through the complete development process—from pretraining on trillions of words and tokenization to self-supervised learning and the crucial self-attention mechanism that allows LLMs to capture linguistic relationships. We examine various fine-tuning methods, including supervised fine-tuning, reinforcement learning from human feedback (RLHF), and instruction tuning, that help adapt these models for specific uses. The discussion covers practical aspects like prompt engineering, temperature settings, context windows, and retrieval augmented generation (RAG) while showcasing real-world applications across industries. Finally, we address the significant challenges of LLMs, including hallucinations, biases, and resource demands, alongside governance frameworks and evaluation techniques used to ensure these powerful tools are deployed responsibly.
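As a concrete example of one of the practical knobs discussed, here is a short Python/NumPy sketch (illustrative logits, not from a real model) of how temperature reshapes the next-token probability distribution.

import numpy as np

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Turn raw model scores into next-token probabilities."""
    scaled = logits / temperature            # low T sharpens, high T flattens
    exp = np.exp(scaled - scaled.max())      # shift for numerical stability
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.2, -1.0])     # scores for 4 candidate tokens

for t in (0.2, 1.0, 2.0):
    print(f"temperature={t}: {np.round(softmax_with_temperature(logits, t), 3)}")

# A low temperature concentrates probability on the top token (more deterministic);
# a high temperature spreads it out, producing more varied output.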
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Amanda Downie
This episode of Techsplainers explores generative AI, the revolutionary technology that creates original content like text, images, video, and code in response to user prompts. We walk through how these systems work in three main phases: training foundation models on massive datasets, tuning them for specific applications, and continuously improving their outputs through evaluation. The podcast traces the evolution of key generative AI architectures—from variational autoencoders and generative adversarial networks to diffusion models and transformers—highlighting how each contributes to today's powerful AI tools. We examine generative AI's diverse applications across industries, from enhancing customer experiences and accelerating software development to transforming creative processes and scientific research. The episode also addresses emerging concepts like AI agents and agentic AI while candidly discussing the technology's challenges, including hallucinations, bias, security vulnerabilities, and deepfakes. Despite these concerns, the episode emphasizes how organizations are increasingly adopting generative AI, with analysts predicting 80% implementation by 2026.
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Amanda Downie
This episode of Techsplainers explores model deployment, the crucial phase that brings machine learning models from development into production environments where they can deliver real business value. We examine why deployment is so critical—according to Gartner, only about 48% of AI projects make it to production—and discuss four primary deployment methods: real-time (for immediate predictions), batch (for offline processing of large datasets), streaming (for continuous data processing), and edge deployment (for running models on devices like smartphones). The podcast walks through the six essential steps of the deployment process: planning (preparing the technical environment), setup (configuring dependencies and security), packaging and deployment (containerizing the model), testing (validating functionality), monitoring (tracking performance metrics), and implementing CI/CD pipelines (for automated updates). We also address key challenges organizations face when deploying models, including high infrastructure costs, technical complexity, integration difficulties with existing systems, and ensuring proper scalability to handle varying workloads.
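For the real-time method, here is a minimal serving sketch using FastAPI (assuming fastapi and uvicorn are installed; the model is a stand-in for a trained one), which would then be containerized, tested, and monitored as the episode describes.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    amount: float
    country: str

def model_predict(features: Features) -> float:
    """Stand-in for a loaded ML model's prediction call."""
    return 0.92 if features.amount > 1000 else 0.08

@app.post("/predict")
def predict(features: Features):
    """Serve one prediction per request (the real-time deployment pattern)."""
    return {"fraud_score": model_predict(features)}

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000
# Packaging this service in a container and adding health checks and latency
# metrics would follow as the packaging, testing, and monitoring steps.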
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Ian Smalley
This episode of Techsplainers explores AI Model Lifecycle Management, the comprehensive methodology for managing artificial intelligence models throughout their entire lifecycle. We discuss why a structured approach to AI deployment is critical for enterprise success, especially when decisions made by AI systems can significantly impact business outcomes. The podcast outlines the four main stages of the AI pipeline: collect (making data accessible), organize (creating an analytics foundation), analyze (building AI with trust), and infuse (operationalizing AI across business functions). We also examine the essential components of effective AI lifecycle management, including data governance, quality assurance, fairness evaluation, and explainability. The episode concludes by highlighting the key features needed in AI management tools—from ease of model training and deployment at scale to comprehensive monitoring capabilities—using IBM Cloud Pak for Data as an illustrative example of an end-to-end platform designed to increase the throughput of data science activities and accelerate time to value from AI initiatives.
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Ian Smalley
This episode of Techsplainers explores the machine learning pipeline—the systematic process of designing, developing, and deploying machine learning models. We break down the entire workflow into three distinct stages: data processing (covering ingestion, preprocessing, exploration, and feature engineering), model development (including algorithm selection, hyperparameter tuning, training approaches, and performance evaluation), and model deployment (addressing serialization, integration, architecture, monitoring, updates, and compliance). The podcast also emphasizes the critical "Stage 0" of project commencement, where stakeholders define clear objectives, success metrics, and potential obstacles before starting technical work. Throughout the discussion, we highlight how each stage contributes to creating effective, high-performing ML models while examining various training methodologies—from supervised and unsupervised learning to reinforcement and continual learning approaches. Special attention is given to model monitoring and maintenance, acknowledging that deployment is not the end but rather the beginning of a model's productive life cycle.
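As a compact illustration of the data-processing and model-development stages, here is a scikit-learn sketch (synthetic data and default settings chosen for brevity) that chains preprocessing and a classifier into one pipeline and evaluates it on a held-out set.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Data processing: ingest and split
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Model development: preprocessing and the estimator chained into one pipeline
pipeline = Pipeline([
    ("scale", StandardScaler()),                 # feature engineering / preprocessing
    ("clf", LogisticRegression(max_iter=1_000)), # algorithm selection
])
pipeline.fit(X_train, y_train)

# Evaluation: the success metric defined at "Stage 0" decides whether this model ships
print("accuracy:", accuracy_score(y_test, pipeline.predict(X_test)))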
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Ian Smalley
This episode of Techsplainers explores the practical implementation of MLOps, diving into the key components that comprise an effective machine learning operations pipeline. We examine the five essential elements: data management (including acquisition, preprocessing, and versioning), model development (covering training, experimentation, and evaluation), model deployment (focusing on packaging and serving), monitoring and optimization (highlighting performance tracking and retraining), and collaboration and governance (emphasizing version control and ethical guidelines). The podcast also investigates how generative AI and large language models are reshaping MLOps practices before explaining the four maturity levels of MLOps implementation—from manual processes to fully automated systems with continuous monitoring and governance. Throughout the episode, we emphasize that organizations should select the appropriate MLOps maturity level based on their specific needs rather than pursuing the most advanced level by default.
Find more information at https://www.ibm.com/think/podcasts/techsplainers
Narrated by Ian Smalley
This episode of Techsplainers introduces MLOps (machine learning operations), a methodology that creates an efficient assembly line for building and running machine learning models. The podcast explains how MLOps evolved from DevOps principles to address the unique challenges of ML development, including resource intensity, time consumption, and siloed teams. We explore the key benefits of MLOps—increased efficiency through automation, improved model accuracy through continuous monitoring, faster time to market, and enhanced scalability and governance. The episode details eight core principles that define effective MLOps practices: collaboration, continuous improvement, automation, reproducibility, versioning, monitoring and observability, governance and security, and scalability. Finally, we examine the key elements of successful MLOps implementation, including the necessary technical and soft skills, essential tools like ML frameworks and CI/CD pipelines, and best practices for model lifecycle management.
Find more information at https://www.ibm.com/think/podcasts/techsplainers.
Narrated by Ian Smalley
This episode of Techsplainers continues the exploration of ransomware, focusing on notorious variants that have caused billions in damages worldwide. The podcast examines landmark ransomware families, including CryptoLocker, which kickstarted modern ransomware attacks; WannaCry, which infected 200,000 computers across 150 countries; and Darkside, responsible for the Colonial Pipeline attack that disrupted 45% of the US East Coast's fuel supply. Listeners will learn about the evolution of ransomware tactics, from standard encryption to AI-powered "Ransomware 3.0" like PromptLock. The discussion also covers ransom payment trends—noting that 63% of victims now refuse to pay—along with law enforcement's stance against payments and potential legal consequences. The episode concludes with essential prevention strategies, including maintaining offline backups, regular patching, employee training, and establishing formal incident response plans that can save organizations nearly $1 million per attack through faster identification.
Find more information at https://www.ibm.com/think/podcasts/techsplainers.
Narrated by Bryan Clark
This episode of Techsplainers demystifies ransomware—malicious software that encrypts victims' data and demands payment for its release. The podcast explains how ransomware has evolved from simple encryption attacks to sophisticated double and triple extortion tactics that threaten data leaks and attacks on business partners. Listeners will learn about different types of ransomware, including encrypting (crypto) ransomware, screen-locking variants, leakware, mobile ransomware, wipers, and scareware. The discussion covers common infection vectors, such as phishing, software vulnerabilities, and credential theft, along with the growing "Ransomware-as-a-Service" business model that allows criminals without technical skills to deploy attacks. The episode walks through the five stages of a typical ransomware attack, from initial access to the final ransom demand, highlighting why these attacks cost victims an average of USD 5.08 million, according to IBM's research.
Find more information at https://www.ibm.com/think/podcasts/techsplainers.
Narrated by Bryan Clark
This episode of Techsplainers explores phishing—the deceptive technique cybercriminals use to trick victims into revealing sensitive information or downloading malware through fraudulent communications. The podcast explains why phishing is the most common data breach vector, accounting for 16% of all breaches and costing organizations an average of USD 4.8 million. Listeners will discover various phishing methods, including bulk email phishing, targeted spear phishing, executive-focused whaling, business email compromise, SMS phishing (smishing), voice phishing (vishing), social media attacks, and QR code scams (quishing). The discussion highlights how generative AI has transformed phishing, enabling scammers to create more convincing messages in minutes instead of hours. The episode concludes with practical advice on spotting phishing red flags—like urgency tactics, unsolicited requests, poor grammar, and spoofed URLs—and implementing preventative measures, such as security awareness training, multi-factor authentication, and advanced threat detection tools.
Find more information at https://www.ibm.com/think/podcasts/techsplainers.
Narrated by Bryan Clark
This episode of Techsplainers explores social engineering—cyber attacks that use psychological manipulation rather than technical hacking to compromise security. The podcast examines how attackers impersonate trusted entities and exploit emotions like fear, greed, and curiosity to trick victims. Listeners will discover various attack methods, including different types of phishing, baiting, tailgating, quid pro quo scams, scareware, and watering hole attacks. The discussion shows how these tactics allow cybercriminals to bypass security controls through human vulnerabilities, illustrated with examples from Nigerian prince scams to fake virus warnings. The episode concludes with practical defense strategies, including security awareness training, multi-factor authentication, and advanced detection technologies to protect against these increasingly sophisticated threats.
Find more information at https://www.ibm.com/think/podcasts/techsplainers.
Narrated by Bryan Clark