Techsplainers by IBM
IBM
44 episodes
15 hours ago

Introducing Techsplainers by IBM: your new podcast for quick, powerful takes on today’s most important AI and tech topics. Each episode brings you bite-sized learning designed to fit your day, whether you’re driving, exercising, or just curious for something new.


This is just the beginning. Tune in every weekday at 6 AM ET for fresh insights, new voices, and smarter learning.

Technology
Education
Business
RSS
All content for Techsplainers by IBM is the property of IBM and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.

Episodes (20/44)
Techsplainers by IBM
What is AutoML?

This episode of Techsplainers explores automated machine learning (AutoML), a transformative approach that automates the end-to-end development of machine learning models. We explain how AutoML democratizes AI by enabling non-experts to implement intelligent systems while allowing data scientists to focus on more complex challenges rather than routine tasks. The podcast walks through how AutoML solutions streamline the entire machine learning pipeline—from data preparation and preprocessing to feature engineering, model selection, hyperparameter tuning, validation, and deployment. Particularly valuable is our discussion of automated feature engineering, which can reduce development time from days to minutes while increasing model explainability. We explore four major use cases where AutoML excels: classification tasks like fraud detection, regression problems for forecasting, computer vision applications for image processing, and natural language processing for text analysis. The episode concludes by acknowledging AutoML's limitations, including potentially high costs for complex models, challenges with interpretability, risks of overfitting, limited control over model design, and continued dependence on high-quality training data.
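The automated model-selection step described above can be sketched in a few lines: fit every candidate model, score each on held-out data, and keep the winner. The toy data and the two candidate "models" below are illustrative assumptions, not taken from the episode; real AutoML tools also automate preprocessing, feature engineering, and hyperparameter tuning.

```python
def train_mean_model(xs, ys):
    """Trivial baseline 'model': always predict the training mean."""
    mean = sum(ys) / len(ys)
    return lambda x: mean

def train_linear_model(xs, ys):
    """Least-squares fit of y = a*x + b (one feature, closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    b = my - a * mx
    return lambda x: a * x + b

def mse(model, xs, ys):
    """Mean squared error of a model on a dataset."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def auto_select(candidates, train, valid):
    """The core AutoML loop: fit every candidate, keep the lowest-error one."""
    scored = []
    for name, trainer in candidates:
        model = trainer(*train)
        scored.append((mse(model, *valid), name, model))
    return min(scored, key=lambda t: t[0])

train = ([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])   # roughly y = 2x
valid = ([5, 6], [10.1, 11.9])                  # held-out validation split
error, name, model = auto_select(
    [("mean", train_mean_model), ("linear", train_linear_model)], train, valid)
print(name)  # the linear model wins on this data
```

Production systems search far larger spaces (algorithms plus their hyperparameters), but the select-by-validation-score loop is the same idea.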


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Ian Smalley

2 days ago
11 minutes

Techsplainers by IBM
What is data labeling?

This episode of Techsplainers explores data labeling, the critical preprocessing stage where raw data is assigned contextual tags to make it intelligible for machine learning models. We examine how this process combines software tools with human-in-the-loop participation to create the foundation for AI applications like computer vision and natural language processing. The podcast compares five distinct approaches to data labeling: internal labeling (using in-house experts), synthetic labeling (generating new data from existing datasets), programmatic labeling (automating the process through scripts), outsourcing (leveraging external specialists), and crowdsourcing (distributing micro-tasks across many contributors). We also discuss the tradeoffs involved—while proper labeling significantly improves model accuracy and performance, it's often expensive and time-consuming. The episode concludes by sharing best practices like consensus measurement, label auditing, and active learning techniques that help organizations optimize their data labeling processes for maximum efficiency and accuracy across various use cases from image recognition to sentiment analysis.
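The programmatic labeling approach mentioned above can be sketched as a set of heuristic labeling functions whose votes are aggregated. The keyword lists and the simple majority-vote rule here are illustrative assumptions, not from the episode.

```python
# Hypothetical sentiment keywords for a toy labeling task.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "hate"}

def lf_positive_words(text):
    return "positive" if POSITIVE & set(text.lower().split()) else None

def lf_negative_words(text):
    return "negative" if NEGATIVE & set(text.lower().split()) else None

def lf_exclamation(text):
    # Weak heuristic: enthusiastic punctuation hints at positive sentiment.
    return "positive" if text.endswith("!") else None

def label(text, functions):
    """Majority vote over labeling functions; None means the function abstains."""
    votes = [v for lf in functions if (v := lf(text)) is not None]
    if not votes:
        return "unlabeled"
    return max(set(votes), key=votes.count)

lfs = [lf_positive_words, lf_negative_words, lf_exclamation]
print(label("I love this product!", lfs))   # positive
print(label("terrible support", lfs))       # negative
```

Items left "unlabeled" by every heuristic are exactly the ones a human-in-the-loop reviewer or an active-learning step would prioritize.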


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Ian Smalley

3 days ago
10 minutes

Techsplainers by IBM
What is access management?

This episode of Techsplainers explores Identity and Access Management (IAM), the cybersecurity discipline that controls who can access what in digital systems. We examine IAM's four foundational pillars—administration, authentication, authorization, and auditing—and how they work together to secure modern organizations. The episode details essential IAM capabilities, including directory services, authentication tools like multi-factor authentication and single sign-on, various access control methods, and specialized functions for privileged accounts and non-human users. With 30% of cyber attacks involving stolen credentials and non-human identities now outnumbering human users 10:1 in enterprises, IAM has evolved from basic IT functionality to a critical security foundation. The discussion concludes by examining emerging trends like identity fabrics that unite disparate systems and how AI is both challenging and enhancing IAM capabilities.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Bryan Clark

4 days ago
11 minutes

Techsplainers by IBM
What is authentication?

This episode of Techsplainers introduces authentication, the cybersecurity process that verifies a user's identity before granting access to systems or data. The episode distinguishes authentication (proving who you are) from authorization (determining what you're allowed to do) and explores the four main authentication factors: something you know (passwords), something you have (security tokens), something you are (biometrics), and something you do (behavioral patterns). Modern authentication approaches are examined, including single sign-on (SSO), multi-factor authentication (MFA), adaptive authentication that uses AI to assess risk in real-time, and passwordless authentication using cryptographic passkeys. Technical standards like SAML, OAuth, and Kerberos are also explained. With account hijacking involved in 30% of cyber attacks, according to IBM's X-Force Threat Intelligence Index, strong authentication proves critical for security, access control, and regulatory compliance across industries like healthcare and finance.
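A minimal sketch of combining two of the factors above — something you know (a password, stored as a salted hash) and something you have (a time-based code derived from a shared secret, in the spirit of TOTP). The salt, secret, and parameters are made up for illustration; this is a teaching sketch, not production authentication.

```python
import hashlib
import hmac
import time

def hash_password(password, salt):
    """Salted, iterated password hash (the 'something you know' check)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def time_code(secret, period=30, at=None):
    """Derive a 6-digit code from a shared secret and the current time window."""
    window = int((at if at is not None else time.time()) // period)
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % 1_000_000

def authenticate(password, code, stored_hash, salt, secret):
    """Both factors must pass: the knowledge factor and the possession factor."""
    knows = hmac.compare_digest(hash_password(password, salt), stored_hash)
    has = code == time_code(secret)
    return knows and has

salt, secret = b"demo-salt", b"shared-secret"   # illustrative values only
stored = hash_password("hunter2", salt)
print(authenticate("hunter2", time_code(secret), stored, salt, secret))  # True
```

Note the constant-time comparison (`hmac.compare_digest`) for the hash check; real TOTP additionally follows RFC 6238's HMAC truncation and allows for clock drift between client and server.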


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Bryan Clark

5 days ago
5 minutes

Techsplainers by IBM
What is full-stack observability?

In this episode of Techsplainers, we dive into full-stack observability, a comprehensive approach that unifies telemetry across infrastructure, applications, and user experiences. Unlike siloed monitoring, full-stack observability provides a single source of truth for system health, enabling faster incident resolution, predictive optimization, and improved operational efficiency. We discuss how it works, including automated service discovery, leading factor analysis, unified dashboards, and AI-driven analytics. You will also learn about its benefits for performance, security, compliance, and business outcomes, as well as challenges like data scale, integration, and privacy. Finally, we explore how machine learning and natural language processing are shaping the future of observability. No matter your role, this episode offers a complete guide to why full-stack observability is essential in today’s complex digital environments.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by PJ Hagerty

6 days ago
12 minutes

Techsplainers by IBM
What is SRE observability?

In this episode of Techsplainers, we dive into SRE observability, a critical practice for ensuring site reliability in today’s dynamic, cloud-native environments. Discover how SRE observability goes beyond traditional monitoring by using telemetry data—metrics, logs, and traces—to provide deep visibility into complex systems. We explain how it supports proactive issue detection, faster incident response, and data-driven decision-making. You will also learn about real-world use cases in ecommerce, finance, logistics, and healthcare, as well as emerging trends like AI-driven observability and causal AI. Whether you are an engineer, IT professional, or tech enthusiast, this episode will help you understand how SRE observability optimizes performance, enhances user experience, and drives better business outcomes.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by PJ Hagerty

1 week ago
10 minutes

Techsplainers by IBM
What is data accuracy?

This episode of Techsplainers explains what data accuracy is, why it matters, and how organizations can achieve it. We explore its role as a core dimension of data quality, the benefits of accurate data for decision-making, compliance, AI, and customer satisfaction, and the common causes of inaccuracies—from human error to outdated information and biased data.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Matt Finio

1 week ago
9 minutes

Techsplainers by IBM
What is data integrity?

This episode of Techsplainers explains what data integrity is, why it matters, and how organizations can maintain it. We cover the processes and security measures that ensure data remains accurate, complete, and consistent throughout its lifecycle. Learn why data integrity is critical for analytics, compliance, and trust, and explore the five key types of data integrity.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Matt Finio

1 week ago
15 minutes

Techsplainers by IBM
What is multi-agent collaboration?

This episode of Techsplainers explains the concept of multi-agent collaboration. It discusses how multi-agent systems, comprising multiple AI agents, coordinate actions in a distributed system to achieve complex tasks. These tasks, once handled only by large language models, now include customer service triage, financial analysis, technical troubleshooting, and more. The podcast details how agents communicate via established protocols to exchange information, assign responsibilities, and synchronize their actions. It also highlights the benefits of multi-agent collaboration, such as scalability, fault tolerance, and emergent cooperative behavior, using examples like a fleet of drones searching a disaster site.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Alice Gomstyn

1 week ago
13 minutes

Techsplainers by IBM
What is a multi-agent system?

This episode of Techsplainers introduces listeners to the concept of agentic architecture, a framework used for structuring AI agents to automate complex tasks. The podcast explains that agentic architecture is crucial for creating AI agents capable of autonomous decision-making and adapting to dynamic environments. It delves into the four core factors of agency: intentionality (planning), forethought, self-reactiveness, and self-reflectiveness. These four factors underpin AI agents' autonomy. The discussion also contrasts agentic and non-agentic architectures, highlighting the advantages of agentic architectures in supporting agentic behavior in AI agents. The podcast further breaks down different types of agentic architectures – single-agent, multi-agent, and hybrid – detailing their structures, strengths, weaknesses, and best use cases. Finally, it covers three types of agentic frameworks—reactive, deliberative, and cognitive—concluding with a detailed explanation of BDI architectures, a model for rational decision-making in intelligent agents.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Alice Gomstyn

1 week ago
14 minutes

Techsplainers by IBM
What is vibe coding?

This episode of Techsplainers introduces vibe coding, the practice of using AI tools to generate software code through natural language prompts rather than manual coding. We explore how this approach follows a "code first, refine later" philosophy that prioritizes experimentation and rapid prototyping. The podcast walks through the four-step implementation process: choosing an AI coding assistant platform, defining requirements through clear prompts, refining the generated code, and reviewing before deployment. While highlighting vibe coding's ability to accelerate development and free human creativity, we also examine its limitations—including challenges with technical complexity, code quality, debugging, maintenance, and security concerns. The discussion concludes by examining how vibe coding is driving paradigm shifts in software development through quick prototyping, problem-first approaches, reduced risk with maximized impact, and multimodal interfaces that combine voice, visual, and text-based coding methods to create more intuitive development environments.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Amanda Downie

2 weeks ago
7 minutes

Techsplainers by IBM
What is retrieval augmented generation (RAG)?

This episode of Techsplainers explores retrieval augmented generation (RAG), a powerful technique that enhances generative AI by connecting models to external knowledge bases. We examine how RAG addresses critical limitations of large language models—their finite training data and knowledge cutoffs—by allowing them to access up-to-date, domain-specific information in real-time. The podcast breaks down RAG's five-stage process: from receiving a user query to retrieving relevant information, integrating it into an augmented prompt, and generating an informed response. We dissect RAG's four core components—knowledge base, retriever, integration layer, and generator—explaining how they work together to create a more robust AI system. Special attention is given to embedding and chunking processes that transform unstructured data into searchable vector representations. The episode highlights RAG's numerous benefits, including cost efficiency compared to fine-tuning, reduced hallucinations, enhanced user trust through citations, expanded model capabilities, improved developer control, and stronger data security. Finally, we showcase diverse real-world applications across industries, from specialized chatbots and research tools to personalized recommendation engines.
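The retrieve-and-augment stages described above can be sketched in a few lines, with simple word overlap standing in for the vector similarity a real retriever would use. The documents and prompt template are illustrative assumptions.

```python
# Toy knowledge base: three short passages.
DOCS = [
    "RAG connects language models to external knowledge bases.",
    "Chunking splits documents into passages before embedding.",
    "Transformers use self-attention to capture word relationships.",
]

def retrieve(query, docs, k=1):
    """Retriever: rank documents by word overlap with the query, return top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def augment(query, docs):
    """Integration layer: fold the retrieved context into an augmented prompt."""
    context = "\n".join(docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

query = "What does chunking do before embedding?"
prompt = augment(query, retrieve(query, DOCS))
print(prompt)
```

The generator (an LLM) would then answer from the augmented prompt; swapping the overlap score for embedding similarity over chunked documents gives the production version of the same pipeline.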


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Amanda Downie

2 weeks ago
10 minutes

Techsplainers by IBM
What are vision language models (VLMs)?

This episode of Techsplainers explores vision language models (VLMs), the sophisticated AI systems that bridge computer vision and natural language processing. We examine how these multimodal models understand relationships between images and text, allowing them to generate image descriptions, answer visual questions, and even create images from text prompts. The podcast dissects the architecture of VLMs, explaining the critical components of vision encoders (which process visual information into vector embeddings) and language encoders (which interpret textual data). We delve into training strategies, including contrastive learning methods like CLIP, masking techniques, generative approaches, and transfer learning from pretrained models. The discussion highlights real-world applications—from image captioning and generation to visual search, image segmentation, and object detection—while showcasing leading models like DeepSeek-VL2, Google's Gemini 2.0, OpenAI's GPT-4o, Meta's Llama 3.2, and NVIDIA's NVLM. Finally, we address implementation challenges similar to traditional LLMs, including data bias, computational complexity, and the risk of hallucinations.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Amanda Downie

2 weeks ago
10 minutes

Techsplainers by IBM
What are large language models (LLMs)?

This episode of Techsplainers explores large language models (LLMs), the powerful AI systems revolutionizing how we interact with technology through human language. We break down how these massive statistical prediction machines are built on transformer architecture, enabling them to understand context and relationships between words far better than previous systems. The podcast walks through the complete development process—from pretraining on trillions of words and tokenization to self-supervised learning and the crucial self-attention mechanism that allows LLMs to capture linguistic relationships. We examine various fine-tuning methods, including supervised fine-tuning, reinforcement learning from human feedback (RLHF), and instruction tuning, that help adapt these models for specific uses. The discussion covers practical aspects like prompt engineering, temperature settings, context windows, and retrieval augmented generation (RAG) while showcasing real-world applications across industries. Finally, we address the significant challenges of LLMs, including hallucinations, biases, and resource demands, alongside governance frameworks and evaluation techniques used to ensure these powerful tools are deployed responsibly.
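The temperature setting mentioned above can be shown with a small worked example: logits are divided by the temperature before the softmax, so low temperatures sharpen the next-token distribution toward the top choice and high temperatures flatten it. The logit values here are made up for illustration.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                 # scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.5)   # near-greedy: top token dominates
hot = softmax_with_temperature(logits, 2.0)    # closer to uniform: more variety
print(cold[0], hot[0])
```

This is why low temperatures give deterministic, repetitive completions and high temperatures give more diverse (and riskier) ones.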


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Amanda Downie

2 weeks ago
10 minutes

Techsplainers by IBM
What is generative AI?

This episode of Techsplainers explores generative AI, the revolutionary technology that creates original content like text, images, video, and code in response to user prompts. We walk through how these systems work in three main phases: training foundation models on massive datasets, tuning them for specific applications, and continuously improving their outputs through evaluation. The podcast traces the evolution of key generative AI architectures—from variational autoencoders and generative adversarial networks to diffusion models and transformers—highlighting how each contributes to today's powerful AI tools. We examine generative AI's diverse applications across industries, from enhancing customer experiences and accelerating software development to transforming creative processes and scientific research. The episode also addresses emerging concepts like AI agents and agentic AI while candidly discussing the technology's challenges, including hallucinations, bias, security vulnerabilities, and deepfakes. Despite these concerns, the episode emphasizes how organizations are increasingly adopting generative AI, with analysts predicting 80% implementation by 2026.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Amanda Downie


2 weeks ago
10 minutes

Techsplainers by IBM
What is model deployment?

This episode of Techsplainers explores model deployment, the crucial phase that brings machine learning models from development into production environments where they can deliver real business value. We examine why deployment is so critical—according to Gartner, only about 48% of AI projects make it to production—and discuss four primary deployment methods: real-time (for immediate predictions), batch (for offline processing of large datasets), streaming (for continuous data processing), and edge deployment (for running models on devices like smartphones). The podcast walks through the six essential steps of the deployment process: planning (preparing the technical environment), setup (configuring dependencies and security), packaging and deployment (containerizing the model), testing (validating functionality), monitoring (tracking performance metrics), and implementing CI/CD pipelines (for automated updates). We also address key challenges organizations face when deploying models, including high infrastructure costs, technical complexity, integration difficulties with existing systems, and ensuring proper scalability to handle varying workloads.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Ian Smalley

3 weeks ago
9 minutes

Techsplainers by IBM
What is AI lifecycle management?

This episode of Techsplainers explores AI Model Lifecycle Management, the comprehensive methodology for managing artificial intelligence models throughout their entire lifecycle. We discuss why a structured approach to AI deployment is critical for enterprise success, especially when decisions made by AI systems can significantly impact business outcomes. The podcast outlines the four main stages of the AI pipeline: collect (making data accessible), organize (creating an analytics foundation), analyze (building AI with trust), and infuse (operationalizing AI across business functions). We also examine the essential components of effective AI lifecycle management, including data governance, quality assurance, fairness evaluation, and explainability. The episode concludes by highlighting the key features needed in AI management tools—from ease of model training and deployment at scale to comprehensive monitoring capabilities—using IBM Cloud Pak for Data as an illustrative example of an end-to-end platform designed to increase the throughput of data science activities and accelerate time to value from AI initiatives.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Ian Smalley

3 weeks ago
4 minutes

Techsplainers by IBM
What is a machine learning pipeline?

This episode of Techsplainers explores the machine learning pipeline—the systematic process of designing, developing, and deploying machine learning models. We break down the entire workflow into three distinct stages: data processing (covering ingestion, preprocessing, exploration, and feature engineering), model development (including algorithm selection, hyperparameter tuning, training approaches, and performance evaluation), and model deployment (addressing serialization, integration, architecture, monitoring, updates, and compliance). The podcast also emphasizes the critical "Stage 0" of project commencement, where stakeholders define clear objectives, success metrics, and potential obstacles before starting technical work. Throughout the discussion, we highlight how each stage contributes to creating effective, high-performing ML models while examining various training methodologies—from supervised and unsupervised learning to reinforcement and continual learning approaches. Special attention is given to model monitoring and maintenance, acknowledging that deployment is not the end but rather the beginning of a model's productive life cycle.
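The three stages above can be sketched as composable steps: data processing, model development, and a deployment-time serving function. The standardization, the threshold "model", and the data are illustrative stand-ins for real pipeline components.

```python
def preprocess(raw):
    """Data processing: drop missing values, then standardize."""
    clean = [x for x in raw if x is not None]
    mean = sum(clean) / len(clean)
    spread = max(clean) - min(clean) or 1.0
    return [(x - mean) / spread for x in clean]

def develop(features, labels):
    """Model development: pick the threshold that maximizes accuracy."""
    best = max(
        (sum((f > t) == bool(l) for f, l in zip(features, labels)), t)
        for t in features
    )
    return best[1]

def deploy(threshold):
    """Model deployment: wrap the trained threshold as a serving function."""
    return lambda x: x > threshold

raw = [1.0, None, 2.0, 3.0, 4.0]
labels = [0, 0, 1, 1]          # labels align with the cleaned values
features = preprocess(raw)
predict = deploy(develop(features, labels))
print(predict(features[-1]))   # the highest value classifies as positive
```

Keeping each stage a separate function mirrors the episode's point: the stages can be versioned, tested, and monitored independently, and deployment is just another step in the chain rather than the end of it.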


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Ian Smalley

3 weeks ago
15 minutes

Techsplainers by IBM
Part 2: What is MLOps?

This episode of Techsplainers explores the practical implementation of MLOps, diving into the key components that comprise an effective machine learning operations pipeline. We examine the five essential elements: data management (including acquisition, preprocessing, and versioning), model development (covering training, experimentation, and evaluation), model deployment (focusing on packaging and serving), monitoring and optimization (highlighting performance tracking and retraining), and collaboration and governance (emphasizing version control and ethical guidelines). The podcast also investigates how generative AI and large language models are reshaping MLOps practices before explaining the four maturity levels of MLOps implementation—from manual processes to fully automated systems with continuous monitoring and governance. Throughout the episode, we emphasize that organizations should select the appropriate MLOps maturity level based on their specific needs rather than pursuing the most advanced level by default.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Ian Smalley

3 weeks ago
15 minutes

Techsplainers by IBM
Part 1: What is MLOps?

This episode of Techsplainers introduces MLOps (machine learning operations), a methodology that creates an efficient assembly line for building and running machine learning models. The podcast explains how MLOps evolved from DevOps principles to address the unique challenges of ML development, including resource intensity, time consumption, and siloed teams. We explore the key benefits of MLOps—increased efficiency through automation, improved model accuracy through continuous monitoring, faster time to market, and enhanced scalability and governance. The episode details eight core principles that define effective MLOps practices: collaboration, continuous improvement, automation, reproducibility, versioning, monitoring and observability, governance and security, and scalability. Finally, we examine the key elements of successful MLOps implementation, including the necessary technical and soft skills, essential tools like ML frameworks and CI/CD pipelines, and best practices for model lifecycle management.


Find more information at https://www.ibm.com/think/podcasts/techsplainers


Narrated by Ian Smalley

3 weeks ago
13 minutes
