The Pure Report
Pure Storage
266 episodes
2 weeks ago
It’s all about Data Pipelines. Join Pure Storage Field Solution Architect Chad Hendron and Solutions Director Andrew Silifant for a deep dive into the evolution of data management, focusing on the Data Lakehouse architecture and its role in the age of AI and ML. Our discussion looks at the Data Lakehouse as a powerful combination of a data lake and a data warehouse, solving problems like “data swamps” and the proprietary formats of older systems. Listeners will learn about the technological advances, such as object storage and open table formats, that made this new architecture possible, allowing for greater standardization and letting multiple tools and engines access the same data.

Our guests also explore current industry trends, including Dremio's 2025 report showing the rapid adoption of Data Lakehouses, particularly as a replacement for older, inefficient systems like cloud data warehouses and traditional data lakes. Gain insight into the drivers behind this migration, including the exponential growth of unstructured data and the need to control cloud expenditure by being more prescriptive about which data is stored in the cloud versus on-premises. Andrew provides a detailed breakdown of processing architectures and the critical importance of meeting SLAs to avoid costly and frustrating pipeline breaks in regulated industries like banking.

Finally, we provide practical takeaways and a real-world case study. Chad shares a customer success story about replacing a large, complex Hadoop cluster with a streamlined Dremio and Pure Storage solution, highlighting the massive reduction in physical space, power consumption, and management complexity. Both guests emphasize the need for better governance practices to manage cloud spend and risk. Andrew underscores the essential, full-circle role of databases, from the “alpha” of data creation to the “omega” of feature stores and vector databases for modern AI use cases like Retrieval-Augmented Generation (RAG). Tune in to understand how a holistic data strategy, including Pure’s Enterprise Data Cloud, can simplify infrastructure and future-proof your organization for the next wave of data-intensive workloads.

To learn more, visit https://www.purestorage.com/solutions/ai/data-warehouse-streaming-analytics.html

Check out the new Pure Storage digital customer community to join the conversation with peers and Pure experts: https://purecommunity.purestorage.com/

00:00 Intro and Welcome
03:15 Data Lakehouse Primer
08:31 Stat of the Episode on Lakehouse Usage
10:50 Challenges with Data Pipeline Access
13:58 Assessing Organizational Success with Data Cleaning
16:07 Use Cases for the Data Lakehouse
20:41 Case Study on a Data Lakehouse Use Case
24:11 Hot Takes Segment
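The episode's point about open table formats on object storage, that many different engines can work against a single copy of the data, can be made concrete with a small sketch. The snippet below is only an illustration, not anything from Pure Storage or Dremio: one process writes plain Parquet files to an S3-compatible object store, and a separate SQL engine (DuckDB) queries the same objects in place. The endpoint, bucket name, and table contents are hypothetical placeholders, and credentials configuration is omitted.

import duckdb
import pyarrow as pa
import pyarrow.parquet as pq
from pyarrow import fs

# Writer side: one tool lands events as open-format Parquet on S3-compatible
# object storage (endpoint and bucket are hypothetical placeholders).
s3 = fs.S3FileSystem(endpoint_override="https://object-store.example.com")
events = pa.table({"user_id": [1, 2, 3], "amount": [9.99, 14.50, 3.25]})
pq.write_table(events, "demo-lake/events/part-000.parquet", filesystem=s3)

# Reader side: a completely different engine (DuckDB) scans the same objects
# in place -- no copy, no proprietary load step. Credentials setup is omitted.
con = duckdb.connect()
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")
con.execute("SET s3_endpoint='object-store.example.com'")
rows = con.sql(
    "SELECT user_id, SUM(amount) AS total "
    "FROM read_parquet('s3://demo-lake/events/*.parquet') "
    "GROUP BY user_id"
).fetchall()
print(rows)

Because the data sits in an open format, either side could be swapped for another engine (Spark, Dremio, Trino, and so on) without rewriting the files; that interchangeability is the standardization benefit the episode describes.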
Technology
All content for The Pure Report is the property of Pure Storage and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Accelerating Enterprise AI Inference with Pure KVA
The Pure Report
29 minutes 38 seconds
1 month ago
In this episode, we sit down with Solution Architect Robert Alvarez to discuss the technology behind Pure Key-Value Accelerator (KVA) and its role in accelerating AI inference. Pure KVA is a protocol-agnostic key-value caching solution that, when combined with FlashBlade data storage, dramatically improves GPU efficiency and consistency in AI environments. Robert, whose background includes time as a Santa Clara University professor, a NASA Solution Architect, and work at CERN, explains how this innovation is essential for serving an entire fleet of AI workloads, including modern agentic and chatbot interfaces.

Robert dives into the massive growth of the AI inference market, driven by the need for near-real-time processing and low-latency AI applications, a trend that makes a solution like Pure KVA critical. He details how KVA removes the bottleneck of GPU memory and shares compelling benchmark results: up to twenty times faster inference with NFS and six times faster with S3, all over standard Ethernet. These performance gains are key to helping enterprises scale more efficiently and reduce overall GPU costs.

Beyond the technical deep dive, the episode explores the origin of the KVA idea, the unique Pure IP that enables it, and future integrations such as Dynamo and the partnership with Comet for LLM observability. In the popular “Hot Takes” segment, Robert offers his perspective on blind spots IT leaders might have in managing AI data and shares advice for his younger self on the future of the data management space.

To learn more about Pure KVA, visit purestorage.com/launch.

Check out the new Pure Storage digital customer community to join the conversation with peers and Pure experts: https://purecommunity.purestorage.com/

00:00 Intro and Welcome
02:21 Background on Our Guest
06:57 Stat of the Episode on AI Inferencing Spend
09:10 Why AI Inference Is Difficult at Scale
11:00 How KV Cache Acceleration Works
14:50 Key Partnerships Using KVA
20:28 Hot Takes Segment
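To make the key-value caching idea concrete, here is a deliberately simplified sketch. It is not Pure KVA's interface (the episode description gives no API details); it only illustrates why sharing precomputed attention key/value state pays off: identical prompt prefixes skip the expensive GPU prefill. The kv_store dictionary stands in for a shared, network-attached cache, and compute_kv stands in for the prefill pass; both names are hypothetical.

import hashlib
import time

kv_store: dict[str, bytes] = {}  # stand-in for a shared, externalized KV cache

def prefix_key(prompt_tokens: list[int]) -> str:
    # Key the cache on the exact token prefix, so identical prefixes hit the same entry.
    return hashlib.sha256(repr(prompt_tokens).encode()).hexdigest()

def compute_kv(prompt_tokens: list[int]) -> bytes:
    # Stand-in for the GPU prefill pass that builds attention key/value tensors.
    time.sleep(0.2)  # pretend this is the expensive part
    return b"kv-state-for-" + repr(prompt_tokens).encode()

def prefill(prompt_tokens: list[int]) -> bytes:
    key = prefix_key(prompt_tokens)
    if key in kv_store:              # cache hit: skip the GPU prefill entirely
        return kv_store[key]
    kv = compute_kv(prompt_tokens)   # cache miss: pay the compute cost once...
    kv_store[key] = kv               # ...then publish it for every later request
    return kv

shared_system_prompt = list(range(50))  # e.g. a long system prompt shared by many users
prefill(shared_system_prompt)           # first request pays the prefill cost
prefill(shared_system_prompt)           # later requests reuse the cached K/V state

In a real serving stack the cached state would be large GPU tensors held in fast shared storage rather than bytes in a Python dictionary, but the hit/miss economics are the same: every reuse is prefill work the GPUs never repeat.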