How does PlayStation run real time at massive scale? I sat down with Bahar Pattarkine from the PlayStation team to unpack how they use Apache Flink across telemetry and player experiences.
What we covered:
-- Why they chose Flink and what problem it solved first
-- Running 15,000+ events per second, launch peaks, regional latency SLOs, and avoiding hot partitions across titles
-- Phasing the move from Kafka consumers to a unified Flink pipeline without double processing during cutover
-- How checkpointing and async I/O keep latency low during spikes or failures
-- Privacy controls and regional rules enforced in real time
-- What Flink simplified in their pipelines and the impact on cost and ops
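One item above, avoiding hot partitions across titles, is commonly handled with key salting: a hot key is fanned out over several partitions by a rotating salt. A minimal Python sketch of the idea, with made-up names and not PlayStation's actual code:

```python
import hashlib

def salted_partition(key: str, n_partitions: int, n_salts: int, seq: int) -> int:
    """Spread a hot key (e.g. one busy title id) across n_salts partitions
    by adding a rotating salt to its base hash, so a single key cannot
    pin all of its traffic to one partition."""
    base = int(hashlib.sha256(key.encode()).hexdigest(), 16) % n_partitions
    salt = seq % n_salts  # rotates deterministically per event
    return (base + salt) % n_partitions

# A "hot" title now lands on exactly n_salts distinct partitions.
parts = {salted_partition("title-42", n_partitions=32, n_salts=4, seq=i)
         for i in range(100)}
```

The trade-off is that consumers must merge the salted sub-streams back together, which is why this is usually paired with a keyed aggregation step downstream.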
#data #ai #streaming #Flink #Playstation #Ververica #realtimestreaming #theravitshow
#EVOLVE25 London was a clear signal that Big Data is entering its third era, and it is about outcomes, not buzzwords. I sat down with Sergio Gago, CTO of Cloudera, and we went straight to the shift everyone is feeling. This era is defined by convergence. The work now is to bring on-prem and cloud together so teams can move fast, stay compliant, and keep costs in check. That is where the real AI wins will come from.
Here is what we covered in the interview I am publishing next:
- What truly defines the third era of Big Data and how it differs from the last decade
- Why convergence matters now for performance, cost, and control
- Where Cloudera wins today, and where it chooses not to compete
- How a unified data foundation raises trust in AI
- The new Cloudera + Dell “AI-in-a-box” approach for private, trusted AI
- A five-year view of on-prem, cloud, and AI working together
- Cloudera’s vision to support this shift end to end
If you care about building trustworthy AI on real enterprise data, this conversation will be useful.
#data #ai #EVOLVE25 #cloudera #theravitshow
Sovereign cloud in Europe just moved from idea to action. At EVOLVE London I sat down with Christopher Royles from Cloudera. Cloudera has been named a launch partner for the new AWS European Sovereign Cloud, and we unpacked what this means for builders, leaders, and regulators across EMEA.
Here is what we covered in the interview I am publishing next:
- What “sovereign by design” means in practice for data, control planes, encryption, and operational access
- How this model helps organizations meet strict European requirements while keeping teams productive
- How it connects to Cloudera’s Private AI strategy so enterprises can run AI on their terms
- Where sovereign cloud demand is strongest across EMEA and why industries like public sector, financial services, and healthcare are leaning in
- A practical path to comply without stalling innovation, from architecture choices to operating models
If you care about trusted AI, data control, and real-world compliance in Europe, this conversation will be useful. I will share the full interview with Chris next.
#data #ai #sovereign #cloudera #theravitshow
Live from POSSIBLE by Teradata with Sumeet Arora, Chief Product Officer. I had a blast chatting with him about five things: trends, new announcements, data and AI inside the enterprise, real use cases, and what is next for the industry.
New announcements --
- Autonomous Customer Intelligence. Turning customer signals into timely actions across the journey.
- NewtonX research on AI for customer experience. Where leaders are investing and where gaps still exist.
Trends in Data and AI
- Moving from pilots to production with tighter links between trusted data and AI.
- Clear governance and cost control built in from the start.
Use cases
- Retention and growth with real-time signals and simple next best actions.
- Smarter personalization without copying data all over the place.
Future
- Faster paths from idea to impact.
- Smaller, focused teams shipping measurable outcomes.
#data #ai #possible2025 #teradata #theravitshow
WOW! It was so nice to meet Steve McMillan, President and CEO of Teradata, and even interview him on The Ravit Show at Possible 2025. Getting this time on camera felt special.
We kept it human and future focused:
- His favorite Possible moment and what it revealed about the crowd
- The one belief he hopes people take home and act on
- When ideas strike for him, morning or night
- The place he goes to dream bigger
- What is still on his “possible” bucket list
Between the lines, you will hear the direction Teradata is setting:
-- Moving from snapshots to signals so decisions fire in real time
-- A cleaner path from data products to activation across the stack
-- Agents that don’t just chat but lift outcomes like CLV
-- Guardrails and services that help teams run this at scale
-- A builder mindset with new tools on the way
If you care about where enterprise CX is heading, you will want to hear this one. I have also shared the link to all announcements from Possible 2025!
#data #ai #possible2025 #teradata #theravitshow
Breaking down trust in AI, not just talking about it. I just sat with Manisha Khanna, Global Product Marketing Leader for AI at SAS, to unpack the SAS–IDC Data and AI Pulse. The core theme is simple. Trust drives ROI.
Key takeaways:
- Trustworthy AI leaders outperform because they do the basics well. Data lineage, access control, model monitoring, and clear ownership.
- Order matters. Fix data quality and governance first, then productize, then scale. Skipping steps is how pilots stall.
- Guardrails in SAS Viya make “safe by default” real. Clear policies, repeatable workflows, and measurable outcomes.
- Agentic AI readiness is not a tool choice. It is about reliable data, governed actions, and feedback loops that teams can audit.
Why this matters:
Enterprises keep chasing bigger models while the wins come from cleaner foundations. If you want impact, make trust a requirement, not a marketing line.
Watch it, share it with your team, and pressure test your own roadmap against these basics.
#data #ai #agenticai #sas #theravitshow
Agents are here. Governance decides who wins.
I just published my interview with my friend Marinela Profi from SAS. Marinela breaks down agentic AI in a way leaders can use today. Clear. Practical. Actionable.
What we covered:
- What makes agentic AI different and why enterprises should care
- Autonomy levels, decisioning, orchestration, and the human-AI balance
- Where teams go wrong: hype vs readiness, data maturity, governance, and missing orchestration
- Real use cases on Viya across banking, insurance, and manufacturing
- Why “LLMs are not agents” and how to combine deterministic and probabilistic methods with governance
- What a CIO or CDO should do now to move from pilots to production
Marinela is sharp and grounded. She touched the important points and kept it real for enterprise teams.
The interview is live. Watch it and share your takeaways.
AI without observability is guesswork.
I had a blast chatting with Patrick Lin, SVP and GM of Observability at Splunk on The Ravit Show. We get straight into how teams keep AI reliable and how leaders turn telemetry into business results.
What we cover:
• .conf25 updates in Splunk Observability
• AI Agentic Monitoring and AI Infrastructure Monitoring
• How a unified experience with Splunk AppDynamics and Splunk Observability Cloud helps teams ship faster with fewer surprises
• Why observability is now a growth lever, not just a safety net
• Fresh insights from the State of Observability 2025 report
My take:
• The nervous system of AI is observability
• Signal quality beats signal volume
• OpenTelemetry works best when tied to business context
• When SecOps and Observability work together, incidents become learning moments
If you care about reliable AI, faster recovery, and clear impact on productivity and revenue, this one will help.
#data #ai #conf2025 #splunk #splunkconf25 #SplunkSponsored #theravitshow
I sat down with Jeff Baxter of NetApp to go deep on the new announcements from INSIGHT and why they matter for teams building with AI.
We covered what this means in practice for customers. The NetApp AI Data Engine moves data to the right place at the right time, manages metadata, lineage, and versioning so work is reproducible, and adds AI powered ransomware detection so teams can ship with confidence. It runs as one platform across on premises and all major clouds, so hybrid stays simple and cost aware.
Highlights from our conversation:
• From data to innovation. A single data platform that reduces handoffs and cuts wait time for data scientists and engineers.
• Reproducible AI by design. Metadata, lineage, and versioning are first class so you can rerun, compare, and promote models with clarity.
• Security that keeps pace with AI. AI powered ransomware detection plus built in controls to protect sensitive data without slowing teams down.
• One control plane. On prem and across major clouds with consistent operations, cost visibility, and policy enforcement.
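"Reproducible AI by design" boils down to treating every dataset version as content-addressed, with lineage pointing back to its sources. A toy Python registry to make the idea concrete; the class and method names are illustrative, not NetApp's API:

```python
import hashlib
from datetime import datetime, timezone

class DatasetRegistry:
    """Toy registry: each dataset version gets a content hash plus lineage
    (which versions it was derived from), so a training run can later be
    re-pointed at exactly the bytes it saw before."""
    def __init__(self):
        self._versions = {}  # version_id -> record

    def register(self, name: str, content: bytes, parents=()):
        digest = hashlib.sha256(content).hexdigest()
        version_id = f"{name}@{digest[:12]}"  # content-addressed id
        self._versions[version_id] = {
            "name": name,
            "sha256": digest,
            "parents": list(parents),  # lineage edges
            "created": datetime.now(timezone.utc).isoformat(),
        }
        return version_id

    def lineage(self, version_id: str):
        """Walk parent links back to the raw sources."""
        chain = [version_id]
        for parent in self._versions[version_id]["parents"]:
            chain.extend(self.lineage(parent))
        return chain

reg = DatasetRegistry()
raw = reg.register("telemetry_raw", b"raw bytes")
feats = reg.register("features_v1", b"feature bytes", parents=[raw])
```

Because the id is derived from the content hash, re-registering identical bytes yields the same version id, which is what makes "rerun, compare, and promote" well defined.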
My take:
• This is about operational discipline, not hype. Reproducibility and lineage are the difference between a demo and a dependable AI program
• Security has to be native to the data platform. If it is bolted on later, teams hesitate and AI work stalls
• Hybrid is the real world. A unified approach across on prem and clouds reduces complexity and keeps options open
If you are scaling AI and want fewer blockers between data and outcomes, this will help.
Explore NetApp AI solutions:
AI solutions page: https://www.netapp.com/artificial-intelligence/?utm_campaign=cross-aiml-multi-all-ww-digi-spp-ravit_/_baxter_yt_interview-1760561150310&utm_source=youtube&utm_medium=video&utm_content=video
#data #ai #insight2025 #netapp #theravitshow
Ransomware is getting faster. Your recovery needs to be faster.
I had a blast chatting with Ryan Howard from BMC Software AMI Security on building cyber resiliency for mainframe environments. Simple, practical, and focused on what actually works in the real world.
What we covered:
* What cyber resiliency really means for mainframes
* The core controls that matter: prevention, detection, isolation, recovery
* Common blockers teams hit when rolling out new controls
* A real incident walkthrough and how recovery stayed on track
Who should watch:
* Security leaders who own mainframe risk
* Infra and ops teams running mission-critical workloads
* Anyone tightening their ransomware playbook
Watch the interview now and share it with your team!
#cyberresilience #ransomware #mainframe #security #incidentresponse #bmc #theravitshow
What happens when AI agents become your teammates on the mainframe?
I sat down with Anthony DiStauro from BMC on The Ravit Show to explore how agentic AI is moving from hype to real work. We unpacked the building blocks, the use cases, and what this shift means for teams who keep mission-critical systems running.
Highlights we covered:
- The teammate you didn’t hire: where agents plug in first across monitoring, remediation, change checks, and capacity tuning.
- The basics in plain English: AI Agents, Agentic Workflows, and MCP Servers, and how they connect to form an execution layer that can act, not just alert.
- Why now: falling hype, rising adoption as teams want safer automation with clear guardrails.
- From mundane to strategic: operators focusing on performance engineering, cost optimization, and resilience design while agents handle the repetitive loops.
- Capturing know-how: using agents to encode runbooks, tacit fixes, and tribal knowledge so it survives turnover.
- Five-year picture: proactive, self-healing mainframes where agents predict drift, test changes, and roll back safely.
- Humans + agents: trust comes from transparency, audit trails, and clear handoffs.
- Invisible infrastructure: agentic workflows that hum in the background and surface only when needed.
- From dashboards to decisions: moving beyond graphs to actions with approval gates for high-risk steps.
- Future talent: a shorter learning curve for newcomers, making mainframe roles more attractive.
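The "actions with approval gates for high-risk steps" pattern above can be sketched in a few lines of Python. This is an illustrative toy, not BMC's implementation; the class, risk scores, and statuses are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class GatedExecutor:
    """Runs agent-proposed actions: auto-executes low-risk ones and holds
    high-risk ones until a human approves. Every decision is appended to
    an audit trail, which is where the trust and traceability come from."""
    risk_threshold: int = 5
    audit_log: list = field(default_factory=list)

    def submit(self, action: str, risk: int, approved: bool = False) -> str:
        if risk < self.risk_threshold:
            status = "executed"                # low risk: run immediately
        elif approved:
            status = "executed-with-approval"  # high risk, human signed off
        else:
            status = "held-for-approval"       # high risk, waiting on a human
        self.audit_log.append((action, risk, status))
        return status

gate = GatedExecutor()
s1 = gate.submit("restart stalled batch job", risk=2)
s2 = gate.submit("roll back production change", risk=9)
s3 = gate.submit("roll back production change", risk=9, approved=True)
```

The design choice worth noting: the gate never silently drops a high-risk action; it records it as held, so the audit trail shows both what agents did and what they wanted to do.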
If you care about reliability, cost, and speed on the mainframe, this is the next chapter.
#data #ai #mainframe #bmc #theravitshow
What if your mainframe could talk back and guide the fix? I spoke to Liat Sokolov, Product Manager for AI solutions and an AI Evangelist at BMC Software. We explored how teams move from manuals and dashboards to real conversations with the platform. Liat breaks down why “guided resolution” beats “just answers” by giving the next right step in context, cutting time to recovery and reducing errors.
We dug into the GenAI knowledge expert that keeps hard-won expertise in house as veterans retire. It becomes a coach for new developers and operators, helping them ramp faster and avoid costly mistakes.
We also separated conversational AI from AI Agents and showed why enterprises need both. Picture agents coordinating across dev, ops, and cloud to roll out a change with checks, traceability, and rollback.
That is how you modernize with confidence. Liat also explained why the mainframe can be the most explainable platform in the AI era, which matters for trust and safety.
We finished with a practical path forward. Start with one high-value workflow, capture the expert playbook, pilot a conversational assistant with guardrails, then add agents as you prove value.
If you care about making the mainframe simpler, safer, and faster, this interview is worth your time.
#data #ai #mainframe #bmc #theravitshow
What happens when enterprise-grade AI visualization meets on-prem reality? I sat down with Leo Brunnick, Chief Product Officer at Cloudera on The Ravit Show, to talk about a major shift: bringing Cloudera Data Visualization to on-prem environments.
We got deep into:
-- Why now? What pushed Cloudera to extend this capability beyond the cloud
-- How AI Visual and natural language querying are finally breaking barriers for non-technical users - right at the source
-- The actual features that make this visualization layer powerful—not just dashboards, but intelligent, explainable insights
-- Real business impact: we talked through use cases where organizations are solving high-stakes problems by giving their teams access to AI-powered visualizations on-prem
-- And most importantly—where this is all headed. Leo shared a vision that includes GenAI, real-time visualization, and enabling large enterprises to move faster, smarter, and more transparently with their data
-- The future of enterprise BI isn’t about choosing between cloud or on-prem. It’s about bringing AI to wherever the data lives
If you're navigating complex environments or looking to scale AI-driven insights inside the firewall, this conversation is worth your time.
#data #dataviz #ai #cloudera #theravitshow
During GraphSummit London, Neo4j committed $100M to becoming the default knowledge layer for agentic systems. I spoke with Sudhir Hasbe, President & Chief Product Officer at Neo4j, to break it down.
What we covered
* The $100M push. Why Neo4j is betting on graph as the knowledge layer for agentic systems
* Graph Intelligence. Turning disconnected data into explainable context your agents can trust
* Aura Agent. Build, test, and deploy agents on your graph data in minutes. Early access now. GA in Q4
* MCP Server for Neo4j. A cleaner path to add graph memory to the agents you already run
* Use cases. Fast wins in healthcare R&D, procurement and supply chain, and financial operations
* What’s next. GA timelines, first milestones, and how customers will measure impact
Why it matters
Most pilots fail without context and memory. Graphs give agents structure, reasoning, and traceability. That is how you ship production outcomes, not demos.
#Neo4j #GraphSummit #GenAI #AgenticAI #GraphIntelligence #TheRavitShow
I had a blast at GraphSummit by Neo4j yesterday in London. I spoke to Michael Hunger on The Ravit Show and we went deep on Neo4j’s $100M GenAI push and what it means right now. Neo4j Aura Agent: Create Your Own GraphRAG Agent in Minutes — https://bit.ly/3KTjxlJ
We discussed these developments in the interview:
• How this investment helps teams move past stalled pilots and get real results on production data
• Aura Agent explained in plain language, with the first two use cases to try for fast wins
• MCP Server for Neo4j and how it lets existing agents plug into graph memory with natural language and text to query
• What “default knowledge layer” looks like on day one, including how to keep results explainable and traceable
• Timelines and signals to watch as Aura Agent and the MCP Server move to GA in Q4
The interview is live now. If reliability, speed, and explainability are on your roadmap, you will find this useful.
#data #ai #neo4j #graphs #theravitshow
I had a blast at Neo4j's GraphSummit in London. I also got a chance to speak with Jesús Barrasa, AI Field CTO, about the following topics:
- Customer use cases
- Challenges enterprise leaders face
- His new book about graphs, and more
#Neo4j #GraphSummit #Infinigraph #GenAI #AgenticAI #GraphIntelligence #TheRavitShow
AI at massive scale needs a graph engine that does not blink at 100 TB. Enter Infinigraph.
I had a blast at GraphSummit London, where I spoke to Ivan Zoratti, VP Product Management at Neo4j, to dig into the Infinigraph announcements and what they unlock for real workloads.
What we covered
* What Infinigraph is and who needs it now
* Property sharding in plain terms. How data spreads across nodes without losing graph semantics
* One engine for ops and analytics. How HTAP stays fast without starving either side
* Migration path for current Neo4j users. What carries over and what to plan for
* Where it shines at 100 TB and up. Boundaries, guardrails, and real results on speed and cost
* Timelines to ship and what this means for agentic AI next
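Property sharding, as described above, means a node's properties can live on different shards while the node stays one logical entity. A hash-based Python sketch of the concept; this is illustrative only and says nothing about Infinigraph's actual internals:

```python
import hashlib

N_SHARDS = 8

def shard_for(node_id: str, prop: str) -> int:
    """Route each (node, property) pair to a shard deterministically, so
    very wide nodes spread across machines instead of hot-spotting one."""
    digest = hashlib.sha256(f"{node_id}:{prop}".encode()).hexdigest()
    return int(digest, 16) % N_SHARDS

class ShardedNodeStore:
    """Properties land on whichever shard the hash picks; reads fan out
    across shards, but the caller still sees one logical node."""
    def __init__(self, n_shards: int = N_SHARDS):
        self.shards = [{} for _ in range(n_shards)]

    def set_prop(self, node_id: str, prop: str, value):
        self.shards[shard_for(node_id, prop)][(node_id, prop)] = value

    def get_node(self, node_id: str) -> dict:
        # Reassemble the logical node by scanning every shard.
        return {p: v for shard in self.shards
                for (n, p), v in shard.items() if n == node_id}

store = ShardedNodeStore()
store.set_prop("player:1", "name", "Ada")
store.set_prop("player:1", "score", 9001)
```

The point of the sketch: because routing is a pure function of (node, property), writes need no coordination, and the cost moves to reads, which is exactly the kind of trade-off an HTAP engine has to balance.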
Why it matters
Bigger graphs with lower latency change what agents can do. If you want real-time reasoning on live data, the storage and compute model must scale without falling apart.
Watch if you care about scale, cost, and a clean path from today’s Neo4j to what is coming next.
#Neo4j #GraphSummit #Infinigraph #GenAI #AgenticAI #GraphIntelligence #TheRavitShow
When someone builds the foundation of modern streaming, every insight counts.
At Data Streaming Summit 2025, I spoke with Sijie Guo, Co-founder and CEO of StreamNative, about how real-time data is reshaping the way companies move and act on information. Sijie shared how StreamNative continues to evolve Apache Pulsar’s mission — giving teams the ability to process, store, and serve data in motion with performance and simplicity.
We talked about what’s next for the streaming world. Over the next year, Sijie expects deeper convergence between streaming and AI, where real-time pipelines become intelligent enough to drive automated decision-making across industries.
He also emphasized how this summit stood out for its openness — not just a product showcase, but a true ecosystem of technologies working together. His favorite track? The Streaming Lakehouse discussions, where unifying data and streaming meets real-world scalability.
The conversation captures where the future of streaming is heading — and how StreamNative is helping enterprises get there faster.
#data #ai #datastreaming #streamnative #theravitshow
Real-time data isn’t just fast—it’s collaborative. At Data Streaming Summit 2025, I spoke with Rayees Pasha, CPO, RisingWave, a key partner of StreamNative, about how streaming ecosystems are evolving together instead of in silos.
Rayees shared how RisingWave’s real-time database helps teams run complex analytics directly on live data, and how their partnership with StreamNative brings true interoperability to customers.
We discussed the future of streaming over the next year, where streaming meets AI, and intelligent pipelines start driving automated, data-driven actions. He also appreciated the summit’s open and multi-technology approach, which encourages collaboration rather than competition.
His favorite moments came from the AI + Stream Processing track, where discussions focused on turning streaming data into real-time intelligence.
This conversation captures how openness and partnership are powering the next wave of streaming innovation.
#data #ai #streamnative #datastreamingsummit #theravitshow
Real time without the noise.
At Data Streaming Summit 2025, I spoke to Matteo Merli, Co-founder and CTO at StreamNative. We talked about why streaming matters now and what his team is building to make data in motion simple, scalable, and open. Matteo expects the next 12 months to be about streaming meeting AI in practical ways. Think instant enrichment, faster feedback loops, and agents that act safely on live context.
He liked the summit’s open, multi-technology format. It mirrors how real systems get built. His favorite threads connected Architectural Innovations with AI plus Stream Processing and the Streaming Lakehouse story.
Catch the conversation to hear Matteo’s take on where streaming is headed and why openness across the ecosystem will decide who moves fastest.
#data #ai #streaming #datastreamingsummit #theravitshow