Future of Data Security
Qohash
30 episodes
2 weeks ago
Technology
EP 23 — IBM's Nic Chavez on Why Data Comes Before AI
Future of Data Security
31 minutes
2 months ago
When IBM acquired DataStax, it inherited an experiment that proved something remarkable about enterprise AI adoption. Project Catalyst gave everyone in the company — not just engineers — a budget to build whatever they wanted using AI coding assistants. Nic Chavez, CISO of Data & AI, explains why this matters for the 99% of enterprise AI projects currently stuck in pilot purgatory: the technical barrier to creating useful tools has collapsed. As a member of the World Economic Forum’s CISO reference group, Nic has visibility into how the world’s largest organizations approach AI security. The unanimous concern is that employees are accidentally exfiltrating sensitive data into free LLMs faster than security teams can deploy internal alternatives. The winning strategy isn’t blocking external AI tools, but deploying better internal options that employees actually want to use.

Topics discussed:
- Why less than 1% of enterprise AI projects move from pilot to production.
- How vendor-push versus customer-pull dynamics create misalignment with overall enterprise strategy.
- The emergence of accidental data exfiltration as the primary AI security risk when employees dump confidential information into free LLMs.
- How Project Catalyst democratized AI development by giving non-technical employees budgets to build with coding assistants, proving the technical barrier to useful tool creation has dropped dramatically.
- The strategy of making enterprise AI “the cool house to hang out at” by deploying internal tools better than the external options.
- Why the velocity gap between attackers and enterprises in AI deployment comes down to procurement cycles versus instant hacker decisions for deepfake creation.
- How the World Economic Forum’s Chatham House Rule lets CISOs from the world’s largest companies exchange ideas about AI governance freely, without attribution concerns.
- The role of LLM optimization in preventing superintelligence trained on poisoned data by establishing data provenance verification.
- Why Anthropic’s copyright settlement signals the end of the “ask forgiveness, not permission” approach to training-data sourcing.
- How edge intelligence versus cloud centralization decisions depend on data freshness requirements and whether streaming updates from vector databases can supplement local models.