Digital Disruption with Geoff Nielson
Info-Tech Research Group
42 episodes
2 days ago
The Next Industrial Revolution is Already Here Digital Disruption is where industry leaders and experts share insights on leveraging technology to build the organizations of the future. As intelligent technologies reshape our lives and our livelihoods, we speak with the thinkers, the doers and innovators who will help us predict and harness this disruption. Join us as we explore how to adapt to and harness digital transformation.
Technology
Episodes (20/42)
Digital Disruption with Geoff Nielson
Go All In on AI: The Economist’s Kenneth Cukier on AI's Experimentation Era

If AI is becoming a “playground” for experimentation, are today’s organizations bold enough to explore it or are they still too afraid to try?

On this episode of Digital Disruption, we are joined by Kenneth Cukier, Deputy Executive Editor at The Economist and bestselling author.

Kenneth Cukier is the Deputy Executive Editor at The Economist. He is the author of several books on technology and society, notably “Framers,” on the power of mental models and the limitations of AI, written with Viktor Mayer-Schönberger and Francis de Vericourt, as well as “Big Data: A Revolution That Will Transform How We Live, Work, and Think,” also with Mayer-Schönberger. The latter was a New York Times bestseller, translated into more than 20 languages, that sold over two million copies worldwide; it won the National Library of China’s Wenjin Book Award and was a finalist for the FT Business Book of the Year. He also coauthored a follow-on book, “Learning with Big Data: The Future of Education.” He has been a frequent commentator on CBS, CNN, NPR, and the BBC, and was a member of the World Economic Forum’s global council on data-driven development.


Kenneth has spent decades at the intersection of AI, journalism, business strategy, and global policy. In this conversation, he sits down with Geoff to share candid insights on how AI is reshaping organizations, leadership, economics, and the future of work. He breaks down the real state of AI, what’s hype, what’s real, and what it means for workers, leaders, and companies. Kenneth explains how AI is shifting from automating tasks to expanding the frontier of knowledge, why today’s multi-trillion-dollar AI investment wave is both overhyped and underhyped, and how everything from healthcare to management is poised to transform. This episode explores why most companies should treat AI as a “playground” for experimentation, how The Economist is using generative AI behind the scenes, the human skills needed to stay competitive, and why great leadership now requires enabling curiosity, psychological safety, and responsible innovation. Kenneth also unpacks the growing “AI-lash,” the limits of GDP as a measure of progress, and why the organizations that learn fastest, not the ones that simply know the most, will win the future.


In this episode:

00:00 Intro

05:00 AI Today: Overhyped, underhyped, or both?

10:00 From Big Data to LLMs: How we got here

15:00 The $3 trillion AI wave: What it really signals

20:00 Automation vs. knowledge expansion

25:00 Inside The Economist: How they actually use Generative AI

30:00 Why “more content” isn’t a strategy

35:00 Leadership in the age of AI: Curiosity, judgment, culture

40:00 The skills humans must keep and why they matter more now

45:00 The rise of the “AI-lash” and public skepticism

50:00 GDP, progress, and what we’re measuring wrong

55:00 Why the fastest learners win the future

1:01:00 What can this technology really do?


Connect with Kenneth:

Website: http://www.cukier.com/

LinkedIn: https://www.linkedin.com/in/kenneth-cukier-9ab56335/

X: https://x.com/kncukier


Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

2 days ago
1 hour 5 minutes 34 seconds

Digital Disruption with Geoff Nielson
Is AI Eroding Identity? Future of Work Expert on How AI is Taking More than Jobs

What does the future of work really look like when AI, identity, and culture collide?


On this episode of Digital Disruption, we’re joined by Dr. Anne-Marie Imafidon, Chair of the Institute for the Future of Work.


Anne-Marie is a leading voice in the tech world, known for her work as a trustee at the Institute for the Future of Work and as the temporary Arithmetician on Channel 4’s Countdown. A former child prodigy who passed A-level computing at 11 and earned a Master’s in Maths and Computer Science from Oxford by 20, she has since spoken globally for companies including Facebook, Amazon, Google and Mastercard. She hosts the acclaimed Women Tech Charge podcast and is a sought-after presenter who has interviewed figures such as Jack Dorsey and Sir Lewis Hamilton. Anne-Marie has received multiple Honorary Doctorates, serves on several national boards, and continues to champion diversity and innovation in tech. Her latest book, She’s In CTRL, was published in 2022.


Dr. Anne-Marie joins Geoff to break down how AI, big data, quantum, and the wider “Fourth Industrial Revolution” are transforming jobs, workplaces, identity, culture, and society. From redefining long-held beliefs about “jobs for life” to the cultural fractures emerging between companies, workers, and society, Dr. Anne-Marie goes deep on what’s changing, what still isn’t understood, and what leaders must do right now to avoid being left behind. This conversation dives into why most AI use cases are still limited to fraud detection and customer service, and the hidden cultural blockers preventing real transformation. She emphasizes the danger of hype cycles, how to stay focused on real value, and how to build organizations that can experiment, learn, and make “high-quality mistakes.”


In this episode:

00:00 Intro

00:31 The Future of Work: What’s changing now

02:32 Generational identity, legacy jobs & why work is no longer “for life”

04:36 Work identity crisis & fragmentation of modern careers

07:45 Rethinking digital transformation & the fourth industrial revolution

11:36 Why the institute avoids the AI hype & looks beyond it

13:39 AI Hype vs. reality

17:50 High-quality mistakes

21:06 Tech design failures

23:18 Culture, customers & building organizations that reflect the real world

29:04 Destroying the “Einstein Myth” & rewriting who tech is for

39:37 First-principles thinking

50:34 Norms, unintended consequences & system-level change

55:32 When will the dust settle? AI timelines, disruption & what’s next

57:28 Closing thoughts


Connect with Dr. Anne-Marie:

LinkedIn: https://www.linkedin.com/in/aimafidon/

Instagram: https://www.instagram.com/notyouraverageami/


Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

1 week ago
58 minutes 34 seconds

Digital Disruption with Geoff Nielson
How AI Will Save Humanity: Creator of The Last Invention Explains

When intelligence becomes abundant, what happens to humanity’s purpose?


Andy Mills, the co-founder of The New York Times’ The Daily and creator of The Last Invention, joins us on this episode of Digital Disruption.


Andy is a reporter, editor, podcast producer, and co-founder of Longview. His most recent series, The Last Invention, explores the AI revolution, from Alan Turing’s early ideas to today’s fierce debates between accelerationists, doomers, and those focused on building the technology safely. Before that, he co-created The Daily at The New York Times and produced acclaimed documentary series including Rabbit Hole, Caliphate, and The Witch Trials of J.K. Rowling. A former fundamentalist Christian from Louisiana and Illinois, Andy now champions curiosity, skepticism, and the transformative power of listening to people with different perspectives, values that shape his award-winning journalism across politics, terrorism, culture wars, technology, and science.


Andy sits down with Geoff to break down the real debate shaping the future of AI. From the “doomers” warning of existential risk to the accelerationists racing toward AGI, Andy maps out the three major AI camps influencing policy, economics, and the future of human intelligence. This conversation explores why some researchers fear AGI, why others believe it will save humanity, how job loss and automation could reshape society, and why 2025 is becoming an “AI 101 moment” for the public. Andy also shares what he’s learned after years investigating OpenAI, Anthropic, xAI, and the people behind the AGI race.


If you want clarity on AGI, existential risk, the future of work, and what it all means for humanity, this is an episode you won’t want to miss.


In this episode:

00:00 Intro

01:00 The three camps of AI: doom, acceleration, scouts

05:00 Why skeptics aren’t driving the AI debate

07:00 Job loss, productivity & “good” vs. “bad” disruption

09:00 Existential risk & why scientists are sounding alarms

12:00 The origins of doomers and accelerationists

17:00 How AI debates escalated after ChatGPT

22:00 Why 2025 is an AI “101 moment” for the public

24:00 The tech stack wars: OpenAI, Anthropic, xAI

28:00 Why leaders joined the AI race

30:00 The accelerationist mindset

33:00 Contrarians, symbolists & the forgotten history of AI

39:00 Big Tech, branding & why AI CEOs avoid open conflict

42:00 The closed group chats of AI’s elite builders

46:00 Sci-Fi narratives vs. real-world intelligence risks

52:00 The AI bubble & why adoption is unlike any tech before

01:00:00 Are we entering a Wright Brothers-to-moon-landing era?

01:10:00 What AGI means for capitalism, work & purpose

01:18:00 Why public debate needs to start now

01:20:00 What happens next


Connect with Andy:

Website: https://www.andymills.work/about



Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

2 weeks ago
1 hour 21 minutes 25 seconds

Digital Disruption with Geoff Nielson
AGI Is Here: AI Legend Peter Norvig on Why it Doesn't Matter Anymore

Are we chasing the wrong goal with Artificial General Intelligence, and missing the breakthroughs that matter now?


On this episode of Digital Disruption, we’re joined by former research director at Google and AI legend, Peter Norvig.


Peter is an American computer scientist and a Distinguished Education Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He is also a researcher at Google, where he previously served as Director of Research and led the company’s core search algorithms group. Before joining Google, Norvig headed NASA Ames Research Center’s Computational Sciences Division, where he served as NASA’s senior computer scientist and received the NASA Exceptional Achievement Award in 2001. He is best known as the co-author, alongside Stuart J. Russell, of Artificial Intelligence: A Modern Approach, the world’s most widely used textbook in the field of artificial intelligence.


Peter sits down with Geoff to separate facts from fiction about where AI is really headed. He explains why the hype around Artificial General Intelligence (AGI) misses the point, how today’s models are already “general,” and what truly matters most: making AI safer, more reliable, and human-centered. He discusses the rapid evolution of generative models, the risks of misinformation, AI safety, open-source regulation, and the balance between democratizing AI and containing powerful systems. This conversation explores the impact of AI on jobs, education, cybersecurity, and global inequality, and how organizations can adapt, not by chasing hype, but by aligning AI to business and societal goals. If you want to understand where AI actually stands, beyond the headlines, this is the conversation you need to hear.


In this episode:

00:00 Intro

01:00 How AI evolved since Artificial Intelligence: A Modern Approach

03:00 Is AGI already here? Norvig’s take on general intelligence

06:00 The surprising progress in large language models

08:00 Evolution vs. revolution

10:00 Making AI safer and more reliable

12:00 Lessons from social media and unintended consequences

15:00 The real AI risks: misinformation and misuse

18:00 Inside Stanford’s Human-Centered AI Institute

20:00 Regulation, policy, and the role of government

22:00 Why AI may need an Underwriters Laboratories moment

24:00 Will there be one “winner” in the AI race?

26:00 The open-source dilemma: freedom vs. safety

28:00 Can AI improve cybersecurity more than it harms it?

30:00 “Teach Yourself Programming in 10 Years” in the AI age

33:00 The speed paradox: learning vs. automation

36:00 How AI might (finally) change productivity

38:00 Global economics, China, and leapfrog technologies

42:00 The job market: faster disruption and inequality

45:00 The social safety net and future of full-time work

48:00 Winners, losers, and redistributing value in the AI era

50:00 How CEOs should really approach AI strategy

52:00 Why hiring a “PhD in AI” isn’t the answer

54:00 The democratization of AI for small businesses

56:00 The future of IT and enterprise functions

57:00 Advice for staying relevant as a technologist

59:00 A realistic optimism for AI’s future


Connect with Peter:

LinkedIn: https://www.linkedin.com/in/pnorvig/


Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

3 weeks ago
1 hour 6 minutes 8 seconds

Digital Disruption with Geoff Nielson
Why AI is Failing: Ex-Google Chief Cassie Kozyrkov Debunks "AI-first"

Is “AI-first” the future of business or just another tech buzzword?


On this episode of Digital Disruption, we’re joined by former Google Chief Decision Scientist and CEO of Kozyr, Cassie Kozyrkov.


Cassie is best known for founding the field of Decision Intelligence and serving as Google’s first Chief Decision Scientist, where she helped lead the company’s AI-first transformation. A sought-after advisor and keynote speaker, Cassie has guided organizations including Gucci, NASA, Meta, Spotify, Salesforce, and GSK on AI strategy. She combines deep technical expertise with theater-trained charisma to make complex concepts engaging and actionable for executive and general audiences alike, and has delighted audiences in over 40 countries across all seven continents, including stages at the UN, WEF, Web Summit, and SXSW.


Cassie sits down with Geoff to unpack the hidden cost of the “AI-first” hype, the dangers of AI infrastructure debt, and why real AI readiness starts with people, not technology. She reveals how leaders can architect their organizations for innovation, build human-in-the-loop systems, and create cultures that embrace experimentation instead of fearing mistakes.


Cassie exposes why 95% of organizations fail to achieve measurable ROI from AI and how leaders can finally bridge the AI value gap. This conversation dives into why AI success isn’t about tools but about leadership, measurement, and mindset.


Most organizations chasing “AI transformation” see no measurable ROI, not because the technology fails, but because leaders are still measuring value the old way. Generative AI success is hard to quantify when there isn’t a single “right answer,” yet many businesses keep trying to apply outdated metrics to a completely new paradigm.


In this video:

00:00 Intro

00:44 The Generative AI Value Gap: Why 95% get no ROI

02:20 The paradox of AI productivity

05:38 Why measuring AI value is harder than we think

12:04 Leadership abdication: “Just sprinkle AI on everything”

15:10 AI infrastructure debt explained

20:17 What real AI readiness looks like (beyond tech)

23:42 Humans as part of AI infrastructure

28:00 Why “AI-first” isn’t one-size-fits-all

33:31 Building human judgment into AI systems

36:19 The risks of scaling too fast

41:34 Automation vs augmentation: where leaders go wrong

44:00 The “do the work” approach to AI success

48:35 The recipe for an AI-ready organization

53:40 Guardrails, governance, and security in AI systems

57:00 Thinking probabilistically: a new mindset for leaders

1:03:20 The human side of AI transformation

1:06:45 Leading through uncertainty


Connect with Cassie:

Website: https://www.kozyr.com/about

LinkedIn: https://www.linkedin.com/in/kozyrkov/

X: https://x.com/decisionleader

YouTube: https://www.youtube.com/c/Kozyrkov



Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

1 month ago
1 hour 26 minutes 20 seconds

Digital Disruption with Geoff Nielson
How AI-Ready Leaders Will Replace You: Erik Qualman Explains

Why is adaptability the real superpower for leaders in the digital age?


On this episode of Digital Disruption, we’re joined by Erik Qualman, a digital leadership expert, best-selling author, and motivational speaker.


Erik is a 5x #1 Bestselling Author and Keynote Speaker who has inspired audiences in over 55 countries and reached 50 million people. Voted the #2 Most Likeable Author in the World behind J.K. Rowling, he wrote Socialnomics, which has been featured on 60 Minutes and in The Wall Street Journal and used by organizations from the National Guard to NASA. A professor of Digital Leadership at Northwestern University, Qualman has seen his research and courses studied at 500+ universities worldwide. Through his animation studio, he has partnered with brands like Disney, Oreo, Chase, and Cartier. A former MIT and Harvard edX professor and honorary doctorate recipient, Qualman is also the creator of the bestselling board game Kittycorn.


Erik joins Geoff Nielson to break down what it really means to be AI-ready. He reveals why the leaders who know how to leverage AI and adapt fast will replace those who don’t. He explains why AI is overhyped in the short term but underhyped in the long term, and how the most successful leaders of the next decade will blend Flintstones-level human connection with Jetsons-era innovation. Erik explains why adaptability and emotional intelligence (EQ) are the new competitive edge in the age of artificial intelligence. This conversation explores how AI can remove friction, save time, and ironically help us become more human, while also examining the guardrails needed for responsible tech adoption. Erik also shares lessons from advising some of the world’s top brands, including Facebook, Disney, and Sony, and explains why the future favors those who fail fast, fail forward, and fail better.


In this video:

00:00 Intro

02:00 The “Flintstones First” approach to digital leadership

04:40 How AI helps us become more human

06:15 Winners, losers, and adaptability in the AI era

08:30 Emotional intelligence and leadership in a tech-driven world

11:00 The need for guardrails in AI and social media

13:00 Teaching AI and digital leadership at Northwestern

15:00 How technology is transforming the classroom

17:45 The 70/30 rule: what changes vs. what never will

19:00 Core advice for leaders and digital innovators

21:30 Avoiding hype: testing new tech like AI and Clubhouse

23:00 Lessons from Montblanc and the origins of “Digital Leadership”

25:00 The Disney+ story: digital transformation done right

27:00 Building a culture of “fail fast, fail forward, fail better”

30:00 Balancing the Flintstones and the Jetsons


Connect with Erik:

Website: https://equalman.com/

LinkedIn: https://www.linkedin.com/in/qualman/

X: https://x.com/equalman

YouTube: @equalman



Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

1 month ago
50 minutes 49 seconds

Digital Disruption with Geoff Nielson
The AI Market Must Crash: Ed Zitron on Why the Bubble Will Burst

Could AI’s biggest impact be economic, not technological?


On this episode of Digital Disruption, we’re joined by the founder of EZPR and host of the Better Offline podcast, Ed Zitron.


Ed is a technology writer, public relations expert, and podcaster known for his critical takes on the tech industry and its biggest players. His work has appeared in leading outlets including The Atlantic, Business Insider, and TechCrunch. He is the author of the popular newsletter Where’s Your Ed At, launched in 2020, where he explores the intersection of technology, business, and culture. Ed also hosts the Better Offline podcast, delving into the realities of the tech industry and the ripple effects of the AI boom. With his candid insights and thoughtful commentary, Ed has become a trusted voice and sought-after speaker within the tech community.


One of the most outspoken critics of the AI boom, Ed Zitron joins Geoff to cut through the noise and talk about the truth behind generative AI. Ed breaks down why he believes AI “doesn’t work,” what’s really driving the trillion-dollar hype, and why big tech, media, and investors may be steering straight into the next Enron moment. This conversation unpacks why large language models fall short, how Microsoft’s AI Copilot has failed to deliver, and how corporate opportunism and investor “vibes” are fueling one of the biggest speculative bubbles in tech history. They also explore the “Enron-like” risks in the AI hardware race and the potential fallout for retail investors and startups, and tackle one of tech’s most misunderstood narratives: the myth of AI-driven job loss, revealing who’s really being replaced.


In this episode:

00:00 Intro

00:36 “AI doesn’t work”

02:05 The limits of LLMs

04:33 Microsoft Copilot and the illusion of productivity

07:05 The AI job myth: Who’s really being replaced?

10:00 CEOs, opportunism, and the false narrative of AI efficiency

12:00 The Salesforce example: Lies, hype, and failure to deliver

14:00 What AI can actually do

18:00 The trust problem

19:45 Media complacency and tech industry collusion

22:00 Microsoft, Nvidia, and false growth

25:00 The Enron parallels

28:30 Why investors are rewarding bad behavior

31:00 Who gets hurt when the AI bubble bursts?

35:00 Unsustainable startups and rising model costs

38:00 The coming collapse of AI infrastructure

40:00 What business leaders should do now to avoid being burned

44:30 The harsh truth about ChatGPT

49:00 What real innovation looks like: Batteries, EVs, AR, and more

54:00 The future of work beyond AI hype


Connect with Ed:

LinkedIn: https://www.linkedin.com/in/edzitron/

X: https://x.com/edzitron

Instagram: instagram.com/edzitron




Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

1 month ago
58 minutes 52 seconds

Digital Disruption with Geoff Nielson
Godfather of AGI on Why Big Tech Innovation is Over

Is the AI arms race between tech giants and nations pushing us toward a dangerous future?


On this episode of Digital Disruption, we’re joined by the founder of SingularityNET and the pioneering mind behind the term Artificial General Intelligence (AGI), Dr. Ben Goertzel.


Dr. Ben Goertzel is a leading figure in artificial intelligence, robotics, and computational finance. Holding a Ph.D. in Mathematics from Temple University, he has been a pioneer in advancing both the theory and practical applications of AI, particularly in the pursuit of artificial general intelligence (AGI), a term he helped popularize. He currently leads the SingularityNET Foundation, TrueAGI, the OpenCog Foundation, and the AGI Society, and has organized the Artificial General Intelligence Conference for over fifteen years. A co-founder and principal architect of OpenCog, an open-source project to build human-level AI, Dr. Goertzel’s work reflects a singular mission: to develop benevolent AGI that advances humanity’s collective good.


Dr. Goertzel sits down with Geoff to share his insights on the accelerating progress toward AGI, what it truly means, and how it could reshape human life, work, and consciousness. He discusses the role of Big Tech in shaping AI’s direction and how corporate incentives and commercialization are both driving innovation and limiting true AGI research. From DeepMind and OpenAI to decentralized AI networks, Dr. Goertzel reveals where the real breakthroughs might happen. The conversation also explores the ethics of AI, the dangers of fake democratization and false compassion, and why humanity must shape AI’s evolution with empathy and awareness.


In this episode:

00:00 Intro

00:21 What is Artificial General Intelligence (AGI)?

01:10 The pace of AI progress and the hype cycle

05:44 The path from human-level AGI to superintelligence

09:20 How close are we to AGI?

13:08 Transformer vs. multi-agent systems

14:05 Which AI labs might strike AGI gold? (DeepMind, OpenAI, Anthropic)

17:07 Big Tech’s innovator’s dilemma and why true AGI may come elsewhere

20:20 Predictive coding

22:59 Why Big Tech resists new AI training paradigms

29:16 Imagining life after AGI: optimism, transhumanism, and choice

33:29 Navigating the transition from AGI to ASI

37:55 Decentralized vs. centralized control of AGI

43:20 Who (or what) will be in control

47:19 Risks of power concentration in early AGI development

51:01 Who should own and guide AGI?

53:06 Why we need participatory governance for intelligent systems

54:47 The danger of fake compassion and false democratization

1:00:50 Finding meaning in the age of intelligent machines

1:04:13 How AGI could help humanity focus on inner growth

1:07:20 Learning how to learn: the last human advantage


Connect with Dr. Goertzel:

LinkedIn: https://www.linkedin.com/in/bengoertzel/

X: https://x.com/bengoertzel




Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

1 month ago
1 hour 9 minutes 22 seconds

Digital Disruption with Geoff Nielson
Have I Been Pwned Founder Troy Hunt Talks Breaches, Ransomware & Online Safety

How are AI and automation shaping both the attack and defense sides of cybersecurity?


On this episode of Digital Disruption, we’re joined by the founder and CEO of Have I Been Pwned, Troy Hunt.


Troy Hunt is an Australian security researcher and the founder of the data breach notification service, Have I Been Pwned. With a background in software development specializing in information security, Troy is a regular conference speaker and trainer. He frequently appears in the media, collaborates with government and law enforcement agencies, and has appeared before the U.S. Congress as an expert witness on the impact of data breaches. Troy also serves as a Microsoft Regional Director (an honorary title) and regularly blogs at troyhunt.com from his home on Australia’s Gold Coast.


Troy sits down with Geoff to share eye-opening insights on the evolving threat landscape of 2025 and beyond. Despite the rise of AI and automation, Troy emphasizes that many of today’s most damaging data breaches and ransomware attacks still stem from basic human error and social engineering. He explains how ransomware has shifted from encrypting files to threatening data disclosure, making it harder for organizations to manage risk and justify ransom payments. The conversation also touches on how breach fatigue and apathy have led many individuals and businesses to underestimate cybersecurity risks, even as incidents rise globally. He also highlights how AI tools are being weaponized by both defenders and attackers and argues that cybersecurity isn’t about perfect protection but about finding equilibrium: balancing usability, education, and risk mitigation.


In this episode:

00:00 Intro

01:15 Why human weakness beats AI

02:00 Young hackers and the rise of Scattered Spider

04:00 From hacktivists to career criminals

05:00 Ransomware’s new tactics

07:30 Should companies pay the ransom?

10:20 Can you ever be fully protected? Defense vs. response

11:20 How to convince boards cybersecurity is worth the money

14:20 Breach fatigue and public apathy

18:00 Reframing what ‘sensitive data’ really means

20:00 Passwords, reuse, and the real risk equation

24:00 Biometrics, face ID & the future of authentication

26:30 Threat Modeling 101

27:30 Barriers to cyber preparedness

29:30 How Have I Been Pwned works

32:00 The Future of Data Breaches

38:00 Microsoft’s Role in the Security Ecosystem

40:30 AI Hype vs. reality in cybersecurity

43:00 When AI helps hackers

52:00 Why transparency still matters after every breach

54:00 Accepting risk, building resilience


Connect with Troy:

Website: https://www.troyhunt.com/

LinkedIn: https://www.linkedin.com/in/troyhunt/

X: https://x.com/troyhunt



Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

1 month ago
57 minutes 48 seconds

Digital Disruption with Geoff Nielson
Design Expert: AI, Entrepreneurship, and the Future of Digital Experiences

What does the future of digital experiences look like when AI, accessibility, and entrepreneurship collide?


On this episode of Digital Disruption, we’re joined by serial tech entrepreneur, accessibility advocate, and co-founder of Global Accessibility Awareness Day (GAAD), Joe Devon.


As Chair of the GAAD Foundation, Joe strives to disrupt the culture of technology and digital product development by embedding accessibility as a core requirement. Inspired by his 2011 blog post highlighting the need for mainstream accessibility knowledge among developers, GAAD has grown into an annual event observed on the third Thursday of May, promoting digital access and inclusion for over one billion people with disabilities worldwide. He also co-hosts the Accessibility and Gen.AI Podcast, exploring the intersection of accessibility and artificial intelligence.


Joe sits down with Geoff to explore how AI startups are reshaping the digital landscape, from code accessibility to the rise of small business innovation. He shares the story of how one blog post led to a global accessibility movement, why AI-driven tools could either democratize or centralize technology, and how the entrepreneurial spirit will define the next decade. From robotics fused with large language models to AI coding assistants generating billions of lines of code, this conversation dives into the challenges, risks, and opportunities for entrepreneurs and digital leaders navigating this transformation.


In this episode:

00:00 Intro

00:23 The “ChatGPT moment” for robotics

01:23 The mission behind Global Accessibility Awareness Day

02:29 How AI shifts the accessibility conversation

03:08 Why accessibility matters for everyone

06:17 Usability and empathy in digital product design

12:28 How AI can unlock inclusion and personalization

14:45 Aphantasia, hyperphantasia & diverse human abilities

17:34 AI and the future of sign language translation

19:23 How to work with disability communities

23:59 Advice for leaders getting started with inclusive design

25:13 AI coding tools revolutionizing software development

29:27 Can AI accessibility become the new standard?

30:06 How GAAD became a global movement

35:24 Entrepreneurship vs. 9-to-5 in an AI-powered economy

45:07 Lessons from the early internet and RSS’s decline

47:04 The debate on Universal Basic Income (UBI)

50:26 Joe’s father’s influence and the accessibility journey

52:34 From a blog post to real change in banking

57:13 The rise of AI influencers vs. the value of real humans

58:36 Advice for those unsure about entrepreneurship


Connect with Joe:

LinkedIn: https://www.linkedin.com/in/joedevon/

X: https://x.com/joedevon




Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

2 months ago
1 hour 16 seconds

Digital Disruption with Geoff Nielson
Deloitte's Chief Futurist on AI, Job Loss, and the Art of Thinking

What happens when AI becomes as good at thinking as humans and what skills remain uniquely ours?


On this episode of Digital Disruption, we’re joined by Mike Bechtel, Chief Futurist at Deloitte.


Mike began his career at Accenture Labs, where his team helped Fortune 500 clients put emerging technologies to profitable use. Twelve years and twelve U.S. patents later, he was named the firm’s first Global Innovation Director, tasked with creating the strategy, processes, and culture to foster company-wide intrapreneurship. At Deloitte, Mike and his team focus on making sense of what’s new and next in technology, with the goal of helping today’s leaders arrive at their preferred futures ahead of schedule. He also serves as an adjunct professor at the University of Notre Dame, where he teaches corporate innovation. In 2013, Mike co-founded and served as Managing Director of Ringleader Ventures, a venture capital firm investing in early-stage startups that had—intentionally or not—built simple solutions to complex corporate challenges.


Mike sits down with Geoff to talk about the future of AI and what it means for our work, creativity, and humanity. He shares an optimistic vision of a world where automation elevates human potential, allowing people to focus on creativity, innovation, and connection. This conversation challenges how we think about AI, AI art, the future of work, and the role of human skills. Mike draws on his experience advising leaders and students to show how we can prepare for a future where AI isn’t just a tool but a partner in thinking. He shares insights on the ethical challenges of outsourcing thought, why intent matters in how organizations use AI, and why AI is less about replacing jobs and more about automating the “muck” so humans can focus on the magic. Mike also challenges us to consider: what happens to writing, philosophy, and ethics when machines can master technical tasks faster than ever?


In this episode:

00:00 Intro

01:00 Why AI isn’t new (but why this moment matters)

04:10 Automation as a Trojan Horse for elevation

05:30 Best practices get automated, next practices get built

09:50 Expertise vs. curiosity: Why human skills win

15:00 From fear to opportunity

17:00 What clients really want in the AI era

19:20 Automating muck to unlock magic

22:00 The revenge of the humanities & synthetic thinking

23:00 Writing = thinking: Why we can’t outsource human thought

36:15 Why people still matter

42:00 Beyond AI: Blockchain, trust & cryptographic futures

47:30 Deepfakes, truth, and the math we can trust

1:05:50 The future of IT

1:08:00 AI Reshaping corporate teams & skills

1:14:40 Guidance for the next generation

1:19:30 Staying ahead of AI

1:21:45 Key takeaways



Connect with Mike:

Website: https://mikebech.tel/bio

LinkedIn: https://www.linkedin.com/in/mikebechtel/

X: https://x.com/mikebechtel




Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

2 months ago
1 hour 22 minutes 16 seconds

Digital Disruption with Geoff Nielson
Boom or Bust? Top AI Investor Reveals the Future of AI Startups

Which strategies separate the AI startups that thrive from those that die?


Today on Digital Disruption, we’re joined by Jeremiah Owyang, a venture capitalist at Blitzscaling Ventures.


Jeremiah, a longtime Silicon Valley native, leads investments in early-stage AI startups at Blitzscaling Ventures. He focuses on startups with the potential for rapid scale and enduring leadership in valuable markets. He also organizes the popular Llama Lounge: The AI Startup Event Series. As a speaker, Jeremiah forecasts how early-stage technology will reshape business and society and advises audiences on how to turn disruption into advantage.


Jeremiah sits down with Geoff to unpack the reality of Silicon Valley’s AI gold rush. With more than 38,000 AI startups competing for attention, thin technical moats, and the looming threat of consolidation, founders face more pressure than ever to stand out. He shares insider insights on why most AI startups are vulnerable to being wiped out by a single update from OpenAI or Google, which industries and roles are most at risk of automation, and why critical thinking remains one of humanity’s most valuable advantages. The conversation also explores how Gen Alpha, the first AI-native generation, will grow up and work differently than any before, as well as the rise of AI agents and their impact on everyday work.


In this video:

00:00 Intro

01:45 The AI startup explosion

04:30 Why most AI startups will fail without real advantages

07:10 One OpenAI update could destroy your startup overnight

10:25 Thin technical advantages and the search for moats

13:40 What VCs look for in AI startups

17:05 The rise of Lean AI startups

20:15 AI agents and the automation of entry-level jobs

23:50 The skills humans still need

27:10 Gen Alpha: The first AI-native workforce

30:20 AI’s role in corporate strategy and decision-making

34:00 The Risks of over-reliance on AI in business

37:25 From Gold Rush to shakeout

41:10 How CEOs can future-proof their companies in the AI era

45:30 Humanoid robots, agents, and the next wave of disruption

49:15 Winners and losers in the AI economy



Connect with Jeremiah:

Website: https://web-strategist.com/blog/about/

LinkedIn: https://www.linkedin.com/in/jowyang/

Instagram: https://www.instagram.com/jowyang/

X: https://x.com/jowyang




Visit our website: https://www.infotech.com/

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

2 months ago
1 hour 1 minute 26 seconds

Digital Disruption with Geoff Nielson
Will AI Replace Humans? Dr. Ayesha Khanna Explains the Risks

What risks come with AI systems that can lie, cheat, or manipulate?


Today on Digital Disruption, we’re joined by Dr. Ayesha Khanna, CEO of Addo AI.


Dr. Khanna is a globally recognized AI expert, entrepreneur, and CEO of Addo, helping businesses leverage AI for growth. With 20+ years in digital transformation, she advises Fortune 500 CEOs and serves on global boards, including Johnson Controls, NEOM Tonomus, and L’Oréal’s Scientific Advisory Board. A graduate of Harvard, Columbia, and the London School of Economics, she spent a decade on Wall Street advising on information analytics. A thought leader in AI, Dr. Khanna has been recognized as a groundbreaking entrepreneur by Forbes, named to Edelman’s Top 50 AI Creators (2025), and featured in Salesforce’s 16 AI Influencers to Know (2024). Committed to diversity in tech, she founded the charity 21C Girls, which taught thousands of students the basics of AI and coding in Singapore, and currently provides scholarships for mid-career women through her education company Amplify.


Ayesha sits down with Geoff to discuss how artificial intelligence is disrupting industries, reshaping the economy, and redefining the future of jobs. This conversation explores why critical thinking will be the most important skill in an AI-driven workplace, how businesses can use AI to scale innovation instead of getting stuck in “pilot purgatory,” and what risks organizations must prepare for, including bias, data poisoning, cybersecurity threats, and manipulative reasoning models. Ayesha shares insights from her work with governments and Fortune 500 companies on building national AI strategies, creating governance frameworks, and balancing innovation with responsibility. The conversation dives into how AI and jobs intersect, whether automation will replace or augment workers, and why companies need to focus on growth, reskilling, and strategic automation rather than layoffs. They also discuss the rise of the Hybrid Age, where humans and AI coexist in every part of life, and what it means for society, relationships, and the global economy.


In this video:

00:00 Intro

00:43 The future of AI and the next 5 years

02:16 The biggest AI risks

05:25 Fake alignment & governance

09:08 Why AI pilots fail

15:30 What successful companies do

23:14 AI and jobs: Automation, reskilling, and why critical thinking matters most

29:39 The Hybrid Age

37:09 AI and society: relationships with AI, human agency, and ethical concerns

46:13 Global AI strategies

54:00 Overhyped narratives and what people get wrong about AI and jobs

56:27 The Skills Gap opportunity

58:31 The importance of risk frameworks, critical thinking, and optimism


Connect with Dr. Khanna:

Website: https://www.ayeshakhanna.com/

LinkedIn: https://www.linkedin.com/in/ayeshakhanna/

X: https://x.com/ayeshakhanna1



Visit our website: https://www.infotech.com/

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

2 months ago
58 minutes 53 seconds

Digital Disruption with Geoff Nielson
The Lazy Generation? Is AI Killing Jobs or Critical Thinking?

Can automation and critical thinking coexist in the future of education and work?


Today on Digital Disruption, we’re joined by Bryan Walsh, the Senior Editorial Director at Vox.


At Vox, Bryan leads the Future Perfect and climate teams and oversees the podcasts Unexplainable and The Gray Area. He also serves as editor of Vox’s Future Perfect section, which explores the policies, people, and ideas that could shape a better future for everyone. He is the author of End Times: A Brief Guide to the End of the World (2019), a book on existential risks including AI, pandemics, and nuclear war, though, as he notes, it’s not all that brief. Before joining Vox, Bryan spent 15 years at Time magazine as a foreign correspondent in Hong Kong and Tokyo, an environment writer, and international editor. He later served as Future Correspondent at Axios. When he’s not editing, Bryan writes Vox’s Good News newsletter and covers topics ranging from population trends and scientific progress to climate change, artificial intelligence, and, on occasion, children’s television.


Bryan sits down with Geoff to discuss how artificial intelligence is transforming the workplace and what it means for workers, students, and leaders. From the automation of entry-level jobs to the growing importance of human-centered skills, Bryan shares his perspective on the short- and long-term impact of AI on the economy and society. He explains why younger workers may be hit hardest, how education systems must adapt to preserve critical thinking, and why both companies and governments face tough choices in managing disruption. This conversation highlights why adaptability and critical thinking are becoming the most valuable skills and what governments and organizations can do to reduce the social and economic strain of rapid automation.


In this video:

00:00 Intro

01:20 Early adoption of AI: Hype vs. reality

02:16 Automation pressures during economic downturns

03:08 The struggle for new grads entering the workforce

04:37 Is AI wiping out entry-level jobs?

05:40 Why younger workers may be hit hardest

06:28 No clear answers on AI disruption

08:19 The paradox of AI: productivity gains vs. job losses

14:30 Critical thinking, education, and the future of learning

18:00 How AI reshapes global power dynamics

31:57 The workplace of the future: skills that matter most

44:03 Regulation, politics, and the AI economy

48:19 AI, geopolitics, and risks of global instability

57:33 Who bears responsibility for minimizing disruption?

59:01 Rethinking identity beyond work

1:00:22 Journalism in the AI era: threat or amplifier?



Connect with Bryan:

Website: https://www.vox.com/authors/bryan-walsh

LinkedIn: https://www.linkedin.com/in/bryan-walsh-9881b0/

X: https://x.com/bryanrwalsh


Visit our website: https://www.infotech.com/

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

3 months ago
1 hour 5 minutes 57 seconds

Digital Disruption with Geoff Nielson
From Dumb to Dangerous: The AI Bubble Is Worse Than Ever

Are we heading toward an AI-driven utopia, or just another tech bubble waiting to burst?

Today on Digital Disruption, we’re joined by Dr. Emily Bender and Dr. Alex Hanna.

Dr. Bender is a Professor of Linguistics at the University of Washington, where she is also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School. In 2023, she was included in the inaugural Time 100 list of the most influential people in AI. She is frequently consulted by policymakers, from municipal officials to the federal government to the United Nations, for insight into how to understand so-called AI technologies.

Dr. Hanna is Director of Research at the Distributed AI Research Institute (DAIR) and a Lecturer in the School of Information at the University of California, Berkeley. She is an outspoken critic of the tech industry, a proponent of community-based uses of technology, and a highly sought-after speaker and expert who has been featured across the media, including articles in the Washington Post, Financial Times, The Atlantic, and Time.

Dr. Bender and Dr. Hanna sit down with Geoff to discuss the realities of generative AI, big tech power, and the hidden costs of today’s AI boom. Artificial intelligence is everywhere, but how much of the hype is real, and what’s being left out of the conversation? This discussion dives into the social and ethical impacts of AI systems and why popular AI narratives often miss the mark. Dr. Bender and Dr. Hanna share their thoughts on the biggest myths about generative AI, why we need to challenge them, and the importance of diversity, labor, and accountability in AI development. They answer questions such as where AI is really heading, how we can imagine better, more equitable futures, and what technologists should be focusing on today.


In this video:

0:00 Intro

1:45 Why language matters when we talk about “AI”

4:20 The problem with calling everything “intelligence”

7:15 How AI hype shapes public perception

10:05 Separating science from marketing spin

13:30 The myth of AGI: Why it’s a distraction

16:55 Who benefits from AI hype?

20:20 Real-world harms: Bias, surveillance & labor exploitation

24:10 How data is extracted & who pays the price

28:40 The invisible labor behind AI systems

32:15 Diversity, power, and accountability in AI

36:00 Why focusing on “doom scenarios” misses the point

39:30 AI in business and risks leaders should actually care about

43:05 What policymakers should prioritize now

47:20 The role of regulation in responsible AI

50:10 Building systems that serve people, not profit

53:15 Advice for CIOs and tech leaders

55:20 Gen AI in the workplace



Connect with Dr. Bender and Dr. Hanna

Website: https://thecon.ai/authors/

Dr. Bender LinkedIn: https://www.linkedin.com/in/ebender/

Dr. Hanna LinkedIn: https://www.linkedin.com/in/alex-hanna-ph-d/



Visit our website: https://www.infotech.com/

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

3 months ago
57 minutes 3 seconds

Digital Disruption with Geoff Nielson
Siri Creator: How Apple & Google Got AI Wrong

What does the future of AI assistants look like and what’s still missing?


Today on Digital Disruption, we’re joined by Adam Cheyer, Co-Founder of Siri.


Adam is an inventor, entrepreneur, engineering executive, and a pioneer in AI and human-computer interfaces. He co-founded or was a founding member of five successful startups: Siri (sold to Apple, where he led server-side engineering and AI for Siri), Change.org (the world’s largest petition platform), Viv Labs (acquired by Samsung, where he led product engineering and developer relations for Bixby), Sentient (massively distributed machine learning), and GamePlanner.AI (acquired by Airbnb, where he served as VP of AI Experience). Adam has authored more than 60 publications and 50 patents. He graduated with highest honors from Brandeis University and received the “Outstanding Masters Student” award from UCLA’s School of Engineering.


Adam sits down with Geoff to discuss the evolution of conversational AI, design principles for next-generation technology, and the future of human–machine interaction. They explore the future of AI, augmented reality, and collective intelligence. Adam shares insider stories about building Siri, working with Steve Jobs, and why today’s generative AI tools like ChatGPT are both amazing and frustrating. Adam also shares his predictions for the next big technological leap and how collective intelligence could transform how we solve humanity’s most difficult challenges.


In this video:

0:00 Intro

1:08 Why today’s AI both amazes and frustrates

3:50 The 3 big missing pieces in current AI systems

8:28 What Siri got right and what it missed

11:30 The “10+ Theory”: Paradigm shifts in computing

14:18 Augmented Reality as the next big breakthrough

19:43 Design lessons from building Siri

25:00 Iteration vs. first impressions: How to launch transformational products

30:20 Beginner, intermediate, and expert user experiences in AI

33:40 Will conversational AI become like “Her”?

35:45 AI maturity compared to the early internet

37:34 Magic, technology, and creating “wow” moments

43:55 What’s hype vs. what’s real in AI today

47:01 Where the next magic will happen: AR & collective intelligence

50:51 The role of DARPA, Stanford, and government funding in Siri’s success

54:49 Advice for leaders building the future of digital products

57:13 Balance the hype



Connect with Adam:

Website: http://adam.cheyer.com/site/home?page=about

LinkedIn: https://www.linkedin.com/in/adamcheyer/

Facebook: https://www.facebook.com/acheyer



Visit our website: https://www.infotech.com/

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

Check out other episodes of Digital Disruption: https://youtube.com/playlist?list=PLIImliNP0zfxRA1X67AhPDJmlYWcFfhDT&feature=shared

3 months ago
57 minutes 32 seconds

Digital Disruption with Geoff Nielson
Next-Gen Tech Expert: This is AI's ENDGAME

Are we ready for a future where human and machine intelligence are inseparable?


Today on Digital Disruption, we’re joined by best-selling author and founding partner of the digital strategy firm Future Point of View (FPOV), Scott Klososky.


Scott’s career has been built at the intersection of technology and humanity; he is known for his visionary insights into how emerging technologies shape organizations and society. He has advised leaders across Fortune 500 companies, nonprofits, and professional associations, guiding them in integrating technology with strategic human effort. A sought-after speaker and author of four books, including Did God Create the Internet?, Scott continues to help executives around the world prepare for the digital future.


Scott sits down with Geoff to discuss the cutting edge of human-technology integration and the emergence of the "organizational mind." What happens when AI no longer supports organizations but becomes a synthetic layer of intelligence within them? He talks about real-world examples of this transformation already taking place, reveals the ethical and existential risks AI poses, and offers practical advice for business and tech leaders navigating this new era. This conversation dives deep into everything from autonomous decision-making to AI regulation and digital governance, and Scott breaks down the real threats of digital reputational damage, AI misuse, and the growing surveillance culture we’re all a part of.


In this episode:

00:00 Intro

00:24 What is an ‘Organizational Mind?’

03:44 How fast is this becoming real?

05:00 Early insights from building an organizational mind

07:02 The human brain analogy: AI mirrors us

08:12 What does it mean for AI to “wake up”?

09:51 AI awakening without consciousness

11:03 Should we be worried about conscious AI?

11:59 Accidents, bad actors, and manipulation

15:42 Can we prevent these AI risks?

18:28 Regulatory control and the role of governments

20:03 Cat and Mouse: Can AI hide from auditors?

23:02 The escalating complexity of AI threats

27:00 Will nations have organizational minds?

29:12 Autonomous collaboration between AI nations

35:36 Bringing AI tools together

36:31 Knowledge, agents, personas & oversight

40:11 Why early adopters will have the edge

41:00 Are we in another AI bubble?

45:01 Scott’s advice for business & tech leaders

47:12 Why use-cases alone aren’t enough




Connect with Scott:

LinkedIn: https://www.linkedin.com/in/scottklososky/

X: https://x.com/sklososky


Visit our website: https://www.infotech.com/

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

3 months ago
50 minutes 55 seconds

Digital Disruption with Geoff Nielson
Roman Yampolskiy: How Superintelligent AI Could Destroy Us All

Is this a wake-up call for anyone who believes the dangers of AI are exaggerated?


Today on Digital Disruption, we’re joined by Roman Yampolskiy, a leading writer and thinker on AI safety and an associate professor at the University of Louisville. He was recently featured on podcasts such as Joe Rogan’s PowerfulJRE.


Roman is a leading voice in the field of Artificial Intelligence Safety and Security. He is the author of several influential books, including AI: Unexplainable, Unpredictable, Uncontrollable. His research focuses on the critical risks and challenges posed by advanced AI systems. A tenured professor in the Department of Computer Science and Engineering at the University of Louisville, he also serves as the founding director of the Cyber Security Lab.


Roman sits down with Geoff to discuss one of the most pressing issues of our time: the existential risks posed by AI and superintelligence. He shares his prediction that AI could lead to the extinction of humanity within the next century. They dive into the complexities of this issue, exploring the potential dangers that could arise from both AI’s malevolent use and its autonomous actions. Roman highlights the difference between AI as a tool and as a sentient agent, explaining how superintelligent AI could outsmart human efforts to control it, leading to catastrophic consequences. The conversation challenges the optimism of many in the tech world and advocates for a more cautious, thoughtful approach to AI development.


In this episode:

00:00 Intro

00:45 Dr. Yampolskiy's prediction: AI extinction risk

02:15 Analyzing the odds of survival

04:00 Malevolent use of AI and superintelligence

06:00 Accidental vs. deliberate AI destruction

08:10 The dangers of uncontrolled AI

10:00 The role of optimism in AI development

12:00 The need for self-interest to slow down AI development

15:00 Narrow AI vs. Superintelligence

18:30 Economic and job displacement due to AI

22:00 Global competition and AI arms race

25:00 AI’s role in war and suffering

30:00 Can we control AI through ethical governance?

35:00 The singularity and human extinction

40:00 Superintelligence: How close are we?

45:00 Consciousness in AI

50:00 The difficulty of programming suffering in AI

55:00 Dr. Yampolskiy’s approach to AI safety

58:00 Thoughts on AI risk



Connect with Roman:

Website: https://www.romanyampolskiy.com/

LinkedIn: https://www.linkedin.com/in/romanyam/

X: https://x.com/romanyam


Visit our website: https://www.infotech.com/

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

4 months ago
1 hour 13 minutes 25 seconds

Digital Disruption with Geoff Nielson
Ex-OpenAI Lead Zack Kass Reveals the Societal Impact of AI

As AI becomes more capable, how should our social systems evolve in response?


Today on Digital Disruption, we’re joined once again by Zack Kass, an AI futurist and former Head of Go-To-Market at OpenAI. As a leading expert in applied AI, he harnesses its capabilities to develop business strategies and applications that enhance human potential.


Zack has been at the forefront of AI and played a key role in early efforts at commercializing AI and large language models, channeling OpenAI’s innovative research into tangible business solutions. Today, Zack is dedicated to guiding businesses, nonprofits, and governments through the fast-changing AI landscape. His expertise has been highlighted in leading publications, including Fortune, Newsweek, Entrepreneur, and Business Insider.


Zack sits down with Geoff to explore the philosophical implications of AI and its impact on everything from nuclear war to society’s struggle with psychopaths and humanity itself. This conversation raises important questions about the evolving role of AI in shaping our world and the ethical considerations that come with it. Zack discusses how AI may empower low-resource bad actors, transform local communities, and influence future generations. The episode touches on a wide range of themes, including the meaning of life, AI’s role in global conflict, its effects on personal well-being, and the societal challenges it presents. This conversation isn’t just about AI; it’s about humanity’s ongoing exploration of fear, freedom, happiness, and the future.



In this episode:

00:00 Intro

00:21 AI's exponential growth and speed of change

02:03 The expanding scientific frontier

03:19 Roger Bannister effect and AI inspiration

04:00 Societal vs. technological thresholds

06:00 The danger of low-resource bad actors

09:00 Psychopaths, crime, and the role of policy

12:00 Freedom vs. security

14:45 The risk of bias and broken justice systems

18:00 The role of AI in decision-making

20:00 Why we tolerate human error but not machine error

20:36 Breaking the fear cycle in a negative attention economy

22:12 Tech-driven optimism

23:55 Finding Happiness

25:32 Community, nature, and meaningful human connection

27:00 The problem with the “more is more” mindset

28:30 Narratives, new media, and information overload

31:09 The Power of local change and good news

33:06 Gen Z, Gen Alpha, and the next wave of innovation



Connect with Zack:

Website: https://zackkass.com/

X: https://x.com/iamthezack

LinkedIn: https://www.linkedin.com/in/zackkass/

YouTube: https://www.youtube.com/@ZackKassAI


Visit our website: https://www.infotech.com/

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

4 months ago
35 minutes 27 seconds

Digital Disruption with Geoff Nielson
Pulitzer-Winning Journalist: This is Why Big Tech is Betting $300 Billion on AI

What role should government, regulation, and society play in the next chapter of Big Tech and AI?

Today on Digital Disruption, we’re joined by Pulitzer Prize–winning investigative reporter, Gary Rivlin.

Gary has been writing about technology since the mid-1990s and the rise of the internet. He is the author of AI Valley and 9 previous books, including Saving Main Street and Katrina: After the Flood. His work has appeared in the New York Times, Newsweek, Fortune, GQ, and Wired, among other publications. He is a two-time Gerald Loeb Award winner and former reporter for the New York Times. He lives in New York with his wife, theater director Daisy Walker, and two sons.

Gary sits down with Geoff to discuss the unchecked power of Big Tech and the evolving role of AI as a political force. From the myth of the benevolent tech founder to the real-world implications of surveillance, misinformation, and election interference, he discusses the dangers of unregulated tech influence on policy and the urgent need for greater transparency, ethical responsibility, and accountability in emerging technologies. This conversation highlights the role of venture capital in fueling today’s tech giants, what history tells us about the future of digital disruption, and whether regulation can truly govern AI and platform power.


In this episode:

00:00 Intro

02:45 The early promise of Silicon Valley

06:30 What changed in tech: From innovation to power

10:55 The role of venture capital in shaping Big Tech

15:40 Tech disruption vs. systemic control

20:15 The shift from public good to private gain

24:50 How Big Tech wields power over democracy

29:30 Can AI be regulated in time?

33:45 Lessons from tech history

38:20 Government’s role in tech oversight

43:05 Gary’s thoughts on tech accountability

47:30 Future risks of an unchecked tech industry

51:10 Hope for the next generation of innovators

55:00 Tech is at the center of politics

58:00 What should change?

1:09:00 Journalists using AI are more powerful


Connect with Gary:

Website: https://garyrivlin.com/

LinkedIn: https://www.linkedin.com/in/gary-rivlin/

Visit our website: https://www.infotech.com/

Follow us on YouTube: https://www.youtube.com/@InfoTechRG

4 months ago
1 hour 11 minutes 34 seconds
