
As artificial intelligence takes on increasingly autonomous roles, the question of legal accountability becomes more urgent. Who is responsible when AI-driven systems cause harm or make biased decisions? In this episode of “Trustworthy AI: current areas of research and challenges,” our experts explore the complex landscape of AI regulation and certification.
Join Willy Fabritius, Global Head of Strategy & Business Development, Information Security at SGS, Tomislav Nad, Lead Innovation Technologist, and Kerstin Waxnegger, Legal Expert and Data Protection Officer at Know-Center GmbH, as they unpack key global regulatory initiatives, including a comparative analysis of the EU AI Act.
Gain clarity on how emerging frameworks are shaping the future of AI governance and what certification means for organizations striving to build trustworthy AI systems.
Whether you are a professional navigating AI compliance or simply interested in the evolving relationship between law and technology, this episode offers valuable insights to deepen your understanding of AI accountability in today’s fast-changing environment.
About our “Trustworthy AI: current areas of research and challenges” series:
The need for trustworthy artificial intelligence systems is recognized by many organizations, from governments to industry and academia. As AI systems become more widely used by both organizations and individuals, establishing trust in them is essential. To that end, numerous white papers, proposals, and standards have been published, with more still in development, to educate organizations on the need for and uses of trustworthy AI systems. Join us for our series as our experts discuss a variety of topics related to building trust in and understanding of AI systems.