Ethics in AI Seminar - presented by the Institute for Ethics in AI
Chair: Peter Millican, Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford University
What role should the technical AI community play in questions of AI ethics and those concerning the broader impacts of AI? Are technical researchers well placed to reason about the potential societal impacts of their work?
What does it mean to conduct and publish AI research responsibly?
What challenges does the AI community face in reaching consensus about responsibilities, and adopting appropriate norms and governance mechanisms?
How can we maximise the benefits while minimising the risks of increasingly advanced AI research?
AI and related technologies are having an increasing impact on the lives of individuals, as well as on society as a whole. Alongside many current and potential future benefits, there is an expanding catalogue of harms arising from deployed systems, raising questions about fairness and equality, privacy, worker exploitation, environmental impact, and more. In addition, a growing number of research publications have provoked outcry over ethical concerns and potential negative societal impacts. In response, many are now asking whether the technical AI research community itself needs to do more to ensure ethical research conduct, and to ensure beneficial outcomes from deployed systems. But how should individual researchers, and the research community more broadly, respond to the existing and potential impacts of AI research and AI technology? Where should we draw the line between academic freedom and centering societal impact in research, or between openness and caution in publication? Are technical researchers well placed to grapple with issues of ethics and societal impact, or should these be left to other actors and disciplines? What can we learn from other high-stakes, ‘dual-use’ fields? In this seminar, Rosie Campbell, Carolyn Ashurst and Helena Webb will discuss these and related issues, drawing on examples such as conference impact statements, release strategies for large language models, and responsible research and innovation in practice.
Speakers
Rosie Campbell leads the Safety-Critical AI program at the Partnership on AI. She is currently focused on responsible publication and deployment practices for increasingly advanced AI, and was a co-organizer of the NeurIPS workshop on Navigating the Broader Impacts of AI Research. Previously, Rosie was the Assistant Director of the Center for Human-Compatible AI (CHAI), a technical AI safety research group at UC Berkeley working towards provably beneficial AI. Before that, Rosie worked as a research engineer at BBC R&D, a multidisciplinary research lab based in the UK. There, she worked on emerging technologies for media and broadcasting, including an award-winning project exploring the use of AI in media production. Rosie holds a Master’s in Computer Science and a Bachelor’s in Physics, and also has academic experience in Philosophy and Machine Learning. She co-founded a futurist community group in the UK to explore the social implications of emerging tech, and was recently named one of ‘100 Brilliant Women to Follow in AI Ethics’.
Dr Carolyn Ashurst
Carolyn is a Senior Research Scholar at the Future of Humanity Institute and a Research Affiliate with the Centre for the Governance of AI. Her research focuses on improving the societal impacts of machine learning and related technologies, including topics in AI governance, responsible machine learning, and algorithmic fairness. Her technical fairness research focuses on using causal models to formalise incentives for fairness-related behaviours. On the question of responsible research and publication, Carolyn recently co-authored A Guide to Writing the NeurIPS Impact Statement and Institutionalizing Ethics in AI through Broader Impact Requirements, and co-organised the NeurIPS workshop on Navigating the Broader Impacts of AI Research. Previously, she worked as a data and research scientist in various roles within government and finance. She holds an MMath and a PhD from the University of Bath.
Dr Helena Webb
Helena is a Senior Researcher in the Department of Computer Science at Oxford. She is an interdisciplinary researcher who specialises in projects that bridge social science and computational analysis. She is interested in the ways that users interact with technologies in different kinds of settings, and in how social action both shapes and is shaped by innovation. She works on projects that seek to identify mechanisms for the improved design, responsible development and effective regulation of technology. Whilst at Oxford she has worked on projects relating to, amongst other topics, harmful content on social media, algorithmic bias, resources in STEM education, and responsible robotics. Helena is the Research Lead at the newly formed Responsible Technology Institute in the Department of Computer Science. She also co-convenes student modules in the Department on Computers in Society and on Ethics and Responsible Innovation.
Chair
Professor Peter Millican
Peter is Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford. He has researched and published across a wide range of areas, including Early Modern Philosophy, Epistemology, Ethics, and the Philosophy of Language and of Religion, but has a particular focus on interdisciplinary connections with Computing and AI. He founded and oversees the Oxford undergraduate degree in Computer Science and Philosophy, which has been running since 2012.
Ethics in AI Colloquium with Adrienne Mayor: Gods and Robots: Myths, Machines, and Ancient Dreams of Technology
Part of the Colloquium on AI Ethics series presented by the Institute for Ethics in AI. This event is also part of the Humanities Cultural Programme, one of the founding stones for the future Stephen A. Schwarzman Centre for the Humanities. What, if anything, can the ancient Greeks teach us about robots and AI? Perhaps the answer is nothing, or at least nothing so straightforward as a correct 'solution' to the problems thrown up by robots and AI, but instead a way of thinking about them. Join us for a fascinating presentation from Adrienne Mayor, Stanford University, who will discuss her latest book, Gods and Robots: Myths, Machines, and Ancient Dreams of Technology. The book investigates how the Greeks imagined automatons, replicants, and Artificial Intelligence in myths, and how they later designed actual self-moving devices and robots.
Adrienne Mayor, a research scholar in the Classics Department and the History and Philosophy of Science program at Stanford University since 2006, is a folklorist and historian of ancient science who investigates natural knowledge contained in pre-scientific myths and oral traditions. Her research looks at ancient "folk science" precursors, alternatives, and parallels to modern scientific methods. She was a Berggruen Fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford, in 2018-2019. Mayor's latest book, Gods and Robots: Myths, Machines, and Ancient Dreams of Technology, investigates how the Greeks imagined automatons, replicants, and Artificial Intelligence in myths and later designed actual self-moving devices and robots. Her 2014 book, The Amazons: Lives and Legends of Warrior Women across the Ancient World, analyzes the historical and archaeological evidence underlying myths and tales of warlike women, and won the Sarasvati Prize for Women in Mythology. Her biography of King Mithradates VI of Pontus, The Poison King, won the Gold Medal for Biography at the Independent Publishers' Book Awards 2010 and was a 2009 National Book Award Finalist. Mayor's other books include The First Fossil Hunters (rev. ed. 2011); Fossil Legends of the First Americans (2005); and Greek Fire, Poison Arrows, and Scorpion Bombs: Biological and Chemical Warfare in the Ancient World (2009; rev. ed. forthcoming).
Commentators:
Shadi Bartsch-Zimmer is the Helen A. Regenstein Distinguished Service Professor of Classics and the Program in Gender Studies at the University of Chicago. Professor Bartsch-Zimmer works on Roman imperial literature, the history of rhetoric and philosophy, and the reception of the western classical tradition in contemporary China. She is the author of five books on the ancient novel, Neronian literature, political theatricality, and Stoic philosophy, the most recent of which is Persius: A Study in Food, Philosophy, and the Figural (winner of the 2016 Goodwin Award of Merit). She has also edited or co-edited seven wide-ranging essay collections (two of them Cambridge Companions) and the “Seneca in Translation” series from the University of Chicago. Her new translation of Vergil’s Aeneid is forthcoming from Random House in 2020, to be followed the year after by a new monograph on the contemporary Chinese reception of ancient Greek political philosophy. She has been a Guggenheim fellow, edits the journal KNOW, and has held visiting scholar positions in St Andrews, Taipei, and Rome. Since the 2015 academic year, she has led a university-wide initiative to explore the historical and social contexts in which knowledge is created, legitimized, and circulated.
Armand D'Angour is Professor of Classical Languages and Literature at the University of Oxford. Professor D'Angour pursued careers as a cellist and businessman before becoming a Tutor in Classics at Jesus College in 2000. In addition to his monograph The Greeks and the New (CUP 2011), he is the author of articles and chapters on the language, literature, psychology and culture of ancient Greece. In 2013-14 he was awarded a British Academy Fellowship to undertake research into ancient Greek music, and in 2017 he was awarded a Vice-Chancellor’s Prize for Public Engagement with Research. Professor D'Angour has since co-edited, with Tom Phillips, Music, Text, and Culture in Ancient Greece (OUP 2018), and, in addition to numerous broadcasts on radio and television, has produced a short film on YouTube (https://www.youtube.com/watch?v=4hOK7bU0S1Y) that has reached over 650,000 views since its publication in December 2017. His book Socrates in Love: The Making of a Philosopher was published in April 2019, and How to Innovate: An Ancient Guide to Creating Change is due from Princeton University Press in 2021.
Chaired by John Tasioulas, the inaugural Director of the Institute for Ethics in AI, and Professor of Ethics and Legal Philosophy, Faculty of Philosophy, University of Oxford. Professor Tasioulas was at The Dickson Poon School of Law, King's College London, from 2014, as the inaugural Chair of Politics, Philosophy and Law and Director of the Yeoh Tiong Lay Centre for Politics, Philosophy and Law. He has degrees in Law and Philosophy from the University of Melbourne, and a DPhil in Philosophy from the University of Oxford, where he studied as a Rhodes Scholar. He was previously a Lecturer in Jurisprudence at the University of Glasgow, and Reader in Moral and Legal Philosophy at the University of Oxford, where he taught from 1998 to 2010. He has also acted as a consultant on human rights for the World Bank.