Ethics in AI
Oxford University
27 episodes
9 months ago
Ethics in AI Seminar, presented by the Institute for Ethics in AI. Chair: Peter Millican, Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford University.

What role should the technical AI community play in questions of AI ethics and those concerning the broader impacts of AI? Are technical researchers well placed to reason about the potential societal impacts of their work? What does it mean to conduct and publish AI research responsibly? What challenges does the AI community face in reaching consensus about responsibilities, and in adopting appropriate norms and governance mechanisms? How can we maximise the benefits while minimising the risks of increasingly advanced AI research?

AI and related technologies are having an increasing impact on the lives of individuals, as well as on society as a whole. Alongside many current and potential future benefits, there has been an expanding catalogue of harms arising from deployed systems, raising questions about fairness and equality, privacy, worker exploitation, environmental impact, and more. There have also been increasing incidents of research publications causing an outcry over ethical concerns and potential negative societal impacts. In response, many are now asking whether the technical AI research community itself needs to do more to ensure ethical research conduct and beneficial outcomes from deployed systems. But how should individual researchers, and the research community more broadly, respond to the existing and potential impacts of AI research and AI technology? Where should we draw the line between academic freedom and centring societal impact in research, or between openness and caution in publication? Are technical researchers well placed to grapple with issues of ethics and societal impact, or should these be left to other actors and disciplines? What can we learn from other high-stakes, 'dual-use' fields?

In this seminar, Rosie Campbell, Carolyn Ashurst and Helena Webb discuss these and related issues, drawing on examples such as conference impact statements, release strategies for large language models, and responsible research innovation in practice.

Speakers

Rosie Campbell leads the Safety-Critical AI program at the Partnership on AI. She is currently focused on responsible publication and deployment practices for increasingly advanced AI, and was a co-organiser of the NeurIPS workshop on Navigating the Broader Impacts of AI Research. Previously, Rosie was the Assistant Director of the Center for Human-Compatible AI (CHAI), a technical AI safety research group at UC Berkeley working towards provably beneficial AI. Before that, she worked as a research engineer at BBC R&D, a multidisciplinary research lab based in the UK, where she worked on emerging technologies for media and broadcasting, including an award-winning project exploring the use of AI in media production. Rosie holds a Master's in Computer Science and a Bachelor's in Physics, and also has academic experience in Philosophy and Machine Learning. She co-founded a futurist community group in the UK to explore the social implications of emerging tech, and was recently named one of '100 Brilliant Women to follow in AI Ethics'.

Dr Carolyn Ashurst is a Senior Research Scholar at the Future of Humanity Institute and a Research Affiliate with the Centre for the Governance of AI. Her research focuses on improving the societal impacts of machine learning and related technologies, including topics in AI governance, responsible machine learning, and algorithmic fairness. Her technical fairness research focuses on using causal models to formalise incentives for fairness-related behaviours. On the question of responsible research and publication, Carolyn recently co-authored A Guide to Writing the NeurIPS Impact Statement and Institutionalizing Ethics in AI through Broader Impact Requirements, and co-organised the NeurIPS workshop on Navigating the Broader Impacts of AI Research. Previously, she worked as a data and research scientist in various roles within government and finance. She holds an MMath and a PhD from the University of Bath.

Dr Helena Webb is a Senior Researcher in the Department of Computer Science at Oxford. She is an interdisciplinary researcher who specialises in projects bridging social science and computational analysis. She is interested in the ways users interact with technologies in different kinds of settings, and in how social action both shapes and is shaped by innovation. She works on projects that seek to identify mechanisms for the improved design, responsible development and effective regulation of technology. Whilst at Oxford she has worked on projects relating to, amongst others, harmful content on social media, algorithmic bias, resources in STEM education, and responsible robotics. Helena is the Research Lead at the newly formed Responsible Technology Institute in the Department of Computer Science, and co-convenes the Department's student modules on Computers in Society and on Ethics and Responsible Innovation.

Chair

Professor Peter Millican is Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford. He has researched and published over a wide range, including Early Modern Philosophy, Epistemology, Ethics, and Philosophy of Language and of Religion, but has a particular focus on interdisciplinary connections with Computing and AI. He founded and oversees the Oxford undergraduate degree in Computer Science and Philosophy, which has been running since 2012.
Education
All content for Ethics in AI is the property of Oxford University and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
AI in a Democratic Culture - Presented by the Institute for Ethics in AI
Ethics in AI
1 hour 30 minutes
4 years ago
Launch of the Institute for Ethics in AI, with Sir Nigel Shadbolt, Joshua Cohen and Hélène Landemore. Part of the Colloquium on AI Ethics series presented by the Institute for Ethics in AI. Introduced by the Vice-Chancellor, Professor Louise Richardson, and chaired by Professor John Tasioulas. Speakers: Professor Joshua Cohen (Apple University), Professor Hélène Landemore (Yale University), and Professor Sir Nigel Shadbolt (Computer Science, Oxford).

Speakers

Professor Sir Nigel Shadbolt is Principal of Jesus College, Oxford, and a Professor of Computer Science at the University of Oxford. He has researched and published on topics in artificial intelligence, cognitive science and computational neuroscience. In 2009 he was appointed, along with Sir Tim Berners-Lee, as Information Advisor to the UK Government; this work led to the release of many thousands of public sector datasets as open data. In 2010 he was appointed by the Coalition Government to the UK Public Sector Transparency Board, which oversaw the continued release of Government open data, and he continues to advise Government in a number of roles. Professor Shadbolt is Chairman and co-founder of the Open Data Institute (ODI), based in Shoreditch, London, which specialises in the exploitation of open data to support innovation, training and research in both the UK and internationally.

Professor Joshua Cohen is a political philosopher. He has written on issues of democratic theory, freedom of expression, religious freedom, political equality, democracy and digital technology, good jobs, and global justice. His books include On Democracy; Democracy and Associations; Philosophy, Politics, Democracy; Rousseau: A Free Community of Equals; and The Arc of the Moral Universe and Other Essays. He is co-editor of the Norton Introduction to Philosophy. Cohen taught at MIT (1977-2005) and Stanford (2005-2014); he is currently on the faculty at Apple University and is Distinguished Senior Fellow in Law, Philosophy, and Political Science at Berkeley. He held the Romanell-Phi Beta Kappa Professorship in 2002-3, was Tanner Lecturer at UC Berkeley in 2007, and gave the Comte Lectures at LSE in 2012. Since 1991, he has been editor of Boston Review.

Professor Hélène Landemore is Associate Professor of Political Science (with tenure) at Yale University. Her research and teaching interests include democratic theory, political epistemology, theories of justice, the philosophy of social sciences (particularly economics), constitutional processes and theories, and workplace democracy. Hélène is the author of Hume (Presses Universitaires de France, 2004), a historical and philosophical investigation of David Hume's theory of decision-making; Democratic Reason (Princeton University Press, 2013; Spitz Prize 2015), an epistemic defense of democracy; and Open Democracy (Princeton University Press, 2020), a vision for a new, more open form of democracy based on non-electoral forms of representation, including representation based on random selection.

Chair

Professor John Tasioulas is the inaugural Director of the Institute for Ethics in AI, and Professor of Ethics and Legal Philosophy in the Faculty of Philosophy, University of Oxford. From 2014 he was at The Dickson Poon School of Law, King's College London, as the inaugural Chair of Politics, Philosophy and Law and Director of the Yeoh Tiong Lay Centre for Politics, Philosophy and Law. He has degrees in Law and Philosophy from the University of Melbourne, and a DPhil in Philosophy from the University of Oxford, where he studied as a Rhodes Scholar. He was previously a Lecturer in Jurisprudence at the University of Glasgow, and Reader in Moral and Legal Philosophy at the University of Oxford, where he taught from 1998 to 2010. He has also acted as a consultant on human rights for the World Bank.