Ethics in AI
Oxford University
25 episodes
9 months ago
Ethics in AI Seminar, presented by the Institute for Ethics in AI. Chair: Peter Millican, Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford University.

What role should the technical AI community play in questions of AI ethics and the broader impacts of AI? Are technical researchers well placed to reason about the potential societal impacts of their work? What does it mean to conduct and publish AI research responsibly? What challenges does the AI community face in reaching consensus about responsibilities and adopting appropriate norms and governance mechanisms? How can we maximise the benefits while minimising the risks of increasingly advanced AI research?

AI and related technologies are having an increasing impact on the lives of individuals, as well as on society as a whole. Alongside many current and potential future benefits, there has been an expanding catalogue of harms arising from deployed systems, raising questions about fairness and equality, privacy, worker exploitation, environmental impact, and more. There have also been increasing incidents of research publications causing an outcry over ethical concerns and potential negative societal impacts. In response, many are now asking whether the technical AI research community itself needs to do more to ensure ethical research conduct and beneficial outcomes from deployed systems. But how should individual researchers, and the research community more broadly, respond to the existing and potential impacts of AI research and AI technology? Where should we draw the line between academic freedom and centring societal impact in research, or between openness and caution in publication? Are technical researchers well placed to grapple with issues of ethics and societal impact, or should these be left to other actors and disciplines? What can we learn from other high-stakes, ‘dual-use’ fields?

In this seminar, Rosie Campbell, Carolyn Ashurst and Helena Webb discuss these and related issues, drawing on examples such as conference impact statements, release strategies for large language models, and responsible research innovation in practice.

Speakers

Rosie Campbell leads the Safety-Critical AI program at the Partnership on AI. She is currently focused on responsible publication and deployment practices for increasingly advanced AI, and was a co-organiser of the NeurIPS workshop on Navigating the Broader Impacts of AI Research. Previously, Rosie was Assistant Director of the Center for Human-Compatible AI (CHAI), a technical AI safety research group at UC Berkeley working towards provably beneficial AI. Before that, she worked as a research engineer at BBC R&D, a multidisciplinary research lab based in the UK, where she worked on emerging technologies for media and broadcasting, including an award-winning project exploring the use of AI in media production. Rosie holds a Master’s in Computer Science and a Bachelor’s in Physics, and also has academic experience in Philosophy and Machine Learning. She co-founded a futurist community group in the UK to explore the social implications of emerging tech, and was recently named one of ‘100 Brilliant Women to Follow in AI Ethics’.

Dr Carolyn Ashurst is a Senior Research Scholar at the Future of Humanity Institute and a Research Affiliate with the Centre for the Governance of AI. Her research focuses on improving the societal impacts of machine learning and related technologies, including topics in AI governance, responsible machine learning, and algorithmic fairness. Her technical fairness research focuses on using causal models to formalise incentives for fairness-related behaviours. On the question of responsible research and publication, Carolyn recently co-authored A Guide to Writing the NeurIPS Impact Statement and Institutionalizing Ethics in AI through Broader Impact Requirements, and co-organised the NeurIPS workshop on Navigating the Broader Impacts of AI Research. Previously, she worked as a data and research scientist in various roles within government and finance. She holds an MMath and a PhD from the University of Bath.

Dr Helena Webb is a Senior Researcher in the Department of Computer Science at Oxford. She is an interdisciplinary researcher who specialises in projects that bridge social science and computational analysis. She is interested in the ways users interact with technologies in different kinds of settings, and in how social action both shapes and is shaped by innovation. She works on projects that seek to identify mechanisms for the improved design, responsible development and effective regulation of technology. Whilst at Oxford she has worked on projects relating to, amongst others, harmful content on social media, algorithmic bias, resources in STEM education, and responsible robotics. Helena is the Research Lead at the newly formed Responsible Technology Institute in the Department of Computer Science, and co-convenes the Department’s student modules on Computers in Society and on Ethics and Responsible Innovation.

Chair

Professor Peter Millican is Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford. He has researched and published over a wide range, including Early Modern Philosophy, Epistemology, Ethics, and Philosophy of Language and of Religion, but has a particular focus on interdisciplinary connections with Computing and AI. He founded and oversees the Oxford undergraduate degree in Computer Science and Philosophy, which has been running since 2012.
Education
Does AI threaten Human Autonomy?
Ethics in AI
1 hour 38 minutes
4 years ago
This event is also part of the Humanities Cultural Programme, one of the founding stones for the future Stephen A. Schwarzman Centre for the Humanities.

How can AI systems influence our decision-making in ways that undermine autonomy? Do they do so in new or more problematic ways? To what extent can we outsource tasks to AI systems without losing our autonomy? Do we need a new conception of autonomy that incorporates considerations of the digital self?

Autonomy is a core value in contemporary Western societies: it is invoked across a range of debates in practical ethics, and it lies at the heart of liberal democratic theory. It is therefore no surprise that AI policy documents frequently champion the importance of protecting human autonomy. At first glance, this sort of protection may appear unnecessary; after all, in some ways AI systems can serve to significantly enhance our autonomy. They can give us more information upon which to base our choices, and they may allow us to achieve many of our goals more effectively and efficiently. However, it is becoming increasingly clear that AI systems do pose a number of threats to our autonomy. One (but not the only) example is that they enable the pervasive and covert use of manipulative and deceptive techniques that aim to target and exploit well-documented vulnerabilities in our decision-making. This raises the question of whether it is possible to harness the considerable power of AI to improve our lives in a manner compatible with respect for autonomy, and whether we need to reconceptualise both the nature and value of autonomy in the digital age.

In this session, Carina Prunkl, Jessica Morley and Jonathan Pugh engage with these general questions, using mHealth tools as an illuminating case study for a debate about the various ways in which an AI system can both enhance and hinder our autonomy.

Speakers

Dr Carina Prunkl is a Research Fellow at the Institute for Ethics in AI, University of Oxford, where she is one of the inaugural team, and a Research Affiliate at the Centre for the Governance of AI, Future of Humanity Institute. Carina works on the ethics and governance of AI, with a particular focus on autonomy, and has both publicly advocated and published on the importance of accountability mechanisms for AI.

Jessica Morley is Policy Lead at Oxford’s DataLab, where she leads its engagement work to encourage the use of modern computational analytics in the NHS and to ensure public trust in health data records, notably those developed in response to the COVID-19 pandemic. Jess is also pursuing a related doctorate at the Oxford Internet Institute’s Digital Ethics Lab. As Technical Advisor to the Department of Health and Social Care, she co-authored the NHS Code of Conduct for data-driven technologies.

Dr Jonathan Pugh is a Senior Research Fellow at the Oxford Uehiro Centre for Practical Ethics, University of Oxford, researching how far AI ethics should incorporate traditional conceptions of autonomy and “moral status”. He recently led a three-year project on the ethics of experimental Deep Brain Stimulation and “neuro-hacking”, and in 2020 published Autonomy, Rationality and Contemporary Bioethics (OUP). He has written on a wide range of ethical topics, but has a particular interest in issues concerning personal autonomy and informed consent.

Chair

Professor Peter Millican is Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford. He has researched and published over a wide range, including Early Modern Philosophy, Epistemology, Ethics, and Philosophy of Language and of Religion, but has a particular focus on interdisciplinary connections with Computing and AI. He founded and oversees the Oxford undergraduate degree in Computer Science and Philosophy, which has been running since 2012, and last year he instituted this ongoing series of Ethics in AI Seminars.