Ethics in AI
Oxford University
25 episodes
9 months ago
Ethics in AI Seminar, presented by the Institute for Ethics in AI

Chair: Peter Millican, Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford University

What role should the technical AI community play in questions of AI ethics and the broader impacts of AI? Are technical researchers well placed to reason about the potential societal impacts of their work? What does it mean to conduct and publish AI research responsibly? What challenges does the AI community face in reaching consensus about responsibilities, and in adopting appropriate norms and governance mechanisms? How can we maximise the benefits while minimising the risks of increasingly advanced AI research?

AI and related technologies are having an increasing impact on the lives of individuals, as well as on society as a whole. Alongside many current and potential future benefits, there is an expanding catalogue of harms arising from deployed systems, raising questions about fairness and equality, privacy, worker exploitation, environmental impact, and more. In addition, a growing number of research publications have caused outcry over ethical concerns and potential negative societal impacts. In response, many are now asking whether the technical AI research community itself needs to do more to ensure ethical research conduct and beneficial outcomes from deployed systems.

But how should individual researchers, and the research community more broadly, respond to the existing and potential impacts of AI research and AI technology? Where should we draw the line between academic freedom and centring societal impact in research, or between openness and caution in publication? Are technical researchers well placed to grapple with issues of ethics and societal impact, or should these be left to other actors and disciplines? What can we learn from other high-stakes, ‘dual-use’ fields? In this seminar, Rosie Campbell, Carolyn Ashurst and Helena Webb discuss these and related issues, drawing on examples such as conference impact statements, release strategies for large language models, and responsible research and innovation in practice.

Speakers

Rosie Campbell leads the Safety-Critical AI program at the Partnership on AI. She is currently focused on responsible publication and deployment practices for increasingly advanced AI, and was a co-organizer of the NeurIPS workshop on Navigating the Broader Impacts of AI Research. Previously, Rosie was the Assistant Director of the Center for Human-Compatible AI (CHAI), a technical AI safety research group at UC Berkeley working towards provably beneficial AI. Before that, Rosie worked as a research engineer at BBC R&D, a multidisciplinary research lab based in the UK, where she worked on emerging technologies for media and broadcasting, including an award-winning project exploring the use of AI in media production. Rosie holds a Master’s in Computer Science and a Bachelor’s in Physics, and also has academic experience in Philosophy and Machine Learning. She co-founded a futurist community group in the UK to explore the social implications of emerging tech, and was recently named one of ‘100 Brilliant Women to Follow in AI Ethics’.

Dr Carolyn Ashurst is a Senior Research Scholar at the Future of Humanity Institute and a Research Affiliate with the Centre for the Governance of AI. Her research focuses on improving the societal impacts of machine learning and related technologies, including topics in AI governance, responsible machine learning, and algorithmic fairness. Her technical fairness research focuses on using causal models to formalise incentives for fairness-related behaviours. On the question of responsible research and publication, Carolyn recently co-authored A Guide to Writing the NeurIPS Impact Statement and Institutionalizing Ethics in AI through Broader Impact Requirements, and co-organised the NeurIPS workshop on Navigating the Broader Impacts of AI Research. Previously, she worked as a data and research scientist in various roles within government and finance. She holds an MMath and a PhD from the University of Bath.

Dr Helena Webb is a Senior Researcher in the Department of Computer Science at Oxford. She is an interdisciplinary researcher who specialises in projects that bridge social science and computational analysis. She is interested in the ways that users interact with technologies in different kinds of settings, and in how social action both shapes and is shaped by innovation. She works on projects that seek to identify mechanisms for the improved design, responsible development and effective regulation of technology. Whilst at Oxford she has worked on projects relating to, amongst others, harmful content on social media, algorithmic bias, resources in STEM education, and responsible robotics. Helena is the Research Lead at the newly formed Responsible Technology Institute in the Department of Computer Science, and co-convenes the Department’s student modules on Computers in Society and on Ethics and Responsible Innovation.

Chair: Professor Peter Millican is Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford. He has researched and published across a wide range of areas, including Early Modern Philosophy, Epistemology, Ethics, and the Philosophy of Language and of Religion, but has a particular focus on interdisciplinary connections with Computing and AI. He founded and oversees the Oxford undergraduate degree in Computer Science and Philosophy, which has been running since 2012.
Education
All content for Ethics in AI is the property of Oxford University and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by PodJoint in any way.
Privacy Is Power
Ethics in AI
1 hour 1 minute
5 years ago
Part of the Colloquium on AI Ethics series presented by the Institute for Ethics in AI. This event is also part of the Humanities Cultural Programme, one of the founding stones for the future Stephen A. Schwarzman Centre for the Humanities.

In conversation with the author, Dr Carissa Véliz (Associate Professor, Faculty of Philosophy and Institute for Ethics in AI, and Tutorial Fellow at Hertford College, University of Oxford). She will be joined by Sir Michael Tugendhat and Dr Stephanie Hare for a conversation about privacy, power, and democracy, chaired by Professor John Tasioulas (inaugural Director of the Institute for Ethics in AI, and Professor of Ethics and Legal Philosophy, Faculty of Philosophy, University of Oxford).

Summary

Privacy Is Power argues that people should protect their privacy because privacy is a kind of power. If we give too much of our data to corporations, the wealthy will rule. If we give too much personal data to governments, we risk sliding into authoritarianism. For democracy to be strong, the bulk of power needs to be with the citizenry, and whoever has the data will have the power. Privacy is not a personal preference; it is a political concern. Personal data is a toxic asset, and should be regulated as if it were a toxic substance, similar to asbestos. The trade in personal data has to end.

As surveillance creeps into every corner of our lives, Carissa Véliz exposes how our personal data is giving too much power to big tech and governments, why that matters, and what we can do about it. Have you ever been denied insurance, a loan, or a job? Have you had your credit card number stolen? Do you have to wait too long when you call customer service? Have you paid more for a product than one of your friends? Have you been harassed online? Have you noticed politics becoming more divisive in your country? You might have the data economy to thank for all that and more.

The moment you check your phone in the morning you are giving away your data. Before you’ve even switched off your alarm, a whole host of organisations have been alerted to when you woke up, where you slept, and with whom. Our phones, our TVs, even our washing machines are spies in our own homes. Without your permission, or even your awareness, tech companies are harvesting your location, your likes, your habits, your relationships, your fears and your medical issues, and sharing that data amongst themselves, as well as with governments and a multitude of data vultures. They’re not just selling your data; they’re selling the power to influence you and decide for you, even when you’ve explicitly asked them not to. And it’s not just you: it’s all your contacts too, all your fellow citizens. Privacy is as collective as it is personal.

Digital technology is stealing our personal data and with it our power to make free choices. To reclaim that power, and our democracy, we must take back control of our personal data. Surveillance is undermining equality: we are being treated differently on the basis of our data. What can we do? The stakes are high. We need to understand the power of data better. We need to start protecting our privacy. And we need regulation. We need to pressure our representatives. It is time to pull the plug on the surveillance economy.

To purchase a copy of ‘Privacy Is Power’, visit https://www.amazon.co.uk/Privacy-Power-Should-Take-Control/dp/1787634043

Biographies

Dr Carissa Véliz is an Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI, and a Tutorial Fellow in Philosophy at Hertford College. Carissa completed her DPhil in Philosophy at the University of Oxford, and was then a Research Fellow at the Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities at the University of Oxford. To find out more about Carissa’s work, visit her website: www.carissaveliz.com

Sir Michael Tugendhat was a Judge of the High Court of England and Wales from 2003 to 2014, after practising as a barrister from 1970. From 2010 to 2014 he was the Judge in charge of the Queen’s Bench Division media and civil lists. He was Honorary Professor of Law at the University of Leicester (2013-16) and is a trustee of JUSTICE. His publications include Liberty Intact: Human Rights in English Law (Oxford University Press, 2017), Fighting for Freedom? (Bright Blue, 2017), and The Law of Privacy and the Media (Oxford University Press, 1st edn 2002).

Dr Stephanie Hare is an independent researcher and broadcaster focused on technology, politics and history. Previously she worked as a Principal Director at Accenture Research, a strategist at Palantir, a Senior Analyst at Oxford Analytica, the Alistair Horne Visiting Fellow at St Antony’s College, Oxford, and a consultant at Accenture. She holds a PhD and an MSc from the London School of Economics and a BA in Liberal Arts and Sciences (French) from the University of Illinois at Urbana-Champaign. Her work can be found at harebrain.co

Professor John Tasioulas is the inaugural Director of the Institute for Ethics in AI, and Professor of Ethics and Legal Philosophy, Faculty of Philosophy, University of Oxford. From 2014 he was at The Dickson Poon School of Law, King’s College London, as the inaugural Chair of Politics, Philosophy & Law and Director of the Yeoh Tiong Lay Centre for Politics, Philosophy & Law. He has degrees in Law and Philosophy from the University of Melbourne and a DPhil in Philosophy from the University of Oxford, where he studied as a Rhodes Scholar. He was previously a Lecturer in Jurisprudence at the University of Glasgow, and Reader in Moral and Legal Philosophy at the University of Oxford, where he taught from 1998 to 2010. He has also acted as a consultant on human rights for the World Bank.