Artificial Intelligence: Whatever Happened To Informed Societal Consent?
Imagine this: you hail a self-driving taxi, your smart speaker adjusts the thermostat based on your mood, and your social media feed curates content it thinks you’ll like during the drive. Artificial intelligence (AI) is now woven into our everyday lives, offering undeniable convenience. But have we truly consented to this integration?
What is Informed Consent?
Informed consent is a fundamental ethical principle ensuring that individuals can make autonomous decisions based on adequate, relevant information.
It requires:
Disclosure: Providing clear, understandable information about a proposed action, including its purpose, risks, benefits, and alternatives.
Capacity: The individual must be mentally competent to understand the information and evaluate its implications.
Voluntariness: The decision must be free from coercion or undue influence.
In essence, informed consent empowers individuals to actively participate in decisions that affect them, whether in healthcare, in research, or, increasingly, in their interactions with AI technologies.
Informed consent has traditionally been a cornerstone of both ethics and law in medical treatment. Patients have the right to receive detailed information and ask questions about recommended treatments, enabling them to make thoughtful decisions about their care.
Now, with AI’s growing influence, the concept needs a societal upgrade.
Brian Patrick Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University in Santa Clara, CA, writes that when new technology poses risks of endangering, harming, or even killing people, there is no public consent form to sign. Despite this inconsistency, we accept those risks. But why?
According to Green, the concept of societal informed consent has been explored in engineering ethics for over a decade. Yet it remains absent from public discourse, where most people go about their lives assuming technology is largely beneficial and safe.
While most technology is indeed helpful and relatively low-risk, this is not always the case.
As users, we rarely receive clear explanations about how our data is collected, used, and potentially manipulated by AI algorithms. Terms of service agreements are often lengthy and dense, leaving us clicking “agree” without fully understanding the implications.
Experts like Green believe it’s time for a new conversation between governments and the public, one focused on technology, particularly artificial intelligence. Historically, we’ve extended technology the benefit of the doubt, treating it as “innocent until proven guilty.” A common mantra in Silicon Valley has been, “It’s better to ask forgiveness than permission.” But that approach no longer fits the world we live in today, Green argues.
The problem lies in the very nature of AI. Its decision-making processes are often opaque, shrouded in complex algorithms. This lack of transparency makes it difficult to assess potential risks, from algorithmic bias to privacy violations.
We are thus left with a nagging question: are we truly in control of our interactions with AI, or are we unwittingly surrendering that control to this relatively new technology?
The Developer’s Perspective: Balancing Innovation with Responsibility
As AI developers, we understand the immense potential of this technology to improve lives. We’re driven by the desire to create solutions for healthcare, education, and environmental sustainability. However, we also recognize the ethical concerns surrounding informed societal consent.
The “move fast and break things” mentality that once dominated the tech industry no longer serves us. The rapid development of AI necessitates a more cautious approach, one that prioritizes transparency and accountability. Developers must translate complex algorithms into readily understandable language for the public. We need to create mechanisms for users to understand how AI is used, and provide options for them to control their data and tailor their interactions with AI systems.
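To make that idea concrete, here is a minimal sketch of what a user-facing consent mechanism could look like in code. All of the names here (ConsentRecord, ConsentLedger, DataUse) are hypothetical illustrations, not taken from any real product or from the sources cited below; the sketch simply mirrors the three requirements outlined earlier: disclosure (a plain-language explanation is stored with every grant), voluntariness (nothing is permitted by default), and revocability (the most recent decision always wins).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class DataUse(Enum):
    """Hypothetical categories of AI data use a user can opt into individually."""
    PERSONALIZATION = "personalization"   # e.g., curated feeds
    VOICE_ANALYSIS = "voice_analysis"     # e.g., mood-based thermostat adjustment
    MODEL_TRAINING = "model_training"     # using the user's data to train models


@dataclass
class ConsentRecord:
    """One explicit, revocable grant: what was disclosed, what was decided, when."""
    user_id: str
    use: DataUse
    disclosure_text: str                  # plain-language explanation shown to the user
    granted: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class ConsentLedger:
    """Stores grants and answers 'may we do X for this user?' The default is no."""

    def __init__(self) -> None:
        self._records: list[ConsentRecord] = []

    def record(self, rec: ConsentRecord) -> None:
        self._records.append(rec)

    def is_permitted(self, user_id: str, use: DataUse) -> bool:
        # The most recent decision wins, so consent can be revoked at any time.
        for rec in reversed(self._records):
            if rec.user_id == user_id and rec.use == use:
                return rec.granted
        return False  # no record means no consent was ever given


if __name__ == "__main__":
    ledger = ConsentLedger()
    ledger.record(ConsentRecord(
        user_id="alice",
        use=DataUse.PERSONALIZATION,
        disclosure_text="We use your viewing history to rank your feed.",
        granted=True,
    ))
    print(ledger.is_permitted("alice", DataUse.PERSONALIZATION))  # True
    print(ledger.is_permitted("alice", DataUse.MODEL_TRAINING))   # False: never asked
```

The design choice worth noticing is the default: a use that was never disclosed and never agreed to is simply not permitted. That is the inverse of today’s “agree to everything or don’t use the service” model, and it is what separates genuine consent from a checkbox.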
This requires open dialogue between developers, policymakers, and the public. We need a framework for informed societal consent, one that fosters trust and empowers users to participate in shaping the future of AI.
A New Conversation: Moving Beyond “Innocent Until Proven Guilty”
The outdated notion of technology being “innocent until proven guilty” is no longer tenable. The potential benefits of AI are undeniable, but so are the risks. Just as we wouldn’t blindly accept a new medical treatment without understanding the side effects, we shouldn’t blindly accept AI integration without understanding its impact on our lives.
So, is there any possibility of moving beyond the “ask forgiveness, not permission” approach and engaging in a new conversation about AI? Developers must strive for transparency, users must demand clear explanations, and policymakers must create a framework for informed societal consent. Only then can we ensure that AI truly benefits everyone, not just a privileged few. The future of AI is not predetermined; it’s a story we write together, one informed consent at a time.
References:
https://tdwi.org/articles/2023/11/13/adv-all-should-ai-require-societal-informed-consent.aspx
https://code-medical-ethics.ama-assn.org/ethics-opinions/informed-consent
https://researchsupport.admin.ox.ac.uk/governance/ethics/resources/consent