ARTIFICIAL INTELLIGENCE AND ETHICAL CHALLENGES: WHO IS RESPONSIBLE WHEN ERRORS OCCUR?

National Competition for the Promotion of Professional Ethics – Co-organised by Rotary and the Conférence des Grandes Écoles

Essay ranked 3rd nationwide and 2nd at the regional level (Rotary District 1760)


Preamble

This essay looks at the limits of current regulations in the face of the meteoric rise of artificial intelligence (AI). Far from infallible, AI can generate algorithmic bias, lead to automated judicial errors or inappropriately influence medical decisions. These failures remind us that AI is not neutral; it reflects the choices and intentions of those who design it.

Several avenues are therefore being considered: strengthening algorithmic transparency, setting up independent audits, rethinking the allocation of responsibility and adapting the legal framework to the specific features of AI.

Approach

The rapid integration of AI into critical sectors such as healthcare, transport and justice raises major ethical questions, especially when mistakes occur. Unlike a human decision—often contextualised and explainable—AI decisions can be opaque, hard to explain and tricky when it comes to assigning responsibility.

This issue caught my attention in particular. As a future engineer, I will be called upon to design, operate or oversee such systems. While AI offers tremendous prospects, it also raises unprecedented dilemmas: how can we ensure that its decisions remain fair and safe? Who should be held liable when an algorithm causes harm? Existing legislation is struggling to keep pace with technological advances, making these questions all the more pressing.

Introduction

An autonomous car is driving safely along a road. Suddenly, the unexpected: a pedestrian steps onto the carriageway, and in the only escape lane a group of cyclists is coming in the opposite direction. In a fraction of a second the algorithm must decide: save the pedestrian and collide with the cyclists, or avoid the cyclists and strike the pedestrian? A dramatic decision—and above all, a programmed one. So who is responsible? The vehicle manufacturer? The engineer who wrote the algorithm? The owner who blindly trusts the technology?

Far from a new concept, artificial intelligence traces its roots back to the 1950s, when researchers such as Alan Turing and John McCarthy laid the foundations for machines capable of simulating human intelligence. After decades of uneven progress, breakthroughs in machine learning and computing power have enabled AI to spread massively across many sectors. Yet the more it integrates into our lives, the more ethical and legal dilemmas emerge: algorithmic bias, automated judicial errors, incorrect medical diagnoses… Decisions once made by humans are now entrusted to opaque systems whose mistakes can have major consequences.

In a world where AI no longer merely assists humans but makes decisions in their stead, who should bear the blame when it errs? AI’s lack of moral conscience forces us to rethink our notions of responsibility and ethics.

First, we will see how AI—though promising—remains an opaque entity whose decisions can be unpredictable. Next, we will analyse the difficulties of assigning clear liability among designers, companies and users. Finally, we will explore ethical and legal safeguards that can limit these risks and ensure the responsible development of artificial intelligence.

1 - Artificial intelligence: promises and illusions

Artificial intelligence is no longer a futuristic chimera. It is everywhere, weaving its way into our daily lives with sometimes disconcerting ease. From facial recognition on our smartphones to algorithms that suggest the next series to binge-watch, AI is redefining how we produce, decide and interact. Yet this technological revolution oscillates between the promise of a more efficient future and the fear of a dehumanised world.

1.1 A technological revolution between fascination and caution

The rise of AI is driven by spectacular advances in computing power, learning algorithms and access to data. Its application in sectors such as healthcare, finance, transport and industry brings remarkable efficiency gains. Who would have thought, only a few decades ago, that AI systems would outperform radiologists at spotting anomalies invisible to the human eye? Or that autonomous cars could drive an entire route without any human intervention?

However, this enthusiasm should not blind us. Impressive as it is, AI remains a “black box” whose decisions are often difficult to explain. And this opacity poses a serious problem of trust—and above all of tracking responsibility.

Another critical issue lies in training-data bias. AI is neither neutral nor objective: it reflects the trends and prejudices contained in the information it is fed. A striking example is Amazon’s recruitment algorithm which, trained on predominantly male data, ended up discriminating against female candidates [3]. This kind of failure reminds us that AI, far from being an autonomous and impartial entity, remains the product of the human choices that shape it.
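
To make this kind of bias concrete, here is a minimal, self-contained sketch in Python. The automated CV screen, the groups and every number are invented for illustration; the point is simply to show how comparing selection rates yields a disparate-impact ratio, for which the “four-fifths rule” is a common heuristic.

```python
# Minimal sketch: measuring the selection-rate disparity ("disparate impact")
# produced by a hypothetical automated CV screen. All numbers are invented.

def selection_rate(decisions):
    """Fraction of candidates marked as selected (1) in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

# Invented outcomes of the screen, grouped by gender (1 = shortlisted).
outcomes = {
    "female": [1, 0, 0, 0, 1, 0, 0, 0, 0, 0],  # 2 shortlisted out of 10
    "male":   [1, 1, 0, 1, 0, 1, 0, 1, 0, 1],  # 6 shortlisted out of 10
}

rates = {group: selection_rate(d) for group, d in outcomes.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                   # {'female': 0.2, 'male': 0.6}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, well below the 0.8 rule of thumb
```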

1.2 When the machine is wrong: unforeseen failures

To err is human… but it is also algorithmic. Unlike human intuition, machines mechanically apply the rules imposed on them, devoid of any moral conscience. This algorithmic rigidity can lead to catastrophic decisions, and the question of responsibility then becomes central to the debate.

Consider the justice system. In the United States, the COMPAS algorithm used to predict recidivism risk was criticised for racial bias. Some groups found themselves systematically disadvantaged, not through malicious intent but because the algorithm had absorbed historically biased data. Who should be held accountable? The designers? The judges who rely on these recommendations?

In medicine, failures can be just as worrying. A poorly calibrated diagnostic AI can lead to serious errors, particularly if its training data do not represent patient diversity. Instead of reducing health inequalities, such models risk amplifying them, reinforcing a vicious circle that is hard to break.
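
One way to make the kind of bias attributed to COMPAS tangible is to compare error rates across groups. The sketch below uses invented labels and predictions, not real data: it measures, for two hypothetical groups, how often people who did not reoffend were nonetheless flagged as high risk.

```python
# Sketch of an error-rate disparity: among people who did NOT reoffend,
# how often is each group flagged high risk? All data below are invented.

def false_positive_rate(y_true, y_pred):
    """Fraction of true negatives (y_true == 0) that were flagged positive (1)."""
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    return sum(negatives) / len(negatives)

# Hypothetical ground truth (1 = reoffended) and algorithmic flags (1 = high risk).
group_a = {"y_true": [0, 0, 0, 0, 1, 0, 1, 0], "y_pred": [1, 1, 0, 1, 1, 0, 1, 1]}
group_b = {"y_true": [0, 0, 0, 0, 1, 0, 1, 0], "y_pred": [0, 0, 1, 0, 1, 0, 1, 0]}

fpr_a = false_positive_rate(**group_a)
fpr_b = false_positive_rate(**group_b)
print(f"false-positive rate, group A: {fpr_a:.2f}")  # 0.67
print(f"false-positive rate, group B: {fpr_b:.2f}")  # 0.17
```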

Model explainability, rigorous result validation, tighter human supervision: these precautions are essential to prevent AI from becoming a source of injustice rather than a tool for progress. Legally, our current laws struggle to frame these new forms of intelligence that operate autonomously. Should we envisage shared liability among developers, companies and users? Or create a brand-new regulatory framework for machine decisions?

Philosopher Adela Cortina [2] warns of a dangerous illusion: believing that AI can solve everything for us. Behind every algorithm lie human choices, implicit values and decisions that demand thorough ethical reflection. For his part, Martin Gibert suggests teaching machines a form of morality based on ethical dilemmas such as the trolley problem [5]. But how far can we go? Can we really delegate our moral choices to entities without consciousness?

2 - Responsibility in search of clarity

The meteoric rise of artificial intelligence is accompanied by legal vagueness that makes the question of liability all the more critical. When an error occurs, who must answer for it? The developer who coded the algorithm? The company that deploys it? The user who blindly trusts it? Or even the regulator that approved its use? The lack of a clear framework breeds mistrust and leaves room for abuse. Yet solid oversight is indispensable: an AI that makes autonomous decisions—sometimes with major consequences—cannot evolve without precise rules. But faced with the vertiginous speed of technological progress, the law remains a step behind on-the-ground reality.

2.1 A fragmented ecosystem of responsibilities

Artificial intelligence relies on a web of actors who each influence its development and use. Understanding this chain is a first step toward pinpointing liability when malfunctions occur.

Designers and developers are on the front line: they program the algorithms, train the models and optimise their performance. Despite their expertise, anticipating every possible failure remains a challenge. An algorithm can evolve unexpectedly, adapt to new data and sometimes produce biased or erroneous results. Who then should bear responsibility for these errors? Some argue that strict evaluation protocols are enough to limit such risks, yet reality shows that bias often emerges well after initial development.

Companies integrating AI into their services play an equally important role. They define the uses, but often without mastering all the technical workings. Must a bank using a credit-scoring algorithm understand its internal mechanisms? In healthcare, can a hospital be held liable if a diagnostic AI leads to mistreatment? The lack of harmonised standards for these companies complicates liability management even further. Some firms do adopt ethical charters, but implementation varies across jurisdictions and economic interests [1].

The end-user is also concerned. Whether a doctor following an algorithmic recommendation or an employee applying an automated decision, their role in interpreting and using results matters. Can we demand that they systematically question the AI’s suggestions? Some experts advocate compulsory training in reading and interpreting algorithmic outputs, something some organisations already require [8].

Finally, a more speculative yet increasingly discussed question concerns recognising a form of liability vested in AI itself. Certain systems—especially those based on deep learning—can evolve autonomously after deployment. If an AI commits an error or acts unpredictably, does it make sense to consider it a legally responsible entity? This hypothesis raises fundamental philosophical and legal issues, for recognising AI’s own liability would require a complete overhaul of existing legislative frameworks.

The lack of consensus leaves us with situations in which liability is often diluted among the various actors, each trying to avoid bearing the consequences.

2.2 Regulation lagging behind technology

Artificial intelligence evolves at a pace that leaves lawmakers scrambling. Designed for more static technologies, current regulations struggle to encompass systems capable of autonomous learning and real-time adaptation. How can we legislate on an AI whose decisions are not fully predictable? How can we assign liability when those decisions rely on thousands—if not millions—of data interactions? In this context, several countries are working on specific frameworks, but implementation often remains patchy [4].

Today we do our best to adapt existing legal frameworks to new realities. For example, the European Directive on defective product liability works well for traditional machines but becomes shaky with an evolving algorithm that changes its behaviour over time. Should we create a new legal status for these autonomous systems? The European Union is attempting to respond with its proposed Artificial Intelligence Act, which aims to regulate high-risk AI. Yet these initiatives are slow to materialise and risk being outdated upon adoption. The debate focuses in particular on classifying AIs and imposing mandatory certifications, a sticking point between regulators and technology firms.

In the United States, the approach is more fragmented. Some jurisdictions—California, for instance—have restricted facial-recognition use, while others have no specific regulation. The absence of a federal framework creates disparities and uncertainties for companies and users, hampering the adoption of uniform rules. Other countries, such as China, adopt a different approach: AI is tightly regulated in certain strategic domains but promoted in others.

The rise of self-learning models calls traditional liability frameworks into question and demands legal reform to better regulate use of these systems. Without appropriate overhaul, the risks of algorithmic abuse and injustice will continue to spread, eroding public trust in emerging technologies.

3 - Building an ethical future for AI

Artificial intelligence is taking an ever-larger place in our lives, influencing decisions as crucial as granting a loan, delivering a medical diagnosis or even handing down a court sentence. Yet its rapid development still largely outpaces the legislative and ethical guardrails meant to govern its use. If AI is a formidable driver of innovation, it can also generate errors, discrimination or abuse when poorly overseen. The question is no longer whether AI should be regulated but how to do so effectively—without stifling progress and while protecting citizens.

Two major levers emerge: more responsible governance by companies that exploit these technologies, and a legal framework redesigned to address AI’s specificities.

3.1 Responsible governance for companies

Companies that develop and use artificial intelligence can no longer innovate without regard for consequences. They bear direct responsibility for how these technologies are designed, tested and deployed. To let AI spread unchecked is to invite biased decisions, serious errors and a widespread loss of trust.

Algorithmic transparency is an indispensable first step. Too often, AI decisions are perceived as black boxes, incomprehensible to users and even to regulators. Requiring companies to disclose their models’ decision-making criteria would help prevent certain abuses, notably algorithmic discrimination. This transparency does not necessarily mean publishing the entire source code, but at least explaining the logic underlying the AI’s choices.
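
As an illustration of what such an explanation might look like, the hypothetical sketch below reports how each factor of a simple linear credit score contributed to one applicant's decision. The factors, weights and threshold are invented for the example; the point is that an intelligible justification can be given without publishing the system itself.

```python
# Hypothetical illustration of an explanation that does not require publishing
# source code: per-factor contributions to a simple linear credit score.
# Weights, factors and the applicant's values are invented.

weights = {"income": 0.4, "debt_ratio": -0.6, "payment_incidents": -0.8, "seniority": 0.3}
bias = 0.1

applicant = {"income": 0.7, "debt_ratio": 0.5, "payment_incidents": 1.0, "seniority": 0.2}

contributions = {factor: weights[factor] * applicant[factor] for factor in weights}
score = bias + sum(contributions.values())

print(f"score = {score:.2f} (credit granted if score >= 0)")
for factor, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {factor:>18}: {value:+.2f}")
# The report shows the refusal is driven mainly by 'payment_incidents' (-0.80):
# the kind of intelligible justification the text argues for.
```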

Another essential lever is independent audits and certifications. Before deploying AI in critical sectors—healthcare, finance, safety—systematic checks should ensure these systems meet strict ethical and technical standards. Anti-bias tests, robustness checks, simulations of extreme scenarios… all measures that would curb risks and establish clear liability in case of problems [1].
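
By way of illustration, here is a minimal sketch of one such robustness check: it verifies whether small random perturbations of an input flip the decision of a scoring function. The function, its weights and its threshold are hypothetical stand-ins for a real audited system.

```python
# Sketch of one audit-style robustness check: do small random perturbations of
# an input flip the decision? The scoring function is a hypothetical stand-in.

import random

def decide(features):
    """Stand-in for the audited model: a fixed linear score against a threshold."""
    w = [0.5, -0.3, 0.8]
    return sum(wi * xi for wi, xi in zip(w, features)) >= 0.5

def decision_stability(features, noise=0.05, trials=1000, seed=0):
    """Fraction of noisy copies of `features` that keep the original decision."""
    rng = random.Random(seed)
    reference = decide(features)
    kept = sum(
        decide([x + rng.uniform(-noise, noise) for x in features]) == reference
        for _ in range(trials)
    )
    return kept / trials

print(decision_stability([0.9, 0.1, 0.3]))  # 1.0: decision far from the threshold
print(decision_stability([0.6, 0.2, 0.3]))  # lower: a borderline case that can flip
```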

User training is a necessity that is too often overlooked. Many AI-related errors stem not from the algorithm itself but from human misinterpretation of its results. A physician blindly following an automated diagnosis, a recruiter mechanically applying a CV-sorting algorithm’s recommendations… Such situations can be avoided with better awareness of AI systems’ limits and biases [7].

3.2 Toward a legal framework tailored to AI’s specificities

While companies have a role to play, they cannot be left to define the rules alone. It seems essential to adapt the legal framework to avoid a system in which each actor shifts liability onto another.

One of the boldest—yet most controversial—proposals is to create electronic personhood for advanced AIs: assigning a form of legal liability to systems themselves, especially those capable of autonomous learning. Though it may sound futuristic, the idea addresses a real issue: how to sanction harm caused by an AI that has evolved unpredictably and for which no human actor can be directly blamed?

A more pragmatic, immediately applicable solution would be mandatory insurance for AI, modelled on car insurance. Any AI system used in a critical domain would have to be insured for potential damage, guaranteeing victims swift compensation when mistakes occur. It would also push companies to police their systems more rigorously, since riskier models would command higher premiums [6].

Finally, establishing specialised ethics committees could offer an effective safeguard over sensitive AI uses. Comprising AI, legal and ethics experts, these committees would monitor high-risk applications, flag emerging abuses and provide real-time regulatory guidance. Unlike traditional regulation—often slow and rigid—these committees could adjust their recommendations to the fast pace of technological change.

Conclusion

Like the autonomous car confronted with an impossible choice, our society is hurtling toward a future where AI makes critical decisions. Yet we persist in believing our hands remain on the wheel, that we can control these algorithms which, in reality, are shaped by invisible human decisions scattered across artificial neural networks and biased databases. Liability cannot be reduced to a simple equation distributing blame among designers, companies and users. The issue runs deeper: are we prepared to accept that some decisions escape any human morality? For in the end, it is not AI that chooses, but the humans who set its rules.

Bibliography

[1] J. J. Bryson, “The Ethics of Artificial Intelligence: Balancing Risk and Benefit”, Science and Engineering Ethics, 2018.
[2] A. Cortina, ¿Ética o ideología de la inteligencia artificial?, Madrid: Paidós, 2024.
[3] J. Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women”, Reuters, Oct. 2018.
[4] European Commission, Proposal for a Regulation on a European Approach for Artificial Intelligence, 2021.
[5] M. Gibert, Faire la morale aux robots: une introduction à l'éthique des algorithmes, Montréal: Atelier 10, 2020.
[6] F. Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI, Harvard University Press, 2020.
[7] S. Russell & P. Norvig, Artificial Intelligence: A Modern Approach, Pearson, 2020.
[8] S. Wachter, B. Mittelstadt & L. Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the GDPR”, International Data Privacy Law, 2017.