Relevant AI Systems and Ethics Without Algorithms



In the coming decades, our daily lives may become intricately linked with autonomous objects guided by AI systems. Driverless cars, poised to revolutionize everyday routines, exemplify this transformation.

However, coexisting with AI entities brings ethical difficulties to the forefront that demand thoughtful examination. If a driverless car were to harm a human, who bears the responsibility?

One could posit that engineers shoulder the responsibility for ethical considerations. Consequently, they must devise algorithms capable of navigating any ethical dilemma. Essentially, the challenge lies in creating a flawless ethical algorithm capable of resolving the myriad ethical conflicts that have occupied philosophers for millennia, drawing on the competing normative ethical theories: deontology, utilitarianism, and virtue ethics.

Agents’ judgments are inherently complex, shaped by how they perceive their circumstances. Faced with the same situation, different individuals may adopt utilitarian, deontological, or virtue-ethics perspectives, evaluating consequences, intentions, or character. The moral intricacies embedded in common-sense rules resist being captured by algorithms.

In exploring this theme, lecturers William E. S. McNeill and Fiona Woollard presented the panel “Driverless Cars and Ethics Without Algorithms” at the Southampton Ethics Centre workshop1. Their aim was to expose an inverse correlation between transparency and reliability in ethically relevant AI systems built on Artificial Neural Networks (ANNs).

An ANN is a layered, nonlinear system of weighted nodes trained against desired outputs; such networks form the backbone of deep learning systems. Their internal processing has no explicit non-mathematical representation, which limits our ability to predict their future behavior.
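A minimal sketch of the forward pass of such a network makes the point concrete. The weights below are hypothetical, standing in for values a training procedure would have produced; note that nothing in these numbers corresponds to a human-readable rule.

```python
import math

def sigmoid(x):
    # Nonlinear activation: squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each node computes a weighted sum of its inputs plus a bias,
    # then applies the nonlinear activation.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical weights for a tiny network: 2 inputs, 2 hidden nodes, 1 output.
hidden_w = [[0.5, -0.3], [0.8, 0.2]]
hidden_b = [0.1, -0.1]
output_w = [[1.0, -1.0]]
output_b = [0.0]

def forward(inputs):
    # The entire computation is arithmetic on weights; there is no
    # symbolic rule one could read off as an ethical principle.
    hidden = layer(inputs, hidden_w, hidden_b)
    return layer(hidden, output_w, output_b)

print(forward([1.0, 0.0]))
```

Training adjusts the weight values until the outputs match the desired ones, but the trained weights remain an opaque numerical tangle, which is precisely the “black box” problem discussed next.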

ANNs present an ethical challenge due to their “black box” nature. Their internal workings are opaque to human understanding, and discerning their structures is a task closer to neuroscience and philosophy than to traditional computer science.

Theoretically, transparency demands a “good” and “universal” ethical algorithm, which is impossible. The “black box” of ANNs, on the other hand, leaves us unable to explain how an AI performs an ethically relevant act.

Given the technical and theoretical challenges of developing good ethical behavior in AI, and considering the normative ethical approaches, it is likely that we will have to trade off our desires for transparency and reliability in AI entities.

My bet is that AI entities should rely more on machine learning models trained on data collected from the environment and from users’ behavior than on transparency derived from explicit ethical algorithms or from inspecting ANN processes.

However, this approach raises a question: does it align too closely with utilitarianism? If so, it nonetheless appears well suited to machine learning models, and it can be continually revised by the other ethical approaches (deontology and virtue ethics) under human agency.

Another challenge will be persuading people to accept an ethically opaque AI judgment, given how difficult humans already find it to accept ethical decisions that differ from their own preferences, especially when those decisions are made by a new kind of alterity: AI entities.


  1. McNeill, Will and Woollard, Fiona. “Driverless Cars and Ethics without Algorithms.” Workshop presented at the Southampton Ethics Centre Online Workshop: Ethics of Artificial Intelligence, remote, June 12, 2023. https://groups.google.com/g/isr-hps-sts/c/SvZtetny4c4/m/ksQpEse5AAAJ
