Ethics in AI is not a new concept. It is on the minds of scientists, scholars, corporations, and users alike. Everyone seems to have an opinion, and those opinions can't quite be broken down into 'for' and 'against'. So, how can we simplify the debate over the pros and cons in order to better understand the current state of ethics in AI?

I’d like to address the question of ethics in artificial intelligence not by focusing on what or who is in need of reform but through the lens of its systemic issues — which are truly a reflection of the larger systemic issues that prevail in our world today. As an anthropologist by training, I prefer to use a wide lens to observe complexities by zooming out, and zooming in again as needed, taking note of both macro and micro-social viewpoints.

In this article, and in all of my writing on this subject, my goal is to see AI used ultimately for the good of all people and the planet. There are a lot of changes that need to be made if we are to see a sustainable and harmonious future for the Earth and for humanity, and although these may seem impossible, I believe that using AI is the most promising way to begin implementing plans for a better future.

Why do I think this way? AI is increasingly embedded into everything we do and will continue to expand its influence in our lives. I hope to spend more time researching and highlighting the ways in which socially and ecologically responsible AI is already being practiced. For now, let’s focus on the seemingly simple question: Should ethics be built into artificial intelligence in the first place?

Are ethics necessary for AI?

The short answer is yes. To think otherwise would be unethical. But how? And whose ethics?

One article on the subject, "Economies of Virtue," asks, "…is ethical AI possible in the current social systems, and if so, what is required of the engineering profession, company directors, users, policymakers, and others?" (Phan et al., 2021, p. 3)

The authors offer some valuable conclusions for navigating this sticky situation: "Redefining problems collectively, rather than through mere technical parameters, is a crucial first step. Researchers must recognize that attempts to reconcile a contradiction between ethics and commercial profit usually result in ethical products being shaped to consumer demand or the business needs of 'end users.' This demand comes in the form of equity, diversity, and fairness 'outputs' from Big Tech itself, and in the form of ethical assuage to reputational deficiency from other sites of Big Capital." (Phan et al., 2021)

Now, let’s turn to the question at hand: What are the arguments for and against ethics in artificial intelligence?

Who would argue that ethics shouldn't be a part of AI? I'm not going to make the argument for bad actors or those who profit from AI's misuse. So let's rework the question: What are the criticisms of ethics in AI?

The Shortcomings of Ethics in AI

Douglas Rushkoff, the well-known media theorist, author, and Professor of Media at the City University of New York, wrote, "…the reasons why I think AI won't be developed ethically is because AI is being developed by companies looking to make money — not to improve the human condition. So, while there will be a few simple AIs used to optimize water use on farms or help manage other limited resources, I think the majority is being used on people. . . My concern is that even ethical people still think in terms of using technology on human beings instead of the other way around. So, we may develop a 'humane' AI, but what does that mean? It extracts value from us in the most 'humane' way possible?" (Rainie et al., 2021)

Most people, I find, just want to live their lives comfortably and be free to make their own decisions. I support that. But people's personal ethics often contradict those of other individuals and of society at large, and it is difficult in our current world to live completely within one's own values. If AI is to learn ethics by observing human behavior, I imagine it would come out quite confused.

Self-driving cars can show us some examples of potential ethical conflict. Etzioni and Etzioni pointed out that "…driverless cars could learn from the ethical decisions of millions of human drivers, through some kind of aggregation system, as a sort of groupthink or drawing on the wisdom of the crowds. One should note, however, that this may well lead cars to acquire some rather unethical preferences. . . If they learn what many people do, smart cars may well speed, tailgate, and engage in road rage . . . That is, observing people will not teach these machines what is ethical, but what is common." (Etzioni & Etzioni, 2017)

“…That is, observing people will not teach these machines what is ethical, but what is common.”
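Etzioni and Etzioni's point can be made concrete with a toy sketch. Assuming a hypothetical log of observed driver actions (the data and action names here are invented for illustration), naive majority aggregation simply learns whatever behavior is most frequent, with no notion of whether it is ethical:

```python
from collections import Counter

# Hypothetical observation log: what drivers were actually seen doing
# in one scenario. Purely illustrative, not from any real dataset.
observed_actions = [
    "speed", "speed", "tailgate", "yield", "speed",
    "yield", "speed", "tailgate", "speed", "yield",
]

def learn_by_aggregation(observations):
    """'Wisdom of the crowd': adopt the single most common behavior."""
    return Counter(observations).most_common(1)[0][0]

learned_policy = learn_by_aggregation(observed_actions)
print(learned_policy)  # the majority behavior, not necessarily the ethical one
```

Here the aggregator adopts "speed" as its policy simply because speeding is the most common observed action, which is exactly the failure mode the quote describes.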

There are others who question the practicality of AI learning ethics from humans. Marcel Fafchamps, Professor of Economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University, commented, "AI is just a small cog in a big system. The main danger currently associated with AI is that machine learning reproduces past discrimination — e.g., in judicial processes for setting bail, sentencing, or parole review. But if there hadn't been discrimination in the first place, machine learning would have worked fine. This means that AI, in this example, offers the possibility of improvement over unregulated social processes." (Rainie et al., 2021)

Given our apparent tendency toward individualism over collectivism, who gets to decide which codes of ethics AI follows? If the answer is Big Tech, as is often the case, AI will serve the ethics of a company whose primary goal is generally making money.

“Big Tech has transformed ethics into a form of capital…”

The value of profit over all else needs to shift. "Big Tech has transformed ethics into a form of capital — a transactional object external to the organization, one of the many 'things' contemporary capitalists must tame and procure" (Birch & Muniesa, 2020, cited in Phan et al., 2021). Furthermore, "…By engaging in an economy of virtue, it was not the corporation that became more ethical, but rather ethics that became corporatized. That is, ethics was reduced to a form of capital — another industrial input to maintain a system of production, which tolerated change insofar as it aligned with existing structures of profit-making." (Phan et al., 2021)

As I understand it, this is just the next phase of capitalism taking over ethics as something to be exploited and monetized as a form of currency. Values and ethics shouldn’t work that way. They aren’t something that can be monetized, exchanged, or assigned a standardized value. They are worth something on an entirely different scale, and should certainly not be for sale.

So, what can be done? Let’s turn to some of the opinions that are clearly arguing in favor of ethics being a necessary tenet in AI.

The Arguments for Ethics in AI

There are many reasons that AI needs ethical tune-ups. First, let’s try to understand what ethics really means when we are talking about ethics in AI.

“We misunderstand ethics when we think of it as a binary when we think that things can be ethical or unethical…”

As Danah Boyd, Founder and President of the Data & Society Research Institute (and Principal Researcher at Microsoft), explained, "We misunderstand ethics when we think of it as a binary when we think that things can be ethical or unethical. A true commitment to ethics is a commitment to understanding societal values and power dynamics — and then working toward justice." (Rainie et al., 2021)

She continued, "Most data-driven systems, especially AI systems, entrench existing structural inequities into their systems by using training data to build models. The key here is to actively identify and combat these biases, which requires the digital equivalent of reparations. . . These systems are also primarily being built within the context of late-stage capitalism, which fetishizes efficiency, scale, and automation. A truly ethical stance on AI requires us to focus on augmentation, localized context, and inclusion." (Rainie et al., 2021)

“A truly ethical stance on AI requires us to focus on augmentation, localized context, and inclusion”

I want to highlight the people and groups that are indeed working toward justice: using all of their powers for good. One example is the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), a yearly conference that brings together a diverse group of scholars to examine the fairness, accountability, and transparency of socio-technical systems, mainly AI. The work they are doing is incredible, transformative, and much needed. (FAccT, 2021)

Ch-ch-ch-changes

As we have seen throughout the article, there are times when people prioritize ethics for the wrong reasons, and usually, those reasons are financial.

Bias in AI is widely agreed to need attention and is known to be harmful, especially to already marginalized groups of people. Hence, it has stirred up much of the debate around ethics in AI. Referencing the work of FAccT, Phan et al. state that ". . . tools for identifying and ameliorating bias have become a new class of industrial products in and of themselves; for example, AI Fairness 360 by IBM, Google Inclusive ML, and Microsoft FairLearn. These products allow firms to make the ethical claim that they have solved the bias problem. By reframing social problems as a series of technical challenges, these solutions limit the meaning of ethics to the domain of individual actions and decisions. This, again, works in favor of firms as it enables them to 'neatly pose and resolve the problem of violence' (Hoffman, 2020: 10)." (Phan et al., 2021)
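To make concrete what these toolkits actually measure, here is a minimal hand-rolled sketch of one common fairness metric, the demographic parity difference: the gap in positive-outcome rates between groups. The loan-approval data and group labels are invented for illustration; real toolkits such as Fairlearn compute this and many other metrics, and the critique above is precisely that reporting such a number does not by itself resolve the underlying social problem.

```python
def selection_rate(predictions, groups, group):
    """Fraction of members of `group` who received a positive (1) prediction."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rates across groups (0.0 means parity)."""
    rates = [selection_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical loan-approval predictions (1 = approved) for two groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

Group "a" is approved 75% of the time and group "b" only 25%, so the metric reports a 0.5 disparity; a firm can then claim to have "measured bias" while the question of what to do about it remains entirely open.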

Ethics has to start somewhere. Social change can happen from the bottom-up, top-down, or in most cases, a combination of both. Tech companies are paying attention to ethics because consumers are demanding it. Maybe they will end up doing some good, but to me, it looks like more concessions of capitalism will keep things as they are, while keeping up the façade of continually making things better.

What we need is a higher moral standard for ethics in AI: higher than society's ethics, higher than the ethics we grew up on. Ethical AI in practice needs checks and balances so that it keeps improving and never settles, because circumstances are always changing and AI must maintain its cutting edge.

I call for an override of the current systems and the implementation of completely new systems, which I believe AI can achieve. Making the case for and against ethics in artificial intelligence is no easy task. I would like to think that most people can agree that ethics, social responsibility, and accountability are important to consider when designing AI. Most would likely agree that it is crucial to make sure that robots won’t hurt anyone, that algorithms shouldn’t be unfairly biased, and that self-driving cars should try not to hit anyone. The question isn’t as much if ethics should be worked into AI, but rather how, and who gets to decide. Like so many social issues, often people can agree on what is wrong, but how to fix the problem is where the disagreements abound.

“The question isn’t as much if ethics should be worked into AI, but rather how, and who gets to decide…”

You can stay up to date with Accel.AI; workshops, research, and social impact initiatives through our website, mailing list, meetup group, Twitter, and Facebook.

Join us in driving #AI for #SocialImpact initiatives around the world!

If you enjoyed reading this, you could contribute good vibes (and help more people discover this post and our community) by hitting the 👏 below — it means a lot!

Citations

World Economic Forum. (2020, June 4). The Global Risks Report 2020. Issuu. Retrieved November 30, 2021, from https://issuu.com/revistaitahora/docs/wef_global_risk_report_2020.

ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT). (n.d.). Retrieved November 30, 2021, from https://facctconference.org/.

Phan, T., et al. (2021). Economies of virtue: The circulation of 'ethics' in Big Tech. Taylor & Francis. Retrieved November 30, 2021, from https://www.tandfonline.com/doi/full/10.1080/09505431.2021.1990875.

Etzioni, A., & Etzioni, O. (2017). Incorporating ethics into artificial intelligence. PhilPapers. Retrieved November 30, 2021, from https://philpapers.org/archive/ETZIEI.pdf.

Rainie, L., Anderson, J., & Vogels, E. A. (2021, June 21). Experts doubt ethical AI design will be broadly adopted as the norm within the next decade. Pew Research Center: Internet, Science & Tech. Retrieved November 30, 2021, from https://www.pewresearch.org/internet/2021/06/16/experts-doubt-ethical-ai-design-will-be-broadly-adopted-as-the-norm-within-the-next-decade/.
