
Future Technologies Conference 2023: Thoughts and Reflections

“Nobody can predict the future. But, we have to try.”

In early November, I was invited to speak at the Future Technologies Conference (FTC) in San Francisco. I was grateful for the opportunity to take the stage and share my research, and equally grateful to be present and learn from others. The quote above was stated by the keynote speaker, Tom Mitchell, who opened the conference. While Mitchell’s words hold a certain truth, they also invite us to delve deeper into the intricate dance between anticipation and adaptability in our rapidly changing world.

Any time I think about the future, I am brought back to a Bolivian proverb which I learned from distinguished writer, sociologist, feminist, and activist Silvia Rivera Cusicanqui. The saying, roughly translated, goes like this:

“The future is at my back. I stand in the present, and all we can see is the past.”

It’s an idea that initially may seem foreign, but as it settles in, it reveals a profound wisdom. We can only tell what is likely to happen by what has already happened. In Western culture, we’re often conditioned to fixate on the future, perpetually reaching for it, striving to predict and shape what’s to come. However, the reality is that while we can make educated guesses, the future remains shrouded in uncertainty. The unpredictable nature of the future raises questions about the power of self-fulfilling prophecies.

As we delve into the realm of future technologies, the landscape is vast and bustling with innovation. Many breakthroughs are driven by predictive abilities, often harnessed for purposes related to commerce, such as the orchestration of emotional responses in marketing endeavors. We are in a time where technologies are being deployed at great speed with very few guardrails in place, disregarding potential consequences. It is essential to recognize that these powerful technologies are not solely defined by their applications, but also by what goes into the making of them, including raw materials and data.

The creators behind new technologies, such as some of the speakers at the conference, often have noble intentions, with a focus on critical global issues like climate change, sustainability, public health, and technologies that enhance the well-being of individuals. Nevertheless, there is a recurring pattern where technologies take unforeseen paths, diverging from their original intentions. This becomes particularly complex when dealing with formidable forces like artificial intelligence (AI) and virtual reality (VR). These technologies are developing incredibly rapidly, with nearly endless possibilities, along with a lot of ethical concerns.

The topics covered at FTC included:

  1. Deep Learning

  2. Large Language Models

  3. Data Science

  4. Ambient Intelligence

  5. Computer Vision

  6. Robotics

  7. Agents and Multi-agent Systems

  8. Communications

  9. Security

  10. e-Learning

  11. Artificial Intelligence

  12. Computing

In this post, I will give an overview of my experience at the conference, including the talk that I presented, and share some highlights from other presentations.

My Presentation

The Future Technologies Conference covered a wide range of topics and specialties, with mine falling under data science. My presentation, titled “In Consideration of Indigenous Data Sovereignty: Data Mining as a Colonial Practice”, focused on data mining. During my talk, I explained what Indigenous Data Sovereignty is and why it is important, before describing the CARE Principles of Indigenous Data Governance: Collective Benefit, Authority to Control, Responsibility, and Ethics.

To exemplify how the CARE principles could be applied throughout the data lifecycle, I reviewed a case study to first show how the above principles are often missing in data collection practices.

A European NGO went to Burundi to collect data on water accessibility.[1]

The NGO failed to understand:

1. The community’s perspective on the actual central issues

2. The potential harms of their actions

By sharing the data publicly, including geographic locations, the NGO put the community at risk. Collective privacy was violated and trust was lost. The CARE principles were not upheld, particularly Collective Benefit and Responsibility.
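To make the privacy failure concrete, here is a minimal illustrative sketch (my own, not from the talk or the case study) of one mitigation a data steward might apply before any public release: coarsening GPS coordinates so that published records describe a broad area rather than a specific community. The field names and values are hypothetical, and technical redaction alone does not satisfy CARE; Authority to Control still requires the community's consent to any release at all.

```python
# Hypothetical sketch: coarsen GPS coordinates before publishing survey
# records, so released data cannot pinpoint a specific community.
# Rounding to 1 decimal place (~11 km at the equator) keeps regional
# patterns while dropping village-level precision.

def coarsen_records(records, precision=1):
    """Return copies of survey records with lat/lon rounded."""
    return [
        {
            "water_access_score": r["water_access_score"],
            "lat": round(r["lat"], precision),
            "lon": round(r["lon"], precision),
        }
        for r in records
    ]

survey = [{"water_access_score": 0.4, "lat": -3.3614, "lon": 29.3599}]
print(coarsen_records(survey))
# -> [{'water_access_score': 0.4, 'lat': -3.4, 'lon': 29.4}]
```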

I closed the talk with some recommendations. There is a need to remediate three main data challenges, namely data collection, data access, and relevance, so that Indigenous peoples can access, use, and control their own data and information.[2] It is vital to note that local concerns vary between regions, although all have been negatively influenced and impacted by long-standing exploitative colonial practices. It is imperative that we continue to educate ourselves and question broader narratives that stem from colonial roots.

It was apparent that many attendees hadn’t considered the approach I presented, yet it resonated with them. I hope I prompted the attendees to think about data from a different perspective and to think about the people behind the data.

I was amazed at how well it was received and at the feedback and questions I got. One question was about the case study I presented: how exactly were the people in question harmed by data collection that did not adhere to the CARE principles? I explained in more detail how the community in Burundi was harmed by researchers who shared their personal data, including location data, which broke the community’s trust. This shows that privacy is not only a personal issue; for Indigenous communities, privacy is often a collective issue. This might not be considered from a Western perspective, which tends to view privacy as purely individual. By expanding our understanding of human values and how they vary culturally and regionally, we can better understand how data collection and new technologies will affect different populations.

Afterwards, many people approached me and wanted to discuss my research further. The best comment I got was simply: “Your research is so rare.”

That is why I wanted to present this work at this particular conference: future technologies rely on data, and a lot of it. Data often comes from people, and it has enormous value; some say it is more valuable than any other resource. Most people benefit from the convenience of the services and apps built on public data. But why do only corporations benefit monetarily from everyone’s data? Why is this outright exploitation allowed? Isn’t it neo-colonialism at work? This is the message I was trying to get across.

Notable Talks at FTC

The talks ranged across the board of new and future technology, centering on AI, VR, and more. At lunch, I met Myana Anderson, who told me that she was speaking about bears. Her talk, Metabolic Adaptation in Hibernating American Black Bears, was about how bears have something that humans don’t: the ability to be sedentary for long periods of time in order to hibernate. Human bodies are made to move; if we are too sedentary, we get blood clots and all sorts of health issues. She and her fellow researchers studied blood samples from hibernating bears to see what exactly allows bears to remain immobile and maintain homeostasis. They collected and studied this data to see what could be learned for treating a variety of sedentary-related human diseases and for helping people with conditions that worsen with immobility.

This was certainly unique and compelling research that could potentially benefit people with disabilities and illnesses who are immobilized. However, one aspect worries me: could this research lead us toward a dystopia of immobility, where life is lived in VR and people turn into blobs and hibernate like bears? This was not mentioned and is purely my speculation. Is this really the direction we want to go in? Why can’t we find ways to keep being mobile and active in our world? Or would this research truly be used just to support people with conditions that force them to be immobile, and not to enable the general population to sit without moving for long periods?

It was interesting because it was far removed from my own research, but I remain slightly worried about how it will be used. It was also interesting to consider a study that uses data not from humans but from animals, which may necessitate the consideration of animal rights going forward.

Another of the talks which stood out to me was called The Effects of Gender and Age-Progressed Avatars on Future Self-Continuity, by Marian McDonnell. The research was on the use of VR to create age-progressed avatars, in an effort to make people have more empathy for their future selves and save more money for their retirements. The idea is that people find it easy to empathize and care for their parents and children, but not for themselves in the future. This is true, and they did find that this was effective in getting people to think about their retirement more and put money away for their own futures. However, the most interesting thing in this study was that there were differences for men versus women. When men were introduced via VR to an older version of themselves, they thought they looked like their father, and thought it was neat. When women did the same, they were shocked and terrified to see themselves aged.

Women and men are treated so differently socially and culturally, with such different expectations, including around aging. Older women are not represented as having worth; rather, they are rendered quite invisible, whereas older men are shown as still attractive and are afforded an air of respect that women are not. This social conundrum became very clear through their study, and they made sure to include it as a vital part of the research, which I thought was notable. Repairing these inequalities might take more than VR, but it could be an interesting body of research to pursue. Deep-seated inequalities make themselves visible during projects such as this, presenting an opportunity to address them in appropriate and creative ways.

Final Thoughts

Throughout the conference, there were instances where I hesitated to express my thoughts, observing a prevailing emphasis on speed and sales and a lack of consideration for responsibility and transparency. While some discussions touched on the ethical dimensions of technology, particularly in environmental applications, the technical details often delved into intricacies beyond my social science expertise. It was an opportunity to develop my own knowledge in technical areas and share knowledge with others in adjacent fields. That is why in-person conferences are so vital: the knowledge shared can mesh together, and those in attendance can come away with a better understanding of things that may have been overlooked.

As I sat in attendance, occasions arose where I wished to ask about ethical considerations. In one of these moments, another participant raised the very concerns I had, only to receive a response acknowledging a lack of expertise in that domain. I found this a bit concerning; however, it highlights the necessity of safety and responsibility in what we are building now and in the future.

In addressing the rapid evolution of the present into the future, concerns inevitably arise. Rather than understanding these as worries, reframing them as foresight becomes crucial for establishing checks, balances, and comprehensive protections around emerging technologies. This includes considerations not only during implementation but also at the initial stages of the data lifecycle, ensuring safeguards at every level without causing unintended harm. The question persists: Can we mitigate potential harms associated with new technologies, or is some level of harm inevitable?

Presently, an opportune moment exists for integrating ethics into technological discourse. However, it is imperative to approach this integration with an awareness of historical and existing systemic contexts. This nuanced approach is essential to navigate ethical considerations in a manner that acknowledges the complexities of past and current systems.

References

[1] Abebe, Rediet, Kehinde Aruleba, Abeba Birhane, Sara Kingsley, George Obaido, Sekou L. Remy, and Swathi Sadagopan. “Narratives and Counternarratives on Data Sharing in Africa.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 329–41. Virtual Event Canada: ACM, 2021. https://doi.org/10.1145/3442188.3445897.

[2] Rodriguez, Oscar Luis Figueroa. “Indigenous Policy and Indigenous Data in Mexico: Context, Challenges and Perspectives.” In Indigenous Data Sovereignty and Policy, 130–47. Routledge Studies in Indigenous Peoples and Policy. Abingdon, Oxon; New York, NY: Routledge, 2021.



In Consideration of Indigenous Data Sovereignty


“Indigenous Peoples have always been ‘data warriors’. Our ancient traditions recorded and protected information and knowledge through art, carving, song, chants and other practices.” (Kukutai, 2020)

Long before the advent of computers and the written word, Indigenous Peoples had a rich tradition of transmitting knowledge and data from one generation to the next. However, the concept of Indigenous Data Sovereignty (ID-SOV) is a relatively recent development, with its first official documentation dating to 2016. (Taylor & Kukutai, 2016) This post will review ID-SOV and the CARE principles of Indigenous data governance in an effort to move towards decolonizing data.

ID-SOV can be described as the right of Indigenous Peoples to possess, manage, access, and have authority over data that originates from their communities and relates to their members, traditional knowledge, customs, or lands. (Kukutai, 2020)

To state something as a right is one thing; to see it carried out is another. We must untangle ourselves from a long history of manipulation of data on Indigenous peoples, who were historically demonized to justify settler colonialism. Now, when neo-colonialism is rife, we see how this narrative continues by victimizing Indigenous peoples. I align with those who argue that this narrative needs to change. According to the Global Indigenous Data Alliance (GIDA), building strategic relationships with global bodies and mechanisms is necessary to promote ID-SOV and governance internationally by providing a visible, collective approach. (Kukutai, 2020)

Even today, sensitive COVID-19 data on Indigenous Americans is being mined and reused without consent by the media, researchers, and non-governmental organizations, under the assumption that making tribes more visible is helpful, when in fact it causes unintentional harm. (RDA COVID-19 Indigenous Data WG, 2020) Settler colonialists thought that they were ‘helping’ too, via ethnic cleansing and direct violence. While neo-colonialism is not inherently violent, it is still extremely dangerous (Couldry & Mejias, 2021), and tracing these histories can help us understand how to move towards decolonizing data for the benefit of all.

Decolonizing Data Via Self-Determination

Data and data analytics have become increasingly important and interdependent in many ways in the digital age. Even governments are heavily reliant on data for their decision making and policies. As has been the case in much of our history, the unwilling targets of policy interventions are disproportionately Indigenous Peoples, whose enduring aspirations for self-determination over their own knowledge, information systems, institutions and resources get undermined by governments. Data is extracted from Indigenous Peoples, their territories, and cultural heritage without seeking their consent or involvement in decisions regarding the collection, utilization, or application of this data. (Walter et al. 2021)

To have the conversation about ID-SOV, let us first discuss the difficulty in defining what it means to be Indigenous. As per the UN Declaration on the Rights of Indigenous Peoples (UNDRIP), indigeneity is intricately tied to the point of initial colonial contact, which can prove challenging to ascertain in regions where colonizers did not establish permanent settlements. The term 'tribes,' though sometimes practical, carries with it problematic colonial connotations. Nevertheless, the label 'Indigenous' possesses a broader scope, encompassing a diverse range of ethnic groups, including the hill tribes residing in the Mekong River area of Southeast Asia (Scott, 2009). A common thread among Indigenous Peoples is their strong inclination toward preserving their autonomy. Simultaneously, they frequently confront marginalization and discrimination, often framed within a narrative that portrays them as victims. (Chung & Chung, 2019, p. 7)

In the pursuit of decolonization, it's crucial to emphasize that the concept of 'Indigenous' itself was a construct devised by colonizers to delineate who was considered fully human and who was relegated to a status deemed less than human (Scott, 2009). It is inherently problematic that we continue to operate within the framework established by this historical perspective. When it comes to the contemporary mission of decolonizing data, a pivotal starting point lies in the recognition of Indigenous Data Sovereignty. By placing the focus on those who have endured the most severe marginalization due to colonialism, we may uncover a clearer path forward in our journey towards decolonization.

There are many concerns from Indigenous groups, such as those in the Mekong area, referred to as Indigenous ethnic minorities (IEM). Many contradictions arise that result in security risks, and the impact of sharing IEM data could be both positive and negative in unanticipated ways. A balance of freedoms is required: transparency versus personal security. (Chung & Chung, 2019, p. 12)

Within this contradiction lies a major difficulty: how to have accessible and transparent data while also ensuring the right to privacy for the subjects of that data. This points to a deeper issue, which is that data does not automatically promote change or address marginalization, colonialism, or discrimination, let alone combat the imbalances of power in negotiations and consultations led by governments. (Chung & Chung, 2019, p. 20)

Open Data initiatives raise apprehensions within ID-SOV networks because they often lack safeguards for Indigenous Peoples. There is a growing emphasis on expanded data sharing, exemplified by the widely embraced FAIR principles (Findable, Accessible, Interoperable, Reusable). Nevertheless, this trend has generated tensions when it comes to safeguarding, sharing, and utilizing data pertaining to Indigenous Peoples. To promote meaningful engagement between data collectors and users with Indigenous perspectives, the CARE Principles provide a valuable framework for deliberating upon responsible data utilization.  (Kukutai, 2020)

CARE Principles for Indigenous Data Governance 

While the FAIR principles primarily focus on data itself and overlook the ethical and socially responsible aspects of data usage, such as power imbalances and historical contexts related to data acquisition and utilization, the CARE principles prioritize the welfare of Indigenous Peoples and their data. They can be integrated alongside the FAIR Principles across the entire data lifecycle to ensure mutual advantages and address these broader ethical considerations. (RDA, 2020, p. 57)

CARE Principles

Collective Benefit

Data ecosystems shall be designed and function in ways that enable Indigenous Peoples to derive benefit from the data.

Authority to Control

Indigenous Peoples’ rights and interests in Indigenous data must be recognised and their authority to control such data be empowered. Indigenous data governance enables Indigenous Peoples and governing bodies to determine how Indigenous Peoples, as well as Indigenous lands, territories, resources, knowledges and geographical indicators, are represented and identified within data.

Responsibility

Those working with Indigenous data have a responsibility to share how those data are used to support Indigenous Peoples’ self determination and collective benefit. Accountability requires meaningful and openly available evidence of these efforts and the benefits accruing to Indigenous Peoples. 

Ethics

Indigenous Peoples’ rights and wellbeing should be the primary concern at all stages of the data life cycle and across the data ecosystem.

(Carroll et al., 2020)


If these principles can be integrated into systems of open data, we could truly turn towards decolonizing data; however, they need to be more than just principles. If we center the CARE principles and Indigenous Data Sovereignty in data governance on a global scale, perhaps we can steer away from harmful colonial data mining and towards a more balanced relationship with data.
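As a concrete illustration of what such integration might look like, here is a purely hypothetical sketch (mine, not a published schema or standard) of a dataset record in which CARE-oriented governance fields sit alongside FAIR-oriented descriptive fields. Every field name and value below is invented for illustration.

```python
# Hypothetical dataset record: FAIR-style descriptive metadata plus
# CARE-style governance metadata. All field names and values are invented.

dataset_record = {
    # FAIR-oriented fields: findable, accessible, interoperable, reusable
    "identifier": "doi:10.0000/example-water-survey",  # placeholder DOI
    "title": "Community water accessibility survey",
    "format": "CSV",
    "access_url": "https://example.org/data/water-survey",  # placeholder
    # CARE-oriented fields: make governance explicit alongside access
    "collective_benefit": "Findings returned to the community first",
    "authority_to_control": "Community data governance board approval required",
    "responsibility": "Annual public reporting on all uses of the data",
    "ethics": "Reviewed by community-designated ethics reviewers",
}

for field, value in dataset_record.items():
    print(f"{field}: {value}")
```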





Resources

Carroll, S. R., Garba, I., Figueroa-Rodríguez, O. L., Holbrook, J., Lovett, R., Materechera, S., Parsons, M., Raseroka, K., Rodriguez-Lonebear, D., Rowe, R., Sara, R., Walker, J. D., Anderson, J., & Hudson, M. (2020). The CARE Principles for Indigenous Data Governance. Data Science Journal, 19. https://doi.org/10.5334/dsj-2020-043

Chung, P., & Chung, M. (2019). Indigenous data sovereignty in the Mekong Region. 2019 World Bank Conference on Land and Poverty.

Couldry, N., & Mejias, U. A. (2021). The decolonial turn in data and technology research: What is at stake and where is it heading? Information, Communication & Society. https://doi.org/10.1080/1369118X.2021.1986102

Kukutai, T., Carroll, S. R., & Walter, M. (2020). Indigenous data sovereignty. Retrieved March 5, 2022, from https://eprints.utas.edu.au/34971/2/140589-Indigenous%20data%20sovereignty.pdf

RDA COVID-19 Indigenous Data WG. (2020). Data sharing respecting Indigenous data sovereignty. In RDA COVID-19 Working Group, Recommendations and guidelines on data sharing. Research Data Alliance. https://doi.org/10.15497/rda00052

Taylor, J., & Kukutai, T. (2016). Indigenous data sovereignty: Toward an agenda. Australian National University Press.

Walter, M., Kukutai, T., Russo Carroll, S., & Rodriguez-Lonebear, D. (2021). Indigenous data sovereignty and policy. Routledge.


Decolonizing AI Ethics: Indigenous AI Reflections

TLDR

This article highlights a quote about bringing an end to colonization and elaborates on how we can imagine AI bringing about this end, opening new beginnings for a sustainable future. We will establish the importance of citing Indigenous thinkers in all spaces, in an effort to decolonize without repeatedly colonizing, and explain why this is crucial to my own research. Then we discuss the challenges we are up against in the Western techno-utilitarian landscape and the importance of a mindset shift. Following that, some examples of how AI could potentially help democracy are introduced in order to show its true potential.

This is a call to action for readers to get involved in decolonizing ethics in AI. The action starts in our own thoughts and behaviors, and in how we interact with and use AI. If we can consider the possibilities of AI weaving into the goals of inclusivity and sustainability, celebrating diversity, and promoting democracy, we can bring it into reality. It does take work. If nothing is done, things will continue as they are.

Imagining Decolonization with AI

“¿Por qué podemos imaginar el fin del mundo pero no el final del colonialismo?” (Indigenous Action Media, 2022)

English translation: “Why can we imagine the end of the world but not the end of colonization?”

This quote from a little book on ancestral futurism from Indigenous Action Media stood out to me, but I want to take it even further as it is relevant to the current landscape of decolonizing ethics in AI:

Why can we imagine AI bringing on the end of the world, and not imagine AI bringing on the end of colonization?

As a writer, I have been experimenting with AI chatbots, specifically Microsoft’s Bing, and I thought it would be interesting to engage with it on this topic. I asked it: “Can you help me with a blog post around this theme: Why can we imagine AI bringing on the end of the world, and not imagine AI bringing on the end of colonization?” This is what the AI said:

“The reason why we can imagine AI bringing on the end of the world and not imagine AI bringing on the end of colonization is because of the way we think about AI. We tend to think of AI as a tool that can be used for good or bad purposes. When we think about AI bringing on the end of the world, we are thinking about how it could be used for bad purposes. For example, if an AI system were to become self-aware and decide that humans are a threat to its existence, it could take actions that would lead to the end of the world.

On the other hand, when we think about AI bringing on the end of colonization, we are thinking about how it could be used for good purposes. For example, if an AI system were to be used to help colonized people gain their independence, it could help bring an end to colonization.

It’s important to remember that AI is just a tool and it’s up to us to decide how we want to use it. If we use it for good purposes, it can help us solve some of the world’s biggest problems. If we use it for bad purposes {or without care} it could lead to disastrous consequences.”

Here are my thoughts: First of all, I think negligence and speed of development are just as dangerous as bad actors. (I added in the {or without care} part.) There is a lack of accountability and responsibility in the air around AI, and this needs to be addressed. Secondly, I noticed that Bing located the problem in the way we think about AI, namely as a mere tool, which echoes Jason Edward Lewis’ piece on making kin with machines. It is vital to avoid enacting a master/slave relationship with AI. Lewis, a professor and scholar of Indigenous future imaginaries, argues for an alternative kinship with AI, and a relationality as opposed to a rationality. (Lewis, 2018)

The concept of kinship with AI is hard for someone like me to understand, as a mixed white person from Northeastern USA, coming from generations upon generations of colonization. But I think this is what we need to strive to understand. Because creating a master/slave relationship with humans being the masters and the machines being the slaves should not be the goal.

This is a call to action: to change the way we relate to AI. It is a mindset shift at its core. With no intervention, there will be more of the same. Check out my last blog The Precarious Human Work Behind AI for more on the dark side of the rapid growth of AI and the hidden ways it is affecting workers.

As we continue to develop AI and data science technologies, it’s important that we consider the ethical implications of these systems. While many people are quick to imagine AI bringing on the end of the world, few consider that these systems could bring about the end of colonization. In this blog post, we explore some of the key issues surrounding AI and data science ethics and discuss how we can work towards a more equitable future.

The Importance of Citing Indigenous Thinkers

In the ongoing process of decolonization, Indigenous peoples and their expansive knowledge systems (aka data) are at the core. Indigenous people have always been at the center of the fight against colonization. They have always been imagining futures in a sustainable manner, aware of the balanced systems of interacting with each other and the earth. We need to be considering how to work with both new technologies and traditional knowledge. Indigenous wisdom needs to be highlighted, as well as the acknowledgment of the centuries-long battle against oppression that Indigenous peoples have faced.

AI runs on data; we are in the age of information. It doesn’t have to be a future that keeps repeating colonialism.

Over the past year, I have been a part of a research group at the Berkman Klein Center for Internet and Society at Harvard, and for our last meeting, we met with another group, Indigenous in AI. Some of the readings done as a group I have included in this post. In our joint meeting, we got to hear directly from Jason Edward Lewis and other representatives from Indigenous AI, which made for an enlightening and rich conversation.

A strong point that I want to carry through is the importance of centering on and citing Indigenous knowledge in respectful ways. One of the readings I have done recently was by Zoe Todd, a Métis anthropologist. I will repeat her sentiment and say “I cannot, nor would I try, to speak for Indigenous thinkers.” (Todd, 2016, p. 14)

In Todd’s article, she gives an example of sitting in a European lecture hall and hearing Indigenous thinkers go uncredited and uncited for their original ideas; worse, those ideas are instead filtered through white intermediaries. This is incredibly difficult to track and prove, as it requires looking for what is not there. There isn’t any data on the number of Indigenous researchers or tech workers in AI, but the data on diversity in these and other STEM fields shows that we still have a long way to go toward diversity and inclusion.

Todd stressed the importance of citing and quoting Indigenous thinkers directly and generously, and I would say that this needs to be done everywhere, in tech and business as well as academia. The author argues that Indigenous thinkers should be cited as thinkers in their own right, not just as representatives of an amorphous Indigeneity that serves European intellectual or political purposes. They should also be cited as dynamic philosophers and intellectuals, not just as research subjects or collaborators. By doing so, we can help to decolonize intellectual spaces and create a more inclusive and equitable academic environment. (Todd, 2016, p. 7) Her piece An Indigenous Feminist’s Take On The Ontological Turn: ‘Ontology’ Is Just Another Word For Colonialism is an enlightening and refreshing read which I would recommend.

Working in this intellectual space and being non-Indigenous is tricky, but it is what has always felt the most authentic to me. My goals as a researcher include supporting Indigenous thinkers and knowledge and crediting them as much as I can. I don’t want to take credit for Indigenous thought, and I will ensure that doesn’t happen. I hope that people who read my work and want to cite it will cite the Indigenous creators whom I reference. I think it is important to get the ideas out there, and the trend toward creating sustainability across sectors is a good trend overall. Sustainability is not a Western idea, but something that we need to fight a lot of Western ideals to achieve. Sustainability requires a change of mindset, new ways of thinking and doing things, and a willingness to change. The fastest-changing things around are in technology, especially AI. If we want to really change things and go in a more sustainable direction, we must take action, and this is where I focus my research.

The Importance of Understanding AI Systems Beyond a Western Techno-Utilitarian Lens

In the fabulous ‘Indigenous Protocol and Artificial Intelligence Position Paper’, Lewis explains the importance of understanding AI systems beyond a Western techno-utilitarian lens. Lewis argues that if we only think about these systems through a Western lens, we will not fully grasp what they are and could be. “At best, we risk burdening them with the prejudices and biases that we ourselves still retain. At worst, we risk creating relationships with them that are akin to that of a master and slave.” (Lewis, 2020, p. 6) As I stated above, I am convinced that we must avoid falling into master/slave style relations with AI, especially as AI gets more human-like. If we want to create an inclusive, equitable, and sustainable world, we need to put this dynamic fully in the past.

When I asked Bing’s AI to summarize Lewis’s work, it came out with something that was very black and white as far as getting AI to be used for good versus bad purposes. Yes, looking out for bad actors is important. But the issues currently and continually tend to be more dangerously subtle, like reproducing biases that put minorities at unfair disadvantages.

It isn’t simply controlling the systems to be used for good purposes and not bad; rather, it is changing the mindset around the relationships between humans and machines. Think about your relationship with your car, moto, or bicycle. This is a machine that you own and it is a tool to get you from place to place. Is it your slave? Do you treat it that way? Don’t you take care of it, change the tires, check the brakes? Doesn’t it take care of you? Don’t you go through good times and sometimes hard times together? Don’t you respect your vehicle, and understand its utility and also its danger, since it is dangerous out on the road? Sometimes we name them, personify them, and anthropomorphize them. Think of this relationship, and then think of it with something that is made to be human-like, something you can talk to in a natural way; but is still a tool, a machine. We need to be seriously thinking about how we think about AI.

Lewis and his coauthors pointed out that the assumptions we are currently making are getting “baked into the core material of our machines, fundamentally shaping the future for decades hence.” (Lewis et al., 2018) We could take this as a warning, or as an opportunity. But we cannot be lazy and expect technology to change things on its own. People need to step in and change the ingredients to be baked in, to create a more inclusive and sustainable world.

The Potential Benefits of AI

What would the world look like if AI followed value systems which truly promoted the betterment of all, instead of racing for profits and for the sake of speed itself? The potential is amazing, even for helping democracy.

There are many possibilities for AI to be used for public benefit, and to help democracy rather than hurt it. If used correctly, it could plausibly help democracy by:

- Educating citizens

- Assisting with group deliberations

- Summarizing what people think

- Finding possible solutions (Schneier et al., 2023)

Large language models could potentially aid politicians in better understanding their citizens’ needs. (Schneier et al., 2023) We know that data science and AI can greatly influence efficiency and effectiveness, not to mention the speed of getting things done. However, as many have pointed out, “that same speed and efficiency can also magnify social inequities and public harms.” (Bruce, 2023)
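As a toy sketch of the “summarizing what people think” idea (my own illustration, not from Schneier et al.), public comments could be batched into a summarization prompt for a language model. The `call_llm` function below is a hypothetical stand-in for whatever model API might be used, not a real library call.

```python
# Toy sketch of LLM-assisted deliberation: batch citizen comments into a
# summarization prompt. `call_llm` is a hypothetical placeholder, not a
# real API.

def call_llm(prompt: str) -> str:
    # In practice this would call an actual language-model API.
    return "(model-generated summary would appear here)"

def summarize_comments(comments: list[str]) -> str:
    prompt = (
        "Summarize the main concerns in these public comments, noting "
        "points of agreement and disagreement:\n\n"
        + "\n".join(f"- {c}" for c in comments)
    )
    return call_llm(prompt)

print(summarize_comments([
    "The new bus routes skip our neighborhood entirely.",
    "Service improved downtown, but the suburbs were left out.",
]))
```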

These are just some examples of what the future of AI could look like if it were actually geared toward public benefit. What would it look like for AI to align more with Indigenous values? This is the question that needs to be asked as we move forward.

References

Bruce, A. B. (2023, May 26). How Public Interest Tech Principles Can Shape the Future of Data Science and Artificial Intelligence — PITcases. PITcases. https://pitcases.org/portfolio/pit-future-of-data-science-ai-afua-bruce/

Lewis, J. E., Arista, N., Pechawis, A., & Kite, S. (2018). Making Kin with the Machines. Journal of Design and Science. https://doi.org/10.21428/bfafd97b

Lewis, Jason Edward, ed. 2020. Indigenous Protocol and Artificial Intelligence Position Paper. Honolulu, Hawaiʻi: The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research (CIFAR).

Schneier, B., Farrell, H., & Sanders, N. E. (2023, April 21). The Tech Investment We Should Make Now to Avoid A.I. Disaster. Slate Magazine. https://slate.com/technology/2023/04/ai-public-option.html

Todd, Z. (2016). An Indigenous Feminist’s Take On The Ontological Turn: ‘Ontology’ Is Just Another Word For Colonialism. Journal of Historical Sociology, 29(1), 4–22. https://doi.org/10.1111/johs.12124

The Precarious Human Work Behind AI

AI is now everywhere, but it is not as autonomous as it seems. AI is increasingly prevalent in a wide variety of industries, many of which hide the countless workers behind the curtain making it function, and I am not just talking about the engineers who create it.

It is important to acknowledge the human work behind AI development and maintenance, from grueling content moderation, to rideshare driving, to all of us whose data serves to profit large corporations. This leaves countless workers in precarious positions, stuck in survival mode and forced to adapt as best they can, with low wages and the threat of job loss looming as tasks continue to be automated.

Anything done in the name of ‘safety and trustworthiness’ of AI is truly an afterthought to capital gain for corporations. In a podcast with engineers from OpenAI, they laughed about how ‘Trust and Safety’ (T&S) really stands for ‘Tradeoffs and Sadness.’ (Fagen, 2023) This is a fundamental problem for multiple reasons. In this blog, we will discuss the areas where the rapid development and deployment of AI is affecting precarious work in various ways.

The Human Work Behind Data

Data is the foundation of AI, and it is generated by people. Each day, approximately 328.77 million terabytes of data are created. The work done to produce data is almost never compensated, although large corporations profit from it massively. How could companies compensate their users for the data that they use and profit from? What kind of laws or policies could be created to solve this problem, and how would they work on a global scale? These are still questions that we are grappling with as a society.

Data is the fuel of AI. There is a stark lack of control and ownership over data, which raises serious ethical considerations, privacy chief among them, that are barely covered by inconsistent and often unenforced data protection laws.

What should be done about this aspect of the human work behind AI? It could be seen as a form of ghost work. Should it be compensated? How would this be implemented? Some companies are taking initiatives here and paying users very small amounts for their data, but the issue is much bigger than that. The data collected is used to target advertising at users, which means further exploitation. Not to mention that it can be used to feed AI that replaces human work, so your own data, which you aren’t paid for, could be used to put you out of a job while also being used to sell you things.

In 2017, it was estimated that the value of the personal details we give up to companies like Facebook came to about $1,000 per person per year, and this is quickly rising. (Madsbjerg, 2017) The exact value of our data is unclear, even to Google, but it is often used for targeted advertising, as well as being sold to data brokers who sell it as a commodity to advertisers, retailers, marketers, government agencies, and other data brokerages. According to a report by SecurityMadeSimple.org, the data brokerage industry generates over $200 billion of revenue yearly and continues to grow annually. Another report, by Maximize Market Research, states that the data broker market was valued at $257.16 billion in 2021, with total revenue expected to grow at 4.5% per year from 2022 to 2029, reaching nearly $365.71 billion. When will we as users and providers of data ever see any of these profits?
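Those two market figures are internally consistent: compounding 4.5% annual growth over the eight years from 2021 to 2029 reproduces the quoted 2029 number almost exactly. A quick sanity check (my own arithmetic, not from the report):

```python
# Sanity-check the quoted projection: $257.16B in 2021, growing 4.5%
# per year through 2029 (8 compounding periods).
base = 257.16            # USD billions, 2021 valuation
rate = 0.045             # 4.5% annual growth
years = 2029 - 2021      # 8 years

projected = base * (1 + rate) ** years
print(f"${projected:.2f}B")  # -> $365.71B, matching the report
```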

One proposed answer would be a universal basic income based on the data we produce. This idea is not new; it was first presented by Jaron Lanier in his 2013 book, Who Owns the Future?, which criticizes the tech industry’s accumulation and valuation of consumer data without acknowledging any monetary debt to the people who create and give away all this free information.

The Exploitation of Workers in AI Moderation and Content Labeling

Now, we will leave that can of worms crawling around and discuss the low-paid gig work that goes into moderating AI systems, such as scanning content for violence and hate speech or endlessly labeling data. These jobs are often outsourced to workers in the Global South, who are repeatedly exposed to traumatic content and receive little compensation. This is highly exploitative work, with little room for workers to organize and demand workers’ rights.

Take for example the story of Sama, which claims to be an “ethical AI” outsourcing company. Sama is headquartered in California and handles content moderation for Facebook. Its Kenya office pays its foreign employees a monthly pre-tax salary of around $528, which includes a monthly bonus for relocating from elsewhere in Africa. After tax, this amounts to around $440 per month. Based on a 45-hour work week, this equates to a take-home wage of roughly $2.20 per hour. Sama employees from within Kenya who are not paid the monthly relocation bonus receive a take-home wage equivalent to around $1.46 per hour after tax. (Perrigo, 2022) 
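As a back-of-envelope check (my own arithmetic, with an assumed 52 paid weeks per year rather than whatever exact method the report used), the monthly figures do convert to hourly wages in the ballpark of those quoted:

```python
# Rough monthly-to-hourly conversion for the Sama figures, assuming a
# 45-hour week and 52 paid weeks per year (my assumption, which is why
# the result lands near, not exactly on, the quoted ~$2.20).
monthly_take_home = 440.0        # USD, after tax, with relocation bonus
hours_per_month = 45 * 52 / 12   # = 195 hours
print(f"${monthly_take_home / hours_per_month:.2f}/hour")  # -> $2.26/hour
```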

Time published a report on Sama which detailed a failed worker uprising. The workers faced the trauma of viewing hundreds of horrific pieces of content every day, with the goal of determining within 50 seconds whether each was appropriate for Facebook, all while living hand-to-mouth on low salaries and without the appropriate support for this PTSD-inducing job. When workers organized in protest and planned a strike, high-paid executives flew in from San Francisco to ‘deal’ with the situation. They isolated the spearhead of the workers’ alliance and terminated him, making him look like a bully who forced 100 other workers to sign a petition against the company. (Perrigo, 2022) The real bullies got away with it, as the ultimate goal is to keep Facebook happy. It suits them to have low-waged workers with no other options suffer life-long trauma every day, all day long. But these workers need fair pay and workers’ rights. They need real support for their labor, which is what makes Facebook a safer space, with less hate speech and violent content. They deserve to have a voice.

Another example is Mechanical Turk, or MTurk, a marketplace for human-intelligence micro-tasks, such as tedious image labeling, that are extremely low-paid (with no guarantee of pay), offer poor labor protection, and involve high exploitation. As of December 2019, MTurk’s workers’ portal had 536,832 visitors, and although the work is demoralizing and pays pennies, many depend on it over no work at all. (Mehrotra, 2020) MTurk has been operating since 2005, still with no worker protections.

The Human Intervention Required for AI Systems: Case Studies from the Global South

Taking a deeper peek behind the curtain, we see that AI systems often require unseen human intervention and workarounds to operate effectively. This goes beyond the desks of technologists, and drives through the streets of nearly every city. 

One study looked into the operations of two startups, Gojek and Grab, which entered Jakarta in 2015 with the aim of digitizing the city’s motorbike taxi market. (Qadri & D’Ignazio, 2022) The researchers found that the platforms’ view of the city is idealized and flattened, with no consideration for frictions such as traffic, parking delays, or blocked roads. The routes assigned to drivers are often inappropriate or dangerous because the platform ignores these variables; local drivers develop workarounds that remain invisible to and unacknowledged by the platforms. The drivers know the safest ways through their own city, despite what the app says.

The authors compared this to Donna Haraway’s “god trick” (1988), because it places the viewer in the impossible position of a disembodied, all-knowing eye looking down at the city. (Qadri & D’Ignazio, 2022) The startups’ discourse often casts technology as the central organizer and optimizer of activity, while other forms of (human) intelligence are considered inferior. To further demonstrate the dehumanization at play, Grab’s blog refers to drivers as “supply” units that can be moved around like goods or trucks. (Garg et al., 2019) In reality, it is the human drivers, with their knowledge of the city in its ever-changing state, who make the taxi service work, but the “AI” technology gets all the credit and the company owners reap most of the profit.

Workers’ rights remain an issue across many new areas of precarious occupation behind AI. As stated in a paper on work regulations for platform food-delivery workers in Colombia, a neoliberal discourse on entrepreneurship is deepening the crisis of platform workers, who are characterized as “self-employed” and therefore excluded from the employment rights guaranteed to “employed workers” in local labor legislation. (Wood et al., 2019; Vargas et al., 2022, p. 38)

What is desperately needed is for people to care about people. AI has no capacity to actually care about people, even if it were based on human systems that did. Algorithms are programmed with the ultimate goal of promoting business. This leads to human workers being treated more and more like machines. Working under the control of algorithms, digital workers are excluded from the benefits of the value chain in which they are among the most important subjects. (Vargas et al., 2022, p. 34)

Discussion

In a Harvard Business Review article on the humans behind the curtain of AI, the authors spoke of the paradox of automation’s last mile: the ever-moving frontier of AI’s development. (Gray & Suri, 2017) This is all the more relevant today. As AI makes progress, it creates and destroys temporary labor markets for new types of human-in-the-loop tasks at a rapid pace.

Contract workers are needed to train algorithms to make important decisions about content. They are also responsible for making snap decisions about what stays on a site and what’s deleted. This is a new form of employment that should be valued. (Gray & Suri, 2017) However, this work is not only still largely invisible, but the workers are not valued and the work is unreliable, low-paid, and often traumatizing. 

Adrienne Williams, Milagros Miceli, and Timnit Gebru wrote an essay late last year which argued that the idea of a world where AI is the primary source of labor is still far from being realized. The push towards this goal has created a group of people performing what is called “ghost work”, a term introduced by anthropologist Mary L. Gray and computational social scientist Siddharth Suri. This refers to the human labor that is often overlooked and undervalued but is actually driving AI. Companies that have branded themselves “AI first” rely heavily on gig workers such as data labelers, delivery drivers, and content moderators who are underpaid and often subject to heavy surveillance. (Williams, Miceli, & Gebru, 2022)

Recommendations from Williams, Miceli, and Gebru:

  1. Funding for research and public initiatives which highlight labor and AI issues.

  2. Analysis of causes and consequences of unjust labor conditions of harmful AI systems.

  3. Consideration for the use of precarious crowdworkers to advance careers of AI researchers and practitioners and shift power into the hands of workers.

  4. Co-create research agendas based on workers’ needs.

  5. Support for cross-geographical labor organizing efforts.

  6. Ensuring that research findings are accessible to workers rather than confined to academic publications. 

  7. Journalists, artists and scientists can foster solidarity by drawing clear connections between harmful AI products and labor exploitation. (Williams, Miceli, & Gebru, 2022)

Recommendations from Gray and Suri:

  1. Require more transparency from tech companies that have been selling AI as devoid of human labor.

  2. Demand truth in advertising with regard to where humans have been brought in to benefit us.

  3. Recognize the value of human labor in the loop.

  4. Understand the training and support that informed their decision-making, especially if their work touches on the public interest. (Gray & Suri, 2017) 

Conclusion

I can’t stress enough the importance of acknowledging the human work behind AI. There is a need to ensure that those who contribute to the development of AI are fairly compensated and protected. When trust and safety are dismissed as ‘tradeoffs and sadness,’ with no one questioning whether the ends justify the means, some fundamental changes to the approach are necessary. We might even question the end goals while we are at it.

We need to be humanized. It is arguable that AI was started, back in the day, as an attempt to eventually replace human slavery. This is inherently problematic, as master/slave relations are built on exploitation, subjugation, and dehumanization, which extends to the workers behind AI and not just to the AI itself. Although there are many benefits to AI replacing, changing, or accompanying work, it must be done in a way that is not exploitative and is centered on the betterment of all people and the planet, not on a speed-race for AI.

While AI has the potential to revolutionize many industries, it is important to acknowledge the human work that goes behind its development and maintenance. From data collection to system maintenance, humans play a critical role in the AI ecosystem. It is essential that we recognize and value this work, and understand the real harms that are already happening around AI. 

It is easy to fear what AI may bring and how many jobs it will take. The reality is that most jobs will need to adapt to AI, and that AI is creating many new jobs at various skill levels. This would be much better news if it were something everyone could benefit from, instead of a product of exploitation and techno-solutionism.



Sources 

Garg, A., Yim, L. P., & Phang, C. (2019). Understanding supply & demand in ride-hailing through the lens of data. Grab Tech. Available at: https://engineering.grab.com/understanding-supply-demand-ride-hailing-data (accessed 6 October 2021).

Gray, M. L., & Suri, S. (2017). The humans working behind the AI curtain. Harvard Business Review. https://hbr.org/2017/01/the-humans-working-behind-the-ai-curtain

Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575-599.

Fagen, R. (2023). GPT4: Eldritch abomination or intern? A discussion with OpenAI — Integrity Institute. Integrity Institute. https://integrityinstitute.org/podcast/trust-in-tech-e19-eldritch-open-ai-gpt

Lanier, J. (2013). Who Owns the Future? Simon and Schuster.

Mehrotra, D. (2020, January 28). Horror Stories From Inside Amazon’s Mechanical Turk. Gizmodo. https://gizmodo.com/horror-stories-from-inside-amazons-mechanical-turk-1840878041

Perrigo, B. (2022, February 17). Inside Facebook’s African Sweatshop. Time. https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/

Qadri, R., & D’Ignazio, C. (2022). Seeing like a driver: How workers repair, resist, and reinforce the platform’s algorithmic visions. Big Data & Society, 9(2), 205395172211337. https://doi.org/10.1177/20539517221133780

Should tech companies pay us for our data? (2022, May 20). World Economic Forum. https://www.weforum.org/agenda/2018/12/tech-companies-should-pay-us-for-our-data/

Vargas, D. S., Castañeda, O. C., & Hernández, M. R. (2022). Technolegal Expulsions: Platform Food Delivery Workers and Work Regulations in Colombia. Journal of Labor and Society, 1–27. https://doi.org/10.1163/24714607-bja10009

Wood, A.J, Graham, M., Lehdonvirta, V. and Hjorth, I. “Good Gig, Bad Gig: Autonomy and Algorithmic Control in the Global Gig Economy.” Work, Employment and Society 33(1) (2019), 56–75. https://doi.org/10.1177/0950017018785616.

Williams, A., Miceli, M., & Gebru, T. (2022, December 10). The Exploited Labor Behind Artificial Intelligence. NOEMA. https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/

Comparative Analysis: Ubuntu Collectivism versus Western Ethics in AI Development

When AI technologies affect everyone globally, wouldn’t it be nice if they were built with the collective in mind?

In my last blog, I introduced the African collectivist philosophy of Ubuntu and how it could be applied to Artificial Intelligence (AI) ethics for the benefit of all, based on the works of Mhlambi (2020) and Gwagwa (2022). The word ubuntu means “human-ness” or “being human” in the Zulu and Xhosa languages of southern Africa. Here I dig deeper into some of the key concepts of Ubuntu which either parallel or stand in opposition to Western ethics such as utilitarianism, and address the flaws of individualism and why we should move away from it.

What draws me personally to Ubuntu as an ethical theory for AI Governance 

Learning about Ubuntu was a breath of fresh air, as Western ideals such as individualism never sat well with me. I confronted individualism in my master’s thesis research, but didn’t come across Ubuntu until rather recently, in connection with my work in AI ethics. It is exactly what I was looking for: an alternative ethical system which grounds personhood in how we are all connected, the idea that a person is a person through other people. It relates to mutual aid (Kropotkin, 1902) and to care in the grand sense: caring about how everything affects everything, not just oneself. The idea that this level of care and collectivism could be applied to AI ethics blew me away, and the papers I have read on it, especially one by Sabelo Mhlambi, really drove this home.

A snippet of my story 

Nearly five years ago, I chose to leave the Western world behind and live in South East Asia, after also spending time in Western Africa. My decision was fueled by the distasteful air of individualism in the West, which promotes greed and putting others down to get ahead. No amount of personal comfort could erase that ever-present feeling of disconnection I feel when in the US, Europe, or Australia. When I visit my hometown, everyone always asks me: why do I live so far away? It is a difficult question to answer, but I think it comes down to the isolation caused by individualism, which puts everyone in toxic competition with each other, in situations where your success means climbing over others. I look around and see the constant valuing of profit over life. The fact that AI has been born from this ideology is extremely problematic, as it has this baseline of individualism built in.

From my travels and living abroad, I have seen that the world is rich with diversity, and that diversity is a beautiful thing to be celebrated, not discriminated against. White men are not actually the majority in the world, yet everyone else is marginalized and minoritized. Women are minoritized, and we are over half of the population. The world has been running on systems that make zero sense. As we breathe life into artificial intelligence, a re-haul of how we relate to one another and the world around us is overdue. It is time to turn to non-Western-centric ideals and embrace the diversity of the world when deploying technologies that affect everyone globally.

The rest of this article will engage more deeply with Mhlambi’s work on utilizing Ubuntu as an ethical framework for AI governance moving forward, something I endorse completely. 

Ubuntu: an African value of the collectivism of communities

Alternative ethics systems such as Ubuntu are not currently included in the exclusive discourse on ethics and AI. The default is Western ethics, which is burdened with individualism and greed and is not adequate to address technological and algorithmic harms. (Mhlambi, 2020, p. 23) Individualism and greed also stand in opposition to Ubuntu’s foundations of interconnectedness, empathy, and generosity. (Mhlambi, 2020, p. 24) These are the values from which AI development would benefit immensely, and which would make individualistic values irrelevant. How can this be implemented for the governance of AI?

Ethical Leadership: Ubuntu promotes cooperation and helping each other

Ethical governance requires a closer look at leadership. Cooperation and participation are requirements of Ubuntu, particularly when it comes to leadership, as it rejects elite concentrations of power. (Mhlambi, 2020, pp. 15-16) The current leadership in AI consists of power concentrated among a few elites, which could get in the way of Ubuntu truly working. The Ubuntu saying “Inkosi yinkosi ngaba-Ntu” translates to “A leader derives power from the consent and will of the governed.” (Mhlambi, 2020, pp. 15-16) Government and other powers should act in service to the people. This is the purpose of leadership.

However, that is not what we see from most leaders. Following Ubuntu, rulership is collaborative: governance should be carried out in service to the people.

How do we make this value-shift happen and balance power structures?

Focusing on Inclusion to combat exclusion

Arthur Gwagwa suggested that there be more of a focus in research and policy work on “Ubuntu-based action guiding principles for all AI stakeholders.” (Gwagwa, 2022 p. 1) He gave an example of providing guidance to reconcile ethical dilemmas in AI design, including conflicting or competing cultural values. (Gwagwa, 2022 p. 1) This would support the notion of inclusivity that Ubuntu ethics would bring to AI design. 

Gwagwa went on to provide a useful definition of exclusion: “the inability to participate effectively in economic, social, political, and cultural life, and, in some characterizations, alienation and distance from the mainstream society.” (Duffy, 1995) (Gwagwa, 2022 p. 2) This is important to keep in mind, especially when thinking about digital identity.

Rationality vs. Relationality

While reading about Ubuntu and AI ethics, the comparison between rationality and relationality came up continually, centered on one question: how do we define personhood?

Personhood as rationality traditionally comes from a Western viewpoint, which is what has modeled machine intelligence, and “has always been marked by contradictions, exclusions, and inequality.” (Mhlambi, 2020) How do we situate what it means to be a person when contemplating “artificial or mechanical personhood”? (Mhlambi, 2020)

Looking to Western ethics, utilitarianism, which tends to be highly rationalizing, doesn’t always play out justly. Utilitarianism as applied to AI ethics aims to maximize what is good for people and minimize what is bad for them in the long run. (Shulman et al., 2009) (Gwagwa, 2022 p. 5) This ultimately still leaves some people excluded and disadvantaged, and they continue to be those who are perpetually marginalized.

Taking a bottom-up approach, African philosophy could address both the disproportionate negative effects of AI on people and work towards global equality and protections. (Mhlambi,  2020 p. 6)

Contrasting Collectivism and Individualism

Individualism, something I have butted heads with in my own research over the years, desperately needs to be challenged, as it has many flaws. Generally, individualism is the idea that the central point of value in society is the self-complete, autonomous individual. (Mhlambi, 2020, p. 7)

Mhlambi lists several flaws of individualism, including:

  1. Justification of inequality

  2. Power asymmetries and exploitation which disallow upward social mobility

  3. Worsening of inequalities due to lack of upward mobility

  4. Increased inequality and the prioritized private interests of those in power cause cycles of political instability (Mhlambi, 2020, pp. 7, 10)

These harms are ultimately produced by any system based on individualistic principles. (Mhlambi,  2020 p. 10) My question is, does individualism really fit in with any ethical system? When will we realize that individualism is unethical?

Ethics beyond the human-centered world

Western ethics is at best people-centered; it ignores any connection between us and the Earth and instead allows for its exploitation. “Mastery of nature” was the Enlightenment’s goal of self-realization, which some say has today transformed into “the mastery of bits and cyberspace.” (Kennington, 1978) (Mhlambi, 2020 p. 9) These ideals “tolerate the inevitability of inequality.” (Mhlambi, 2020 p. 9) Justifying exploitation is incredibly unethical, and for this ideal to be adopted by AI could cause unimaginable problems, when technologies should instead be used to support and protect humanity and the Earth.

What is currently valued in AI development?

One of the most highly valued, and most problematic, aspects of AI development is speed, which perhaps shouldn’t be the most important thing. In the world of AI, speed can equate to success. It is said that similarity creates speed. However, like individualism, similarity has many flaws, including:

  1. Decreased diversity

  2. Filter bubbles

  3. Potential discrimination, e.g., by race or gender (Mhlambi, 2020, p. 20)

This ties in with individualism stemming from the monoculture of Silicon Valley, which promotes excess and greedy competition as self-interest takes center stage. (Murobe, 2000) (Mhlambi, 2020, p. 9) Theoretically, this goes against Western ethics as well, which would have us act in the best interest of all humans and not put ourselves above others. In reality, however, it does not work that way, arguably because of individualism.

So where do we turn? In a unique balance absent from both Western individualism and Eastern communism, we find African Ubuntu, which “seeks to avoid the worst of extreme systems.” (Mhlambi, 2020, p. 17)

Ubuntu is about human connectedness with other people, living things and the universe at large.

Ubuntu views humanity in terms of how a person relates in meaningful ways to other persons. “A shared humanity, a oneness and indissoluble interconnectedness between all humans, needs to be the paramount human identity and positionality from which we organize our societies, and produce the technological advances that maintain social harmony.” (Mhlambi, 2020, p. 21)

This is not to say that there is no concept of the individual within Ubuntu ideology. Rather, the individual has many important roles to play. These include:

  1. Doing one’s part to maximize public good

  2. Affirming the dignity of all and restoring breaks in harmony 

  3. Creating the necessary environment for all to thrive (Mhlambi,  2020 p. 24)

My conclusions from Mhlambi’s work lead me to reiterate that inclusion cannot be complete as long as inequality exists. (Mhlambi,  2020 p. 24) 

Ubuntu is a philosophy that encourages us to help each other: Can we apply that to building AI?

Technology is not lacking ethics. Societal values are ever-present in the creation and use of technology; what matters is which ethics are included. This gives us a clear view of where society’s ethics stand: with those in power. Compassion, equity, and relationality are missing, and that is a problem. If action is taken to shift to these crucial values of Ubuntu and collectivism, the change could start with AI and radiate out to benefit everyone, as well as the planet.

“Personhood must be extended to all human beings, informed by the awareness that one’s personhood is directly connected to the personhood of others.” (Mhlambi,  2020 p. 7)

Resources

Duffy, K. (1995). Social Exclusion and Human Dignity in Europe: Background Report for the Proposed Initiative by the Council of Europe (Strasbourg: Council of Europe)

Gwagwa, A.E. (2021). Africa’s contribution to an intercultural reflective turn in the ethics of technology in the era of disruption. https://www.academia.edu/51050494/Africas_contribution_to_an_intercultural_reflective_turn_in_the_ethics_of_te

Gwagwa, A., Kazim, E., & Hilliard, A. (2022). The role of the African value of Ubuntu in global AI inclusion discourse: A normative ethics perspective. In Patterns (Vol. 3, Issue 4). Cell Press. https://doi.org/10.1016/j.patter.2022.100462

Kennington, R. “Descartes and Mastery of Nature.” In: Spicker, S.F. (ed.) Organism, Medicine, and Metaphysics. Philosophy and Medicine, vol 7. Springer, Dordrecht, 1978.

Kropotkin, Piotr Alexeievich. Mutual Aid: A Factor of Evolution. New York: McClure Phillips and Co., 1902.

Mhlambi, S. (2020). From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance. Carr Center for Human Rights Policy Discussion Paper, Harvard Kennedy School.

Shulman, C., Jonsson, H., and Tarleton, N. (2009). Which consequentialism? Machine ethics and moral divergence. Asia-Pacific Conf. Comput. Philos. 23–25. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.363.2419&rep=rep1&type=pdf

Murobe, M.F. ‘Globalization and African Renaissance: An ethical reflection’, in Problematising the African Renaissance, E. Maloka and E. Le Roux (eds). Pretoria: Africa Institute of South Africa, 2000, pp. 43–67.

Introduction to Ubuntu Collectivism Theory Applied to AI Ethics

Justice, inclusivity, and solidarity: can we consider these to be universal values?

 

These are some of the main values of the Sub-Saharan African philosophy of Ubuntu, which crystallizes many of the core ethics we find across cultures, such as the age-old golden rule: do unto others as you would have them do unto you. In essence, it is seeing one’s humanity in the humanity of others. (Gwagwa, 2022 p. 2)

 

In order to work in a values-first manner, Ubuntu can be useful for informing artificial intelligence (AI) ethics, with an emphasis on the inclusivity that is key to AI principles and guidelines meant to apply universally. (Gwagwa, 2022 p. 1) Sub-Saharan Africa has historically been excluded from the benefits of past industrial revolutions, as “... its people and their resources and aspirations have been objectified through slavery, colonialism, imperialism, and neo-colonialism.” (Gwagwa, 2022 p. 2) Could Ubuntu inform AI ethics in an effort to create a more inclusive future?

 

One of the core principles of Ubuntu is: “Umu-Ntu ngumu-Ntu nga ba-Ntu” – A person is a person through other persons. What this means is that how one relates to others is interconnected with one’s personhood and measure of ethics. Under this notion, relationality is emphasized, and the successes and failures of individuals are equally the successes and failures of the community. (Mhlambi,  2020 p. 15-16)

 

The way ethics is measured in Ubuntu is through how a person relates to others, as well as to the environment and all other interdependent parts. Ubuntu can be described as relational personhood, where relationality means accepting the interconnectedness of others while recognizing their individuality, and more generally the connection of people, nature, and the spiritual. (Mhlambi, 2020, p. 13) We could take the classic saying about raising children, that it takes a village, as opposed to the individual family units found in Western cultures: this is a practical example of Ubuntu. One would not ignore a misbehaving child; any nearby adult would reprimand them, something that would rarely happen in Western cultures. Another example comes from an Ubuntu proverb: “Wadlula ngendl’isakhiwa kayibeka qaza” (He passed by a hut being built and did not tie a knot), meaning you would not walk by a house being built without lending a hand. (Mhlambi, 2020, p. 14)

 

When someone is acting ethically, they are said to “have Ubuntu” or considered “unoBuntu.” Someone acting unethically, by only considering themselves and being distant or unhelpful to others, is thought to not have Ubuntu, or be “akala ubu-Ntu.” If the word Ubuntu is broken down, “Ubu” stands for “a state of being and becoming” and “Ntu” in essence means “the idea of a continuous being or becoming a person oriented towards cosmic unity and creative purpose.” (Mhlambi,  2020 p. 13-14)

 

The question is, what can we learn from Ubuntu when thinking through ethics for AI? Relational ethics matters here because such powerful technology affects people and the world around us, and it raises several questions. How does AI affect people and the world, and why is it important to have a relational ethics for AI? And how do values in different parts of the world play a role in relational, ethical AI development?

 

AI is shaped by dominant economic, political, and social inequalities, fueled by neocolonial thought and resulting in assaults on human dignity. This can be countered by postcolonial African philosophy when creating AI. (Mhlambi, 2020) Greater inclusion and diversity in the global discourse on AI ethics is non-negotiable, and we should be collecting the best tools we can to achieve it. Ubuntu is especially helpful for the inclusion of African voices. (Gwagwa, 2021) (Gwagwa, 2022 p. 5) Africanist scholars stress the importance of collective identity in the struggles of African peoples, (Hall, 2012) (Gwagwa, 2022 p. 5) and this must remain an ongoing consideration as technology affects everyone globally.

“Postcolonial African philosophy’s relevance to the ethics of artificial intelligence is that, as a response to the traumatic encounter between the African world and European modernity, it puts in clear view modernity’s dependency on marginalization and exposes the weaponization of rationality veiled as moral benevolence.” (Eze, 1997) (Mhlambi, 2020 p. 6) By starting from a point of relationality, things that are ultimately harmful to fellow human beings and the world around us cannot be rationalized.

 

A consensus was reached at UN Global Pulse convenings in Ghana and Tunisia (Pizzi & Romanoff, 2020): the Global North’s mistakes in developing technologies could be a lesson for Africa to learn from rather than repeat. First, formulate a set of values to guide technology, as opposed to treating values as an afterthought. “Africans advocated for the need for human control of technology and the promotion of human values, something which has been reactionary rather than proactive in global principles.” (Fjeld & Nagy, 2020) (Gwagwa, 2022 p. 4)

By linking one person’s personhood to the personhood of others, Ubuntu reconciles ethical limitations of rationality as personhood. One cannot be rational when one is only considering oneself. “Rationality is not an individual product or endeavor of a consistent formal system but is a result of interconnected consistent formal systems. Rationality is thus a product of relationality.” (Mhlambi, 2020 p. 3)

Can computers understand relationality? Computers have difficulty with social contexts, particularly racial and gender norms. Automated systems with access to all of this data end up perpetuating racism and gender stereotypes, because the data cannot interpret itself, never mind inform how to respond to and avoid moral dilemmas. (Mhlambi, 2020, p. 4)

Mhlambi (2020, p. 8) lists five general critiques of automated decision-making systems (ADMS), each describing a direct violation of Ubuntu:

1) the exclusion of marginalized communities and their interests in the design, development, decision making, and funding of ADMS

2) biases resulting in the selection of features in ADMS and biases entrenched in the data that generate these systems

3) power asymmetries worsened by the use of ADMS

4) dehumanization that occurs from the commodification of our digital selves

5) the centralization of the resources and power necessary in designing and using ADMS.

Solutions would start by correcting these violations at a fundamental level, and at all points throughout AI, machine learning and ADMS development, production, use and application. 

Here is a list of suggestions from Sabelo Mhlambi for embedding the values of Ubuntu going forward:

1) Address the harms to the climate caused by the cloud computing that much of ADMS relies on. (Greenpeace, 2010)

2) Normalize the eradication of inequality through the participation of the most disenfranchised at the start of creating technology.

3) Use data which powers ADMS for public good.

4) Make data publicly available whilst protecting privacy and promoting societal wellbeing.

5) Treat community data as intellectual property, with the ability to be licensed or revoked from online platforms. 

6) Fund and provide access to technical skill sets for the most disenfranchised. 

7) Allow users to directly shape the way they receive recommendations from algorithms. 

8) Tailor technology companies’ recommendations according to agreed upon social ideals which are based on human dignity and social cohesion. (Mhlambi,  2020 p. 25)

Ubuntu is just one representation of non-Western ethics that decentralizes individualism and recenters the importance of relationality and inclusion. It is sometimes difficult to understand when we have been so overexposed to individualism and the rationality that comes from putting the individual above all else. However, by looking for ethical systems outside of the Western world, perhaps the development of technology that affects everyone could benefit more than just the few, and break cycles of colonialism for good. 

Resources

Eze, Emmanuel Chukwudi. Postcolonial African Philosophy: A Critical Reader. Cambridge, Mass.: Blackwell, 1997.

Fjeld, J., and Nagy, A. (2020). Principled Artificial Intelligence: mapping consensus in ethical and rights-based approaches to principles for AI. https://cyber.harvard.edu/publication/2020/principled-ai

Greenpeace. “Make IT Green: Cloud Computing and Its Contribution to Climate Change.” 2010.

Gwagwa, A.E. (2021). Africa’s contribution to an intercultural reflective turn in the ethics of technology in the era of disruption. https://www.academia.edu/51050494/Africas_contribution_to_an_intercultural_reflective_turn_in_the_ethics_of_te

Gwagwa, A., Kazim, E., & Hilliard, A. (2022). The role of the African value of Ubuntu in global AI inclusion discourse: A normative ethics perspective. In Patterns (Vol. 3, Issue 4). Cell Press. https://doi.org/10.1016/j.patter.2022.100462

Mhlambi, S. (2020). From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance. Carr Center for Human Rights Policy Discussion Paper, Harvard Kennedy School.

Pizzi, M., and Romanoff, M. (2020). Governance of AI in Global Pulse’s policy work: zooming in on human rights and ethical frameworks. https://www.unglobalpulse.org/2020/12/governance-of-ai-in-global-pulses-policywork-zooming-in-on-human-rights-and-ethical-frameworks/


Is Data Mining Ethical?

The idea of data mining is one that sends a chill down my spine. The collection and use of data that relies on peoples’ production and sharing of personal and sensitive information has a certain creep factor. Specifically, when data mining is used in ways inconsiderate of the people behind the data, the creep factor increases dramatically.

The media, researchers, and non-governmental organizations continue to access and reuse sensitive data without consent from Indigenous governing bodies. This happened recently amidst the COVID-19 pandemic, when tribal data in the United States was released by government entities without the permission or knowledge of the tribes themselves. There is an effort to address gaps in data and the data invisibility of Indigenous peoples in America; however, this can result in unintentional harm while ignoring Indigenous sovereign rights, which need to be protected. (RDA COVID-19 Indigenous Data WG, 2020)

In this article, we will review case studies on data mining in African communities and on contact tracing for COVID-19 in South Korea and Brazil, to demonstrate how ethical AI strategies work in different scenarios and cultures and to impart a global perspective. These projects appear beneficial on the surface; however, they embody a colonial nature that is deeply embedded in our world structures. We will discuss these cases within the framework of top-down, bottom-up, and hybrid models of ethics in artificial intelligence (AI), which you can read more about here. Before we review the case studies, we will define what data mining means in this context.

Defining Data Mining

What is the difference between data sharing and data mining?

Data sharing implies that there is an owner of the data and openness or agreement to share information. Data mining gives the impression of taking without asking, with no acknowledgment or compensation, while the miners of the data are the sole beneficiaries. However, can data sharing and data mining be one and the same?

Data mining is closely tied to data colonialism, an enactment of neo-colonialism in the digital world which uses data as a means of power and manipulation. Manipulation runs rampant in this age of misinformation, which we have seen heavily at play in recent times as well as throughout history, playing on emotions to steer public opinion.

Case Study 1: Data Mining in the African Context

Data sharing is a prime example of conflicting principles of AI ethics. On one hand, it is the epitome of transparency and a crucial element to scientific and economic growth. On the other hand, it brings up serious concerns about privacy, intellectual property rights, organizational and structural challenges, cultural and social contexts, unjust historical pasts, and potential harms to marginalized communities. (Abebe et al., 2021)

The term data colonialism can be used to describe some of the challenges of data sharing, or data mining, which reflect the historical and present-day colonial practices such as in the African and Indigenous context. (Couldry and Mejias, 2019) When we use terms such as ‘mining’ to discuss how data is collected from people, the question remains, who benefits from the data collection?

The use of data can paradoxically be harmful to the communities it is collected from. Establishing trust is challenging due to the historical actions of data collectors while mining data from Indigenous populations. What barriers prevent data from benefiting African and Indigenous people? We must address the entrenched legacies of power disparities and the challenges they present for modern data sharing. (Abebe et al., 2021)

One problematic example is non-government organizations (NGOs) that try to ‘fix’ problems for marginalized ethnic groups and can end up causing more harm than good. For instance, a Europe-based NGO attempted to address the problem of access to clean potable water in Buranda, while testing new water accessibility technology and online monitoring of resources. (Abebe et al., 2021)

The NGO failed to understand the community’s perspective on the true central issues and potential harms. Sharing the data publicly, including geographic locations, put the community at risk: collective privacy was violated and trust was lost. In the West we often think of privacy as a personal concern; however, collective identity is of great importance to a multitude of African and Indigenous communities. (Abebe et al., 2021)

Another case study, in Zambia, observed that up to 90% of health research funding comes from external funders, a bargaining position that leaves little room for Zambian scholars. In the study, power imbalances were reported in everything from funding to agenda-setting, data collection, analysis, interpretation, and reporting of results. (Vachnadze, 2021) This example further underscores that trust cannot be built on such imbalances of power.

Many of these research projects lead with good intentions, yet there is a lack of forethought about the ethical use of data during and after the project, which can create unforeseen and irreparable harms to the wellbeing of communities. This creates a hostile environment for building relationships of respect and trust. (Abebe et al., 2021)

To conclude this case study, we can pose the ethical question: is data sharing good or beneficial? First and foremost, local communities must be the primary beneficiaries of responsible data-sharing practices. It is important to specify who benefits from data sharing and to make sure that it does no harm to the people behind the data.

Case Study 2: Data Sharing for Contact Tracing during COVID-19

Contact tracing for the COVID-19 pandemic is another example of a complex ethical case of data collection.

Contact tracing can be centralized or non-centralized, which directly relates to top-down and bottom-up methods of data collection. Depending on the country and government, some have taken a more centralized top-down approach, and some have utilized a hybrid approach of government recommendations and bottom-up implementation via self-reporting.

The centralized approach was deployed in South Korea, where, by law and for the purposes of infectious disease control, the national authority is permitted to collect and use the information of all COVID-19 patients and their contacts. In 2020, Germany and Israel tried and failed to adopt centralized approaches, due to a lack of exceptions for public health emergencies in their privacy laws. Getting past such legal barriers can be a lengthy and complex process, not conducive to standing up a centralized contact tracing system during an outbreak. (Sagar, 2021)

Justin Fendos, a professor of cell biology in South Korea, wrote that in supporting the public health response to COVID-19, Korea had the political willingness to use technological tools to their full potential. The Korean government had collected massive amounts of transaction data to investigate tax fraud even before the COVID-19 outbreak. Korea’s government databases hold records of virtually every credit card and bank transaction, and this information was repurposed during the outbreak to retroactively track individuals. In Korea, 95% of adults own a smartphone, and many use cashless tools everywhere they go, including on buses and subways. (Fendos, 2020) Hence, contact tracing in Korea was extremely effective.

Public opinion about surveillance in Korea has been stated to be overwhelmingly positive. Fatalities in Korea due to COVID-19 were a third of the global average as of April 2020, when it was also said that they were one of the few countries to have successfully flattened the curve. There have been concerns, despite the success, regarding the level of personal details released by health authorities, which have motivated updated surveillance guidelines for sensitive information. (Fendos, 2020)

Non-centralized approaches to contact tracing are essentially smartphone apps that track proximal coincidence using less invasive data collection methods. These approaches have been adopted by many countries; they don’t face the same cultural and political obstacles as centralized approaches, avoiding legal pitfalls and legislative reform. (Sagar, 2021) For this and other reasons, contact tracing doesn’t always work the way it did in Korea.
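To make the contrast concrete, here is a minimal sketch of the rolling-token idea behind decentralized proximity tracing, in the spirit of protocols like DP-3T: phones exchange short-lived random tokens, and exposure matching happens on the device rather than in a central database. The class and token format are my own simplifications, not any real app’s implementation.

```python
import hashlib
import os

# Simplified illustration of decentralized proximity tracing: each phone
# broadcasts short-lived random tokens and remembers the tokens it hears;
# no central authority learns who met whom.

class Phone:
    def __init__(self):
        self.my_tokens = []        # tokens this phone has broadcast
        self.heard_tokens = set()  # tokens received from nearby phones

    def broadcast_token(self):
        # A fresh random token; rotating tokens prevents long-term tracking.
        token = hashlib.sha256(os.urandom(16)).hexdigest()[:16]
        self.my_tokens.append(token)
        return token

    def hear(self, token):
        self.heard_tokens.add(token)

    def check_exposure(self, published_infected_tokens):
        # Matching happens locally: only infected users' tokens are published.
        return bool(self.heard_tokens & set(published_infected_tokens))

alice, bob = Phone(), Phone()
bob.hear(alice.broadcast_token())   # Alice and Bob are near each other
infected = alice.my_tokens          # Alice tests positive and uploads her tokens
print(bob.check_exposure(infected)) # True: Bob is notified without revealing identities
```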

One study focused on the three Brazilian cities with the most deaths from COVID-19 through the first half of 2021. Its methodology applied data mining as a public health management tool, identifying climate and air-quality variables related to the number of COVID-19 cases and deaths, and provided forecasting models of new daily cases and deaths in the three cities studied. However, the researchers noted that case counting in Brazil was affected by severe underreporting due to low testing, as well as technical and political problems, including the spread of misinformation; the study stated that cases may have been up to 12 times greater than investigations indicated. (Barcellos et al., 2021)
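As a toy illustration of that kind of model, and not the authors’ actual pipeline, the sketch below regresses daily case counts on lagged climate and air-quality variables; all numbers are synthetic, and the seven-day lag is an assumption made for the example.

```python
import numpy as np

# Toy sketch: forecast daily COVID-19 cases from lagged climate and
# air-quality variables using ordinary least squares. Synthetic data only.
rng = np.random.default_rng(0)
days = 120
temp = 25 + 5 * np.sin(np.arange(days) / 10) + rng.normal(0, 1, days)  # degrees C
pm25 = 20 + rng.normal(0, 4, days)                                     # ug/m3
cases = 100 + 3 * pm25 + 2 * (30 - temp) + rng.normal(0, 10, days)

LAG = 7  # assume exposure today shows up in case counts a week later
X = np.column_stack([np.ones(days - LAG), temp[:-LAG], pm25[:-LAG]])
y = cases[LAG:]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # fit on lagged predictors
forecast = np.array([1.0, temp[-1], pm25[-1]]) @ coef
print(f"forecast cases in {LAG} days: {forecast:.0f}")
```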

We can see from these examples that contact tracing has worked very differently in countries that have contrasting systems of government, and the same approach wouldn’t work for all countries. A lack of trust comes into play as well, and contact tracing didn’t work in many places simply because people didn’t trust the technology or the government behind it, often reflecting judgments based on misinformation. In Brazil, the spread of misinformation was coming from the government, which doesn’t inspire trust.

In America, a July 2020 study found that 41% of respondents said they would likely not speak on the phone or text with a public health official, and 27% were unlikely to share the names of recent contacts (McClain, 2020); these refusals create a bottleneck in contact tracing adoption, since both are vital steps. While there are legitimate privacy concerns around contact tracing, there is a contradiction, even hypocrisy, in the prolific use of social media apps and the amount of data freely shared on them daily; yet participation in a tracking system for a global pandemic, built on fundamental principles of protecting personal privacy, can be seen as a threat.

Conclusion

Data ethics issues across the planet are complex, and this article offers only a couple of examples of areas of use and tension. We must keep in mind that data represents real people, and that collecting or mining data from Indigenous communities can work to their detriment, often unbeknownst to the data scientists and companies who reap the benefits. This is not a new story, just a new setting, and we must be cognizant of these instances of colonialism that still penetrate our relations across cultures and across the world.

You can stay up to date with Accel.AI’s workshops, research, and social impact initiatives through our website, mailing list, meetup group, Twitter, and Facebook.

www.accel.ai

Join us in driving #AI for #SocialImpact initiatives around the world!

References

Abebe, R., Aruleba, K., Birhane, A., Kingsley, S., Obaido, G., Remy, S. L., & Sadagopan, S. (2021). Narratives and Counternarratives on Data Sharing in Africa. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 329–341. https://doi.org/10.1145/3442188.3445897

Anane‐Sarpong, E., Wangmo, T., Ward, C. L., Sankoh, O., Tanner, M., & Elger, B. S. (2018). “You cannot collect data using your own resources and put It on open access”: Perspectives from Africa about public health data‐sharing. Developing World Bioethics, 18(4), 394–405. https://doi.org/10.1111/dewb.12159

Barcellos, D. da S., Fernandes, G. M. K., & de Souza, F. T. (2021). Data based model for predicting COVID-19 morbidity and mortality in metropolis. Scientific Reports, 11(1), 24491. https://doi.org/10.1038/s41598-021-04029-6

Bezuidenhout, L., & Chakauya, E. (2018). Hidden concerns of sharing research data by low/middle-income country scientists. Global Bioethics, 29(1), 39–54. https://doi.org/10.1080/11287462.2018.1441780

Chilisa, B. (2012). Indigenous Research Methodologies. SAGE.

Couldry, N., & Mejias, U. A. (2019). Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject. Television & New Media, 20(4), 336–349. https://doi.org/10.1177/1527476418796632

Fendos, J. (2020). How surveillance technology powered South Korea’s COVID-19 response. Brookings.

Hooker, S. (2018). Why “data for good” lacks precision. Medium.

Maxmen, A. (2019). Can tracking people through phone-call data improve lives? Nature, 569(7758), 614–617. https://doi.org/10.1038/d41586-019-01679-5

McClain, C. (2020, November 13). Key findings about Americans’ views on COVID-19 contact tracing. Pew Research Center.

RDA COVID-19 Indigenous Data WG. “Data sharing respecting Indigenous data sovereignty.” In RDA COVID-19 Working Group (2020). Recommendations and guidelines on data sharing. Research Data Alliance. https://doi.org/10.15497/rda00052

Sagar, R. (2021). What is Hybrid AI? Analytics India Magazine.

Walsh, A., Brugha, R., & Byrne, E. (2016). “The way the country has been carved up by researchers”: ethics and power in north–south public health research. International Journal for Equity in Health, 15(1), 204. https://doi.org/10.1186/s12939-016-0488-4

Walter, M., Kukutai, T., Carroll, S. R., & Rodriguez-Lonebear, D. (2020). Indigenous Data Sovereignty and Policy (M. Walter, T. Kukutai, S. R. Carroll, & D. Rodriguez-Lonebear, Eds.). Routledge. https://doi.org/10.4324/9780429273957

Big Nudging and Misinformation in the Era of COVID-19

There are many worries about the Information Age, or the Misinformation Age in which we find ourselves, and how living in the digital world is driving us further from democracy and self-determination. In my last post, I introduced neo-colonialism, which is enforced through data colonialism and digital colonialism. In this post, I review these terms as a precursor for discussing how Big Nudging and misinformation in the era of COVID-19 are affecting our free will. I argue that if we can become aware of these forces and work together, perhaps we can move toward democracy rather than away from it. To do this, we can take some tips from the US Surgeon General, which I review below.

Data Mining and Big Nudging Help to Spread Misinformation

Data mining is a term used to describe the act of data collection in a manner that is reminiscent of colonialism. Data colonialism is when Big Data is collected and used as a way to control or manipulate populations. (Couldry and Mejias, 2019) Digital colonialism is a parallel term that covers the use of digital technologies being used for social, economic, and political domination. (Kwet, 2021)

Big Nudging could be considered data colonialism in action, although who holds the reins of power is not always clear. Is Big Nudging merely a tool for control, or can it also be used for good?

The concept of nudging is akin to ‘influence with an agenda’: external forces shaping individual or group behaviors and decisions. Nudge theory was first made popular by Richard Thaler, a behavioral economist, and Cass Sunstein, a legal scholar. Nudging coaxes behavior without forcing it, tweaking the environments in which we make decisions by utilizing insights about our mental processes. It can be used on family, say to remind a loved one to take their daily medicine, or on a larger scale, by requiring people to opt out of organ donation as opposed to opting in. The idea is that we still have the choice, without economic or other incentives and without forced mandates. (Yates, 2020) When this psychological tool relies on Big Data, it is called Big Nudging. This can be subtle and dangerous when people are unaware that they are being nudged, wrongly believing that they are acting of their own free will.

Political campaigners are massive culprits here, combining profiling with big nudging: they identify which demographic groups individuals belong to and gather data on which issues matter most to them in order to procure support for their propositions. Big nudging has been strongly suspected of being used in many large political campaigns, such as Brexit and the 2016 US presidential election. (Wong, 2019)

“The term “big nudging” has emerged to represent using big data and AI to exploit psychological weaknesses to steer decisions — creating problems such as damaging social cohesion, democratic principles, and even human rights.” (Vinuesa et al., 2020, p. 3)

Big Nudging plays on our emotions, and works almost too well, especially with spreading misinformation. This may explain why one study found that false news stories were 70% more likely to be shared than true stories (Vosoughi et al. 2018), and why they often go viral. During the pandemic, nudging has been used alongside mandates for things like mask-wearing and social distancing, with varying results. (Dudás & Szántó, 2021) Some efforts were indeed used for good, such as handwashing campaigns, however, the threats of Big Nudging spreading misinformation appear to outweigh the benefits.

What can be done about Misinformation for COVID-19?

Recently, the Surgeon General of the United States, Dr. Vivek H. Murthy, put out a report on the dangers of misinformation about COVID-19 as a public health concern. As a next step, Murthy issued a request for information “...on the impact and prevalence of health misinformation in the digital information environment during the COVID-19 pandemic.” (Lesko, 2022)

In the report, Murthy listed several reasons for the rapid spread of misinformation, as well as calls to action for a whole-of-society effort to combat misinformation for the pandemic and beyond. This is extremely useful and could help to curb Big Nudging on multiple fronts.

Here are the reasons misinformation tends to spread so quickly on online platforms:

  1. The emotional and sensational nature heightens psychological responses like anxiety and produces a sense of urgency to react and share.

  2. Incentivization for likes, comments, etc. rewards engagement over accuracy.

  3. Popularity and similarity to previous content are favored by algorithms, which can cause confusion and reinforce misunderstanding (Murthy, 2021) — see the sketch after this list.
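To illustrate the third point, here is a toy ranking function, my own illustration rather than any platform’s actual algorithm: when posts are scored by engagement and similarity to what a user already consumes, accuracy never enters the score, so sensational content rises.

```python
# Illustrative toy feed ranking: scoring posts by engagement and similarity
# to a user's past activity surfaces sensational, familiar content over
# accurate, novel content. All posts and weights are invented.

posts = [
    {"title": "Sober correction of a rumor", "engagement": 40,  "similarity": 0.2},
    {"title": "Outrage-bait health claim",   "engagement": 900, "similarity": 0.9},
    {"title": "Public health guidance",      "engagement": 120, "similarity": 0.4},
]

def feed_score(post, w_engagement=1.0, w_similarity=200.0):
    # Both weights are made up; the point is that accuracy appears nowhere.
    return w_engagement * post["engagement"] + w_similarity * post["similarity"]

for post in sorted(posts, key=feed_score, reverse=True):
    print(f"{feed_score(post):7.1f}  {post['title']}")
```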

Distrust of the government and/or the healthcare system can further cause misinformation to flourish. It is especially prevalent in areas of significant societal division and political polarization, and among those who have experienced racism or other inequities, misinformation can spread even more easily. (Murthy, 2021)

The US healthcare system is privatized and has shown bias along socioeconomic lines and against minorities, so people’s mistrust of it is not difficult to understand; however, the over-reliance on emotionally charged misinformation leaves everyone confused, not knowing what to trust or believe. A recent analysis found that a widely used algorithm in US hospitals, which helps manage the care of about 200 million people each year, had been systematically discriminating against black people. However, by finding variables other than healthcare costs to calculate individual medical needs, bias was reduced by 84%. This shows that more diversity is needed on algorithm design teams, and that more testing needs to be done before such algorithms are used in people’s lives. (Ledford, 2019)
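The mechanism is easy to demonstrate. The sketch below is a synthetic illustration of the proxy-label problem reported in that analysis, not the hospitals’ actual model: when spending is used as a stand-in for health need, a group that faces barriers to care appears “cheaper” and is under-selected, however well the predictor fits the proxy.

```python
import numpy as np

# Synthetic illustration of the proxy-label problem: ranking patients by
# predicted *cost* understates the needs of groups with less access to care.
rng = np.random.default_rng(1)
n = 10_000
need = rng.gamma(2.0, 2.0, n)                  # true health need (unobserved)
group_b = rng.random(n) < 0.3                  # group facing access barriers
access = np.where(group_b, 0.5, 1.0)           # barriers halve spending per unit of need
cost = need * access + rng.normal(0, 0.5, n)   # observed spending (the proxy label)

top_by_cost = np.argsort(cost)[-1000:]         # "highest-risk" 1,000 by cost proxy
top_by_need = np.argsort(need)[-1000:]         # the truly neediest 1,000
print("share of group B selected by cost proxy:", group_b[top_by_cost].mean().round(3))
print("share of group B among truly neediest:  ", group_b[top_by_need].mean().round(3))
# Scoring by cost selects far fewer group-B patients than their need warrants;
# switching the label toward direct health measures narrows the gap.
```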

How can we address health misinformation, and hopefully prevent misinformation in other spheres going forward?

The Surgeon General listed some recommendations for taking action:

Equip Americans with the tools to identify misinformation, make informed choices about what information they share, and address health misinformation in their communities, in partnership with trusted local leaders.

Expand research that deepens our understanding of health misinformation, including how it spreads and evolves; how and why it impacts people; who are most susceptible; and which strategies are most effective in addressing it.

Implement product design and policy changes on technology platforms to slow the spread of misinformation.

Invest in longer-term efforts to build resilience against health misinformation, such as media, science, digital, data, and health literacy programs and training for health practitioners, journalists, librarians, and others.

Convene federal, state, local, territorial, tribal, private, nonprofit, and research partners to explore the impact of health misinformation, identify best practices to prevent and address it, issue recommendations, and find common ground on difficult questions, including appropriate legal and regulatory measures that address health misinformation while protecting user privacy and freedom of expression (Murthy, 2021)

The US Surgeon General provided many tips for healthcare workers, educators, journalists, tech companies, governments, and the public on how to combat health misinformation, including an emphasis on creating resilience to misinformation. (Murthy, 2021) Misinformation exists independently of colonialism in all of its forms, yet has been used as a tool to keep people controlled and to nudge people towards decisions that feed systems of control. These systems have been adopted by the algorithms that direct what we see online, and our own emotions do the rest of the work.

My question is this: can we apply Dr. Murthy’s advice to decolonize ourselves and the digital world, building resistance to misinformation and Big Nudging and truly making our own democratic decisions, for the pandemic and beyond? Can we learn from all of this and move forward stronger, armed with the knowledge that systems made to benefit people but built like businesses, such as the US healthcare system, are not working for us, and democratically call for better systems that truly serve all people? If we can figure out how to combat misinformation and Big Nudging, perhaps we can move toward democracy rather than away from it. To do that, we must educate ourselves and learn to recognize what is false and manipulative, call it out, shut it out, and move on.

You can stay up to date with Accel.AI’s workshops, research, and social impact initiatives through our website, mailing list, meetup group, Twitter, and Facebook.

Join us in driving #AI for #SocialImpact initiatives around the world!

References

Couldry, N., & Mejias, U. A. (2019). Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject. Television & New Media, 20(4), 336–349. https://doi.org/10.1177/1527476418796632

Dudás, L., & Szántó, R. (2021). Nudging in the time of coronavirus? Comparing public support for soft and hard preventive measures, highlighting the role of risk perception and experience. PLOS ONE, 16(8). https://doi.org/10.1371/journal.pone.0256241

Gramacho, W., Turgeon, M., Kennedy, J., Stabile, M., & Mundim, P. S. (2021). Political Preferences, Knowledge, and Misinformation About COVID-19: The Case of Brazil. Frontiers in Political Science, 3. https://doi.org/10.3389/fpos.2021.646430

Kwet, M. (2019). Digital colonialism: US empire and the new imperialism in the Global South. Race & Class, 60(4), 3–26. https://doi.org/10.1177/0306396818823172

Ledford, H. (2019). Millions of black people affected by racial bias in health-care algorithms. Nature, 574(7780), 608–609. https://doi.org/10.1038/d41586-019-03228-6

Lesko, M. (2022). Impact of health misinformation in the digital information … Retrieved March 10, 2022, from https://www.federalregister.gov/documents/2022/03/07/2022-04777/impact-of-health-misinformation-in-the-digital-information-environment-in-the-united-states

Lucero, V. 2022. From CTA/CTT to voter tracing? risk of data misuse in the Philippines. (February 2022). Retrieved February 16, 2022 from https://engagemedia.org/2022/philippines-contact-voter-tracing/

Murthy, V. H. (2021). Confronting health misinformation — hhs.gov. hhs.gov. Retrieved March 10, 2022, from https://www.hhs.gov/sites/default/files/surgeon-general-misinformation-advisory.pdf

Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., Felländer, A., Langhans, S. D., Tegmark, M., & Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1), 233. https://doi.org/10.1038/s41467-019-14108-y

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359, 1146–1151. http://doi.org/10.1126/science.aap9559

Wong, S. 2019. Filter bubbles and big nudging: Impact on Data Privacy and Civil Society. (September 2019). Retrieved February 22, 2022 from http://www.hk-lawyer.org/content/filter-bubbles-and-big-nudging-impact-data-privacy-and-civil-society#:~:text=Similar%20to%20filter%20bubbles%2C%20big%20nudging%20also%20involves,of%20nudge%20with%20the%20use%20of%20Big%20Data.

Yates, T. (2020, March 13). Why is the government relying on nudge theory to fight coronavirus? . The Guardian. Retrieved March 12, 2022, from https://www.theguardian.com/commentisfree/2020/mar/13/why-is-the-government-relying-on-nudge-theory-to-tackle-coronavirus



An Introduction to the Ethics of Neo-Colonialism

There are many frameworks for thinking about and describing ethics applied to Artificial Intelligence, but my writing on this topic thus far has lacked consideration of colonialism, which at its core and in practice is completely devoid of ethics. Colonialism is a deeply rooted world system of power and control that plays out in ways that become normalized, yet are far from anything that could be considered ethical. In today’s world, which relies so heavily on technology, colonialism within data and the digital world is a fundamental problem.

There is a strong separation between the dominant powers and the people and communities that they profit from. This is often framed by seeing the Global North as separate from the Global South. In reality, there is a separation between urban centers which are largely in the Global North, and everyone else, with those in the Global South bearing the brunt of the power imbalance. We use the terminology of Global North and Global South broadly, but this review references examples not specific to this framework. One such instance regarding digital colonialism affecting Inuit communities in Northern Canada is key to our exploration. This case study will appear in a future article.

There are two ways neo-colonialism is being discussed in sociotechnical language: digital colonialism and data colonialism. These are parallel terms and may be considered one and the same, however, we will look at how they have been described independently.

When digital technology is used for social, economic, and political domination over another nation or territory, it is considered digital colonialism. Dominant powers own the digital infrastructure and knowledge, perpetuating a state of dependency within a hierarchy that situates Big Tech firms at the top and hosts an extremely unequal division of labor; this further defines digital colonialism. (Kwet, 2021)

Data colonialism addresses Big Data in the context of the predatory practices of colonialism. Capitalism depends on data from the Global South, which represents a new type of appropriation attached to current infrastructures of connection. (Couldry and Mejias, 2019, p. 1) We see a pulley system of interdependence; however, the concentration of power is clear.

We cannot address colonialism without also addressing capitalism. Colonialism came first, and historical colonialism, with its violence and brutality, paved the way for capitalism. In order to decolonize, we need to fully overhaul the systems of capitalism and consumerism. We cannot add on little bits of law or regulations to govern data and the digital world in an attempt to decolonize. We need a full system change, and it is going to take a lot of work.

We are at the dawn of a new stage of capitalism, following the path laid out by data colonialism, just as historical colonialism paved the way for industrial capitalism. We can’t yet imagine what this will look like, but we know that at its core is the appropriation of human life through data. (Couldry and Mejias, 2019 P1–2)

This is a problem not only because it creates global inequality; capitalism also threatens the natural environment. Its structural imperative rests on an insatiable appetite for growth and profit, causing overconsumption of Earth’s material resources, not to mention overheating the planet. (Kwet, 2020) Mining cobalt in the Congo has detrimental effects not just on the earth but on people’s lives, relying on harsh child labor. (Lawson, 2021) The Congo supplies over 50% of the world’s cobalt, an essential mineral found in cell phones, computers, and electric vehicles, as well as in lithium batteries, which will see increasing demand alongside renewable energy systems. (Banza Lubaba Nkulu et al., 2018) Data mining causes harm to people and the environment not only in how data is collected but also in how it is stored long-term: data centers alone account for 2% of human carbon emissions, rivaling the airline industry. There are plans and efforts to lower emissions from data centers, which need to happen across industries, alongside efforts to address the underlying dependence created by capitalism and consumerism.

“What decolonial thinking, in particular, can help us grasp is that colonialism — whether in its historic or new form — can only be opposed effectively if it is attacked at its core: the underlying rationality that enables continuous appropriation to seem natural, necessary, and somehow an enhancement of, not a violence to, human development.’’ (Couldry and Mejias, 2019 P16)

Conclusion

This is merely an introduction to the topics of data colonialism and digital colonialism. In future posts, we will provide examples that explore various corners of the world and the different impacts of digital and data colonialism, including data mining, case studies in the African Indigenous context, and the scenario most topical for many readers: contact tracing for the COVID-19 pandemic. Within data mining, we will discuss whether and how data mining differs from data sharing, and contextualize data mining alongside resource mining from the Earth.

Further examples include the impact of internet usage on Indigenous communities such as the Inuit, as well as communities in South America whose local knowledge is waning due to the influence of digital colonialism. In order to have truly ethical AI, there needs to be a large shift in the ethics of society, and the decolonization of data and the digital world is a good starting point.

References

Abebe, R., Aruleba, K., Birhane, A., Kingsley, S., Obaido, G., Remy, S. L., & Sadagopan, S. (2021). Narratives and Counternarratives on Data Sharing in Africa. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 329–341. https://doi.org/10.1145/3442188.3445897

Banza Lubaba Nkulu, C., Casas, L., Haufroid, V., De Putter, T., Saenen, N. D., Kayembe-Kitenge, T., Musa Obadia, P., Kyanika Wa Mukoma, D., Lunda Ilunga, J.-M., Nawrot, T. S., Luboya Numbi, O., Smolders, E., & Nemery, B. (2018, September). Sustainability of artisanal mining of cobalt in DR Congo. Nature sustainability. Retrieved February 12, 2022, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6166862/

Couldry, N., & Mejias, U. A. (2019). Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject. Television & New Media, 20(4), 336–349. https://doi.org/10.1177/1527476418796632

Data Centers. Data Centers | Better Buildings Initiative. (2021). Retrieved February 12, 2022, from https://betterbuildingssolutioncenter.energy.gov/sectors/data-centers

Fendos, J. (2020). How surveillance technology powered South Korea’s COVID-19 response. Brookings.

Kwet, M. (2021, May 6). Digital colonialism: The evolution of US empire. Longreads. Retrieved 08/02/2022 from https://longreads.tni.org/digital-colonialism-the-evolution-of-us-empire

Lawson , M. F. F. (2021). The DRC mining industry: Child labor and formalization of small-scale mining. Wilson Center. Retrieved February 12, 2022, from https://www.wilsoncenter.org/blog-post/drc-mining-industry-child-labor-and-formalization-small-scale-mining#:~:text=Of%20the%20255%2C000%20Congolese%20mining,own%20tools%2C%20primarily%20their%20hands.

Walter, M., Kukutai, T., Carroll, S. R., & Rodriguez-Lonebear, D. (2020). Indigenous Data Sovereignty and Policy (M. Walter, T. Kukutai, S. R. Carroll, & D. Rodriguez-Lonebear, Eds.). Routledge. https://doi.org/10.4324/9780429273957

Young, J. C. (2019). The new knowledge politics of digital colonialism. Environment and Planning A: Economy and Space, 51(7), 1424–1441. https://doi.org/10.1177/0308518X19858998





Hybrid Models of Ethics in AI

Hybrid approaches are a blend of top-down and bottom-up methodologies for AI ethics. In this article, we dive deeper into hybrid models of ethics for AI and give two examples of how they can be applied. We will explore why hybrid models are more hopeful than top-down or bottom-up methodologies on their own for ethical AI development, and ask questions regarding what problems they may face in the future.

First, we will delve into MIT’s Moral Machine as one example of hybrid ethics being taught to systems for self-driving vehicles. Then we will explore a study of hybrid ethics trained on ethical medical situations.

We conclude this exploration by further examining the meaning and construct of hybrid ethics for AI while linking the case studies as an exercise in exploring the potential positive and negative impacts of hybrid ethical AI approaches.

How do we define a hybrid model of ethics for AI?

A hybrid model of top-down and bottom-up ethics for AI starts from a base of rules or instructions, but is then also fed data to learn from. Real-world human ethics are complex, and a hybrid approach may minimize the limitations of top-down and bottom-up approaches to machine ethics, combining rule-based cognition with protracted ethical learning. (Suresh et al., 2014)

Hybrid AI combines the most desirable aspects of bottom-up approaches, such as neural networks, and top-down approaches, also referred to as symbolic AI. Neural networks extract patterns from huge combined data sets; rule-based systems can then manipulate and retrieve that information, utilizing algorithms to work with symbols. (Nataraj et al., 2021) Research has observed the complementary strengths and weaknesses of bottom-up and top-down strategies. Recently, a hybrid program synthesis approach has been developed that improves top-down inference by utilizing bottom-up analysis for web data extraction. (Raza et al., 2021) When we apply this to ethics and values, top-down approaches emphasize ethical concerns imposed from outside the entity, while bottom-up approaches address the cultivation of implicit values arising from within it.
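As a minimal sketch of what this combination can look like in code, my own illustration rather than any published system, the example below has a bottom-up learned scorer propose actions while top-down rules veto anything that violates a hard constraint; the rule set, the fields, and the scoring function are all invented for the example.

```python
# Hybrid ethics sketch: top-down rules constrain a bottom-up learned scorer.

HARD_RULES = [
    lambda action: action["risk_of_harm"] < 0.2,  # top-down: never accept high harm risk
    lambda action: action["consent"],             # top-down: require consent
]

def learned_score(action):
    # Stand-in for a trained model's utility estimate (the bottom-up component).
    return action["predicted_benefit"] - action["risk_of_harm"]

def choose(actions):
    # Filter by the rules first, then rank the survivors by the learned score.
    permitted = [a for a in actions if all(rule(a) for rule in HARD_RULES)]
    if not permitted:
        return None  # defer to a human when no action passes the rules
    return max(permitted, key=learned_score)

candidates = [
    {"name": "aggressive", "predicted_benefit": 0.9, "risk_of_harm": 0.5, "consent": True},
    {"name": "cautious",   "predicted_benefit": 0.6, "risk_of_harm": 0.1, "consent": True},
]
print(choose(candidates)["name"])  # "cautious": best score among rule-compliant options
```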

MIT’s Moral Machine as a Hybrid Model for AI Ethics

MIT’s Moral Machine is a hybrid model of AI ethics. It is an online judgment platform, aimed at citizens from around the world, that portrays the moral dilemmas of unavoidable accidents involving automated vehicles (AVs) and asks what choices individuals would have the vehicle make. Examples include whether to spare humans versus pets or pedestrians versus passengers, with many factors to consider, such as gender, age, fitness, and social status. The Moral Machine collects this data and maps it regionally, comparing homogeneous vectors of moral preferences in order to inform the engineers and policymakers developing AVs and to improve trust in AI. (Awad et al., 2018) This research is a hybrid of top-down and bottom-up because it collects data from citizens in a bottom-up manner while also considering top-down morals, principles, and fundamental rules of driving.

Example from the Moral Machine, showing the choice between hitting the group on the left or the right. Which would you choose?

However, if the data shows that most people prefer to spare children over a single older adult, would it then become more dangerous for an elderly individual to walk around alone? What if we were to see a series of crashes that avoided groups of schoolchildren but ran over unsuspecting lone elders? The situations given in the simulations force a choice between two scenarios, each resulting in unavoidable death. These decisions are made from the comfort of one’s home and may be made differently in the heat of the moment. Is it better to collect these decisions this way, versus observing what people do in real scenarios? Where would a researcher acquire such data for training purposes? Would police accident reports or insurance claims data offer insights?

It is useful to collect this data; however, it must also be viewed alongside other considerations. Real-life scenarios will not always be so black and white. I personally despise the ‘trolley problem’ and emulations of it, which make us choose who deserves to live and who will be sacrificed. We may think we would hit one person to save a group of people, but who would truly want to be making that decision? It feels awful to be in this position; however, that is the point of the simulation. In order to build trust in machines, ordinary people need to make these decisions to better understand their complexity and ramifications. Considering that the MIT Moral Machine has collected data from over 40 million people, does this take the responsibility away from any single individual?

What they found was that although there are differences across countries and regions of the globe, there is a general preference to spare human lives over animals, spare more lives over fewer, and spare younger lives over older. Dividing the world into clusters, trends emerged across the Western, Southern, and Eastern groupings. For instance, in the Eastern cluster, there was more of a preference for sparing the law-abiding over the law-breaking, and in the Southern cluster, there were stronger tendencies toward sparing women over men. Policymakers should note that differences abound between individualistic and collectivist cultures: individualistic cultures value sparing the many and the young, whereas collectivist cultures value the lives of elders. How these preferences will be understood and considered by policymakers is yet to be determined. (Awad et al., 2018)
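As a rough illustration of the kind of regional aggregation involved, here is a sketch that groups toy responses by cultural cluster and averages them. The rows and column names are invented and are not the actual dataset published by Awad et al. (2018).

```python
# Illustrative sketch of the kind of regional aggregation the Moral Machine
# study performs. The rows and column names are invented toy data, not the
# dataset published by Awad et al. (2018).
import pandas as pd

responses = pd.DataFrame({
    "cluster":      ["Western", "Western", "Eastern", "Eastern", "Southern", "Southern"],
    "spare_young":  [1, 1, 1, 0, 1, 1],   # 1 = chose to spare the younger group
    "spare_lawful": [0, 1, 1, 1, 0, 1],   # 1 = chose to spare the law-abiding
})

# Mean preference per cultural cluster: the "homogeneous vectors of moral
# preferences" compared across regions in the paper.
print(responses.groupby("cluster").mean())
```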

Hybrid Ethics for Moral Medical Machines

The second example we will examine is an experiment that used six simulations to test a moral machine designed to emulate the decisions of an ethical medical practitioner in specific situations, such as a patient refusing life-saving treatment. The decisions were based on the ethics defined by Buchanan and Brock (1989), and the moral machine would copy the actions of the medical professional in each circumstance. (van Rysewyk & Pontier, 2015)

It appears straightforward to run an experiment based on a theoretical case study, tell the machine what a human would do, and have the machine simply copy the same actions. However, how many simulations would it need to be trained on before it could be trusted to act on its own in real-life situations?

We may indeed come across patients refusing life-saving medication, whether due to irrational fear, religion, or a host of other reasons. Additional considerations include whether relatives or primary caregivers hold opposing opinions about the treatment. If there are financial constraints, there could be further complications that make each situation unique. A human medical professional would be able to consider all factors involved and approach each case anew. A moral machine would be basing predictions on past data, which may or may not be sufficient to address the unique needs of each real-life scenario.

Theoretically, the machine would learn more and more over time, and potentially even perform better at ethical dilemmas than a human agent. However, this experiment with six basic simulations doesn’t give the utmost confidence that we are getting there quickly. Nonetheless, it gives us a good example of hybrid ethics for AI in action, since it is acting within a rule-based system as well as learning from case-based reasoning.

In these cases, they are balancing the beneficence, non-maleficence, and autonomy of the patient. (Pontier & Hoorn, 2012; van Rysewyk & Pontier, 2015) Another paper on this topic added a fourth consideration: justice. It went on to describe a medical version of the trolley problem, where five people need organ transplants and one person in a coma has all the organs that the five people need to live. Would you kill one to save five? (Pontier et al., 2012)
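To make this balancing act concrete, here is a hedged sketch in the spirit of Pontier and Hoorn’s moral reasoner, which rates each candidate action against weighted moral duties and picks the highest score. The weights and ratings below are invented, and tuning them is precisely what changes the machine’s decision.

```python
# Sketch in the spirit of Pontier & Hoorn's (2012) moral reasoner: each
# candidate action is rated against three moral duties, and the highest
# weighted sum wins. All weights and ratings are invented examples.

DUTY_WEIGHTS = {"autonomy": 1.0, "beneficence": 1.0, "non_maleficence": 1.0}

# How strongly each action honors (+) or violates (-) each duty.
ACTIONS = {
    "accept_refusal":   {"autonomy": 1.0, "beneficence": -0.5, "non_maleficence": -0.5},
    "override_refusal": {"autonomy": -1.0, "beneficence": 1.0, "non_maleficence": 0.5},
}

def moral_score(ratings):
    return sum(DUTY_WEIGHTS[duty] * value for duty, value in ratings.items())

for action, ratings in ACTIONS.items():
    print(action, round(moral_score(ratings), 2))
# With these invented numbers the machine overrides the patient (0.5 > 0.0);
# raising the autonomy weight above 1.25 flips the decision.
```

The fragility is the point: the ethics lives almost entirely in the weights, and someone has to choose them.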

Conclusion

Could a hybrid of top-down and bottom-up methodologies be the best approach for ethical AI systems? Perhaps, but we must be aware of the challenges it presents. We must examine the problems posed by hybrid approaches when meshing a combination of diverse philosophies and dissimilar architectures. (Allen et al., 2005) Still, many agree that a hybrid of top-down and bottom-up would be the most effective model for ethical AI development. Simultaneously, we need to question the ethics of people, both as the producers and consumers of technology, whilst we assess morality in AI.

Additionally, while hybrid systems that lack effective or advanced cognitive faculties will appear to be functional across many domains, it is essential to recognize when additional capabilities will be required. (Allen et al., 2005)

Regarding MIT’s Moral Machine, it is interesting to collect this data in service of creating more ethical driverless vehicles and promoting public trust in them; however, its usefulness is yet to be proven. (Rahwan, 2018) AVs will be a part of our daily lives, so it is valuable to know that public opinions are being considered.

In the field of medicine, there is a broader sense of agreement on ethics than in something like business ethics. However, healthcare in the United States is a business, which causes decisions for and against the treatment of a patient to become ethically blurred.

AI development must be something that we keep a close eye on as it learns and adapts; we must note where it can thrive, and where a human is irreplaceable.

You can stay up to date with Accel.AI workshops, research, and social impact initiatives through our website, mailing list, meetup group, Twitter, and Facebook.

Join us in driving #AI for #SocialImpact initiatives around the world!


References

Allen, C., Smit, I., & Wallach, W. (2005). Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches. Ethics and Information Technology, 7(3), 149–155. https://doi.org/10.1007/s10676-006-0004-4

Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J. F., & Rahwan, I. (2018). The Moral Machine experiment. Nature, 563(7729), 59–64. https://doi.org/10.1038/s41586-018-0637-6

Buchanan, A. E., & Brock, D. W. (1989). Deciding for Others: The Ethics of Surrogate Decision Making. Cambridge University Press.

Pontier, M. A., & Hoorn, J. F. (2012). Toward machines that behave ethically better than humans do. Proceedings of the Annual Meeting of the Cognitive Science Society.

Pontier, M. A., Widdershoven, G. A. M., & Hoorn, J. F. (2012). Moral Coppélia: Combining ratio with affect in ethical reasoning.

Rahwan, I. (2018). Society-in-the-loop: programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5–14. https://doi.org/10.1007/s10676-017-9430-8

Suresh, T., Assegie, T. A., Rajkumar, S., & Komal Kumar, N. (2022). A hybrid approach to medical decision-making: diagnosis of heart disease with machine-learning model. International Journal of Electrical and Computer Engineering (IJECE), 12(2), 1831. https://doi.org/10.11591/ijece.v12i2.pp1831-1838

Vachnadze, G. (2021). Reinforcement learning: Bottom-up programming for ethical machines. Marten Kaas. Medium.

van Rysewyk, S. P., & Pontier, M. (2015). A hybrid bottom-up and top-down approach to machine medical ethics: Theory and data. In Machine Medical Ethics (pp. 93–110). Springer. https://doi.org/10.1007/978-3-319-08108-3_7

Wallach, W., Allen, C., & Smit, I. (2020). Machine morality: bottom-up and top-down approaches for modelling human moral faculties. In Machine Ethics and Robot Ethics (pp. 249–266). Routledge. https://doi.org/10.4324/9781003074991-23


Practical Principles for AI Ethics

Principles for AI are a top-down approach to ethics for artificial intelligence (AI). Recently, we have been seeing lists of principles for AI ethics popping up everywhere. They are very useful, not only for AI and its impact but also on a larger social level. Because of AI, people are thinking about ethics in a whole new way: How do we define and digest ethics in order to codify it?

Previously, I wrote an analysis of top-down and bottom-up approaches to ethics for AI, and then we explored the bottom-up method of reinforcement learning for teaching AI ethics. In this segment, we will address AI principles as a top-down method for working toward an ethical AI.

Ethical AI Principles

Principles can be broken into two categories: principles for the people who program AI systems, and principles for the AI itself.

Some of the principles for people, mainly programmers and data scientists, read like commandments. For instance, The Institute for Ethical AI & ML has a list of eight principles geared toward technologists. These include human augmentation, to keep a human in the loop; bias evaluation, to continually monitor bias; explainability and justification, to improve transparency; reproducibility, to ensure infrastructure that is reasonably reproducible; displacement strategy, to mitigate the impact on workers due to automation; practical accuracy, to align with domain-specific applications; trust by privacy, to protect and handle data; and data risk awareness, to consider data and model security. (The Institute for Ethical AI & Machine Learning)


The Responsible Machine Learning Principles:

  • Human Augmentation

  • Bias Evaluation

  • Explainability and Justification

  • Reproducibility

  • Displacement Strategy

  • Practical Accuracy

  • Trust by Privacy

  • Data Risk Awareness

Other lists of principles are geared toward the ethics of AI systems themselves and what they should adhere to. One such list, published by the National Institute of Standards and Technology (NIST), consists of four principles intended to promote explainability. The first of these is explanation: a system should provide evidence and reasons for its processes and outputs, be readable by a human, and explain its algorithms. The remaining three expand on this. The second recommends that explanations be meaningful and understandable to their recipients, with methods to evaluate that meaningfulness. The third principle is explanation accuracy: an explanation must correctly reflect the reason(s) the system generated its output. Finally, the fourth is knowledge limits: ensuring that a system only operates under conditions for which it was designed and does not give overly confident answers in areas where it has limited knowledge; for example, a system programmed to classify birds being used to classify an apple. (Marengo, 2021)
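The knowledge-limits principle lends itself to a small sketch: a classifier that abstains when its confidence is low, rather than confidently labeling the apple a bird. The threshold and class names below are illustrative assumptions, not part of the NIST guidance.

```python
# Illustrative sketch of the NIST "knowledge limits" principle: the system
# declines to answer when its confidence falls below a threshold, rather
# than giving an overly confident label outside its training domain.
import numpy as np

CLASSES = ["sparrow", "eagle", "penguin"]  # a bird classifier's whole world
THRESHOLD = 0.75                            # illustrative cutoff

def predict_with_limits(probabilities):
    top = int(np.argmax(probabilities))
    if probabilities[top] < THRESHOLD:
        return "out of scope: confidence too low for a reliable answer"
    return CLASSES[top]

print(predict_with_limits(np.array([0.05, 0.90, 0.05])))  # -> 'eagle'
print(predict_with_limits(np.array([0.40, 0.35, 0.25])))  # an apple, perhaps
```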

Many of the principles overlap across corporations and agencies. We can see a detailed graphic and information published by the Berkman Klein Center for Internet and Society at Harvard, found here. This gives a great overview of forty-seven principles that various organizations, corporations, and other entities are adopting, where they overlap, and their definitions. 

The authors provide many lists and descriptions of ethical principles for AI and categorize them into eight thematic trends: Privacy, Accountability, Safety and security, Transparency and explainability, Fairness and non-discrimination, Human control of technology, Professional responsibility, and Promotion of human values. (Fjeld and Nagy, 2020)

The Illusion of Ethical AI

One particular principle that I see as missing from these lists regards taking care of the non-human world. As Boddington states in her book, Toward a Code of Ethics for Artificial Intelligence (2018), “. . . we are changing the world, AI will hasten these changes, and hence, we’d better have an idea of what changes count as good and what counts as bad.” (Boddington, 2018) We will all have different opinions on this, but it needs to be part of the discussion. We can’t continue to destroy the planet while trying to create super AI and still be under the illusion that our ethical principles are saving the world. 

This is also a cautionary tale, for a lot of these principles are theoretically sound yet act as a veil that presents the illusion of ethics. This can be dangerous because it makes us feel like we are practicing ethics while business carries on as usual. Part of the reason for this is that the field of ethical AI development is so new, and not a lot of research has been done yet to ensure the overall impact is a benefit to society. “Despite the proliferation of these ‘AI principles,’ there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends.” (Fjeld and Nagy, 2020)

Principles are a two-sided coin. On one hand, making a stated effort to follow a set of ethical principles is good. It is beneficial for people to be thinking about doing what is right and ethical, and not just blindly writing code that could be detrimental in unforeseen ways.

Some principles are simple in appearance yet incredibly challenging in practice. For example, if we look at the commonly adopted principle of transparency, there is quite a difference between saying that algorithms and machine learning should be explainable and actually developing ways to see inside the black box. As datasets get bigger, this presents more and more technical challenges. (Boddington, 2018)
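One common way to peek inside the black box, offered here as a minimal sketch rather than a full solution, is to train a small, readable surrogate model to mimic an opaque one:

```python
# Minimal sketch of a global surrogate: fit an opaque model, then train a
# shallow decision tree to imitate its predictions and print readable rules.
# This approximates, rather than reveals, the black box's reasoning.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))  # imitate the black box, not the labels

print(export_text(surrogate, feature_names=[f"f{i}" for i in range(4)]))
print("fidelity:", surrogate.score(X, black_box.predict(X)))
```

The printed rules only approximate the black box, which is exactly why transparency remains hard as models and datasets grow.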

Furthermore, some of the principles can conflict with each other, which can land us in a less ethical place than where we started. For example, transparency can conflict with privacy, another popular principle. We can run into a lot of complex problems around this, and I hope to see this addressed quickly and thoroughly as we move forward.

Overall, we want these concepts in people's minds: Fairness, Accountability, and Transparency. These are the core tenets and namesake of the FAccT conference, which addresses these principles in depth. It is incredibly important for corporations and programmers to be concerned about the commonly addressed themes of bias, discrimination, oppression, and systemic violence. And yet… what can happen is that these principles make us feel like we are doing the right thing. How much does writing out these ideals actually change things?

The AI Ethical Revolution We Need

In order for AI to be ethical, A LOT has to change, and not just in the tech world. There seems to be an omission of the unspoken principles: the value of money for corporations and those in power, and of convenience for those who can afford it. If we are trying to create fairness, accountability, and transparency in AI, we need to do some serious work on society to adjust our core principles away from money and convenience and toward taking care of everyone's basic needs and the Earth.

Could AI be a tool that has the side effect of starting an ethics revolution? 

How do we accomplish this? The language that we use is important, especially when it comes to principles. Moss and Metcalf point out the importance of using market-friendly terms: if we want morality to win out, we need to justify the organizational resources necessary, when more often than not, companies will choose profit over social good. (Moss and Metcalf, 2019)

Whittlestone et al. describe the need to focus on areas of tension in AI ethics, and point out the ambiguity of terms like ‘fairness’, ‘justice’, and ‘autonomy’. The authors prompt us to question how these terms might be interpreted differently across various groups and contexts. (Whittlestone et al., 2019)

They go on to say that principles need to be formalized into standards, codes, and ultimately regulation in order to be useful in practice. Attention is drawn to the importance of acknowledging tensions between high-level goals of ethics, which can differ from and even contradict each other. In order to be effective, it is vital to include a measure of guidance on how to resolve different scenarios. And in order to reflect genuine agreement, there must be acknowledgment and accommodation of different perspectives and values as much as possible. (Whittlestone et al., 2019)

The authors then introduce four reasons that discussing tensions is beneficial and important for AI ethics:

  1. Bridging the gap between principles and practice

  2. Acknowledging differences in values

  3. Highlighting areas where new solutions are needed

  4. Identifying ambiguities and knowledge gaps

Each of these needs to be considered on an ongoing basis, as these tensions don't get solved overnight. In particular, creating a bridge between principles and practice is important, as I have argued above.

To wrap up, I will share this direct quote because it is incredibly profound:

“We need to balance the demand to make our moral reasoning as robust as possible, with safeguarding against making it too rigid and throwing the moral baby out with the bathwater by rejecting anything we can’t immediately explain. This point is highly relevant both to drawing up codes of ethics and to the attempts to implement ethical reasoning in machines.” (Boddington, 2018, pp. 18–19)

In conclusion, codes of ethics, or ethical principles for AI are important to have, and I like the conversations that are being started because of their existence. However, it can’t stop there. I am excited to see more and more ways that these principles are put into action and to see technologists and theorists working together to investigate ways to make them work. I would also hope that we can open minds to ideas beyond making money for corporations and creating conveniences, and rather toward addressing tensions and truly creating a world that works for everyone. 

Citations

ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT). (2021). Retrieved January 7, 2022, from https://facctconference.org/

AI Principles. Future of Life Institute. (2021, December 15). Retrieved December 30, 2021, from https://futureoflife.org/2017/08/11/ai-principles/

Berkman Klein Center Media Library. (n.d.). Retrieved January 8, 2022, from https://wilkins.law.harvard.edu/misc/ 

Boddington, P. (2018). Towards a Code of Ethics for Artificial Intelligence. Springer International Publishing.

Fjeld, J., & Nagy, A. (2020). Principled Artificial Intelligence. Berkman Klein Center. Retrieved December 30, 2021, from https://cyber.harvard.edu/publication/2020/principled-ai

Marengo, F. (2021). Four principles of explainable AI. LinkedIn. Retrieved January 7, 2022, from https://www.linkedin.com/posts/fmarengo_four-principles-of-explainable-ai-activity-6878970042382356480-updf/

Moss, E., & Metcalf, J. (2019, November 14). The ethical dilemma at the heart of Big Tech companies. Harvard Business Review. Retrieved December 13, 2021, from https://hbr.org/2019/11/the-ethical-dilemma-at-the-heart-of-big-tech-companies

The Institute for Ethical AI & Machine Learning. (n.d.). The machine learning principles: The 8 principles for responsible development of AI & machine learning systems. Retrieved December 30, 2021, from https://ethical.institute/principles.html

Whittlestone, J., Cave, S., Alexandrova, A., & Nyrup, R. (2019). The role and limits of principles in AI Ethics: Towards a … Retrieved December 13, 2021, from http://lcfi.ac.uk/media/uploads/files/AIES-19_paper_188_Whittlestone_Nyrup_Alexandrova_Cave.pdf.


Understanding Top-Down and Bottom-Up Ethics in AI Part 2

In part two of this investigation into top-down and bottom-up ethics in Artificial Intelligence (AI), I would like to explore three different angles, including the technical perspective, the ethical viewpoint, and through a political lens while also discussing individual and hybrid approaches to implementation.

The first angle is to understand the technical perspective, broken down into programming and applied machine learning: essentially, how to implement algorithmic policies with balanced data that will lead to fair and desirable outcomes.

The next angle is the theoretical ethics viewpoint: ethics can work from the top down, coming from rules, philosophies, and the like, or from the bottom up, looking at the behaviors of people and what is socially acceptable for individuals as well as groups, which varies by culture.

Third, I want to come back to my original hypothesis that top-down implied ethics dictated from the powers that be, and bottom-up could only be derived from the demands of the people. We might call this a more political perspective.

Finally, we will connect them all back together and split them apart again, into top-down, bottom-up, and hybrid models of how ethics functions for AI. This is an exercise in exploration to reach a deeper understanding. How ethics for AI works, in reality, is a blend of all of these theories and ideas acting on and in conjunction with one another.

Technical Machine Learning Top-Down vs Bottom-Up

The technical angle of this debate is admittedly the most foreign to me, however, in my research, I have found some basic examples that I hope are helpful.

“In simple terms and in the context of AI, it is probably easiest to imagine ‘Top-down AI’ to be based on a decision tree. For example, a call center chatbot is based on a defined set of options and, depending on the user input, it guides the caller through a tree of options. What we typically refer to as AI these days — for applications such as self-driving cars or diagnostic systems in health care — would be defined as ‘Bottom-up AI’ and is based on machine learning (ML) or deep learning (DL). These are applications of AI that provide systems with the ability to automatically learn and improve from experience without being explicitly programmed.” (Eckart, 2020)

Top-down systems of learning can be very useful for some tasks that machines can be programmed to do, like the call center example above. However, if they are not monitored, they can make mistakes, and it is up to us people to catch and correct those mistakes. They may also lack exposure to sufficient data to make a decision or prediction, leading to system failure. This is the value of having a ‘human in the loop’. Things get more complicated when we move into the more theoretical world of ethics.
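Eckart’s call-center example can be sketched in a few lines; the bot’s “intelligence” is nothing more than a hand-authored tree of options (the menu below is a toy, not a real product):

```python
# Toy sketch of a "top-down AI" call-center bot: a hand-authored decision
# tree of options, with no learning anywhere in the loop.
MENU = {
    "prompt": "Billing or technical support?",
    "options": {
        "billing": {"prompt": "Refund or invoice copy?", "options": {
            "refund": "Routing you to the refunds team.",
            "invoice copy": "Emailing your latest invoice.",
        }},
        "technical support": "Connecting you to an engineer.",
    },
}

def run(node):
    while isinstance(node, dict):                 # leaves are plain strings
        choice = input(node["prompt"] + " ").strip().lower()
        node = node["options"].get(choice, node)  # unknown input: ask again
    print(node)

# run(MENU)  # uncomment for an interactive session
```

Every path the bot can take was written by a person in advance, which is exactly why such systems fail the moment the caller steps outside the tree.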

Bottom-up basically defines machine learning. The system is given data to learn from, and it uses that information from the past to predict and make decisions for the future. This can work quite well for many tasks. It can also have a lot of flaws built-in because the world that it learns from is flawed. We can look at the classic example of harmful bias being learned and applied, for instance in who gets a job or a loan, because the data from the past reflects biased systems in our society.

Here we will mention the use of a hybrid model of top-down and bottom-up that has a base of rules or instructions but is also fed data to learn from as it goes. This method claims to be the best of both worlds, covering some of the shortcomings of both top-down and bottom-up models. For instance, self-driving cars can be programmed with laws and rules of the road, and can also learn from observing human drivers.
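A minimal sketch of that idea, with invented numbers: a bottom-up policy imitates observed human drivers, and a top-down rule layer clamps the result to the coded speed limit.

```python
# Sketch of the hybrid idea for a self-driving car: a bottom-up learned
# policy proposes a speed from observed driving, and a top-down rule layer
# clamps it to the coded law. Numbers and names are illustrative only.

def learned_policy(observed_speeds):
    """Bottom-up: imitate the average of observed human drivers."""
    return sum(observed_speeds) / len(observed_speeds)

def apply_road_rules(proposed_speed, speed_limit):
    """Top-down: the legal limit overrides whatever was learned."""
    return min(proposed_speed, speed_limit)

humans = [68.0, 72.5, 75.0]              # humans speeding on a 65 mph road
proposal = learned_policy(humans)         # ~71.8 mph, learned from bad examples
print(apply_road_rules(proposal, 65.0))   # -> 65.0: the rule layer wins
```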

Theoretical Ethics Top-Down vs Bottom-Up

Now let’s move on to talk about Ethics. The first thing we need to mention in this part of the analysis is that ethics has been historically made for people, and people are complex in how they understand and apply ethics, especially top-down ethics.

“Top-down ethical systems come from a variety of sources including religion, philosophy, and literature. Examples include the Golden Rule, the Ten Commandments, consequentialist or utilitarian ethics, Kant’s moral imperative and other duty-based theories, legal codes, Aristotle’s virtues, and Asimov’s three laws for robots.” (Wallach et. al, 2005)

The one exception on this list that doesn’t apply to people is, of course, Asimov’s laws, which were written precisely for robots. However, Asimov himself concluded that they were flawed.

“When thinking of rules for robots, Asimov’s laws come immediately to mind. On the surface, these three laws, plus a ‘zeroth’ law that he added in 1985 to place humanity’s interest above that of any individual, appear to be intuitive, straightforward, and general enough in scope to capture a broad array of ethical concerns. But in story after story Asimov demonstrates problems of prioritization and potential deadlock inherent in implementing even this small set of rules (Clark, 1994). Apparently, Asimov concluded that his laws would not work, and other theorists have extended this conclusion to encompass any rule-based ethical system implemented in AI (Lang, 2002).” (Wallach et. al, 2005)


A lot of science fiction doesn’t predict the future as much as warn us against its possibilities. Furthermore, the top-down approach is tricky for AI in different ways than how it is tricky for humans.

As humans, we learn ethics as we go, from those practiced by our families and community, how we react to our environment, and how others react to us. One paper made the case that “. . . while one can argue that individuals make moral choices on the basis of this or that philosophy, actual humans first acquire moral values from those who raise them, and then modify these values as they are exposed to various inputs from new groups, cultures, and subcultures, gradually developing their own personal moral mix.” (Etzioni, 2017)

This personal moral mix could be thought of as a hybrid model for ethics for humans. The question is, how easy and practical is it to take human ethics and apply them to machines?

Political Ethics Top-Down vs Bottom-Up

When I hear top-down, I imagine either government or Big Business/Big Tech figureheads sitting in a room making decisions for everyone else. This has always left a bad taste in my mouth. It is how our world works, in some ways more than others, and we are also seeing it in how Big Tech has approached ethics in AI.

Here are some examples of top-down ethics from the powers that be: “The Asilomar AI principles, developed in 2017 in conjunction with the Asilomar conference for Beneficial AI, outline guidelines on how research should be conducted, ethics and values that use of AI must respect, and important considerations for thinking about long-term issues (Future of Life Institute 2017). . . Around the same time, the US Association for Computing Machinery (ACM) issued a statement and set of seven principles for Algorithmic Transparency and Accountability, addressing a narrower but closely related set of issues (ACM US Public Policy Council 2017).” (Whittlestone et al. 2019)

We are also seeing some crowd-collected considerations about ethics in AI, and this is what I think of when I hear bottom-up: decisions being called for by the people. This is the grassroots ethics that I think we need to be paying attention to, especially the voices of marginalized and minoritized groups.

“Bottom-up data institutions are seen by some as mechanisms that could be revolutionary for rebalancing power between big tech corporations and communities. It was argued that there is a widespread assumption that bottom-up data institutions will always be benign and will represent everyone in society, and these assumptions underpin their promotion. It was discussed whether bottom-up data institutions are, by definition, only representative of the particular communities included within their data subjects rather than of general societal values.” (ODI, 2021)

This is an important point to keep in mind when thinking about bottom-up and grassroots ethics: there will always be different ethics coming from different groups of people, and the details of the applications of it are where the disagreements abound.

The Top-Down Method of AI Being Taught Ethics

Now we get to recombine all of the top-down angles together: The technical, the theoretical, and the political.

If we teach AI ethical core principles and expect it to live by human values and virtues, I imagine we will be sorely disappointed. There just isn’t a foreseeable way to make this work for everyone.

“Many of the principles proposed in AI ethics are too broad to be action-guiding. For example, ensuring that AI is used for “social good” or “the benefit of humanity” is a common thread among all sets of principles. These are phrases on which a great majority can agree exactly because they carry with them few if any real commitments.” (Whittlestone et al. 2019)

Furthermore, if these principles are being administered by Big Tech or the government, there could be a lot that slips by because it sounds good. In my previous article, we were working with the example of fairness. Fairness is something we can all agree is good, but we can’t all agree on what it means in practice. What is fair for one person or group could be really unfair to another.

“The strength of top-down theories lies in their defining ethical goals with a breadth that subsumes countless specific challenges. But this strength can come at a price: either the goals are defined so vaguely or abstractly that their meaning and application are subject for debate, or they get defined in a manner that is static and fails to accommodate or may even be hostile to new conditions.” (Wallach et. al, 2005)

A machine doesn’t implicitly know what ‘fairness’ means. So how can we teach it a singular definition when fairness holds a different context for everyone?
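A tiny worked example, with invented numbers, shows how two common formalizations of fairness can disagree about the very same predictions:

```python
# Worked toy example: the same predictions can satisfy one fairness
# definition and violate another. All data here is invented.
import numpy as np

# Two groups, A and B; y = ground truth, p = model decision (1 = approve).
y_a = np.array([1, 1, 0, 0]); p_a = np.array([1, 0, 1, 0])
y_b = np.array([1, 0, 0, 0]); p_b = np.array([0, 1, 1, 0])

# Demographic parity: equal approval rates -> satisfied (0.5 vs 0.5).
print("approval rates:", p_a.mean(), p_b.mean())

# Equal opportunity: equal true-positive rates -> violated (0.5 vs 0.0).
tpr = lambda y, p: p[y == 1].mean()
print("true-positive rates:", tpr(y_a, p_a), tpr(y_b, p_b))
```

Teaching a machine “fairness” means picking one of these definitions, and the pick itself is a value judgment.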


The Bottom-up Method of AI Being Taught Ethics

The bottom-up approach isn’t as easy to wrap up. Sometimes we see crowd-sourced (bottom-up politically) principles (top-down theoretically) being called for, and possibly a hybrid model (technically) for applied AI. If it were purely bottom-up, learning from the ethics of people, I fear disappointment would be the end result. We as humans haven’t quite mastered ethics, let alone standardized it into something codifiable.

One paper describes bottom-up approaches as “. . . those that do not impose a specific moral theory, but which seek to provide environments in which appropriate behavior is selected or rewarded. These approaches to the development of moral sensibility entail piecemeal learning through experience, either by unconscious mechanistic trial and failure of evolution, the tinkering of programmers or engineers as they encounter new challenges or the educational development of a learning machine.” (Allen et. al. 2005)

This is very, very challenging and time-consuming. And as we know, AI doesn’t learn as humans do; it lacks a solid foundation. Building on top of that and applying band-aid after band-aid is not going to help.

“Bottom-up strategies hold the promise of giving rise to skills and standards that are integral to the overall design of the system, but they are extremely difficult to evolve or develop. Evolution and learning are filled with trial and error — learning from mistakes and unsuccessful strategies. This can be a slow task, even in the accelerated world of computer processing and evolutionary algorithms.” (Allen et. al. 2005)

The Hybrid of Bottom-Up and Top-Down Ethics for AI

So if top-down is flawed and bottom-up isn’t promising, what about a hybrid model? “If no single approach meets the criteria for designating an artificial entity as a moral agent, then some hybrid will be necessary. Hybrid approaches pose the additional problems of meshing both diverse philosophies and dissimilar architectures.” (Allen et. al. 2005)

According to many experts, a hybrid model is the better choice. Rules and structure are helpful, but only to a point, and sometimes they contradict each other. AI is good at following rules, but it struggles around ethics, which are subjective and often contradictory.

Bringing it All Together

We have taken apart top-down and bottom-up ethics in AI in three ways: technically, theoretically, and politically. Then we took a step back and looked at top-down, bottom-up, and hybrid models of ethics for AI. Well, it still seems pretty messy, like we all need to be doing a lot more research and work in this area, but I hope that this has been helpful in understanding the various angles of this analysis. To leave us with a final thought: “Ethical issues are never solved, they are navigated and negotiated as part of the work of ethics owners.” (Moss and Metcalf, 2019)


You can stay up to date with Accel.AI workshops, research, and social impact initiatives through our website, mailing list, meetup group, Twitter, and Facebook.

Join us in driving #AI for #SocialImpact initiatives around the world!


Citations

Allen, C., Wallach, W., & Smit, I. (2005). Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches. Retrieved December 3, 2021, from https://www.researchgate.net/profile/Wendell-Wallach/publication/225850648_Artificial_Morality_Top-down_Bottom-up_and_Hybrid_Approaches/links/02bfe50d1c8d2c733e000000/Artificial-Morality-Top-down-Bottom-up-and-Hybrid-Approaches.pdf

Eckart, P. (2020, May 29). Top-down AI: The simpler, data-efficient AI. 10EQS. Retrieved December 13, 2021, from https://www.10eqs.com/knowledge-center/top-down-ai-or-the-simpler-data-efficient-ai/.

Etzioni, A., & Etzioni, O. (2017). Incorporating ethics into Artificial Intelligence — Philpapers. Retrieved November 30, 2021, from https://philpapers.org/archive/ETZIEI.pdf.

Google. (2021). #OPEN roundtable summary note: Experimentalism — le guin part 2. Google Docs. Retrieved December 13, 2021, from https://docs.google.com/document/d/1cMhm4Kz4y-l__2TQANClVVMLCd9X3X8qH3RfhAGghHw/edit?pli=1#.

Moss, E., & Metcalf, J. (2019, November 14). The ethical dilemma at the heart of Big Tech companies. Harvard Business Review. Retrieved December 13, 2021, from https://hbr.org/2019/11/the-ethical-dilemma-at-the-heart-of-big-tech-companies

Wallach, W., Smit, I., & Allen, C. (2005). Machine Morality: Bottom-up and Top-down Approaches for Modeling Human Moral Faculties. AAAI. Retrieved December 3, 2021, from https://www.aaai.org/Papers/Symposia/Fall/2005/FS-05-06/FS05-06-015.pdf

Whittlestone, J., Cave, S., Alexandrova, A., & Nyrup, R. (2019). The role and limits of principles in AI Ethics: Towards a … Retrieved December 13, 2021, from http://lcfi.ac.uk/media/uploads/files/AIES-19_paper_188_Whittlestone_Nyrup_Alexandrova_Cave.pdf.


Making the Case For and Against Ethics in AI

Ethics in AI is not a new concept. It is on everyone’s minds, from scientists to scholars, corporations, and users alike. Everyone seems to have an opinion, which collectively can’t quite be broken down into ‘for’ and ‘against’. So, how can we simplify debating the pros and cons in order to better understand the current state of ethics in AI?

I’d like to address the question of ethics in artificial intelligence not by focusing on what or who is in need of reform but through the lens of its systemic issues — which are truly a reflection of the larger systemic issues that prevail in our world today. As an anthropologist by training, I prefer to use a wide lens to observe complexities by zooming out, and zooming in again as needed, taking note of both macro and micro-social viewpoints.

In this article, and in all of my writing on this subject, my goal is to see AI used ultimately for the good of all people and the planet. There are a lot of changes that need to be made if we are to see a sustainable and harmonious future for the Earth and for humanity, and although these may seem impossible, I believe that using AI is the most promising way to begin implementing plans for a better future.

Why do I think this way? AI is increasingly embedded into everything we do and will continue to expand its influence in our lives. I hope to spend more time researching and highlighting the ways in which socially and ecologically responsible AI is already being practiced. For now, let’s focus on the seemingly simple question: Should ethics be built into artificial intelligence in the first place?

Are ethics necessary for AI?

The short answer is yes. To think otherwise would be unethical. But how? And whose ethics?

One article on the subject, “Economies of Virtue,” asks, “…is ethical AI possible in the current social systems, and if so, what is required of the engineering profession, company directors, users, policymakers, and others?” (Phan et al., 2021, p. 3)

The authors come to some valuable conclusions for getting through this sticky situation: “Redefining problems collectively, rather than through mere technical parameters, is a crucial first step. Researchers must recognize that attempts to reconcile a contradiction between ethics and commercial profit usually result in ethical products being shaped to consumer demand or the business needs of ‘end users.’ This demand comes in the form of equity, diversity, and fairness ‘outputs’ from Big Tech itself, and in the form of ethical assuage to reputational deficiency from other sites of Big Capital.” (Phan et al., 2021)

Now, let’s turn to the question at hand: What are the arguments for and against ethics in artificial intelligence?

Who would argue that, no, ethics shouldn’t be a part of AI? I’m not going to make the argument for bad actors or those who are profiting from AI’s misuse. Let’s rework the question: What are the criticisms of ethics in AI?

The Shortcomings of Ethics in AI

Douglas Rushkoff, well-known media theorist, author, and Professor of Media at the City University of New York wrote, “…the reasons why I think AI won’t be developed ethically is because AI is being developed by companies looking to make money — not to improve the human condition. So, while there will be a few simple AIs used to optimize water use on farms or help manage other limited resources, I think the majority is being used on people. . . My concern is that even ethical people still think in terms of using technology on human beings instead of the other way around. So, we may develop a ‘humane’ AI, but what does that mean? It extracts value from us in the most ‘humane’ way possible?” (Rainie et al., 2021)

Most people, I find, just want to live their lives comfortably and be free to make their own decisions. I support that. Often, people’s personal ethics contradict those of other individuals and society at large, and it is difficult in our current world to live completely within one’s own values. If AI is to learn through the observation of our own human ethics, I imagine it would come out quite confused.

Self-driving cars can show us some examples of potential ethical conflict. Etzioni and Etzioni pointed out that “…driverless cars could learn from the ethical decisions of millions of human drivers, through some kind of aggregation system, as a sort of groupthink or drawing on the wisdom of the crowds. One should note, however, that this may well lead cars to acquire some rather unethical preferences. . . If they learn what many people do, smart cars may well speed, tailgate, and engage in road rage . . . That is, observing people will not teach these machines what is ethical, but what is common.” (Etzioni & Etzioni, 2017)


There are others who question the practicality of AI learning ethics from humans. Marcel Fafchamps, Professor of Economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University, commented, “AI is just a small cog in a big system. The main danger currently associated with AI is that machine learning reproduces past discrimination — e.g., in judicial processes for setting bail, sentencing, or parole review. But if there hadn’t been discrimination in the first place, machine learning would have worked fine. This means that AI, in this example, offers the possibility of improvement over unregulated social processes.” (Rainie et al., 2021)

Given our apparent tendency for individualism over collectivism, who gets to decide what codes of ethics AI follows? If it is influenced by Big Tech, which is often the case, it will serve to support the ethics of some company, which generally has the primary goal of making money.


The value of profit over all else needs to shift. “Big Tech has transformed ethics into a form of capital — a transactional object external to the organization, one of the many ‘things’ contemporary capitalists must tame and procure” (Birch & Muniesa, 2020, as cited in Phan et al., 2021). Furthermore, “…By engaging in an economy of virtue, it was not the corporation that became more ethical, but rather ethics that became corporatized. That is, ethics was reduced to a form of capital — another industrial input to maintain a system of production, which tolerated change insofar as it aligned with existing structures of profit-making.” (Phan et al., 2021)

As I understand it, this is just the next phase of capitalism taking over ethics as something to be exploited and monetized as a form of currency. Values and ethics shouldn’t work that way. They aren’t something that can be monetized, exchanged, or assigned a standardized value. They are worth something on an entirely different scale, and should certainly not be for sale.

So, what can be done? Let’s turn to some of the opinions that are clearly arguing in favor of ethics being a necessary tenet in AI.

The Arguments for Ethics in AI

There are many reasons that AI needs ethical tune-ups. First, let’s try to understand what ethics really means when we are talking about ethics in AI.


As Danah Boyd, Founder and President of the Data & Society Research Institute (and Principal Researcher at Microsoft), explained, “We misunderstand ethics when we think of it as a binary, when we think that things can be ethical or unethical. A true commitment to ethics is a commitment to understanding societal values and power dynamics — and then working toward justice.” (Rainie et al., 2021)

“Most data-driven systems, especially AI systems, entrench existing structural inequities into their systems by using training data to build models. The key here is to actively identify and combat these biases, which requires the digital equivalent of reparations. . . These systems are also primarily being built within the context of late-stage capitalism, which fetishizes efficiency, scale, and automation. A truly ethical stance on AI requires us to focus on augmentation, localized context, and inclusion.” (Rainie et al., 2021)


I want to highlight the people and groups that are indeed working towards justice, using all of their powers for good. One example is the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), a yearly conference that brings together a diverse group of scholars to contemplate the fairness, accountability, and transparency of socio-technical systems, mainly AI. The work they are doing is incredible, transformative, and so necessary. (ACM FAccT, 2021)

Ch-ch-ch-changes

As we have seen throughout the article, there are times when people prioritize ethics for the wrong reasons, and usually, those reasons are financial.

Bias in AI is widely agreed to need attention and is known to be harmful, especially to already marginalized groups of people. Hence, it has stirred up debate when it comes to ethics in AI. Referencing the work of FAccT, Phan et al. state that “. . . tools for identifying and ameliorating bias have become a new class of industrial products in and of themselves; for example, AI Fairness 360 by IBM, Google Inclusive ML, and Microsoft FairLearn. These products allow firms to make the ethical claim that they have solved the bias problem. By reframing social problems as a series of technical challenges, these solutions limit the meaning of ethics to the domain of individual actions and decisions. This, again, works in favor of firms as it enables them to ‘neatly pose and resolve the problem of violence’ (Hoffman, 2020: 10).” (Phan et al., 2021)

Ethics has to start somewhere. Social change can happen from the bottom-up, top-down, or in most cases, a combination of both. Tech companies are paying attention to ethics because consumers are demanding it. Maybe they will end up doing some good, but to me, it looks like more concessions of capitalism will keep things as they are, while keeping up the façade of continually making things better.

What we need is a higher moral standard of ethics in AI: higher than society’s ethics, higher than the ethics we grew up on. Ethical AI in practice needs checks and balances to keep improving, never settling, because things are always changing and AI needs to maintain its cutting-edge capacity.

I call for an override of the current systems and the implementation of completely new systems, which I believe AI can achieve. Making the case for and against ethics in artificial intelligence is no easy task. I would like to think that most people can agree that ethics, social responsibility, and accountability are important to consider when designing AI. Most would likely agree that it is crucial to make sure that robots won’t hurt anyone, that algorithms shouldn’t be unfairly biased, and that self-driving cars should try not to hit anyone. The question isn’t as much if ethics should be worked into AI, but rather how, and who gets to decide. Like so many social issues, often people can agree on what is wrong, but how to fix the problem is where the disagreements abound.


You can stay up to date with Accel.AI workshops, research, and social impact initiatives through our website, mailing list, meetup group, Twitter, and Facebook.

Join us in driving #AI for #SocialImpact initiatives around the world!


Citations

World Economic Forum. (2020). The Global Risks Report 2020. Issuu. Retrieved November 30, 2021, from https://issuu.com/revistaitahora/docs/wef_global_risk_report_2020

ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT). (n.d.). Retrieved November 30, 2021, from https://facctconference.org/

Phan, T., Goldenfein, J., Mann, M., & Kuch, D. (2021). Economies of virtue: The circulation of ‘ethics’ in Big Tech. Science as Culture. Taylor & Francis. Retrieved November 30, 2021, from https://www.tandfonline.com/doi/full/10.1080/09505431.2021.1990875

Etzioni, A., & Etzioni, O. (2017). Incorporating ethics into artificial intelligence. PhilPapers. Retrieved November 30, 2021, from https://philpapers.org/archive/ETZIEI.pdf

Rainie, L., Anderson, J., & Vogels, E. A. (2021, June 21). Experts doubt ethical AI design will be broadly adopted as the norm within the next decade. Pew Research Center: Internet, Science & Tech. Retrieved November 30, 2021, from https://www.pewresearch.org/internet/2021/06/16/experts-doubt-ethical-ai-design-will-be-broadly-adopted-as-the-norm-within-the-next-decade/.


Latin American Government AI Readiness Meta-Analysis

Earlier this year, I was invited to attend a regional workshop led by the Digital Latam Center and the International Development Research Centre (IDRC) in Mexico City which focused on civil society, academia, and government involvement in the future of Artificial Intelligence development for the global south. As a representative of LatinX in AI™ (LXAI), I found this intimate forum with key representatives a great opportunity to connect and further our organization’s understanding of the political environment and current challenges facing Latin American countries, as well as opportunities for growth and advancement through AI technology. Read a recap of our experience in their recent blog post, Artificial Intelligence and Development in Latin America: Bases for a Regional Initiative.

Workshop on Artificial Intelligence Development in LATAM by Digital Latam and IDRC

This experience reinforced my goals to strengthen infrastructure and opportunities for Latin American researchers, institutions, and startups developing AI technology through our organization’s mission ‘Creating Opportunities for LatinX in AI’. Learn more about the origins and drive of our infrastructure and development program in our prior blog post “Developing AI Infrastructure for LATAM”.

Each country is only as prepared to take advantage of AI technology as its government and citizens will allow.

The notion above, reiterated extensively during the workshop, is easily reflected by the US and China, which have been leading the competition for the global AI market, referred to recently as the “new space race…, where world superpowers battle to define generations of technology to come”. In 2017, China announced a three-step plan to become a $150 billion AI global leader by the year 2030 through investments in research, military, and smart cities. Despite $10 billion in venture capital currently being funneled towards AI in Silicon Valley, the US has been losing ground. After cutbacks on funding for scientific research and tightening immigration restrictions by the Trump administration, researchers and startups have been opting for grants issued by China to fund the future of AI development.

Where does that leave Latin American countries in the Global AI race?

A recent analysis of government AI readiness led by Oxford Insights and the IDRC listed no Latin American countries in their top 20 rankings, citing three key challenges in harnessing the use of AI for the common good: policies, capacity, and adequate resources. They scored each country’s and territory’s government according to its preparedness to use AI in the delivery of public services. They’ve framed these findings as…

“…a timely reminder of the ongoing inequality around access to AI.”

Latin American Region Comparison Geochart by LatinX in AI™, Data Source: Government AI Readiness Ranking by Oxford Insights and IDRC

Despite not making the top 20, the governments of Mexico, Uruguay, Brazil, and Colombia ranked within the top 50 countries out of 194 globally, with Mexico and Uruguay being the only two Latin American countries developing AI policies and strategies. Mexico’s strategy, released in March 2018, “Towards an Artificial Intelligence (AI) Strategy in Mexico: Taking Advantage of the AI Revolution”, was carried out by Oxford Insights and C-Minds and commissioned by the British Embassy in Mexico. Uruguay opened a public consultation on Artificial Intelligence for the Digital Government on April 22nd, 2019 and has since updated its Digital 2020 Agenda.

The ranking system created by Oxford Insights and the IDRC averages normalized indexed metrics on a scale of 0–10, drawn from sources including the UN, WEF, Global Open Data Index, World Bank, Gartner, Nesta, and Crunchbase, clustered under four high-level topics (a simplified version of the scoring recipe is sketched in code after the list):

  • Governance — indicators include whether they had privacy laws in place and a forthcoming AI strategy

  • Infrastructure and data — indicators include the availability of open sourced data, data capability within the government, and their government’s procurement of advanced technology products

  • Skills and education — indicators include digital skills among the population, innovation capability by the private sector, and the number of registered AI startups

  • Government and public services — indicators include government effectiveness, availability of digital public services, and the importance of ICTs to government vision of the future

View an index of their data and ranking assessment here.
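Here is the promised sketch of that scoring recipe, with invented countries and indicator values standing in for the real index data:

```python
# Hedged sketch of the Oxford Insights / IDRC scoring recipe as described
# above: min-max normalize each indicator onto a 0-10 scale, then average.
# The countries and indicator values are invented placeholders.
import numpy as np

raw = {                        # columns: e.g. open data, digital skills, ...
    "Country A": [0.62, 41.0, 120.0],
    "Country B": [0.35, 55.0, 300.0],
    "Country C": [0.80, 30.0,  90.0],
}
matrix = np.array(list(raw.values()))

lo, hi = matrix.min(axis=0), matrix.max(axis=0)
normalized = 10 * (matrix - lo) / (hi - lo)   # each indicator on 0-10
scores = normalized.mean(axis=1)              # average across indicators

for country, score in zip(raw, scores):
    print(f"{country}: {score:.2f}")
```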

Comparison table published by LatinX in AI™, Data Source: Government AI Readiness Ranking (0–10 scale) by Oxford Insights and IDRC

The average ranking for Latin American countries according to their analysis is 3.682, not far behind the global average of 4.032. They concluded their analysis by stating that “the way forward is still uncertain” and suggesting the development of ‘AI Centers’ by connecting academic resources to public and private capital to improve networking and innovation. They also suggest that, until clear and ethical policies for AI have been developed, Latin American governments should heed the warnings of the Latin American Initiative for Open Data, which published a research report titled “Automating with Caution” in November 2018.

Examining LATAM AI Readiness Rankings against Each Country’s Economic Metrics

At first glance, these rankings appear intuitive, but it was surprising to find they did not account for each country’s population size, unemployment rate, income equality, household income, education index, or GDP. These metrics are far more telling of a government’s and its citizens’ ability to invest in or make use of new technology and its potential effects on the population. I’ve compared these values to better assess the real risks and potential for integrating artificial intelligence in Latin America.

Unemployment Rate

The unemployment rate, published by the International Monetary Fund, is the number of unemployed persons as a percentage of the total labor force, sourced from the World Economic Outlook in 2019. Unemployment in developing countries is often telling of a country’s economy but can also be an indicator of factors outside of a government’s control. Areas with conflict may see an increase in migration as refugees flee, causing unemployment rates to spike temporarily, including in neighboring cities or countries.

Comparison chart by LatinX in AI™, Data sources: Government AI Readiness Index (Oxford & IDRC), Unemployment Rate (IMF)

This can be seen most clearly in Venezuela, where the unemployment rate has jumped from 6% in 2015 to 44% in 2019. “Venezuela’s fall is the single largest economic collapse outside of war in at least 45 years, economists say,” as described in the New York Times, and the country is experiencing the largest refugee crisis in Latin American history. In a country like Venezuela, which used to have a thriving economy largely based on petroleum export and manufacturing, the opportunities for incorporating artificial intelligence were endless. Unfortunately, due to government mismanagement, extensive surveillance and biometric data collection (similar to China’s communist regime), coupled with hyperinflation, some say the country’s economy may never recover.

This disruption has even led some technologically savvy Venezuelans to desperately impersonate US citizens through virtual private servers (VPSs) on sites like Mechanical Turk, where they end up undermining social science research in order to earn money to feed their families. Venezuelan citizens fleeing to neighboring countries like Colombia, Argentina, Chile, and Peru have found opportunities in the local gig economies, working for companies like Rappi, an app-based delivery service startup, which is thriving in part due to this influx of migrant workers. Rappi incorporates AI and machine learning techniques in every aspect of their service; their app not only offers food and groceries but also includes on-demand services ranging from personal training to healthcare to even withdrawing and delivering cash from an ATM.

Generally, unemployment rates in a country are a lagging indicator, often following economic distress or improvements, and must also be adjusted for seasonal variability. Countries whose economic well-being relies upon a few industries without much room for future development may also show high unemployment rates accompanied by a low GDP per capita. Unemployment and Government AI Readiness are not directly correlated, but unemployment must be considered before implementing AI technology or automation.

Cuba, which has a historically low unemployment rate, also has the lowest Government AI Readiness score of all Latin American countries, according to the Oxford and IDRC ratings. Cuba’s economy is owned and run by a dictatorship government in which the state employs most of the labor force, sets price standards, and controls access to education, healthcare, and the distribution of goods to its citizens. The Cuban government also controls investments in the region, stifling the potential for progress and innovation, although recent economic reforms led by Raúl Castro’s administration have allowed over 400,000 citizens to sign up to be entrepreneurs.

Cuba has also seen an increase in the availability of computers and mobile phones since their legalization in 2008, as well as a modernization of its telecommunications network that has improved access to the internet. As outlined in the Lexington Institute's research, "Cuba goes digital," $473 million of foreign investment between 1995 and 2000 gave "Cuba the potential to become a Latin American leader in information technology," as "Cuba is incubating a group of enterprises that design and export advanced business and medical software products." This could be a great opportunity to incorporate machine learning and deep learning by training and deploying models "on the edge" through Android and iOS platforms, taking advantage of frameworks like TensorFlow Lite by Google, Core ML by Apple, or Caffe2Go by Facebook.
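To make the on-device idea concrete, here is a minimal, hypothetical sketch of exporting a small model for mobile deployment with TensorFlow Lite, assuming TensorFlow 2.x; the toy model and file name are placeholders, not part of any real deployment:

```python
import tensorflow as tf

# A toy stand-in for a trained model; any Keras model would work here.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Convert to a compact .tflite flatbuffer that Android/iOS runtimes can load.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g., weight quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```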

Government acceptance and funding of these technologies for research institutions and enterprises would have to be sanctioned and appropriately regulated prior to implementation. Government and economic stability would also be needed to warrant investment in the region. Unfortunately, large numbers of Cuban citizens have been fleeing the country due to food shortages, driven by its close ties and oil trade agreements with Venezuela and amplified by travel sanctions imposed by the Trump administration.

GDP PPP

Examining each country's ranking alongside Gross Domestic Product per capita at Purchasing Power Parity (GDP PPP) helps us better understand an individual's ability to buy the same quantity of an item in different countries. Government agencies use this metric to compare the output of countries that use different exchange rates, and it can be used to forecast future real exchange rates. PPP is calculated using differences in taxes, tariffs, transportation costs, import costs, and labor costs.

The GDP PPP data, published in the Central Intelligence Agency World Factbook, compares each country's GDP on a purchasing power parity basis, divided by its population as of 1 July of the same year.

Comparison chart by LatinX in AI™, Data sources: Government AI Readiness Index (Oxford & IDRC), GDP-PPP and Population (CIA World Fact Book)
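To make the metric concrete, here is a small illustrative calculation, with entirely made-up figures, of how GDP per capita at PPP is derived from a country's total GDP on a PPP basis and its mid-year population:

```python
# Hypothetical figures for illustration only.
gdp_ppp_total = 450e9    # total GDP at PPP, in international dollars
population = 32_000_000  # mid-year (1 July) population

gdp_ppp_per_capita = gdp_ppp_total / population
print(f"GDP PPP per capita: ${gdp_ppp_per_capita:,.0f}")  # ≈ $14,062
```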

Countries with a high GDP PPP may not score highly on the Government AI Readiness Index if they have a small population or a specialized economy lacking investment or opportunity for high-impact technological innovation. This is the case for countries that rely heavily on tourism, including Caribbean nations such as the Bahamas, Barbados, Antigua and Barbuda, and Saint Kitts.

Meanwhile, some countries with a low GDP PPP rank higher on the Government AI Readiness Index thanks in part to a growing or diversified economy combined with technological skills and data protection policies. Ecuador, Peru, Colombia, Brazil, Costa Rica, and the Dominican Republic score above the global average of 4.032 on the Government AI Readiness Index despite historically low GDP PPP.

Ecuador is the 8th largest economy in Latin America, with main industries in petroleum, food processing, textiles, wood products, and chemicals; it is also the world's largest exporter of bananas. At a UN summit in 2014, Ecuador was one of only five countries to call for a preemptive ban on fully autonomous weapons. In late 2017, in an effort to encourage investment in the region, the National Directorate for the Registration of Public Data (DINARDAP) began drafting the first Ecuadorian law to regulate the protection of public personal data. Despite these proclamations of privacy and protection, Ecuador has also implemented a nationwide surveillance and response system called ECU 911, funded by China and making use of controversial facial recognition technology, while promoting its benefits for enforcing traffic laws and reducing crime.

Colombia is the 4th largest economy in Latin America and, after China, one of the fastest growing globally, thanks to thriving sectors including construction, services, and agriculture. Its other main industries include textiles, food processing, oil, clothing and footwear, beverages, chemicals, cement, gold, coal, emeralds, shipbuilding, electronics, and home appliances. Colombia also has one of the fastest growing information technology industries in the world and the longest fiber optic network in Latin America, installed by Azteca Co. in 2013.

While Colombia lacks an official AI strategy, it has some of the most thorough data privacy laws in South America, inspired by European data protection regulations. These laws and decrees, enacted between 2008 and 2014, protect citizens by regulating the use of financial and commercial personal data in credit scoring; they also govern data processing, establish the rights of data subjects and the duties of data controllers and processors, set forth requirements for international data transfers, create the National Registry of Databases, and designate the Superintendence of Industry and Commerce (SIC) as the data protection authority. In 2018, the first Centre for Excellence in Artificial Intelligence was opened in Medellín, the country's second largest city, as part of the Digital Americas Pipeline Initiative (DAPI), a collaboration between Ruta N, the center of business and innovation of Medellín, and IRPA AI (The Institute for Robotic Process Automation and Artificial Intelligence).

An analogy often used to explain PPP is the Big Mac Index, which compares the price of a Big Mac in different countries to illustrate currencies that may be under- or overvalued in purchasing power relative to the local exchange rate. For our purposes, it would be a fruitful undertaking to explore the difference in purchasing power for an AI product and the difference in cost to develop AI across countries, but an in-depth exploration of these questions would merit a write-up of its own. We'll use a simple proxy in the interim: the cost to hire AI researchers.
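As a quick illustration of the Big Mac arithmetic, here is a toy calculation with invented numbers showing how an implied PPP exchange rate can suggest over- or undervaluation:

```python
# All figures are invented for illustration.
local_price = 12_000   # Big Mac price in local currency
us_price = 5.50        # Big Mac price in US dollars
market_rate = 3_000    # market exchange rate (local units per USD)

implied_ppp_rate = local_price / us_price       # ≈ 2,181.8 local units per USD
valuation = implied_ppp_rate / market_rate - 1  # ≈ -27.3%
print(f"Implied PPP rate: {implied_ppp_rate:,.1f} per USD")
print(f"Valuation vs. USD: {valuation:+.1%}")   # negative => undervalued currency
```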

Comparing the Cost to Hire an AI Researcher

The cost to hire an AI researcher is the most telling comparable metric that governments funding research and development need to consider when integrating AI into their policies, products, and services. In the US, salaries of software engineers, data scientists, and researchers skilled in artificial intelligence techniques range between $100,000 and $150,000, according to PayScale. These averages increase in densely populated or competitive markets like New York and San Francisco, while highly credentialed, "well-known names in the A.I. field have received compensation in salary and shares in a company's stock that total single- or double-digit millions over a four- or five-year period."

In Latin America, by contrast, the cost to hire engineers and researchers is significantly lower, ranging between $15,000 and $30,000 depending on years of experience and specialization. According to the 2018 Latin American Developer Survey conducted by Stack Overflow, engineers with some experience in Machine Learning or Data Science still tend to receive higher compensation. Since the job titles of Artificial Intelligence engineer and researcher are only beginning to gain popularity, this is the best available historical data for comparing average compensation.

Source: “Hiring Developers in Latin America” by Julia Silge on Stack Overflow Business Journal
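To put these salary ranges in perspective, here is a rough back-of-the-envelope sketch, using the midpoints of the figures quoted above, of how many researchers a fixed budget could fund in each region:

```python
# Midpoints of the salary ranges quoted above; rough figures only.
us_salary = (100_000 + 150_000) / 2     # $125,000
latam_salary = (15_000 + 30_000) / 2    # $22,500
budget = 1_000_000

print(f"US researchers per $1M:    {budget / us_salary:.1f}")     # ~8.0
print(f"LatAm researchers per $1M: {budget / latam_salary:.1f}")  # ~44.4
```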

Education

According to the Stack Overflow study, Latin American countries also seem to produce more academic researchers than general software engineers as compared to the rest of the world.

Source: “Hiring Developers in Latin America” by Julia Silge on Stack Overflow Business Journal

While the Government AI Readiness Index by Oxford and the IDRC accounts for technological skills, it does not consider a country's overall education level. The education index is an average of mean years of schooling (of adults) and expected years of schooling (of children), each expressed as an index obtained by scaling with the corresponding maximum. Published by the United Nations Development Programme, the calculations draw on data from the UNESCO Institute for Statistics (2018) and other sources.

Comparison chart by LatinX in AI™, Data sources: Government AI Readiness Index (Oxford & IDRC), Education Index (UNDP)
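For readers curious about the arithmetic, here is a short sketch of the education index calculation described above, following the UNDP's post-2010 methodology of capping mean years of schooling at 15 and expected years of schooling at 18; the input values below are hypothetical:

```python
MAX_MEAN_YEARS = 15.0      # UNDP cap on mean years of schooling (adults)
MAX_EXPECTED_YEARS = 18.0  # UNDP cap on expected years of schooling (children)

def education_index(mean_years: float, expected_years: float) -> float:
    mys_index = mean_years / MAX_MEAN_YEARS
    eys_index = expected_years / MAX_EXPECTED_YEARS
    return (mys_index + eys_index) / 2  # simple average of the two sub-indices

print(round(education_index(8.5, 14.0), 3))  # ≈ 0.672
```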

While most Latin American countries rate highly on the education index, many Latin American and Caribbean governments do not invest enough in university research and development. This, coupled with unattractive pay, prestige, and working conditions, leads to "brain drain," where the highly skilled or educated leave their country of origin. This phenomenon makes it harder for universities in those countries to reach their research potential and limits the number of quality scientific research mentors available to share knowledge with incoming students.

A report from Americas Quarterly in 2014 cited data from Mexico’s National Council of Science and Technology indicating that 1,271 of the 4,559 Mexicans (28%) working on master’s degrees or Ph.D.s abroad in 2012 were doing so in the US. That’s one of every 19 Mexicans with a bachelor’s degree or higher living in the US.

In Argentina, scientists often strike to protest budget cuts to research and development. The directors of the National Scientific and Technical Research Council (CONICET), headquartered in Buenos Aires, which employs more than 20,000 researchers in hundreds of centers throughout the country, are also fighting the cuts. They created a manifesto demanding "the immediate implementation of a plan to rescue CONICET."

Latin American Automation Potential & Risks

All of these metrics can still only tell part of the story about a country's and its citizens' preparedness for artificial intelligence. You can't predict an economy's readiness for AI without including metrics for automation. Several reports on the subject have been published in the last five years by experts including the McKinsey Global Institute, the Economist Intelligence Unit, and the International Federation of Robotics, to name a few.

The International Federation of Robotics has been tracking and forecasting the rise of robot density globally for use in manufacturing and affiliated industries. In its 2018 Executive Summary on World Robotics, it noted that Mexico has become an important emerging market for industrial robots, outpacing the rest of Latin America, including Brazil.

International Federation of Robotics — 2018 Industrial Robots Executive Summary

The use of AI and automation in industries such as manufacturing and agriculture could help a developing country's economy leapfrog ahead. Countries with a growing young workforce could use these technologies to their advantage in furthering economic development, given the right education.

These days, manufacturing with robotics is no longer the largest concern when describing the automation potential and its effects on an economy. Shifts in business processes and software intelligence through automation of data collection and processing will have a larger impact, especially in Latin America. In 2017, the McKinsey Global Institute published its executive summary on “Harnessing automation for a future that works”.

McKinsey Global Institute — A future that works: Automation, employment, and productivity

They list the countries where the potential for automating work by adapting current technologies is highest. Of the Latin American countries included in their study (those with the largest populations or high wages), Peru and Colombia have the highest automation potential at ≥53%; Brazil, Mexico, and Costa Rica are next at ≥50%; followed closely by Chile, Barbados, and Argentina at ≥48%.

McKinsey Global Institute — A future that works: Automation, employment, and productivity

Automation Readiness Index

Meanwhile, the Economist Intelligence Unit developed its own Automation Readiness Index, accompanied by a white paper and executive summary titled "Who is ready for the coming wave of automation?" Their index, like that of the IDRC and Oxford Insights, categorized metrics under three high-level topics:

  1. Innovation Environment — including indicators for research and innovation, infrastructure, and ethics and safety.

  2. Education Policies — including indicators for basic education, post-compulsory education, continuous education, and learning environments.

  3. Labour Market Policies — including indicators for knowledge on automation and workforce transition programs.

The Economist Intelligence Unit — Automation Readiness Index

They conclude their report by comparing the global use of automation and AI technology to trial and error, reinforcing the sentiment that "supporting basic research, clearing the way for start-ups and ensuring competitive markets are likely to be as helpful to AI and robotics innovation as they have been for past technology advances," while "policy directions for education systems and labor markets are less clear for the moment, as the effects of intelligent automation have yet to be widely felt."

Incorporating AI through automation into industries that currently rely on a large blue-collar workforce always raises concerns: increased unemployment, decreased GDP PPP, increased migration, population redistribution or density in city centers, gaps in education for highly technical skills, and increased income inequality between upper- and lower-class citizens. Most economists say these effects are temporary as markets shift and new jobs develop to support the growth of AI economies, but governments will have to do their part in ensuring their citizens have access to education and opportunities for investment.

How can AI help Latin American Governments and citizens?

Rather than just stressing how AI can be misused by government entities for surveillance that perpetuates bias and corrupts political systems, or how it may diminish the middle class and leave a country's lower-class workers unemployed, it is important to understand the benefits this technology can add to an ecosystem and economy when used responsibly.

In the public service sector, a myriad of new AI technologies is being implemented: advancing the availability of education, detecting fraud, triaging health care needs, making payments to welfare recipients, speeding immigration decisions, and planning and implementing large urban and industrial infrastructure projects. Most importantly, these technologies can reduce costs.

A great write-up titled "The economics of artificial intelligence" outlines five imperatives for harnessing the power of low-cost prediction. I've paraphrased their descriptions slightly to make them applicable to governments rather than corporations.

Five Imperatives for Harnessing the Power of Low-Cost Prediction

  1. Develop a thesis on time to AI impact — How fast do I think the implementation, demand, and accuracy of prediction will increase for a particularly valuable AI application in my sector?

  2. Recognize that AI progress will likely be exponential — Once appropriate data collection, processing, and prediction tools are in place for Government services, understand that progress and impact will be exponential rather than linear.

  3. Trust the machines — Where AIs have demonstrated superior performance in prediction, governments must carefully consider the conditions under which to empower humans to exercise their discretion to override the AI.

  4. Know what you want to predict — AI effectiveness is directly tied to goal-specification clarity, so know your desired outcomes, whether that is reducing crime rates, increasing the availability of healthcare and education, increasing employment, or reducing government overspending.

  5. Manage the learning loop — Governments need to ensure that information flows into decisions, they follow decisions to an outcome, and then they learn from the outcome and feed that learning back into the system.
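To illustrate the fifth imperative, here is a minimal, hypothetical sketch of such a learning loop, assuming scikit-learn 1.1 or later; the features and outcomes are random stand-ins for a real government service:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")

# Seed the model with a small batch of historical cases.
model.partial_fit(rng.normal(size=(10, 3)), rng.integers(0, 2, 10), classes=[0, 1])

for _ in range(100):                        # each cycle: predict -> decide -> learn
    features = rng.normal(size=(1, 3))      # stand-in for case data
    decision = model.predict(features)[0]   # prediction informs the decision
    outcome = rng.integers(0, 2)            # stand-in for the observed outcome
    model.partial_fit(features, [outcome])  # feed the outcome back into the model
```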

The use of AI technology can transform the role of governments, making them better able to serve the population. As governments of developing countries continue to shift to more advanced digital platforms, they gain added control over the data being collected on their citizens and over how that data may be used to benefit society. Since data is the "new gold," governments also have a responsibility to their citizens to ensure this information is mined in the least invasive manner while still creating value for the economy.

In follow-up posts, I'll dive deeper into each Latin American country's economy and its current AI research and development in the public and private sectors, including the growth of startups and the industrial automation potential, risks, and benefits that could pave a way forward for LatinX in AI.

To further our understanding and efforts in Latin America, it is imperative that we gather insight into the challenges and opportunities available to develop AI infrastructure across the continent. Can you help us by completing and sharing this quick survey, to better understand the key players, barriers, and opportunities for development and innovation in AI in your region?

LATAM AI Infrastructure Development Survey: http://bit.ly/LATAM-AI-Survey

Stay Up to Date with LatinX in AI™ (LXAI)

Subscribe to our newsletter to stay up to date with our community events, research, volunteer opportunities, and job listings shared by our community and allies!

Subscribe to our Newsletter!

Join our community on:

Facebook — https://www.facebook.com/latinxinai/

Twitter — https://twitter.com/_LXAI

Linkedin — https://www.linkedin.com/company/latinx-in-ai/

Private Membership Forum — http://www.latinxinai.org/membership

If you enjoyed reading this, you can contribute good vibes (and help more people discover this post and our community) by hitting the 👏 below — it means a lot!

LatinX in AI™ (LXAI) is fiscally sponsored by the Accel AI Institute, a 501(c)3 Non-Profit. Support our work by donating to our Open Collective: https://opencollective.com/latinx-in-ai-research


Developing AI Infrastructure for LATAM

Latin America faces unique challenges in the global AI arms race. Prior to our first LatinX in AI Research Workshop at the Neural Information Processing Systems (NeurIPS) Conference in December 2018, the representation of Latin American researchers at these elite conferences was abysmal. In the ten years leading up to 2016, only 11 papers from South America had been accepted at NeurIPS, according to an investigation by the Deep Learning Indaba group.

Area cartogram showing countries rescaled in proportion to their accepted NIPS papers for 2006–2016. Source: DeepLearningIndaba.com

For those who aren't familiar with NeurIPS, it has positioned itself as the fastest growing and most competitive AI conference, projected to receive 10,000 submissions this year; the surge crashed its submission site's servers and forced a deadline extension this past weekend.

Infographic depicting NIPS submissions over time; the red bar plots fabricated data. Source: Approximately Correct blog by Zachary C. Lipton

It has also been credited with driving up arXiv submissions for AI and machine learning research each year.

arXiv submission rates, tweeted by Yad Konrad.

“Since last year, ~1000 more papers published on this day. I wonder what it would look like in the next 24 hours after NeurIPS Full paper submission deadline.” tweeted Yad Konrad, a researcher in SF.

These statistics make it clear how critical it is to ensure that the research showcased at NeurIPS represents not just specific regions but the whole globe; needless to say, this urgency is not limited to NeurIPS, but also applies to similar conferences and publications. Developing nations are furthering AI and machine learning technology in ways that can benefit even the most advanced societies, and this development can lighten the burden often carried by well-resourced governments to support communities that have lacked access to technological development.

Our next big event is just a week away: the official LXAI Research Workshop, co-located with the Thirty-sixth International Conference on Machine Learning (ICML) at the Long Beach Convention Center in Long Beach, CA, on Monday, June 10th, 2019.

We chose to co-locate an official workshop with ICML, one of the fastest growing artificial intelligence conferences in the world, because it is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning, spanning closely related areas like artificial intelligence, statistics, and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics.

LXAI Research @ ICML 2019

This is the first of our workshops completely organized and run by members of our community, who have dedicated countless hours over the past six months, meeting weekly to put together a full day's schedule: three headlining keynotes, a panel of industry leaders, a sponsored luncheon, ten oral presentations, and over forty poster presentations selected through a rigorous program committee review of submitted research abstracts.

Huge thanks to the Chairs of the LatinX in AI Research Workshop at ICML 2019:

Big thanks to our amazing sponsors:

Sponsors for the LXAI Research Workshop @ ICML 2019

For full details on this event’s programming and registration: http://www.latinxinai.org/icml-2019

We'll be putting out a call for chairs of our next official workshop at NeurIPS 2019 shortly; please stay tuned to be a part of this amazing community.

At LatinX in AI (LXAI), we are doing our part by hosting these research workshops and launching an AI Infrastructure Development program. The idea was sparked when one of our board members, Pablo Samuel Castro, won a raffle at the NeurIPS 2018 Nvidia AI luncheon.

After deliberating over countless responses to his Twitter thread, Pablo ultimately found a great home for the Nvidia T-Rex GPU, gifting it to Carmen Ruiz, a professor at the Higher Polytechnic School in Guayaquil, Ecuador, his home country. Carmen was chosen as the recipient thanks to her work leading a new Ph.D. program, with research being used for:

  1. Natural disaster prediction and relief

  2. Political analysis

  3. Characterization of demographic groups in #Latam

  4. VR for educating people in impoverished areas focused on girls

The next opportunity to rehome an incredible piece of hardware came during our recent partnership with Nvidia, which hosted a scholarship for members of LatinX in AI and Black in AI to attend its annual GPU Technology Conference in March.

Nvidia graciously gifted our organization a second GPU, this time the Titan V, heralded as the most powerful Volta-based graphics card ever created for the PC. This time, we took nominations from our community, asking members to help us identify research institutions and startups that could use additional computing power to boost their research initiatives. Specifically, we looked for those working on projects with a large societal impact or benefit to the local community.

After reviewing all the nominations in depth and researching potential issues with mailing and customs regulations — we chose and happily delivered the GPU to an AI research team at the Centro de Investigación y Desarrollo de Tecnología Digital del Instituto Politécnico Nacional in Mexico, nominated by Professor Jessica Beltran for their work on neurodegenerative diseases.

Dr. Jessica Beltran receiving the Titan V Graphics Card from Nvidia

Unboxing the Titan V Graphics Card from Nvidia

We know their institution is going to do amazing work, and we are excited to feature Dr. Jessica Beltran and her colleague Dr. Mireya Garcia in an upcoming online AI Research Discussion describing their work, "Towards a Diagnosis of Alzheimer's Disease with AI," on Friday, June 28th, 2019 at 11 am PST.

AI Research Discussion Webcast

In this talk, they will review current advances in eye movement analysis related to the diagnosis of Alzheimer's Disease and discuss the challenges and future directions in this field. Additionally, they will show different AI-related projects conducted in their lab and research center (CITEDI-IPN, https://www.citedi.ipn.mx/portal/), including pervasive healthcare and indexing of multimedia content.

You can register to join us via webcast here: http://bit.ly/AI-Alzheimer-Webcast

To further our understanding and efforts in Latin America, it is imperative that we better understand the challenges and opportunities available to develop AI infrastructure across the continent. Can you help us by completing and sharing this quick survey, to better understand the key players, barriers, and opportunities for development and innovation in AI in your region?

LATAM AI Infrastructure Development Survey: http://bit.ly/LATAM-AI-Survey

LatinX in AI is continuing to take in-kind donations of new and gently used hardware or cloud computing credits to regift to research institutions and startups using AI to further their communities. Contact our board directly if you’d like to make a contribution: latinxinai @ accel.ai

Stay Up to Date with LXAI

Subscribe to our monthly newsletter to stay up to date with our community events, research, volunteer opportunities, and job listings shared by our community and allies!

Subscribe to our Newsletter!
