
In consideration of Indigenous Data Sovereignty


“Indigenous Peoples have always been ‘data warriors’. Our ancient traditions recorded and protected information and knowledge through art, carving, song, chants and other practices.” (Kukutai et al., 2020)

Long before the advent of computers and the written word, Indigenous Peoples had a rich tradition of transmitting knowledge and data from one generation to the next. The concept of Indigenous Data Sovereignty (ID-SOV), however, is a relatively recent development, first officially documented in 2016 (Taylor & Kukutai, 2016). This post will review ID-SOV and the CARE principles of Indigenous data governance in an effort to move towards decolonizing data.

ID-SOV can be described as the right of Indigenous Peoples to possess, manage, access, and have authority over data that originates from their communities and relates to their members, traditional knowledge, customs, or lands. (Kukutai et al., 2020)

To state something as a right is one thing; to see it carried out, however, we must untangle a long history of manipulation of data about Indigenous Peoples, who were historically demonized to justify settler colonialism. Now, when neo-colonialism is rife, we see this narrative continue by victimizing Indigenous Peoples. I align with those who argue that this narrative needs to change. According to the Global Indigenous Data Alliance (GIDA), building strategic relationships with global bodies and mechanisms is necessary to promote ID-SOV and governance internationally by providing a visible, collective approach. (Kukutai et al., 2020)

Even today, sensitive COVID-19 data on Indigenous Americans is being mined and reused without consent by the media, researchers and non-governmental organizations, under the assumption that making tribes more visible would be helpful, causing unintentional harm in the process. (RDA COVID-19 Indigenous Data WG, 2020) Settler colonialists thought that they were ‘helping’ too, via ethnic cleansing and direct violence. While neocolonialism is not inherently violent, it is extremely dangerous (Couldry & Mejias, 2021), and tracing these histories can help us understand how to move towards decolonizing data for the benefit of all.

Decolonizing Data Via Self-Determination

Data and data analytics have become increasingly important and interdependent in many ways in the digital age. Even governments are heavily reliant on data for their decision making and policies. As has been the case in much of our history, the unwilling targets of policy interventions are disproportionately Indigenous Peoples, whose enduring aspirations for self-determination over their own knowledge, information systems, institutions and resources get undermined by governments. Data is extracted from Indigenous Peoples, their territories, and cultural heritage without seeking their consent or involvement in decisions regarding the collection, utilization, or application of this data. (Walter et al. 2021)

To have the conversation about ID-SOV, let us first discuss the difficulty of defining what it means to be Indigenous. As per the UN Declaration on the Rights of Indigenous Peoples (UNDRIP), indigeneity is intricately tied to the point of initial colonial contact, which can prove challenging to ascertain in regions where colonizers did not establish permanent settlements. The term 'tribes,' though sometimes practical, carries with it problematic colonial connotations. Nevertheless, the label 'indigenous' possesses a broader scope, encompassing a diverse range of ethnic groups, including the hill tribes residing in the Mekong River area of Southeast Asia (Scott, 2009). A common thread among Indigenous Peoples is their strong inclination toward preserving their autonomy. Simultaneously, they frequently confront marginalization and discrimination, often framed within a narrative that portrays them as victims. (Chung & Chung, 2019, p. 7)

In the pursuit of decolonization, it's crucial to emphasize that the concept of 'Indigenous' itself was a construct devised by colonizers to delineate who was considered fully human and who was relegated to a status deemed less than human (Scott, 2009). It is inherently problematic that we continue to operate within the framework established by this historical perspective. When it comes to the contemporary mission of decolonizing data, a pivotal starting point lies in the recognition of Indigenous Data Sovereignty. By placing the focus on those who have endured the most severe marginalization due to colonialism, we may uncover a clearer path forward in our journey towards decolonization.

There are many concerns from Indigenous groups, such as those in the Mekong area, referred to as Indigenous ethnic minorities (IEM). Many contradictions arise that result in security risks, and the impact of sharing IEM data could be both positive and negative in unanticipated ways. A balance of freedoms is required: transparency versus personal security. (Chung & Chung, 2019, p. 12)

Within this contradiction lies a major difficulty: how to have accessible and transparent data while also ensuring the right to privacy for the subjects of that data. This points to a deeper issue: data does not automatically promote change, nor does it address issues of marginalization, colonialism or discrimination, let alone combat imbalances of power in negotiations and consultations led by governments. (Chung & Chung, 2019, p. 20)

Open Data initiatives raise apprehensions within ID-SOV networks because they often lack safeguards for Indigenous Peoples. There is a growing emphasis on expanded data sharing, exemplified by the widely embraced FAIR principles (Findable, Accessible, Interoperable, Reusable). Nevertheless, this trend has generated tensions when it comes to safeguarding, sharing, and utilizing data pertaining to Indigenous Peoples. To promote meaningful engagement between data collectors and users and Indigenous perspectives, the CARE Principles provide a valuable framework for deliberating upon responsible data utilization. (Kukutai et al., 2020)

CARE Principles for Indigenous Data Governance 

While the FAIR principles primarily focus on data itself and overlook the ethical and socially responsible aspects of data usage, such as power imbalances and the historical contexts of data acquisition and utilization, the CARE principles prioritize the welfare of Indigenous Peoples and their data. They can be integrated alongside the FAIR principles across the entire data lifecycle to ensure mutual advantages and address these broader ethical considerations. (RDA, 2020, p. 57)

CARE Principles

Collective Benefit

Data ecosystems shall be designed and function in ways that enable Indigenous Peoples to derive benefit from the data.

Authority to Control

Indigenous Peoples’ rights and interests in Indigenous data must be recognised and their authority to control such data be empowered. Indigenous data governance enables Indigenous Peoples and governing bodies to determine how Indigenous Peoples, as well as Indigenous lands, territories, resources, knowledges and geographical indicators, are represented and identified within data.

Responsibility

Those working with Indigenous data have a responsibility to share how those data are used to support Indigenous Peoples’ self-determination and collective benefit. Accountability requires meaningful and openly available evidence of these efforts and the benefits accruing to Indigenous Peoples.

Ethics

Indigenous Peoples’ rights and wellbeing should be the primary concern at all stages of the data life cycle and across the data ecosystem.

(Carroll et al., 2020)


If these principles can be integrated into systems of open data, it could mark a true turn towards decolonizing data; however, they need to be more than just principles. If we center the CARE principles and Indigenous Data Sovereignty in data governance on a global scale, perhaps we can steer away from harmful colonial data mining and towards a more balanced relationship with data.





Resources

Carroll, S. R., Garba, I., Figueroa-Rodríguez, O. L., Holbrook, J., Lovett, R., Materechera, S., Parsons, M., Raseroka, K., Rodriguez-Lonebear, D., Rowe, R., Sara, R., Walker, J. D., Anderson, J., & Hudson, M. (2020). The CARE Principles for Indigenous Data Governance. Data Science Journal, 19. https://doi.org/10.5334/dsj-2020-043

Chung, P., & Chung, M. (2019). Indigenous data sovereignty in the Mekong region. 2019 World Bank Conference on Land and Poverty.

Couldry, N., & Mejias, U. A. (2021). The decolonial turn in data and technology research: What is at stake and where is it heading? Information, Communication & Society. https://doi.org/10.1080/1369118X.2021.1986102

Kukutai, T., Carroll, S. R., & Walter, M. (2020). Indigenous data sovereignty. Retrieved March 5, 2022, from https://eprints.utas.edu.au/34971/2/140589-Indigenous%20data%20sovereignty.pdf

RDA COVID-19 Indigenous Data WG. "Data sharing respecting Indigenous data sovereignty." In RDA COVID-19 Working Group (2020). Recommendations and guidelines on data sharing. Research Data Alliance. https://doi.org/10.15497/rda00052

Taylor, J., & Kukutai, T. (2016). Indigenous data sovereignty: Toward an agenda. Australian National University Press.

Walter, M., Kukutai, T., Russo Carroll, S., & Rodriguez-Lonebear, D. (2021). Indigenous Data Sovereignty and Policy.


The Precarious Human Work Behind AI

AI is now everywhere, but it is not as autonomous as it seems. AI is increasingly prevalent in a wide variety of industries, many of which hide the countless workers behind the curtain who make it function, and I am not just talking about the engineers who create it.

It is important to acknowledge the human work behind AI development and maintenance, from grueling content moderation, to rideshare driving, to all of us whose data serves to profit large corporations. This leaves countless workers in precarious positions, stuck in survival mode and forced to adapt as best they can, with low wages and the threat of job loss looming as tasks continue to be automated.

Anything done in the name of ‘safety and trustworthiness’ of AI is truly an afterthought to corporate capital gain. In a podcast with engineers from OpenAI, they laughed about how ‘Trust and Safety’ (T&S) really stands for ‘Tradeoffs and Sadness’ (Fagen, 2023). This is a fundamental problem for multiple reasons. In this blog, we will discuss the areas where the rapid development and deployment of AI is affecting precarious work in various ways.

The Human Work Behind Data

Data is the foundation of AI and is generated by people. Each day, approximately 328.77 million terabytes of data are created. The work done to produce data is almost never compensated, although large corporations profit from it massively. How could companies compensate their users for the data they use and profit from? What kind of laws or policies could address this problem, and how would they work on a global scale? These are questions that we are still grappling with as a society.

Data is the fuel of AI. There is a stark lack of control and ownership over data, which raises serious ethical concerns, including but not limited to privacy, that are barely covered by inconsistent and often unenforced data protection laws.

What should be done about this aspect of the human work behind AI? It could be seen as a form of ghost work. Should it be compensated? How would this be implemented? Some companies have taken initiatives here, paying users very small amounts for their data, but the issue is much bigger than that. The data collected is used to target advertising at users, which means further exploitation. Not to mention that it can be used to feed AI that replaces human work, so that your own data, which you aren’t paid for, could be used to put you out of a job while also being used to sell you things.

In 2017, it was estimated that the transaction of giving up personal details to companies like Facebook came to about $1,000 per person per year, and this figure is quickly rising. (Madsbjerg, 2017) The exact value of our data is unclear, even to Google, but it is often used for targeted advertising, as well as being sold to data brokers who sell it as a commodity to advertisers, retailers, marketers, government agencies, and other data brokerages. According to a report by SecurityMadeSimple.org, the data brokerage industry generates over $200 billion in revenue yearly and continues to grow. Another report, by Maximize Market Research, states that the data broker market was valued at $257.16 billion in 2021, with total revenue expected to grow at 4.5% annually from 2022 to 2029, reaching nearly $365.71 billion. When will we, as the users and providers of data, ever see any of these profits?
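The two market-size figures quoted above are consistent with simple compound growth. A minimal sketch, assuming the 4.5% rate compounds annually over the eight years from 2021 to 2029:

```python
# Compound-growth check of the Maximize Market Research projection:
# $257.16B in 2021 growing at 4.5% per year through 2029.
base_2021 = 257.16            # billions USD, reported 2021 market size
annual_growth = 0.045         # reported annual growth rate
years = 2029 - 2021
projected = base_2021 * (1 + annual_growth) ** years
print(round(projected, 2))    # 365.71, matching the reported ~$365.71B
```

So the $365.71 billion figure is simply the 2021 valuation compounded at 4.5% per year; none of that projected growth is earmarked for the people whose data it is.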

One proposed answer would be a universal basic income based on the data we produce. This idea is not new; it was first presented by Jaron Lanier in his 2013 book, Who Owns the Future? The book criticizes the tech industry’s accumulation and valuation of consumer data, which acknowledges no monetary debt to the people who create and freely give all this information.

The Exploitation of Workers in AI Moderation and Content Labeling

Now, we will leave that can of worms crawling around and discuss the low-paid gig work that goes into moderating AI systems, such as scanning content for violence and hate speech or endlessly labeling data. These jobs are often outsourced to workers in the Global South who are repeatedly exposed to traumatic content and receive little compensation. This is highly exploitative work, with little room for workers to organize and demand workers’ rights.

Take for example the story of Sama, which claims to be an “ethical AI” outsourcing company. Sama is headquartered in California and handles content moderation for Facebook. Its Kenya office pays its foreign employees a monthly pre-tax salary of around $528, which includes a monthly bonus for relocating from elsewhere in Africa. After tax, this amounts to around $440 per month. Based on a 45-hour work week, this equates to a take-home wage of roughly $2.20 per hour. Sama employees from within Kenya who are not paid the monthly relocation bonus receive a take-home wage equivalent to around $1.46 per hour after tax. (Perrigo, 2022) 
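The reported hourly rates can be roughly reproduced from the monthly salary. A minimal sketch; the hours-per-month conversion (52 weeks spread over 12 months) is my assumption, not stated in the article:

```python
# Rough reproduction of the Sama take-home wage figures (Perrigo, 2022).
# Assumption: a 45-hour week worked every week, i.e. 45 * 52 / 12 hours per month.
monthly_take_home = 440                # USD after tax, with relocation bonus
hours_per_month = 45 * 52 / 12         # ≈ 195 hours
hourly_wage = monthly_take_home / hours_per_month
print(round(hourly_wage, 2))           # ≈ 2.26, close to the roughly $2.20 reported
```

The small gap between this estimate and the reported $2.20 likely comes from a different hours-per-month convention; either way, the order of magnitude is unmistakable.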

Time published a report on Sama which detailed a failed worker uprising. The workers faced the trauma of viewing hundreds of horrific pieces of content every day, with the goal of determining within 50 seconds whether each was appropriate for Facebook, while living hand-to-mouth on low salaries and not given the support needed for this PTSD-inducing job. When workers organized in protest and planned a strike, high-paid executives flew in from San Francisco to ‘deal’ with the situation. They isolated the leader of the workers’ alliance and terminated him, making him look like a bully who had forced 100 other workers to sign a petition against the company. (Perrigo, 2022) The real bullies got away with this, as the ultimate goal is to keep Facebook happy. It suits them to have low-waged workers with no other options suffer life-long trauma every day, all day long. But these workers need fair pay and workers’ rights. They need real support for their labor, which is what makes Facebook a safer space, with less hate speech and violent content. They deserve to have a voice.

Another example is Mechanical Turk, or MTurk, a marketplace for human intelligence micro-tasks, such as tedious image labeling, which are extremely low-paid (with no guarantee of pay) and offer poor labor protection and high exploitation. As of December 2019, MTurk’s workers’ portal had 536,832 visitors, and although the work is demoralizing and pays pennies, many depend on it over no work at all. (Mehrotra, 2020) MTurk has been operating since 2005, still with no worker protections.

The Human Intervention Required for AI Systems: Case Studies from the Global South

Taking a deeper peek behind the curtain, we see that AI systems often require unseen human intervention and workarounds to operate effectively. This goes beyond the desks of technologists, and drives through the streets of nearly every city. 

One study looked into the operations of two startups, Gojek and Grab, which entered Jakarta in 2015 with the aim of digitizing the city’s motorbike taxi market. (Qadri & D’Ignazio, 2022) The researchers found that the platforms’ view of the city is idealized and flattened, with no consideration for frictions such as traffic, parking delays, or blocked roads. The routes assigned to drivers are often inappropriate or dangerous because the platform ignores these variables, so local drivers develop workarounds that remain invisible and unacknowledged by the platforms. The drivers know the safest ways through their own city, despite what the app says.

The authors compared this to Donna Haraway’s “god trick” (1988), because it places the viewer in the impossible position of a disembodied, all-knowing eye looking down at the city. (Qadri & D’Ignazio, 2022) The startups’ discourse often casts technology as the central organizer and optimizer of activity, while other forms of (human) intelligence are considered inferior. To further demonstrate the dehumanization at play, Grab’s blog refers to drivers as “supply” units that can be moved around like goods or trucks. (Garg et al., 2019) In reality, it is the human drivers, with their knowledge of the city in its ever-changing state, who make the taxi service work, but the “AI” technology gets all the credit and the company owners reap most of the profit.

Workers’ rights remain an issue in many new areas of precarious occupation behind AI. As stated in a paper on work regulations for platform food delivery workers in Colombia, a neoliberal discourse on entrepreneurship is deepening the crisis of platform workers, who are characterized as “self-employed” and therefore excluded from the employment rights guaranteed to “employed workers” in local labor legislation. (Wood et al., 2019) (Vargas et al., 2022, p. 38)

What is desperately needed is for people to care about people. AI has no capacity to actually care about people, even if it were based on human systems that did. Algorithms are programmed with the ultimate goal of promoting business. This leads to human workers being treated more and more like machines. With humans working under the control of algorithms, digital workers are excluded from the benefits of the value chain in which they are one of the most important subjects. (Vargas et al., 2022, p. 34)

Discussion

In a Harvard Business Review article on the humans behind the curtain of AI, the authors spoke of the paradox of automation’s last mile, the ever-moving frontier of AI’s development. (Gray & Suri, 2017) This is all the more relevant today. As AI makes progress, it creates and destroys temporary labor markets for new types of human-in-the-loop tasks at a rapid pace.

Contract workers are needed to train algorithms to make important decisions about content. They are also responsible for making snap decisions about what stays on a site and what’s deleted. This is a new form of employment that should be valued. (Gray & Suri, 2017) However, this work is not only still largely invisible, but the workers are not valued and the work is unreliable, low-paid, and often traumatizing. 

Adrienne Williams, Milagros Miceli and Timnit Gebru wrote an essay late last year arguing that a world where AI is the primary source of labor is still far from being realized. The push towards this goal has created a group of people performing what is called “ghost work”, a term introduced by anthropologist Mary L. Gray and computational social scientist Siddharth Suri. This refers to the human labor that is often overlooked and undervalued but is actually driving AI. Companies that have branded themselves as “AI first” rely heavily on gig workers such as data labelers, delivery drivers and content moderators who are underpaid and often subject to heavy surveillance. (Williams, Miceli and Gebru, 2022)

Recommendations from Williams, Miceli and Gebru:

  1. Funding for research and public initiatives which highlight labor and AI issues.

  2. Analysis of causes and consequences of unjust labor conditions of harmful AI systems.

  3. Reflection on AI researchers’ and practitioners’ use of precarious crowdworkers to advance their own careers, and efforts to shift power into the hands of workers.

  4. Co-creation of research agendas based on workers’ needs.

  5. Support for cross-geographical labor organizing efforts.

  6. Ensuring that research findings are accessible to workers rather than confined to academic publications. 

  7. Journalists, artists and scientists can foster solidarity by drawing clear connections between harmful AI products and labor exploitation. (Williams, Miceli and Gebru, 2022)

Recommendations from Gray and Suri:

  1. Require more transparency from tech companies that have been selling AI as devoid of human labor.

  2. Demand truth in advertising with regard to where humans have been brought in to benefit us.

  3. Recognize the value of human labor in the loop.

  4. Understand the training and support that informed workers’ decision-making, especially if their work touches on the public interest. (Gray & Suri, 2017)

Conclusion

I can’t stress enough the importance of acknowledging the human work behind AI. There is a need to ensure that those who contribute to the development of AI are fairly compensated and protected. When trust and safety are dismissed as “tradeoffs and sadness”, without ever questioning whether the ends justify the means, some fundamental changes to the approach are necessary. We might even question the end goals while we are at it.

We need to be humanized. It is arguable that AI was originally pursued to eventually replace human slave labor. This is inherently problematic, as master/slave relations are built on exploitation, subjugation and dehumanization, which extends to the workers behind AI and not just to the AI itself. Although there are many benefits to AI replacing, changing, or accompanying work, it must be done in a way that is not exploitative and is centered on the betterment of all people and the planet, not on a speed-race for AI.

While AI has the potential to revolutionize many industries, it is important to acknowledge the human work that goes behind its development and maintenance. From data collection to system maintenance, humans play a critical role in the AI ecosystem. It is essential that we recognize and value this work, and understand the real harms that are already happening around AI. 

It is easy to fear what AI may bring and how many jobs it will take. The reality is that most jobs will need to adapt to AI, and that AI is creating many new jobs at various skill levels. This would be much better news if everyone could benefit from it, instead of it being a product of exploitation and techno-solutionism.



Sources 

Garg, A., Yim, L. P., & Phang, C. (2019). Understanding supply & demand in ride-hailing through the lens of data. Grab Tech. https://engineering.grab.com/understanding-supply-demand-ride-hailing-data (accessed 6 October 2021).

Gray, M. L., & Suri, S. (2017). The humans working behind the AI curtain. Harvard Business Review. https://hbr.org/2017/01/the-humans-working-behind-the-ai-curtain

Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575-599.

Fagen, R. (2023). GPT4: Eldritch abomination or intern? A discussion with OpenAI — Integrity Institute. Integrity Institute. https://integrityinstitute.org/podcast/trust-in-tech-e19-eldritch-open-ai-gpt

Lanier, J. (2013). Who Owns the Future? Simon and Schuster.

Mehrotra, D. (2020, January 28). Horror Stories From Inside Amazon’s Mechanical Turk. Gizmodo. https://gizmodo.com/horror-stories-from-inside-amazons-mechanical-turk-1840878041

Perrigo, B. (2022, February 17). Inside Facebook’s African Sweatshop. Time. https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/

Qadri, R., & D’Ignazio, C. (2022). Seeing like a driver: How workers repair, resist, and reinforce the platform’s algorithmic visions. Big Data & Society, 9(2), 205395172211337. https://doi.org/10.1177/20539517221133780

Should tech companies pay us for our data? (2022, May 20). World Economic Forum. https://www.weforum.org/agenda/2018/12/tech-companies-should-pay-us-for-our-data/

Vargas, D. S., Castañeda, O. C., & Hernández, M. R. (2022). Technolegal Expulsions: Platform Food Delivery Workers and Work Regulations in Colombia. Journal of Labor and Society, 1–27. https://doi.org/10.1163/24714607-bja10009

Wood, A. J., Graham, M., Lehdonvirta, V., & Hjorth, I. (2019). Good gig, bad gig: Autonomy and algorithmic control in the global gig economy. Work, Employment and Society, 33(1), 56–75. https://doi.org/10.1177/0950017018785616

Williams, A., Miceli, M., & Gebru, T. (2022, December 10). The Exploited Labor Behind Artificial Intelligence. NOEMA. https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/

Comparative Analysis: Ubuntu Collectivism versus Western Ethics in AI Development

When AI technologies affect everyone globally, wouldn’t it be nice if they were built with the collective in mind?

In my last blog, I introduced the African collectivist philosophy of Ubuntu and how it could be applied to Artificial Intelligence (AI) ethics for the benefit of all, based on the works of Mhlambi (2020) and Gwagwa (2022). The word ubuntu means “human-ness” or “being human” in the Zulu and Xhosa languages, spoken in South Africa and Zimbabwe respectively. Here I dig deeper into some of the key concepts of Ubuntu which either parallel or stand in opposition to Western ethics such as utilitarianism, and address the flaws of individualism and why we should move away from it.

What draws me personally to Ubuntu as an ethical theory for AI Governance 

Learning about Ubuntu was a breath of fresh air, as Western ideals such as individualism never sat well with me. I confronted individualism in my master’s thesis research, but didn’t come across Ubuntu until rather recently, in connection with my work in AI ethics. It is exactly what I was looking for: an alternative ethical system which relates personhood to how we are all connected, holding that a person is a person through other people. It relates to mutual aid (Kropotkin, 1902) and to care in the grand sense: caring about how everything affects everything, not just oneself. The idea that this level of care and collectivism could be applied to AI ethics blew me away, and the papers I have read on it, especially one by Sabelo Mhlambi, really drove this home.

A snippet of my story 

Nearly five years ago, I chose to leave the Western world behind and live in Southeast Asia, after also spending time in West Africa. My decision was fueled by the distasteful air of individualism in the West, which promotes greed and putting others down to get ahead. No amount of personal comfort could erase the ever-present feeling of disconnection I have when in the US, Europe or Australia. When I visit my hometown, everyone asks me why I live so far away. It is a difficult question to answer, but I think it comes down to the isolation caused by individualism, which puts everyone in toxic competition with each other, in situations where your success means climbing over others. I look around and see the constant valuing of profit over life. The fact that AI has been born from this ideology is extremely problematic, as it has this baseline of individualism built in.

From my travels and living abroad, I have seen that the world is rich with diversity, and that diversity is a beautiful thing to be celebrated, not discriminated against. White men are not actually the majority in the world, yet everyone else is marginalized and minoritized. Women are minoritized, and we are over half of the population. The world has been running on systems that make zero sense. As we breathe life into artificial intelligence, a re-haul of how we relate to one another and the world around us is overdue. It is time to turn to non-Western-centric ideals and embrace the diversity of the world when deploying technologies that affect everyone globally.

The rest of this article will engage more deeply with Mhlambi’s work on utilizing Ubuntu as an ethical framework for AI governance moving forward, something I endorse completely. 

Ubuntu: an African value of the collectivism of communities

Alternative ethical systems such as Ubuntu are not currently included in the exclusive discourse on ethics and AI. The default is Western ethics, which is burdened with individualism and greed and is not adequate to address technological and algorithmic harms. (Mhlambi, 2020, p. 23) Individualism and greed also stand in opposition to Ubuntu’s foundations of interconnectedness, empathy, and generosity. (Mhlambi, 2020, p. 24) These are values from which AI development would benefit immensely, rendering individualistic values irrelevant. How can this be implemented in the governance of AI?

Ethical Leadership: Ubuntu promotes cooperation and helping each other

Ethical governance requires a closer look at leadership. Cooperation and participation are requirements of Ubuntu, particularly when it comes to leadership, as it rejects elite concentrations of power. (Mhlambi, 2020, pp. 15–16) The current leadership in AI consists of power concentrated among a few elites, which could get in the way of Ubuntu truly working. The Ubuntu saying “Inkosi yinkosi ngaba-Ntu” translates to “A leader derives power from the consent and will of the governed.” (Mhlambi, 2020, pp. 15–16) Governments and other powers should act in service to the people. This is the purpose of leadership.

However, that is not what we see from most leaders. Following Ubuntu, rulership is collaborative. That is how things should really be done within governance: by being in service to the people.

How do we make this value-shift happen and balance power structures?

Focusing on Inclusion to combat exclusion

Arthur Gwagwa suggested more focus in research and policy work on “Ubuntu-based action guiding principles for all AI stakeholders.” (Gwagwa, 2022, p. 1) He gave the example of providing guidance to reconcile ethical dilemmas in AI design, including conflicting or competing cultural values. (Gwagwa, 2022, p. 1) This would support the notion of inclusivity that Ubuntu ethics would bring to AI design.

Gwagwa went on to provide a useful definition of exclusion: ‘‘the inability to participate effectively in economic, social, political, and cultural life, and, in some characterizations, alienation and distance from the mainstream society.’’ (Duffy, 1995; Gwagwa, 2022, p. 2) This is important to keep in mind, also when thinking about digital identity.

Rationality vs. Relationality

While reading about Ubuntu and AI ethics, the comparison between rationality and relationality continually arose in relation to the question: how do we define personhood?

Personhood as rationality traditionally comes from a Western viewpoint, which is what has modeled machine intelligence, and “has always been marked by contradictions, exclusions, and inequality.” (Mhlambi, 2020) How do we situate what it means to be a person when contemplating “artificial or mechanical personhood”? (Mhlambi, 2020)

Looking to Western ethics, utilitarianism, which tends toward heavy rationalization, does not always play out well in practice. Applied to AI ethics, utilitarianism aims to maximize what is good for people and minimize what is bad for them in the long run. (Shulman et al., 2009) (Gwagwa, 2022 p. 5) This still leaves some people excluded and disadvantaged, and they tend to be those who are already perpetually marginalized.

Taking a bottom-up approach, African philosophy could address both the disproportionate negative effects of AI on people and work towards global equality and protections. (Mhlambi,  2020 p. 6)

Contrasting Collectivism and Individualism

Individualism, something I have butted heads with in my own research over the years, desperately needs to be challenged, as it has many flaws. Generally, individualism is the idea that the central unit of value in society is the self-complete, autonomous individual. (Mhlambi, 2020 p. 7)

Mhlambi lists several flaws of individualism, including:

  1. Justification of inequality

  2. Power asymmetries and exploitation which disallow upward social mobility

  3. Worsening of inequalities due to lack of upward mobility

  4. Cycles of political instability caused by increased inequality and the prioritized private interests of those in power (Mhlambi, 2020 p. 7, 10)

These harms are ultimately produced by any system based on individualistic principles. (Mhlambi,  2020 p. 10) My question is, does individualism really fit in with any ethical system? When will we realize that individualism is unethical?

Ethics beyond the human-centered world

Western ethics at best is people-centered; it ignores any connection between us and the Earth and instead allows for its exploitation. “Mastery of nature” was the Enlightenment’s goal of self-realization, which some say has today transformed into “the mastery of bits and cyberspace.” (Kennington, 1978) (Mhlambi, 2020 p. 9) These ideals “tolerate the inevitability of inequality.” (Mhlambi, 2020 p. 9) Justifying exploitation is deeply unethical, and for this ideal to be adopted by AI could cause unimaginable problems; instead, technologies should be used to support and protect humanity and the Earth.

What is currently valued in AI development?

One of the most highly valued, and most problematic, aspects of AI development is speed, which perhaps should not be the most important thing. In the world of AI, speed can equate to success, and it is said that similarity creates speed. However, like individualism, similarity has many flaws, including:

  1. Decreased diversity

  2. Filter bubbles

  3. Possible discrimination, e.g., by race or gender (Mhlambi, 2020 p. 20)
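To make the filter-bubble concern concrete, here is a minimal, hypothetical sketch (the item names and topic vectors are invented): a similarity-driven recommender always surfaces whatever is closest to what a user has already seen, so exposure narrows rather than diversifies.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy item vectors: each dimension is a topic (politics, sports, arts).
items = {
    "article_a": [1.0, 0.0, 0.0],
    "article_b": [0.9, 0.1, 0.0],
    "article_c": [0.0, 1.0, 0.0],
    "article_d": [0.0, 0.0, 1.0],
}

def recommend(history, k=2):
    """Recommend the k unseen items most similar to the user's history."""
    profile = [sum(dims) / len(history)
               for dims in zip(*(items[i] for i in history))]
    scores = {i: cosine(profile, v) for i, v in items.items() if i not in history}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A user who has only read politics keeps getting politics:
print(recommend(["article_a"], k=1))  # ['article_b']
```

Optimizing this loop for engagement is fast and cheap, which is exactly why similarity is rewarded; diversity has to be designed in deliberately.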

This ties in with individualism as it emerges from the Silicon Valley monoculture, which promotes excess and greedy competition as self-interest takes center stage. (Murobe, 2000) (Mhlambi, 2020 p. 9) Theoretically, this goes against Western ethics as well, which would have us act in the best interest of all humans rather than put ourselves above others. In reality, however, it does not work that way, arguably because of individualism.

So where do we turn? In the unique balance which is absent from Western individualism as well as Eastern communism, we find African Ubuntu, which “seeks to avoid the worst of extreme systems.” (Mhlambi,  2020 p. 17)

Ubuntu is about human connectedness with other people, living things and the universe at large.

Ubuntu views humanity as how a person relates in meaningful ways in relation with other persons. “A shared humanity, a oneness and indissoluble interconnectedness between all humans, needs to be the paramount human identity and positionality from which we organize our societies, and produce the technological advances that maintain social harmony.” (Mhlambi,  2020 p. 21)

This is not to say that there is no concept of the individual within Ubuntu ideology. Rather, the individual has many important roles to play. These include:

  1. Doing one’s part to maximize public good

  2. Affirming the dignity of all and restoring breaks in harmony 

  3. Creating the necessary environment for all to thrive (Mhlambi,  2020 p. 24)

My conclusions from Mhlambi’s work lead me to reiterate that inclusion cannot be complete as long as inequality exists. (Mhlambi,  2020 p. 24) 

Ubuntu is a philosophy that encourages us to help each other: Can we apply that to building AI?

Technology is not lacking ethics; societal values are ever-present in the creation and use of technology. But which ethics are included matters, and this gives us a clear view of where society’s ethics stand: with those in power. Compassion, equity, and relationality are missing, and that is a problem. If action is taken to shift toward these crucial values of Ubuntu and collectivism, the change could start with AI and radiate out to benefit everyone, as well as the planet.

“Personhood must be extended to all human beings, informed by the awareness that one’s personhood is directly connected to the personhood of others.” (Mhlambi,  2020 p. 7)

Resources

Duffy, K. (1995). Social Exclusion and Human Dignity in Europe: Background Report for the Proposed Initiative by the Council of Europe (Strasbourg: Council of Europe)

Gwagwa, A.E. (2021). Africa’s contribution to an intercultural reflective turn in the ethics of technology in the era of disruption. https://www.academia.edu/51050494/Africas_contribution_to_an_intercultural_reflective_turn_in_the_ethics_of_te

Gwagwa, A., Kazim, E., & Hilliard, A. (2022). The role of the African value of Ubuntu in global AI inclusion discourse: A normative ethics perspective. In Patterns (Vol. 3, Issue 4). Cell Press. https://doi.org/10.1016/j.patter.2022.100462

Kennington, R. “Descartes and Mastery of Nature.” In: Spicker, S.F. (ed.), Organism, Medicine, and Metaphysics. Philosophy and Medicine, vol 7. Springer, Dordrecht, 1978.

Kropotkin, Piotr Alexeievich. Mutual Aid: A Factor of Evolution. New York: McClure Phillips and Co., 1902.

Mhlambi, S. (2020). From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance. Carr Center Discussion Paper, Carr Center for Human Rights Policy, Harvard Kennedy School.

Shulman, C., Jonsson, H., and Tarleton, N. (2009). Which consequentialism? Machine ethics and moral divergence. Asia-Pacific Conf. Comput. Philos. 23–25. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.363.2419&rep=rep1&type=pdf

Murobe, M.F. ‘Globalization and African Renaissance: An ethical reflection’, in Problematising the African Renaissance, E. Maloka and E. Le Roux (eds). Pretoria: Africa Institute of South Africa, 2000, pp. 43–67.

Introduction to Ubuntu Collectivism Theory Applied to AI Ethics

Justice, inclusivity, and solidarity: can we consider these to be universal values?

 

These are some of the main values in the Sub-Saharan African philosophy of Ubuntu, which clarifies many of the core ethics that we find across cultures, such as the age-old golden rule: do unto others as you would want them to do unto you. In essence, it is seeing one’s humanity in the humanity of others. (Gwagwa, 2022 p. 2) 

 

In order to work in a values-first manner, Ubuntu can be useful for informing artificial intelligence (AI) ethics, with an emphasis on inclusivity which is key for AI principles and guidelines that are universally applied. (Gwagwa, 2022 p. 1) Sub-Saharan Africa has historically been excluded from the benefits of past industrial revolutions, as “... its people and their resources and aspirations have been objectified through slavery, colonialism, imperialism, and neo-colonialism.” (Gwagwa, 2022 p. 2) Could Ubuntu inform AI ethics in an effort to create a more inclusive future?

 

One of the core principles of Ubuntu is: “Umu-Ntu ngumu-Ntu nga ba-Ntu” – A person is a person through other persons. What this means is that how one relates to others is interconnected with one’s personhood and measure of ethics. Under this notion, relationality is emphasized, and the successes and failures of individuals are equally the successes and failures of the community. (Mhlambi,  2020 p. 15-16)

 

The way that ethics is measured in Ubuntu is through how a person relates to others, as well as to the environment and all other interdependent parts. Ubuntu can be described as relational personhood, where relationality means accepting the interconnectedness of others while recognizing their individuality, and more generally the connection of people, nature, and the spiritual. (Mhlambi, 2020 p. 13) We could take the classic saying that it takes a village to raise a child, as opposed to the individual family units found in Western cultures, as a practical example of Ubuntu. One would not ignore a misbehaving child; any nearby adult would reprimand them, whereas in Western cultures this would rarely happen. Another example comes from an Ubuntu proverb stating that you would not walk by a house being built without lending a hand: “Wadlula ngendl’isakhiwa kayibeka qaza” (He passed by a hut being built and did not tie a knot). (Mhlambi, 2020 p. 14)

 

When someone is acting ethically, they are said to “have Ubuntu” or considered “unoBuntu.” Someone acting unethically, by only considering themselves and being distant or unhelpful to others, is thought to not have Ubuntu, or be “akala ubu-Ntu.” If the word Ubuntu is broken down, “Ubu” stands for “a state of being and becoming” and “Ntu” in essence means “the idea of a continuous being or becoming a person oriented towards cosmic unity and creative purpose.” (Mhlambi,  2020 p. 13-14)

 

The question is, what can we learn from Ubuntu when thinking through ethics for AI? This type of relational ethics is important to consider when we think about ethics in AI because of how such powerful technology affects people and the world around us. This brings up a lot of questions. How does AI affect people and the world, and why is it important to have a relational type of ethics for AI? Also, how do values in different parts of the world play a role in relational ethical AI development?

 

AI is shaped by the dominant economic, political, and social inequalities fueled by neocolonial thought, resulting in assaults on human dignity. This can be countered by postcolonial African philosophy when creating AI. (Mhlambi, 2020) Greater inclusion and diversity in the global discourse on AI ethics is non-negotiable, and we should be collecting the best tools we can to achieve this. Ubuntu is especially helpful for the inclusion of African voices. (Gwagwa, 2021) (Gwagwa, 2022 p. 5) The importance of collective identity in the struggles of African peoples is stressed by Africanist scholars (Hall, 2012) (Gwagwa, 2022 p. 5), and this must remain an ongoing consideration, as technology affects everyone globally.

“Postcolonial African philosophy’s relevance to the ethics of artificial intelligence is that, as a response to the traumatic encounter between the African world and European modernity, it puts in clear view modernity’s dependency on marginalization and exposes the weaponization of rationality veiled as moral benevolence.” (Eze, 1997) (Mhlambi, 2020 p. 6) By starting from a point of relationality, things that are ultimately harmful to fellow human beings and the world around us cannot be rationalized.

 

A consensus was reached at UN Global Pulse convenings in Ghana and Tunisia (Pizzi & Romanoff, 2020): the mistakes of the Global North in developing technologies are a lesson for Africa to learn from rather than repeat. First, formulate a set of values to guide technology, as opposed to treating values as an afterthought. “Africans advocated for the need for human control of technology and the promotion of human values, something which has been reactionary rather than proactive in global principles.” (Fjeld & Nagy, 2020) (Gwagwa, 2022 p. 4)

By linking one person’s personhood to the personhood of others, Ubuntu reconciles ethical limitations of rationality as personhood. One cannot be rational when one is only considering oneself. “Rationality is not an individual product or endeavor of a consistent formal system but is a result of interconnected consistent formal systems. Rationality is thus a product of relationality.” (Mhlambi, 2020 p. 3)

Can computers understand relationality? Computers have difficulty with social contexts, particularly racial and gender norms, and automated systems with access to all of this data end up perpetuating racism and gender stereotypes, because the data cannot interpret itself, never mind indicate how to respond to or avoid moral dilemmas. (Mhlambi, 2020 p. 4)
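The point that data cannot interpret itself can be illustrated with a toy sketch (the records and group labels below are entirely invented): a naive model that learns the majority outcome per group will faithfully reproduce whatever historical discrimination its training data contains, because nothing in the data tells it the pattern was unjust.

```python
from collections import Counter, defaultdict

# Hypothetical historical hiring records: past discrimination is baked
# into the labels themselves.
history = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": True},  {"group": "A", "hired": False},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
    {"group": "B", "hired": False}, {"group": "B", "hired": True},
]

def train_majority_model(records):
    """'Learn' the majority outcome per group. The data cannot flag its
    own injustice, so the model turns the past disparity into a rule."""
    by_group = defaultdict(Counter)
    for r in records:
        by_group[r["group"]][r["hired"]] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train_majority_model(history)
print(model)  # {'A': True, 'B': False}
```

Real ADMS are far more complex, but the failure mode is the same: patterns in, patterns out, with no internal notion of harm.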

Automated decision-making systems (ADMS) face five general critiques, each describing a direct violation of Ubuntu. As listed by Mhlambi (2020, p. 8), these are:

1) the exclusion of marginalized communities and their interests in the design, development, decision making, and funding of ADMS

2) biases resulting in the selection of features in ADMS and biases entrenched in the data that generate these systems

3) power asymmetries worsened by the use of ADMS

4) dehumanization that occurs from the commodification of our digital selves

5) the centralization of the resources and power necessary in designing and using ADMS. (Mhlambi, 2020 p. 8)

Solutions would start by correcting these violations at a fundamental level, and at all points throughout AI, machine learning and ADMS development, production, use and application. 

Here is a list of suggestions from Sabelo Mhlambi that would include the values of Ubuntu going forward:

1) Address the climate harms of the cloud computing that much of ADMS relies on. (Greenpeace, 2010)

2) Normalize the eradication of inequality through the participation of the most disenfranchised at the start of creating technology.

3) Use data which powers ADMS for public good.

4) Make data publicly available whilst protecting privacy and promoting societal wellbeing.

5) Treat community data as intellectual property, with the ability to be licensed or revoked from online platforms. 

6) Fund and provide access to technical skill sets for the most disenfranchised. 

7) Allow users to directly shape the way they receive recommendations from algorithms. 

8) Tailor technology companies’ recommendations according to agreed upon social ideals which are based on human dignity and social cohesion. (Mhlambi,  2020 p. 25)
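Suggestion 7 in the list above can be sketched in code. This is a hypothetical illustration (the feed items, scores, and weighting scheme are invented, not any real platform's API): the user supplies per-topic weights that directly rescale the platform's own ranking.

```python
# Hypothetical feed items scored by a platform, each tagged with a topic.
feed = [
    {"title": "Local mutual-aid drive", "topic": "community", "platform_score": 0.4},
    {"title": "Celebrity gossip",       "topic": "gossip",    "platform_score": 0.9},
    {"title": "Climate policy update",  "topic": "news",      "platform_score": 0.6},
]

def rerank(items, user_weights):
    """Let the user scale each topic's importance, rather than accepting
    the platform's engagement-driven ranking as-is."""
    def score(item):
        # Unweighted topics keep the platform score unchanged (weight 1.0).
        return item["platform_score"] * user_weights.get(item["topic"], 1.0)
    return sorted(items, key=score, reverse=True)

# This user boosts community content and mutes gossip:
prefs = {"community": 3.0, "gossip": 0.1}
for item in rerank(feed, prefs):
    print(item["title"])
```

Here the community item rises to the top and gossip falls to the bottom, despite the platform scoring them the other way around: control over the objective shifts toward the person affected by it.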

Ubuntu is just one representation of non-Western ethics that decentralizes individualism and recenters the importance of relationality and inclusion. It is sometimes difficult to understand when we have been so overexposed to individualism and the rationality that comes from putting the individual above all else. However, by looking for ethical systems outside of the Western world, perhaps the development of technology that affects everyone could benefit more than just the few, and break cycles of colonialism for good. 

Resources

Eze, Emmanuel Chukwudi. Postcolonial African Philosophy: A Critical Reader. Cambridge, Mass.: Blackwell, 1997.

Fjeld, J., and Nagy, A. (2020). Principled Artificial Intelligence: mapping consensus in ethical and rights-based approaches to principles for AI. https://cyber.harvard.edu/publication/2020/principled-ai

Greenpeace. “Make IT Green: Cloud Computing and Its Contribution to Climate Change.” 2010.

Gwagwa, A.E. (2021). Africa’s contribution to an intercultural reflective turn in the ethics of technology in the era of disruption. https://www.academia.edu/51050494/Africas_contribution_to_an_intercultural_reflective_turn_in_the_ethics_of_te

Gwagwa, A., Kazim, E., & Hilliard, A. (2022). The role of the African value of Ubuntu in global AI inclusion discourse: A normative ethics perspective. In Patterns (Vol. 3, Issue 4). Cell Press. https://doi.org/10.1016/j.patter.2022.100462

Mhlambi, S. (2020). From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance. Carr Center Discussion Paper, Carr Center for Human Rights Policy, Harvard Kennedy School.

Pizzi, M., and Romanoff, M. (2020). Governance of AI in Global Pulse’s policy work: zooming in on human rights and ethical frameworks. https://www.unglobalpulse.org/2020/12/governance-of-ai-in-global-pulses-policywork-zooming-in-on-human-rights-and-ethical-frameworks/


Global Data Law and Decolonisation

Anywhere on earth with an internet connection, AI systems can be accessed from the cloud, and teams from different countries can work together to develop AI models, relying on datasets from across the planet and cutting-edge machine learning techniques. (Engler, 2022)


This global nature of AI can perpetuate the ongoing marginalisation and exploitation of the people behind the data that machine learning relies on. Data protection laws vary by country, and in the U.S. by state, making compliance difficult for businesses trying to do the right thing (Barber, 2021), while simultaneously creating targets, mostly in the under-protected ‘Global South,’ for large corporations to mine data freely and gain power (Couldry & Mejias, 2021), entrenching inequality. What can we do about this? Is the development of a global data law the answer?


As the US and the EU work to align on data protection (Engler, 2022), there is a small but growing call to re-enliven the Non-Aligned Movement (NAM) in the digital sphere. (Mejias, 2020) The NAM is an anti-colonial, anti-imperialist movement currently consisting of 120 countries that are not aligned with any major world power. It was founded in 1961 to oppose the military blocs of the Cold War, and it has not yet been adapted to oppose data colonialism, though some say it should be. (Reddy & Soni, 2021) I love this idea, because we are working with a plethora of cultures with divergent value systems, and aligning with big government and corporate powers is not beneficial across the board; in fact, it can be quite harmful. When approaching the idea of global data law, we must support the rights of the people who are most marginalised. “What we need is a Non-Aligned Technologies Movement (NATM), an alliance not necessarily of nations, but of multiple actors already working towards the same goals, coming together to declare their non-alignment with the US and China.” (Mejias, 2020) Mejias calls for NATM to be driven by society and communities rather than state powers, which makes it all the more viable for the current situation.


Central to this approach to global data law is the concept of decolonization. If we are not careful, neo-colonialism will lead to yet another stage of capitalism that is fueled by data. That is why it is essential to involve Indigenous and marginalised peoples, not at the margins but at the centre of debates on global standards for law and data, or else efforts to decolonize will only reinforce colonialism. (Couldry & Mejias, 2021) 


This is tricky. Colonialism and neo-colonialism are strong systems which feed imperial powers, whether they be government or corporate. At the end of the day, ‘good business’ wins out over what is actually good for everyone involved. This fundamentally needs to change. 


What is Global Data Law?

Global data law is an area of great tension: it is necessary to regulate and protect sensitive data from around the world, whether people live in the EU under the protection of the General Data Protection Regulation (GDPR), in other countries with strong data protection laws, or in areas with less protection. A “one-size-fits-all” system of global data law will not work in our diverse world, with its varying tolerances for oversight and surveillance.

 

The GDPR is being used as a model for data protection, but it only protects the privacy of people in the EU, no matter where in the world their data is used. (Reddy & Soni, 2021, p.8) Compliance with the GDPR is a must: it requires a complete transformation in the way organisations collect, process, store, share, and wipe personal data securely, on pain of exorbitant fines in the tens of millions of euros. (DLA Piper, 2022) Several other countries scattered across the world, such as Canada and Brazil, have developed specific data protection laws in the last few years; however, the protections vary greatly.

 

We can also consider the UN recommendations, such as moving away from data ownership and towards data stewardship for data collectors, while protecting privacy and ensuring peoples’ self-determination over their own data. (The UN, 2022) They stress the need to protect the basic right of peoples’ data not being used or sold without permission, or in ways that could cause undue harm, with respect to what this means across cultures.

 

The UN Roadmap for Digital Cooperation highlights:

- global digital cooperation

- digital trust and security

- digital human rights

- human and institutional capacity building

- an inclusive digital economy and society

(The UN, 2022)

 

However, what we are actually seeing in global data law is governments developing their own systems unilaterally, making compliance complicated. For example, in the first years of the GDPR, more than a thousand online newspapers in the US simply blocked users from the EU rather than face compliance risks. (Freuler, 2020) (South, 2018) People in the EU lost access to information previously available to them, while businesses relying on EU customers had to weigh compliance liability against lost income.
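A rough sketch of the kind of geo-blocking those newspapers deployed, assuming the visitor's country code has already been resolved by some geo-IP lookup (the function and country set here are illustrative, not any real site's implementation). HTTP 451, "Unavailable For Legal Reasons" (RFC 7725), is the standard status code for refusals like this.

```python
# EU/EEA member-state country codes (the EEA adds IS, LI, NO to the EU 27).
EEA = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
    "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SK",
    "SI", "ES", "SE", "IS", "LI", "NO",
}

def handle_request(country_code):
    """Crude geo-block: refuse EEA visitors instead of complying with the GDPR."""
    if country_code.upper() in EEA:
        return 451, "Unavailable for legal reasons"
    return 200, "Welcome"

print(handle_request("FR"))  # (451, 'Unavailable for legal reasons')
print(handle_request("US"))  # (200, 'Welcome')
```

The business logic is trivially simple, which is part of the problem: blocking an entire region is far cheaper than complying, so access to information becomes the trade-off.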

 

Global data law is indeed complex, and to complicate matters further, next we will briefly dive deeper into the Digital Non-Alignment movement. 

Situating the Digital Non-Aligned Movement

The original Non-Aligned Movement (NAM) was formed by the leaders of many countries, mainly in the Global South, seeking a political space to counter central powers through coordinated solidarity and to exercise strategic autonomy, resisting control by the US and the USSR during the Cold War. (Reddy & Soni, 2021) (Freuler, 2020) Now, in the digital age, there is a call to recentre on the NAM in the digital realm to protect against not only government powers but Big Tech as well. (Freuler, 2020)

A Non-Aligned Technologies Movement would empower civil societies across the globe to act in concert to meet their shared objectives while putting pressure on their respective governments to change the way they deal with Big Tech. The primary goal of NATM would be to transition from technologies that are against the interest of society to technologies that are in the interest of society. (Mejias, 2020)

Current members of the Non-Aligned Movement are in dark blue. The light-blue colour denotes countries with observer-status.

By Maxronneland - https://en.wikipedia.org/wiki/File:Map_of_NAM_Members_and_Observer_states.svg, CC0, https://commons.wikimedia.org/w/index.php?curid=105867196

 

“NAM must once again come together to ensure the free flow of technology and data, while simultaneously guaranteeing protection to the sovereign interests of nations.” (Reddy & Soni, 2021, p.4)

This is incredibly valid, and countries represented within the NAM need to have a voice in this discussion about global data law. The US, China, the EU, and other wealthy nations should not be solely responsible for regulating open data and sovereignty globally. However, sovereignty is not just for states, which is why Indigenous Data Sovereignty (ID-SOV) should also be used for guiding global data law towards decolonization: if we are going to decolonize, we must centre on the rights of those who have been the most colonised.

Turning to Indigenous Data Sovereignty to Inform Global Data Law

Indigenous Peoples’ focus on self-determination is continually burdened by the implications of data collected and used against them. The UN Declaration on the Rights of Indigenous Peoples (UNDRIP) states that the authority to control Indigenous cultural heritage (i.e., Indigenous data: their languages, knowledge, practices, technologies, natural resources, and territories) should belong to Indigenous communities. (Carroll et al. 2020) Breaking free from colonial and neo-colonial structures of power imbalance proves extremely difficult; however, this is exactly what must be the focus in order to decolonise data practices and data law.


We are up against a long history of extraction and exploitation of value through data, representing a new form of resource appropriation that could be compared to the historical colonial land-grab, where not only land and resources but human bodies and labour were seized, often very violently. The lack of upfront violence in today’s data colonialism doesn’t negate its danger. “The absence of physical violence in today’s data colonialism merely confirms the multiplicity of means by which dispossession can, as before, unfold.” (Couldry & Mejias, 2021)


Contemporary data relations are laced with unquestionable racism (Couldry & Mejias, 2021), along with intersectional discrimination against all those considered marginalised, which is why we turn next to a report that addresses these issues directly, with a focus on surveillance and criminalization via data in the US. 


Highlighting Technologies for Liberation


I love this report, titled Technologies for Liberation, which arose from the need to better understand the disproportionate surveillance and criminalization of Queer, Trans, Two-Spirit, Black, Indigenous, and People of Color (QT2SBIPOC) communities, and to provide a resource for these communities to push back and protect themselves at all levels, from the state-endorsed to the corporate-led. (Neves & Srivastava, 2020) Technologies for Liberation aims to decolonise at a grassroots, community level, focusing on organisers and movement technologists who envision demilitarised, community-driven technologies that support movements of liberation. This is transformative justice at work, centering safety and shifting power to communities. (Neves & Srivastava, 2020) This is the bottom-up influence on data protection that we need to turn towards to inform a global data law that won’t leave people on the margins.


Conclusion

There are endless areas that need consideration when discussing global data law and decolonisation. We have already touched on a few areas and highlighted movements such as the digital NAM, ID-SOV, and Technologies for Liberation, after introducing the GDPR and the UN recommendations. This article has discussed potential avenues for solutions, including organisational principles and values which are key to this discussion. There are large power imbalances that need to be addressed and deeply rooted systems that need to be reimagined, and we must start with listening to the voices who have been the most silenced, and guiding everyone involved to do the right thing.


“The time has come for us to develop a set of basic principles on which countries can agree so that consumers worldwide are protected and businesses know what is required of them in any geography.” (Barber, 2021)


Resources

Barber, D. (2021, October 2). Navigating data privacy legislation in a global society. TechCrunch. Retrieved March 26, 2022, from https://techcrunch.com/2021/10/02/navigating-data-privacy-legislation-in-a-global-society/

DLA Piper. (2022). EU General Data Protection Regulation - key changes: DLA Piper Global Law Firm. DLA Piper. Retrieved April 8, 2022, from https://www.dlapiper.com/en/asiapacific/focus/eu-data-protection-regulation/key-changes/ 

Couldry, N., & Mejias, U. A. (2021). The decolonial turn in data and technology research: what is at stake and where is it heading? Information, Communication & Society. https://doi.org/10.1080/1369118X.2021.1986102

 

Engler, A. (2022, March 9). The EU and U.S. are starting to align on AI Regulation. Brookings. Retrieved March 26, 2022, from https://www-brookings-edu.cdn.ampproject.org/c/s/www.brookings.edu/blog/techtank/2022/02/01/the-eu-and-u-s-are-starting-to-align-on-ai-regulation/amp/

 

Freuler, J. O. (2020, June 27). The case for a Digital non-aligned Movement. openDemocracy. Retrieved March 26, 2022, from https://www.opendemocracy.net/en/oureconomy/case-digital-non-aligned-movement/ 

Mejias, U. A. (2020, September 8). To fight data colonialism, we need a non-aligned Tech Movement. Science and Technology | Al Jazeera. Retrieved April 7, 2022, from https://www.aljazeera.com/opinions/2020/9/8/to-fight-data-colonialism-we-need-a-non-aligned-tech-movement 

Neves, B. S., & Srivastava, M. (2020). Technologies for Liberation: Toward abolitionist futures. Retrieved March 26, 2022, from https://www.astraeafoundation.org/FundAbolitionTech/

Reddy, L., & Soni, A. (2021, September). Is there space for a Digital Non-Aligned Movement? - HCSS.NL. New Conditions and Constellations in Cyber . Retrieved April 2, 2022, from https://hcss.nl/wp-content/uploads/2021/09/Is-There-Space-for-a-Digital-Non-Aligned-Movement.pdf 

South, J. (2018, August 7). More than 1,000 U.S. news sites are still unavailable in Europe, two months after GDPR took effect. Nieman Lab. Retrieved April 3, 2022, from https://www.niemanlab.org/2018/08/more-than-1000-u-s-news-sites-are-still-unavailable-in-europe-two-months-after-gdpr-took-effect/ 

The UN. (2022). UN secretary-General's Data Strategy. United Nations. Retrieved April 3, 2022, from https://www.un.org/en/content/datastrategy/index.shtml 



Latin American Government AI Readiness Meta-Analysis

Earlier this year, I was invited to attend a regional workshop led by the Digital Latam Center and the International Development Research Centre (IDRC) in Mexico City which focused on civil society, academia, and government involvement in the future of Artificial Intelligence development for the global south. As a representative of LatinX in AI™(LXAI), I found this intimate forum with key representatives a great opportunity to connect and further our organization’s understanding of the political environment and current challenges facing Latin American countries as well as opportunities for growth and advancement through AI technology. Read a recap of our experience in their recent blog post, Artificial Intelligence, and Development in Latin America: bases for a regional initiative.

Workshop on Artificial Development in LATAM by Digital Latam and IDRC

This experience reinforced my goals to strengthen infrastructure and opportunities for Latin American researchers, institutions, and startups developing AI technology through our organization’s mission ‘Creating Opportunities for LatinX in AI’. Learn more about the origins and drive of our infrastructure and development program in our prior blog post “Developing AI Infrastructure for LATAM”.

Each country is only as prepared to take advantage of AI technology as its government and citizens will allow.

The notion above, reiterated extensively during the workshop, is easily reflected in the US and China, which have been leading the competition for the global AI market, referred to recently as the “new space race…, where world superpowers battle to define generations of technology to come”. In 2017, China announced a three-step plan to become a $150 billion AI global leader by the year 2030 through investments in research, the military, and smart cities. Despite $10 billion in venture capital currently being funneled towards AI in Silicon Valley, the US has been losing ground: after cutbacks in funding for scientific research and tightening immigration restrictions under the Trump administration, researchers and startups have been opting for grants issued by China to fund the future of AI development.

Where does that leave Latin American countries in the Global AI race?

A recent analysis of Government AI readiness led by Oxford Insights and the IDRC listed no Latin American countries in its top 20 rankings, citing three key challenges in harnessing the use of AI for the common good: policies, capacity, and adequate resources. They scored each country’s and territory’s government according to its preparedness to use AI in the delivery of public services, describing the findings as…

“…a timely reminder of the ongoing inequality around access to AI.”

Latin American Region Comparison Geochart by LatinX in AI™, Data Source: Government AI Readiness Ranking by Oxford Insights and IDRC

Despite not making the top 20, the governments of Mexico, Uruguay, Brazil, and Colombia ranked within the top 50 countries out of 194 globally, with Mexico and Uruguay being the only two Latin American countries developing AI policies and strategies. Mexico’s strategy, released in March 2018, “Towards an Artificial Intelligence (AI) Strategy in Mexico: Taking Advantage of the AI Revolution”, was carried out by Oxford Insights and C-Minds and commissioned by the British Embassy in Mexico. Uruguay opened a public consultation on Artificial Intelligence for the Digital Government on April 22nd, 2019 and has since updated its Digital 2020 Agenda.

The ranking system created by Oxford Insights and the IDRC averages normalized index metrics on a 0–10 scale, drawn from sources including the UN, WEF, Global Open Data Index, World Bank, Gartner, Nesta, and Crunchbase, clustered under four high-level topics including:

  • Governance — indicators include whether they had privacy laws in place and a forthcoming AI strategy

  • Infrastructure and data — indicators include the availability of open sourced data, data capability within the government, and their government’s procurement of advanced technology products

  • Skills and education — indicators include digital skills among the population, innovation capability by the private sector, and the number of registered AI startups

  • Government and public services — indicators include government effectiveness, availability of digital public services, and the importance of ICTs to government vision of the future
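The scoring scheme described above can be sketched in a few lines of Python. Note that the min-max normalization, the indicator names, and every value below are illustrative assumptions, not the actual Oxford Insights/IDRC methodology or data:

```python
# Hypothetical sketch of an Oxford/IDRC-style readiness score: each raw
# indicator is min-max normalized to a 0-10 scale, indicators are averaged
# within each cluster, and the cluster scores are averaged into one score.

def normalize(value, lo, hi):
    """Min-max normalize a raw indicator onto a 0-10 scale."""
    return 10 * (value - lo) / (hi - lo)

def readiness_score(clusters):
    """clusters: {name: [(raw_value, min, max), ...]} -> final 0-10 score."""
    cluster_scores = [
        sum(normalize(v, lo, hi) for v, lo, hi in indicators) / len(indicators)
        for indicators in clusters.values()
    ]
    return sum(cluster_scores) / len(cluster_scores)

# Entirely made-up indicator values for one imaginary country:
example = {
    "governance":          [(1, 0, 1), (0, 0, 1)],        # privacy law, AI strategy
    "infrastructure_data": [(62, 0, 100), (48, 0, 100)],  # open data, data capability
    "skills_education":    [(55, 0, 100), (12, 0, 40)],   # digital skills, AI startups
    "government_services": [(0.46, 0, 1), (70, 0, 100)],  # effectiveness, e-services
}
print(readiness_score(example))
```

A scheme like this makes the index easy to recompute with different weightings, which is exactly what the comparisons later in this post argue for.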

View an index of their data and ranking assessment here.

Comparison table published by LatinX in AI™, Data Source: Government AI Readiness Ranking (0–10 scale) by Oxford Insights and IDRC

The average ranking for Latin American countries according to their analysis is 3.682, not far behind the global average of 4.032. They concluded their analysis by stating that “the way forward is still uncertain” and suggesting the development of ‘AI Centers’ connecting academic resources to public and private capital to improve networking and innovation. They also suggest that, until clear and ethical policies for AI have been developed, Latin American governments should heed the warnings of the Latin American Initiative for Open Data, which published a research report titled “Automating with Caution” in November 2018.

Examining LATAM AI Readiness Rankings against Each Country’s Economic Metrics

At first glance, these rankings appear intuitive, but it was surprising to find they did not account for each country’s population size, unemployment rate, income equality, household income, education index, or GDP. These metrics are far more telling of a government’s and its citizens’ ability to invest in or make use of new technology, and of its potential effects on the population. I’ve compared these values to better assess the real risks and potential for integrating artificial intelligence in Latin America.

Unemployment Rate

The unemployment rate, published by the International Monetary Fund, is the number of unemployed persons as a percentage of the total labor force, sourced from the World Economic Outlook in 2019. Unemployment in developing countries is often telling of a country’s economy but can also be an indicator of factors outside of a government’s control. Areas with conflict may see an increase in migration as refugees flee, causing unemployment rates to spike temporarily, including in neighboring cities or countries.

Comparison chart by LatinX in AI™, Data sources: Government AI Readiness Index (Oxford & IDRC), Unemployment Rate (IMF)

This can be seen most clearly in Venezuela, where the unemployment rate has jumped from 6% in 2015 to 44% in 2019. “Venezuela’s fall is the single largest economic collapse outside of war in at least 45 years, economists say”, as described in the New York Times, and it has produced the largest refugee crisis in Latin America’s history. In a country like Venezuela, which used to have a thriving economy largely based on petroleum export and manufacturing, the opportunities for incorporating Artificial Intelligence were endless. Unfortunately, due to government mismanagement, extensive surveillance and biometric data collection (similar to China’s communist regime), coupled with hyperinflation, some say the country’s economy may never recover.

This disruption has even led some technologically savvy Venezuelan citizens to desperately turn to impersonating US citizens through virtual private servers (VPSs) on sites like Mechanical Turk, where they end up undermining social science research in order to earn money to feed their families. Venezuelan citizens fleeing to neighboring countries like Colombia, Argentina, Chile, and Peru have found opportunities in the local gig economies, working for companies like Rappi, an app-based delivery service startup, which is thriving in part due to this influx of migrant workers. Rappi incorporates AI and machine learning techniques in every aspect of its service; its app not only offers food and groceries but also includes on-demand services ranging from personal training to healthcare to even withdrawing and delivering cash from an ATM.

Generally, unemployment rates in a country are a lagging indicator, often following economic distress or improvements and must also be adjusted for seasonal variability. Countries whose economic well-being relies upon a few industries without much room for future development may also show high unemployment rates accompanied by a low GDP per capita. Unemployment and Government AI Readiness are not directly correlated, but unemployment must be considered before implementing AI technology or automation.

Cuba, which has a historically low unemployment rate, also has the lowest Government AI Readiness score of all Latin American countries, according to the Oxford and IDRC ratings. Cuba’s economy is owned and run by a dictatorship: the state employs most of the labor force, sets price standards, and controls access to education, healthcare, and the distribution of goods to its citizens. The Cuban government also controls investments in the region, stifling the potential for progress and innovation, although recent economic reforms led by Raúl Castro’s administration have allowed over 400,000 citizens to sign up to be entrepreneurs.

Cuba has also seen an increase in the availability of computers and mobile phones after their legalization in 2008, as well as modernization of its telecommunications network, improving access to the internet. As outlined by the Lexington Institute in their research titled “Cuba goes digital”, $473 million of foreign investment between 1995 and 2000 had given “Cuba the potential to become a Latin American leader in information technology”, as “Cuba is incubating a group of enterprises that design and export advanced business and medical software products.” Anyone knowledgeable about AI technology would recognize a great opportunity here for incorporating Machine Learning and Deep Learning techniques, training and deploying models “on the edge” through Android and iOS platforms using frameworks like TensorFlow Lite by Google, Core ML by Apple, or Caffe2Go by Facebook.

Government acceptance and funding of these technologies for its research institutions and enterprises would have to be sanctioned and appropriately regulated prior to implementation. Government and economic stability would also be needed to warrant investment in the region; unfortunately, large numbers of Cuban citizens have been fleeing the country due to food shortages, impacted by Cuba’s close ties and oil trade agreements with Venezuela and amplified by travel sanctions imposed by the Trump administration.

GDP PPP

Examining each country’s ranking alongside its Gross Domestic Product per Capita at Purchasing Power Parity (GDP PPP) helps us better understand an individual’s ability to buy the same quantity of an item in different countries. Government agencies use this metric to compare the output of countries that use different exchange rates, and it can be used to forecast future real exchange rates. The PPP calculation accounts for differences in taxes, tariffs, transportation costs, import costs, and labor costs.

The GDP PPP data, published by the Central Intelligence Agency World Factbook, compares each country’s GDP on a purchasing power parity basis divided by population as of 1 July for the same year.

Comparison chart by LatinX in AI™, Data sources: Government AI Readiness Index (Oxford & IDRC), GDP-PPP and Population (CIA World Fact Book)

Countries with a high GDP PPP may not score highly on this Government AI Readiness index due to having a small population or a specialized economy that lacks investment in, or opportunity for, high-impact technological innovation. This is the case for countries that rely heavily on tourism, including Caribbean countries such as the Bahamas, Barbados, Antigua and Barbuda, and Saint Kitts and Nevis.

Meanwhile, some countries with low GDP PPP rank higher on the Government AI Readiness Index thanks in part to a growing or diversified economy combined with technological skills and data protection policies. Ecuador, Peru, Colombia, Brazil, Costa Rica, and the Dominican Republic score above the global average of 4.032 on the Government AI Readiness Index but have historically low GDP PPP.

Ecuador is the 8th largest economy in Latin America, with its main industries being petroleum, food processing, textiles, wood products, and chemicals; it is also the world’s largest exporter of bananas. At a UN Summit in 2014, Ecuador was one of only five countries that called for a preemptive ban on fully autonomous weapons. In late 2017, in an effort to encourage investment in the region, the National Directorate for the Registration of Public Data in Ecuador (DINARDAP) began drafting the first Ecuadorian law that would implement regulations to protect public personal data. Despite these proclamations for privacy and protection, Ecuador has also implemented a nationwide surveillance and response system called ECU 911, funded by China and making use of controversial facial recognition technology, while promoting its benefits for enforcing traffic laws and reducing crime.

Colombia is the 4th largest economy in Latin America and one of the fastest growing globally, following China, thanks to its most thriving sectors: construction, services, and agriculture. Its other main industries include textiles, food processing, oil, clothing and footwear, beverages, chemicals, cement, gold, coal, emeralds, shipbuilding, electronics, and home appliances. Colombia also has the fastest growing information technology industry in the world and the longest fiber optic network in Latin America, installed by Azteca Co. in 2013.

While Colombia lacks an official AI strategy, it has some of the most thorough data privacy laws in South America, inspired by European data protection regulations. These laws and decrees, enacted between 2008 and 2014, protect its citizens by regulating the use of financial and commercial personal data in credit scoring. They also govern data processing, establish the rights of data subjects and the duties of data controllers and processors, set forth requirements for international data transfers, create the National Registry of Databases, and designate the Superintendence of Industry and Commerce (SIC) as the data protection authority. In 2018, the first Centre for Excellence in Artificial Intelligence was opened in Medellín, the country’s second largest city, as part of the Digital Americas Pipeline Initiative (DAPI), a collaboration between Ruta N, the center of business and innovation of Medellín, and IRPA AI (The Institute for Robotic Process Automation and Artificial Intelligence).

An analogy most often used to explain GDP PPP is the Big Mac Index, which compares the price of a Big Mac in different countries in order to identify currencies that may be under- or overvalued in purchasing power relative to the local exchange rate. For our purposes, it would be a fruitful undertaking to explore the difference in purchasing power for an AI product and the difference in cost to develop AI across countries, but an in-depth exploration of these questions would merit a write-up of its own. We’ll use a simple proxy in the interim: the cost to hire AI researchers.
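The Big Mac arithmetic itself is simple enough to sketch: the implied PPP exchange rate is the local price divided by the US price, and comparing it to the market exchange rate shows over- or under-valuation. The prices and exchange rate below are hypothetical placeholders, not current Economist data:

```python
# Illustrative Big Mac index arithmetic (all numbers are hypothetical).

def big_mac_valuation(local_price, us_price, market_rate):
    """Return (implied PPP rate, % over(+)/under(-) valuation vs the dollar)."""
    implied_ppp = local_price / us_price          # local units per USD at parity
    valuation = (implied_ppp / market_rate - 1) * 100
    return implied_ppp, valuation

# Hypothetical: a Big Mac costs 49 pesos locally, $5.58 in the US,
# and the market rate is 19.2 pesos per dollar.
ppp, pct = big_mac_valuation(49.0, 5.58, 19.2)
print(f"implied PPP: {ppp:.2f} per USD, valuation: {pct:.1f}%")
```

A negative valuation here would mean the local currency buys more burger per dollar than the market rate suggests, i.e. it is undervalued in purchasing-power terms.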

Comparing the Cost to Hire an AI Researcher

The cost to hire an AI researcher is the most telling comparable metric that governments funding research and development would need to consider when integrating AI into their policies, products, and services. In the US, salaries of software engineers, data scientists, and researchers skilled in artificial intelligence techniques range between $100,000 and $150,000 according to PayScale. These averages increase in densely populated or competitive markets like New York and San Francisco, while highly credentialed, well-known names in the AI field have received compensation in salary and shares of a company’s stock totaling “single- or double-digit millions over a four- or five-year period.”

Alternatively, in Latin America, the cost to hire engineers and researchers is significantly lower, ranging between $15,000 and $30,000 depending on years of experience and specialization. According to a 2018 Latin American Developer Survey conducted by Stack Overflow, engineers with some experience in Machine Learning or Data Science still tend to receive higher compensation. Since the job titles of Artificial Intelligence engineer and researcher are only beginning to gain popularity, this is the best available historical data to show average compensation equivalencies by comparison.

Source: “Hiring Developers in Latin America” by Julia Silge on the Stack Overflow Business Journal

Education

According to the Stack Overflow study, Latin American countries also seem to produce more academic researchers than general software engineers as compared to the rest of the world.

Source: “Hiring Developers in Latin America” by Julia Silge on the Stack Overflow Business Journal

While the Government AI Readiness Index by Oxford Insights and the IDRC accounts for technological skills, it does not look at the overall education level of a country. The education index is an average of mean years of schooling (of adults) and expected years of schooling (of children), both expressed as an index obtained by scaling with the corresponding maxima. Published by the United Nations Development Programme, it is calculated from data by the UNESCO Institute for Statistics (2018) and other sources.
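That definition translates directly into a small calculation. The goalpost maxima of 15 years (mean schooling) and 18 years (expected schooling) follow the UNDP’s post-2014 HDI methodology; the input values in the example are illustrative, not any particular country’s figures:

```python
# Sketch of the UNDP education index: the average of two sub-indices,
# each scaled by its goalpost maximum and capped at 1.0.

def education_index(mean_years, expected_years, mys_max=15.0, eys_max=18.0):
    """Average of mean-years and expected-years sub-indices, each in [0, 1]."""
    mys_index = min(mean_years / mys_max, 1.0)   # mean years of schooling (adults)
    eys_index = min(expected_years / eys_max, 1.0)  # expected years (children)
    return (mys_index + eys_index) / 2

# e.g. 8.6 mean years of schooling and 14.3 expected years of schooling
print(round(education_index(8.6, 14.3), 3))
```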

Comparison chart by LatinX in AI™, Data sources: Government AI Readiness Index (Oxford & IDRC), Education Index (UNDP)

While most Latin American countries rate highly on the education index, many Latin American and Caribbean governments do not invest enough in university research and development. This, coupled with unattractive pay, prestige, and working conditions, leads to “brain drain”, where the highly skilled or educated leave their country of origin. This phenomenon makes it harder for universities in those countries to reach their research potential and limits access to quality scientific research mentors available to share knowledge with incoming students.

A report from Americas Quarterly in 2014 cited data from Mexico’s National Council of Science and Technology indicating that 1,271 of the 4,559 Mexicans (28%) working on master’s degrees or Ph.D.s abroad in 2012 were doing so in the US. That’s one of every 19 Mexicans with a bachelor’s degree or higher living in the US.

In Argentina, scientists often strike to protest budget cuts to research and development. The directors of the National Scientific and Technical Research Council (CONICET), headquartered in Buenos Aires, which employs more than 20,000 researchers in hundreds of centers throughout the country, are also fighting the cuts: they created a manifesto demanding “the immediate implementation of a plan to rescue CONICET.”

Latin American Automation Potential & Risks

All of these metrics can still only tell part of the story when it comes to a country’s and its citizens’ preparedness for Artificial Intelligence. You can’t predict an economy’s readiness for AI without including metrics for automation. Several reports have been published in the last five years by experts including the McKinsey Global Institute, the Economist Intelligence Unit, and the International Federation of Robotics, to name a few.

The International Federation of Robotics has been tracking and forecasting the rise of robot density globally for use in manufacturing and affiliated industries. In their 2018 Executive Summary on World Robotics, they noted that Mexico has become an important emerging market for industrial robots, outpacing the rest of Latin America, including Brazil.

International Federation of Robotics — 

2018 Industrial Robots Executive Summary

The use of AI and automation in industries such as manufacturing and agriculture could help leapfrog a developing country’s economy. Countries with a growing young workforce could use these technologies to their advantage in furthering economic development, given the right education.

These days, manufacturing with robotics is no longer the largest concern when describing automation’s potential effects on an economy. Shifts in business processes and software intelligence through automation of data collection and processing will have a larger impact, especially in Latin America. In 2017, the McKinsey Global Institute published its executive summary on “Harnessing automation for a future that works”.

McKinsey Global Institute — 

A future that works: Automation, employment, and productivity

They’ve listed the countries where the potential for automation is highest by adapting current technologies. Of the Latin American countries included in their study (countries with the largest populations or high wages), Peru and Colombia have the highest automation potential at ≥53%; Brazil, Mexico, and Costa Rica the next highest at ≥50%; followed closely by Chile, Barbados, and Argentina at ≥48%.

McKinsey Global Institute — 

A future that works: Automation, employment, and productivity

Automation Readiness Index

Meanwhile, the Economist Intelligence Unit developed its own Automation Readiness Index, accompanied by a white paper and executive summary titled “Who is ready for the coming wave of automation?”. Their index, similar to that of the IDRC and Oxford Insights, categorized metrics under three high-level topics including:

  1. Innovation Environment — including indicators for research and innovation, infrastructure, and ethics and safety.

  2. Education Policies — including indicators for basic education, post-compulsory education, continuous education, and learning environments.

  3. Labour Market Policies — including indicators for knowledge on automation and workforce transition programs.

The Economist Intelligence Unit — 

Automation Readiness Index

They conclude their report by comparing the global use of automation and AI technology to trial and error, reinforcing the sentiment that “supporting basic research, clearing the way for start-ups and ensuring competitive markets are likely to be as helpful to AI and robotics innovation as they have been for past technology advances”, while “policy directions for education systems and labor markets are less clear for the moment, as the effects of intelligent automation have yet to be widely felt”.

Incorporating AI through automation into industries that currently rely on a large blue-collar workforce always raises concerns: increased unemployment, decreased GDP PPP, increased migration, population redistribution or density in city centers, gaps in education for highly technical skills, and increased income inequality between upper- and lower-class citizens. Most economists say these effects are temporary as markets shift and new jobs are developed to support the growth of AI economies, but governments will have to do their part in ensuring their citizens have access to education and opportunities for investment.

How can AI help Latin American Governments and citizens?

Rather than just stressing how AI can be misused by government entities for surveillance, to perpetuate bias, and to corrupt political systems, or how it may diminish the middle class and leave a country’s lower-class workers unemployed, it is important to understand the benefits this technology can add to an ecosystem and economy when used responsibly.

In the public service sector, a myriad of new AI technologies is being implemented: advancing the availability of education, detecting fraud, triaging healthcare needs, making payments to welfare recipients, speeding immigration decisions, and planning and implementing large urban and industrial infrastructure projects; most importantly, it can reduce costs.

A great write-up titled “The economics of artificial intelligence” outlines five imperatives for harnessing the power of low-cost prediction. I’ve paraphrased their descriptions slightly to be applicable to governments rather than corporations.

Five Imperatives for Harnessing the Power of Low-Cost Prediction

  1. Develop a thesis on time to AI impact — How fast do I think the implementation, demand, and accuracy of prediction will increase for a particularly valuable AI application in my sector?

  2. Recognize that AI progress will likely be exponential — Once appropriate data collection, processing, and prediction tools are in place for Government services, understand that progress and impact will be exponential rather than linear.

  3. Trust the machines — Where AIs have demonstrated superior performance in prediction, governments must carefully consider the conditions under which to empower humans to exercise their discretion to override the AI.

  4. Know what you want to predict — AI effectiveness is directly tied to goal-specification clarity, so know your desired outcomes, whether that be reducing crime rates, increasing the availability of healthcare and education, increasing employment, or reducing government overspending.

  5. Manage the learning loop — Governments need to ensure that information flows into decisions, that decisions are followed through to an outcome, and that the learning from each outcome is fed back into the system.
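Imperative 5 can be sketched as a minimal feedback loop. The class below is a hypothetical illustration in which the “predictor” is simply a running average of observed outcomes, standing in for whatever model a real program evaluation pipeline would use:

```python
# Minimal, hypothetical "learning loop": decisions produce outcomes, and
# each observed outcome is fed back to update the next prediction.

class LearningLoop:
    def __init__(self):
        self.history = []

    def predict(self):
        """Predict the next outcome; here, the mean of past outcomes."""
        if not self.history:
            return 0.0   # default before any feedback has arrived
        return sum(self.history) / len(self.history)

    def record_outcome(self, outcome):
        """Feed an observed outcome back into the system."""
        self.history.append(outcome)

loop = LearningLoop()
for outcome in [0.6, 0.8, 0.7]:   # e.g. measured program effectiveness scores
    loop.record_outcome(outcome)
print(round(loop.predict(), 2))
```

The point is structural rather than statistical: without the `record_outcome` step, the system never improves, which is exactly the failure mode the imperative warns against.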

The use of AI technology can actually transform the role of governments, making them better able to serve the population. As governments of developing countries continue to shift to more advanced digital platforms, they have added control over the data being collected on their citizens and how that data may be used to benefit society. Since data is the “new gold”, governments also have a responsibility to their citizens to ensure this information is being mined in the least invasive manner while still creating value for the economy.

In follow-up posts, I’ll dive deeper into each Latin American country’s economy, current AI research and development in public and private sectors including growth of startups, industrial automation potential, risks, and benefits, that could pave a way forward for LatinX in AI.

To further our understanding and efforts in Latin America, it is imperative that we gather insight into the challenges and opportunities available to develop AI infrastructure across the continent. Can you help us by completing and sharing this quick survey, to better understand the key players, barriers, and opportunities for development and innovation in AI in your region?

LATAM AI Infrastructure Development Survey: http://bit.ly/LATAM-AI-Survey

Stay Up to Date with LatinX in AI™ (LXAI)

Subscribe to our newsletter to stay up to date with our community events, research, volunteer opportunities, and job listings shared by our community and allies!

Subscribe to our Newsletter!

Join our community on:

Facebook — https://www.facebook.com/latinxinai/

Twitter — https://twitter.com/_LXAI

Linkedin — https://www.linkedin.com/company/latinx-in-ai/

Private Membership Forum — http://www.latinxinai.org/membership

If you enjoyed reading this, you can contribute good vibes (and help more people discover this post and our community) by hitting the 👏 below — it means a lot!

LatinX in AI™ (LXAI) is fiscally sponsored by the Accel AI Institute, a 501(c)3 Non-Profit. Support our work by donating to our Open Collective: https://opencollective.com/latinx-in-ai-research


Developing AI Infrastructure for LATAM

Latin America is facing unique challenges in the global AI arms race. Prior to our first LatinX in AI Research Workshop at the Neural Information Processing Systems (NeurIPS) Conference in December 2018, the representation of Latin American researchers at these elite conferences was abysmal. In the ten years leading up to 2016, there had been only 11 papers accepted at NeurIPS from South America, according to an investigation by the Deep Learning Indaba group.

Area cartogram showing countries rescaled in proportion to their accepted NIPS papers for 2006–2016.

DeepLearningIndaba.com

For those who aren’t familiar with NeurIPS, it has positioned itself as the fastest growing and most competitive AI conference, projected to receive 10,000 submissions this year, a volume that crashed its submission site’s servers and caused a deadline extension this past weekend.

Infographic depicting NIPS submissions over time. The red bar plots fabricated data.

Approximately Correct blog by Zachary P. Lipton


It has also been credited with driving up Arxiv submissions for AI and Machine Learning research each year.

Arxiv submission rates tweeted by Yad Konrad.

“Since last year, ~1000 more papers published on this day. I wonder what it would look like in the next 24 hours after NeurIPS Full paper submission deadline.” tweeted Yad Konrad, a researcher in SF.

These statistics make it clear how critical it is to ensure that the research showcased at NeurIPS doesn’t just represent specific regions, but the entire globe; needless to say, this urgency is not limited to NeurIPS, but also applies to similar conferences and publications. Developing nations are furthering AI and machine learning technology in ways that can benefit even the most advanced societies, and this development will lighten the burden often carried by well-resourced governments to support communities that have lacked access to technological development.

Our next big event is coming up in a week: the official LXAI Research Workshop, co-located with the Thirty-sixth International Conference on Machine Learning (ICML) at the Long Beach Convention Center in Long Beach, CA on Monday, June 10th, 2019.

We chose to co-locate an official workshop with ICML, one of the fastest growing artificial intelligence conferences in the world, because it is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics, and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics.

LXAI Research @ ICML 2019

This is the first of our workshops completely organized and run by members of our community, who have dedicated countless hours over the past six months, meeting weekly to put together a full day’s schedule: three headlining keynotes, a panel of industry leaders, a sponsored luncheon, and ten oral presenters and over forty poster presenters selected through a rigorous program committee review of their submitted research abstracts.

Huge thanks to the Chairs of the LatinX in AI Research Workshop at ICML 2019:

Big thanks to our amazing sponsors:

Sponsors for the LXAI Research Workshop @ ICML 2019

For full details on this event’s programming and registration: http://www.latinxinai.org/icml-2019

We’ll be putting out a call for chairs of our next official workshop at NeurIPS 2019 shortly, please stay tuned to be a part of this amazing community.

At LatinX in AI (LXAI), we are doing our part by hosting these research workshops and launching an AI Infrastructure Development program. This idea was sparked thanks to a raffle win by one of our board members, Pablo Samuel Castro, at the NeurIPS 2018 Nvidia AI luncheon.

After deliberating over countless responses to his Twitter thread, Pablo ultimately found a great home for this Nvidia T-Rex GPU, gifting it to Carmen Ruiz, a professor at the Higher Polytechnic School in Guayaquil, Ecuador, his home country. Carmen was chosen as the recipient thanks to her work leading a new Ph.D. program and her research, which is being used for:

  1. Natural disaster prediction and relief

  2. Political analysis

  3. Characterization of demographic groups in #Latam

  4. VR for educating people in impoverished areas focused on girls

The next opportunity for us to rehome an incredible piece of hardware came during our recent partnership with Nvidia, where they hosted a scholarship for members of LatinX in AI and Black in AI to attend their annual GPU Technology Conference in March.

Nvidia graciously gifted our organization a second GPU, this time the Titan V, heralded as the most powerful Volta-based graphics card ever created for the PC. This time, we took nominations from our community members, asking if they could help us identify research institutions and startups that could use additional computing power to boost their research initiatives. Specifically, we were looking for those working on projects that provide a large societal impact or benefit to the local community.

After reviewing all the nominations in depth and researching potential issues with mailing and customs regulations, we chose and happily delivered the GPU to an AI research team at the Centro de Investigación y Desarrollo de Tecnología Digital del Instituto Politécnico Nacional in Mexico, nominated by Professor Jessica Beltran for their work on neurodegenerative diseases.

Dr. Jessica Beltran receiving the Titan V Graphics Card from Nvidia

Unboxing the Titan V Graphics Card from Nvidia

We know their institution is going to do amazing work, and we are excited to feature Dr. Jessica Beltran and her colleague Dr. Mireya Garcia in an upcoming online AI Research Discussion describing their work “Towards a Diagnosis of Alzheimer’s Disease with AI” on Friday, June 28th, 2019 at 11 am PST.

AI Research Discussion Webcast

In this talk, they will review current advances in eye movement analysis related to the diagnosis of Alzheimer’s Disease and discuss the challenges and future directions in this field. Additionally, they will show different AI-related projects that they conduct in their lab and research center (CITEDI-IPN, https://www.citedi.ipn.mx/portal/), including pervasive healthcare and the indexing of multimedia content.

You can register to join us via webcast here: http://bit.ly/AI-Alzheimer-Webcast

To further our efforts in Latin America, it is imperative that we better understand the challenges and opportunities for developing AI infrastructure across the region. Can you help us by completing and sharing this quick survey on the key players, barriers, and opportunities for development and innovation in AI in your region?

LATAM AI Infrastructure Development Survey: http://bit.ly/LATAM-AI-Survey

LatinX in AI is continuing to take in-kind donations of new and gently used hardware or cloud computing credits to regift to research institutions and startups using AI to further their communities. Contact our board directly if you’d like to make a contribution: latinxinai @ accel.ai

Stay Up to Date with LXAI

Subscribe to our monthly newsletter to stay up to date with our community events, research, volunteer opportunities, and job listings shared by our community and allies!

Subscribe to our Newsletter!

Join our community on:

Facebook — https://www.facebook.com/latinxinai/

Twitter — https://twitter.com/_LXAI

Linkedin — https://www.linkedin.com/company/latinx-in-ai/

Private Membership Forum — http://www.latinxinai.org/membership

If you enjoyed reading this, you can contribute good vibes (and help more people discover this post and our community) by hitting the 👏 below — it means a lot!


Effects of Automation & Retraining for the 4th Industrial Revolution

This post is an adaptation of a talk I recently gave for the Global AI Mind Web Series hosted and organized by Wetogether.co and GirlsinTech Taiwan.

[embed]https://youtu.be/etgljmHfIhg[/embed]

Think about it for a moment….

Whether you realize it or not, you are living in a world that has been dramatically altered due to artificial intelligence and automation. Advances in technology have been improving the efficiency of everyday processes and recreating our workforce in incremental ways.

It’s easy to adapt to these small changes as they are introduced, since they provide convenience and eliminate tedious tasks, but as small changes add up, they quickly create a new version of reality.

When was the last time you had to filter spam from your email inbox yourself? Why are your favorite shoes always at the top of your search results on Amazon? How does Facebook know to show you ads for that new pizza place you were just discussing with your spouse? These examples are just a few of AI’s capabilities today.

In order to help you truly understand the future effects of Artificial Intelligence, I want to introduce you to the hypothetical Chen family. Mrs. Chen is a lawyer, and Mr. Chen is a doctor. These are highly skilled professions that require many years of schooling, logical reasoning, and accuracy to perform well. When the Chen children grow up, their parents’ jobs will look much different than they do today.

Why is that? What will change in the next 10 or 20 years for these and other occupations?

Automation Revolutionizing our Economy

The answer is the onset of exponential growth in automated technology due to advances in Artificial Intelligence. To better understand the rate of change taking place now, we have to understand how automation has affected our economies in the past.

We have had several industrial revolutions led by automation.

Starting in the late 1700s, mechanization, water power, steam power, and railroads drastically changed the capabilities of our workforce and methods of transportation for goods around the world. In the mid to late 1800s, the invention and scale of electricity led to mass production leveraging human assembly lines in manufacturing. In the mid to late 1900s, electronics and computerization led to automated manufacturing.

We are now seeing the beginning of a fourth industrial revolution, driven by digitization, big data, the Internet of Things, and the cognification of everything.

Robot vs Human without Neural Networks

In the past there have been clear advantages to integrating robotics and automation with our workforce.

Robots have a higher capacity for:

  • Strength

  • Accuracy

  • Speed

  • Endurance (they do not tire)

  • Repetitive tasks

  • Taking and reporting measurements

On the other hand, humans have always had the advantage in:

  • Intelligence

  • Flexibility

  • Adaptability

  • Ability to Estimate

  • Skill Improvement over time (the ability to learn)

  • Emotional Awareness

How Automation Benefits Us

Automation has incredible benefits, the most prevalent being productivity.

Between 1850 and 1910, productivity grew 0.3% annually due to the introduction of the steam engine. Jumping ahead, early robotics led to a 0.4% annual increase in productivity between 1993 and 2007, and in the IT sector, automation increased productivity 0.6% annually between 1995 and 2005. It’s predicted that automation due to Artificial Intelligence and Machine Learning will improve productivity by 0.8–1.4% annually on a global scale between 2015 and 2065.
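Those annual rates sound small, but they compound. A quick back-of-the-envelope sketch (the rates are the estimates quoted above; the 50-year horizon matching 2015–2065 is used purely for illustration):

```python
def cumulative_gain(annual_rate: float, years: int) -> float:
    """Total productivity gain after compounding an annual growth rate."""
    return (1 + annual_rate) ** years - 1

# Predicted annual range for AI-driven automation, compounded over 50 years.
low = cumulative_gain(0.008, 50)
high = cumulative_gain(0.014, 50)
print(f"cumulative gain over 50 years: {low:.0%} to {high:.0%}")
```

Compounded over the full span, that modest-sounding annual range works out to roughly a 49% to 100% total productivity gain.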

Automation has been around for decades; what is different this time?

Deep Learning in AI

The difference is the onset of advances in Deep Learning in AI.

Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to hand-engineered, task-specific algorithms. It can be applied in supervised, unsupervised, and reinforcement learning settings.

The idea of using neural networks to perform learning tasks has been around since the 1950s, originally described through the concept of the perceptron. The difference is that deep learning stacks many perceptrons together into hidden layers, which perform multiple calculations and feature extractions in a fraction of the time.
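Stacking perceptrons is what gives these networks their power: a single perceptron cannot learn the classic XOR function, but a tiny network with one hidden layer can. A minimal sketch in plain NumPy (layer sizes, the random seed, and the number of training steps are arbitrary choices for illustration, not a production recipe):

```python
import numpy as np

# XOR: not linearly separable, so a lone perceptron fails at it.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units feeding a single output unit.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

initial_loss = None
for step in range(10_000):
    # Forward pass: each layer transforms the previous layer's features.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)
    loss = np.mean((out - y) ** 2)
    if initial_loss is None:
        initial_loss = loss
    # Backward pass: squared-error gradient pushed through both layers.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_hidden
    b1 -= d_hidden.sum(axis=0)

print(f"loss: {initial_loss:.3f} -> {loss:.3f}")
```

The hidden layer learns intermediate features (roughly, OR-like and NAND-like detectors) that make the final decision linearly separable, which is exactly the kind of automatic feature extraction described above.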

With the onset of Deep learning techniques, machines can now perceive and understand the world on their own.

These advances began with specific perceptual tasks in computer vision, training neural networks on large datasets to extract features and recognize images.

Deep learning has become the most popular approach in developing artificial intelligence today.

Why does this matter?

As Sundar Pichai, the CEO of Google stated, “The last 10 years have been about building a world that is mobile-first. In the next 10 years, we will shift to a world that is AI-first.”

AI will continue to be integrated with every aspect of our lives and every industry.

There are many predicted future benefits of this integration. Along with increases in productivity and efficiency, we will see:

  • Improved accuracy

  • Increased safety and lower risk

  • Decreased labor and production costs

  • Increased throughput

  • Increased quality

  • Increased customer satisfaction

  • More room for high-value activities, giving us more time to do the things in life we appreciate most

Areas Most Likely to See Automation

There is no industry that does not have partial automation potential. It’s estimated that about half of all the activities people are paid to do in the world’s workforce could potentially be automated by adapting currently demonstrated technologies. That amounts to almost $15 trillion in wages.

The activities most susceptible to automation are physical ones in highly structured and predictable environments, as well as data collection and processing.

It isn’t just blue-collar work that is at risk of being automated. At the executive level, a quarter to a third of a CEO’s time could be automated.

Automation on a Global Scale

According to the McKinsey Global Institute, the areas of the world with the highest potential for automation include China, India, and the US.

  • China: 395.3 million employees potentially automatable

  • India: 235.1 million employees potentially automatable

  • United States: 60.6 million employees potentially automatable

The industries with the highest propensity for automation in those countries include:

  • Accommodation and food services — Almost 70%

  • Manufacturing — Almost 65%

  • Transportation & Warehousing — 60%

  • Retail trade — 55%

  • Agriculture, forestry, fishing and hunting — 50%

Where machines could replace humans — and where they can’t (yet)

Future Work Automation

The onset of advanced technology in Artificial Intelligence allows for exponential breakthroughs that’ll catapult our society into the future.

In only a few years, we have gained self-driving cars powered by LIDAR systems; automated food purchasing, preparation, and service; autonomous manufacturing plants and delivery systems; and even autonomous medical and surgical systems that can perform with higher accuracy than humans.

[embed]https://youtu.be/89JojY5Ou8g[/embed]

Risks of Automation

Cybersecurity

Just as organizations can use artificial intelligence to enhance their security posture, cybercriminals may begin to use it to build smarter malware.

In the future, we may have attacker/defender AI scenarios play out.

This new generation of malware will be situation-aware, meaning that it will understand the environment it is in and make calculated decisions about what to do next. In many ways, malware will begin to behave like a human attacker: performing reconnaissance, identifying targets, choosing methods of attack, and intelligently evading detection.

With networked systems such as self-driving cars, this creates huge potential for terrorism and disaster unless proper defensive security measures are put in place.

Increased Inequality

Wage gaps between the upper and lower classes have been shown to widen dramatically with advances in automation. They will continue to do so at the exponential rate of advancement we are facing, potentially hollowing out the middle class as we know it today.

Due to automation and advances in Artificial Intelligence, 47% of US jobs are at high risk of disappearing in the next 25 years, according to a widely cited Oxford University study.

Those affected most will be low-skilled, low-wage workers.

Education and job training are more crucial than ever. The less you make in hourly wages, the more likely your job will be replaced by automation.

Bias in Machine Learning

Another huge risk in machine learning and automation is bias being perpetuated by AI systems.

Pioneering women in tech and Artificial Intelligence, including Melinda Gates and Fei-Fei Li, have recognized this problem.

The phenomenon of AI systems perpetuating bias has already been demonstrated. It isn’t the fault of the algorithms or the machines, but of a lack of proper awareness and oversight by engineers and researchers trained to look for these indicators.

If you don’t believe this is possible, just pick up a copy of Cathy O’Neil’s book Weapons of Math Destruction, where she clearly outlines mathematical models and algorithms that claim to quantify important traits (teacher quality, recidivism risk, employability, creditworthiness) but have harmful outcomes and often reinforce inequality, keeping the poor poor and the rich rich. She gives in-depth examples of how the corruption seen in finance is now being perpetuated by Big Data.

“…big data increases inequality and threatens democracy” — Cathy O’Neil

She also describes in depth how these systems create vicious feedback loops, much like our society.

“A person who scores as ‘high risk’ is likely to be unemployed and to come from a neighborhood where many of his friends and family have had run-ins with the law. Thanks in part to the resulting high score on the evaluation, he gets a longer sentence, locking him away for more years in a prison where he’s surrounded by fellow criminals — which raises the likelihood that he’ll return to prison. He is finally released into the same poor neighborhood, this time with a criminal record, which makes it that much harder to find a job. If he commits another crime, the recidivism model can claim another success. But in fact the model itself contributes to a toxic cycle and helps to sustain it.” — Cathy O’Neil
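The loop in that passage can be made concrete with a deliberately simple toy model. Every number below is invented for illustration; this is not O’Neil’s model or any real risk instrument, just a sketch of how a proxy-based score can feed itself:

```python
import random

random.seed(1)

def risk_score(neighborhood_arrests: int) -> float:
    # A proxy-based "model": the score rises with arrests near you,
    # not with anything you personally did.
    return min(1.0, neighborhood_arrests / 50)

def simulate(years: int = 20, arrests: int = 10) -> list[float]:
    """Track how the score ratchets upward as the loop feeds itself."""
    scores = []
    for _ in range(years):
        score = risk_score(arrests)
        # Higher score -> longer sentence -> higher chance of reoffending.
        reoffend_p = 0.2 + 0.5 * score
        if random.random() < reoffend_p:
            arrests += 1  # the loop closes: more local arrests next year
        scores.append(score)
    return scores

scores = simulate()
print(f"score drifted from {scores[0]:.2f} to {scores[-1]:.2f}")
```

Because the score is built on a neighborhood-level proxy, every predicted "success" raises the inputs to the next prediction, so the score can only drift upward regardless of any individual's behavior.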

Datasets are already biased by systemic racism and our history of oppression. We need more people from underrepresented groups in tech and AI who can bring their unique life experiences to the table.

An “Existential Threat”

Founder and CEO of Tesla and SpaceX, Elon Musk, has even warned that AI is our biggest “existential threat” as a civilization, potentially more dangerous than nukes.

Musk has called for precautionary, proactive government intervention. He thinks by the time we are reactive in AI regulation, it’ll be too late.

What can’t be automated?

So what is the light at the end of the tunnel? How do we prepare ourselves for this new age of automation? We must ask ourselves: what can’t be automated? We have to identify the things that are core to our humanity.

I am talking about the things that are unique to our experience in the world. Specifically — Creativity and Empathy.

For now, machines cannot compete with us when it comes to tackling novel situations, and this puts a fundamental limit on the human tasks machines will automate. We must embrace the novel, the unique, the fundamentally human.

A great example of creative evolution driven by advances in technology is the invention of photography. When photography was invented, art, especially painting, changed: portrait paintings went out of style. Did this mean that artists stopped painting? No. The time freed by the new technology allowed a series of art movements (pointillism, cubism, surrealism, abstract art, and so on) to emerge. The same is likely to happen with AI: it’s not that people will stop producing work, but that work will become more creative and expressive, with autonomous processes completing monotonous tasks, such as painting portrait after portrait.

Future Skills

So what does this mean for the future of work? The future state of any single job lies in the answer to a single question: To what extent is that job reducible to frequent, high-volume tasks, and to what extent does it involve tackling novel situations? On frequent, high-volume tasks, machines are getting smarter and smarter.

Other than in oversight and regulation of machines, humans will still be needed to perform in unique and creative situations. We’ll be needed to connect with one another, provide entertainment, creative output, and continue to design the future as the world evolves.

In this way — Technology Is Only Making Social Skills More Important.

Nearly all job growth will be in occupations that are relatively social skill-intensive. High-skilled, hard-to-automate jobs will increasingly demand social adeptness.

The Harvard Business Review has reported that since 1980, “job and wage growth has been strongest in occupations requiring both high cognitive and high social skills,” while demand for jobs requiring routine skills has declined.

They have also seen stronger earnings growth for “multiskilled” individuals in the labor force.

The Development of AI

A high demand for engineers who can develop autonomous systems, coupled with massive job losses due to automation, means we have a great opportunity to retrain and upskill our workforce. According to a 2017 study from Paysa, U.S. companies planned to allocate more than $650 million to fuel the AI talent race.

Demand for AI talent is so high that large tech companies are competing to poach it from universities all over the country, and many in academia are making the switch to the corporate sector. Startups that need to incorporate AI into their platforms just to stay competitive are vying for anyone who even understands what AI means today.

Design Principles in Industry 4.0

Along with the technical skills of AI engineering, individuals will be needed to create and implement principles around design and user experience. As described by Wikipedia, there are four design principles in what some call Industry 4.0:

Interoperability: The ability of machines, devices, sensors, and people to connect and communicate with each other via the Internet of Things (IoT) or the Internet of People (IoP)

Information transparency: The ability of information systems to create a virtual copy of the physical world by enriching digital plant models with sensor data. This requires the aggregation of raw sensor data to higher-value context information.

Technical assistance: First, the ability of assistance systems to support humans by aggregating and visualizing information comprehensibly for making informed decisions and solving urgent problems on short notice. Second, the ability of cyber-physical systems to physically support humans by conducting a range of tasks that are unpleasant, too exhausting, or unsafe for their human co-workers.

Decentralized decisions: The ability of cyber-physical systems to make decisions on their own and to perform their tasks as autonomously as possible. Only in the case of exceptions, interferences, or conflicting goals are tasks delegated to a higher level.

Cooperation between humans and technology

So, how can we upskill to create a world where we can live and work alongside machines? Can we stop viewing automation as the enemy and learn to embrace its benefits while being adequately prepared for the risks?

Sustainable AI Development

Can we embrace new job opportunities to engineer advanced robots and systems?

Can we acknowledge that sustainable AI development means ethics, design, user experience, and long term effects on our society have to be part of the conversation?

In this critical moment of preparing machines to take on many of the responsibilities we have formerly entrusted to other human beings, we must look at ourselves, at our companies, and at our societies, and ask:

Who are we? Who represents us? And who do we represent?

We must have more than a single voice, a uniform experience, a typical approach. We must work harder to have technology represent every one among us.

Remember the Chen Family?

In the near future, nearly all legal research will be performed by algorithms, and Mrs. Chen will only be expected to argue trial cases. In this world, her ability to communicate the facts researched by AI to the jury will be the most important part of the job. As for Dr. Chen, much of his prescribing and diagnostic work will be completed by AI; furthermore, he will be accompanied by nurse and physical-therapy bots that will take over much of the medical manual labor performed by humans today.

Should their children choose to pursue the traditional careers of their parents, their work will be greatly integrated with machine learning systems, and they will need not only the skills of their profession but also the ability to be extraordinarily human in a machine-heavy environment.

If instead they choose to pursue careers in AI and computer science, their son will likely go on to learn the skills needed to be successful alongside advances in this technology; but unless things drastically change, their daughter will have a much harder time, as women make up only 18 percent of CS majors today.

Priming our workforce for the 4th Industrial Revolution

At Accel.AI, we are excited to launch a new set of workshops focusing on Human Development for AI Engineering, starting with a workshop on Engineering Mindset for AI led by our personal development mentor Jen Shae Roberts.

This 90-minute workshop will dive into well-researched tools for gaining insight into mindset: what our own mindset is, what it means, how it plays out in work and life, and what we can do about it. There will be four sections, which will later be developed into full-day workshops: growth mindset, learning how to learn, values and goals, and mindfulness. We will combine theory and practice through teaching and interactive exercises, giving participants something solid to take away as they start their journey toward becoming AI engineers.

Register for the workshop!

Thank you!

You can stay up to date on our progress, workshops, and plans going forward through our website, mailing list, meetup group, twitter and facebook page.

Join us in shaping the next generation of AI engineers and enthusiasts around the world!
