Developing AI Infrastructure for LATAM

Latin America is facing unique challenges in the global AI arms race. Prior to our first LatinX in AI Research Workshop at the Neural Information Processing Systems (NeurIPS) Conference in December 2018, the representation of Latin American researchers at these elite conferences was abysmal. In the 10 years leading up to 2016, only 11 papers from South America had been accepted at NeurIPS, according to an investigation by the Deep Learning Indaba group.

Area cartogram showing countries rescaled in proportion to their accepted NIPS papers for 2006–2016. (Source: DeepLearningIndaba.com)

For those who aren’t familiar with NeurIPS, it has positioned itself as the fastest-growing and most competitive AI conference: it was projected to receive 10,000 submissions this year, a surge that crashed its submission site’s servers and forced a deadline extension this past weekend.

Infographic depicting NIPS submissions over time; the red bar plots fabricated data. (Source: the Approximately Correct blog by Zachary C. Lipton)

It has also been credited with driving up arXiv submissions for AI and machine learning research each year.

arXiv submission rates, tweeted by Yad Konrad.

“Since last year, ~1000 more papers published on this day. I wonder what it would look like in the next 24 hours after NeurIPS Full paper submission deadline.” tweeted Yad Konrad, a researcher in SF.

These statistics underscore how critical it is to ensure that the research showcased at NeurIPS represents not just a few specific regions but the entire globe; needless to say, this urgency is not limited to NeurIPS, but applies to similar conferences and publications as well. Developing nations are advancing AI and machine learning technology in ways that can benefit even the most advanced societies, and this progress can lighten the burden often carried by well-resourced governments to support communities that have lacked access to technological development.

Our next big event is only a week away: the official LXAI Research Workshop, co-located with the Thirty-sixth International Conference on Machine Learning (ICML) at the Long Beach Convention Center in Long Beach, CA, on Monday, June 10th, 2019.

We chose to co-locate an official workshop with ICML, one of the fastest-growing artificial intelligence conferences in the world, because it is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning, spanning closely related areas like artificial intelligence, statistics, and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics.

LXAI Research @ ICML 2019

This is the first of our workshops organized and run entirely by members of our community, who have dedicated countless hours over the past six months, meeting weekly to put together a full day’s schedule: three headlining keynotes, a panel of industry leaders, a sponsored luncheon, ten oral presentations, and over forty poster presentations selected through a rigorous program committee review of the submitted research abstracts.

Huge thanks to the Chairs of the LatinX in AI Research Workshop at ICML 2019:

Big thanks to our amazing sponsors:

Sponsors for the LXAI Research Workshop @ ICML 2019

For full details on this event’s programming and registration: http://www.latinxinai.org/icml-2019

We’ll be putting out a call for chairs of our next official workshop, at NeurIPS 2019, shortly; please stay tuned to be a part of this amazing community.

At LatinX in AI (LXAI), we are doing our part by hosting these research workshops and launching an AI Infrastructure Development program. The idea was sparked by a raffle win by one of our board members, Pablo Samuel Castro, at the NeurIPS 2018 Nvidia AI luncheon.

After deliberating over countless responses to his Twitter thread, Pablo ultimately found a great home for the Nvidia T-Rex GPU, gifting it to Carmen Ruiz, a professor at the Higher Polytechnic School in Guayaquil, Ecuador, his home country. Carmen was chosen as the recipient thanks to her work leading a new Ph.D. program, and her research is being applied to:

  1. Natural disaster prediction and relief

  2. Political analysis

  3. Characterization of demographic groups in #Latam

  4. VR for educating people in impoverished areas, with a focus on girls

The next opportunity for us to rehome an incredible piece of hardware came during our recent partnership with Nvidia, where they hosted a scholarship for members of LatinX in AI and Black in AI to attend their annual GPU Technology Conference in March.

Nvidia graciously gifted our organization a second GPU, this time the Titan V, heralded as the most powerful Volta-based graphics card ever created for the PC. This time, we took nominations from our community, asking members to help us identify research institutions and startups that could use additional computing power to boost their research initiatives. Specifically, we were looking for those working on projects with a large societal impact or benefit to the local community.

After reviewing all the nominations in depth and researching potential issues with mailing and customs regulations — we chose and happily delivered the GPU to an AI research team at the Centro de Investigación y Desarrollo de Tecnología Digital del Instituto Politécnico Nacional in Mexico, nominated by Professor Jessica Beltran for their work on neurodegenerative diseases.

Dr. Jessica Beltran receiving the Titan V Graphics Card from Nvidia

Unboxing the Titan V Graphics Card from Nvidia

We know their institution is going to do amazing work, and we are excited to feature Dr. Jessica Beltran and her colleague Dr. Mireya Garcia in an upcoming online AI Research Discussion describing their work, “Towards a Diagnosis of Alzheimer’s Disease with AI,” on Friday, June 28th, 2019 at 11 am PST.

AI Research Discussion Webcast

In this talk, they will review current advances in eye-movement analysis related to the diagnosis of Alzheimer’s Disease and discuss the challenges and future directions in this field. Additionally, they will present other AI-related projects conducted in their lab and research center (CITEDI-IPN, https://www.citedi.ipn.mx/portal/), including pervasive healthcare and indexing of multimedia content.

You can register to join us via webcast here: http://bit.ly/AI-Alzheimer-Webcast

To further our efforts in Latin America, it is imperative that we better understand the challenges and opportunities for developing AI infrastructure across the region. Can you help us by completing and sharing this quick survey, so we can better understand the key players, barriers, and opportunities for AI development and innovation in your region?

LATAM AI Infrastructure Development Survey: http://bit.ly/LATAM-AI-Survey

LatinX in AI is continuing to take in-kind donations of new and gently used hardware or cloud computing credits to regift to research institutions and startups using AI to further their communities. Contact our board directly if you’d like to make a contribution: latinxinai @ accel.ai

Stay Up to Date with LXAI

Subscribe to our monthly newsletter to stay up to date with our community events, research, volunteer opportunities, and job listings shared by our community and allies!

Subscribe to our Newsletter!

Join our community on:

Facebook — https://www.facebook.com/latinxinai/

Twitter — https://twitter.com/_LXAI

Linkedin — https://www.linkedin.com/company/latinx-in-ai/

Private Membership Forum — http://www.latinxinai.org/membership

If you enjoyed reading this, you can contribute good vibes (and help more people discover this post and our community) by hitting the 👏 below — it means a lot!


Best Wishes for a Happy New Year!

Looking back on 2017…

After our launch in September 2016, this past year we focused on cultivating a rich and diverse community of artificial intelligence and deep learning enthusiasts interested in sharing their growth and learning with you!

We were so happy to bring you applied AI and social-impact workshops and events that lower the barriers to entry in engineering artificial intelligence while fostering inclusion:

Demystifying Artificial Intelligence Symposiums

  • Hosted Quarterly in the Bay Area

  • Averaging 150–200 attendees per event, we have successfully “Demystified AI” for over 1,500 attendees!

  • Met our target demographics in the US, with attendees averaging 70% non-Caucasian and 50% non-male-identifying

  • Provided scholarships and 80% discounts off the market-rate entry price to over 500 underrepresented individuals

  • Hosted 2 symposiums abroad with our partners in Oslo: Nordic Impact and Noroff Education

  • Partnered with Women Who Code, Techtonica, dev/Mission, PyLadies, Google Launchpad, Nordic Impact, Noroff Education, Katapult Accelerator, and SINTEF

  • Sponsors included Kapor Center for Social Impact, Devlabs, East Bay Community Foundation, Datalog.ai, Google, Mozilla

  • Check out this awesome video compilation by our community partner TecnoLatinx

  • See past speaker presentations on Youtube

Startup Weekend AI | AR | VR

  • Sponsored and helped organize this Techstars community event

  • Partnered with Kapor Center for Social Impact

  • Maxed out the conference space with 150 attendees

  • See videos of the final pitches on Youtube.

  • Check out photos for all the fun we had!

AI & Genomics Hackathon

  • Partnered with SVAI and the NF2 Project

  • Sponsors included Google Launchpad, Google Cloud, Google Genomics, NVIDIA, Recursion Pharmaceuticals, NCBI, genlife, and Iris.ai

  • Maxed out the conference space at 160 attendees

  • Had 6 final teams compete for the top prizes: three Titan X Pascal boards valued at ~$1,500 apiece, compliments of NVIDIA

  • Check out the teams’ research projects on GitHub

  • Watch the final presentations on Youtube

Hosted free weekly Deep Learning Book Study Sessions

Hosted free impromptu AI Research Discussion Sessions

Launched our preliminary Applied AI Technical Workshop

Hosted our preliminary AI Web series

Launched our preliminary Mindset Training for AI Engineers & AI Ethical and Social Impact Discussions

Presented for the Global AI Mind series for WeTogether.co and GirlsinTech Taiwan

Successfully registered as a 501(c)(3) Non-Profit Organization

Get involved with Accel AI in 2018!

Join our global community online:

Share your time and expertise:

Donate to our non-profit so that we may continue offering low-cost and free entry to workshops, events, and discussions:

All donations are tax-deductible now that we are officially registered as a 501(c)(3) non-profit! Contact us for a receipt for your contribution.

If you enjoyed reading this, you can contribute good vibes (and help more people discover this post and our community) by hitting the 👏 below — it means a lot!


Mindset Training for AI Engineers

As machines are becoming more human, are humans becoming more like machines?

We need to keep the humans human, and it starts with those who are designing artificial intelligence.

The number of people drawn to artificial intelligence (AI) engineering is seeing the same exponential growth as AI itself. Simultaneously, we are seeing an increase in the number of people who look at the world and see the need for change, for movement toward a world that works for everyone. Whether your interest is in AI or in social change, I think we can all agree that AI systems affect all of us, and this trend is something worth paying attention to.

Each new development by AI engineers has great potential to profoundly impact our society and the experiences of individuals within it. Engineers spend a great deal of time looking at lines and lines of code on a screen, getting deep into a project. Engineering intangible software can feel extremely isolating, and at some point people can start to think like the machine itself. This is the point at which the mindset of the people creating AI systems becomes vitally important, and where I see a need to take a step back from the computer screen and take a look at our humanity.

Mindset describes the various ways we approach life and experience the events in our lives. There is an isolation that comes with engineering, but isolation is a much bigger theme in our culture, one that affects us all. As we all struggle with the realities we are dealt, we are inundated by mainstream, individualized ideals that often involve putting others down to get ahead, remaining straight-faced, and striving to fit the cookie-cutter roles we think we choose for ourselves. Mindset can feel like an automatic response to the world around us, like something that is not fully within our control. The truth about mindset, however, is that with some effort and encouragement it can be changed, and this can alter the future for the individual and for the world.

I believe that AI has the potential to be an actionable space for creating change in how we relate to each other, how we interact with our environment, and in ensuring a habitable and equitable world for the coming generations. AI is already affecting these things; how it affects them can be influenced by the people who are driving it, if we can understand and use this power with intention and mindfulness.

When engineering AI systems and looking at data, the numbers on the screen typically represent living, breathing humans whom we don’t know and may never meet. We must face the reality that we are creating artificial intelligence systems that will affect people’s lives in ways we don’t understand.

This is a monumental power, and a huge responsibility. Engineers, AI leaders, and data scientists are making decisions that affect a very large population. All of the various differences and needs of people are boiled down to cold, hard data. There is a lot of fear around this, which we will address later in this article.

What is the solution to quell this fear?

In an effort to preserve and protect the humanity of the people creating the technology that is increasingly automating our world, I am leading workshops and trainings in Mastering Mindset for AI Engineers. This article digs into what these trainings entail, as well as the results I hope to see by doing this work.

(This photo is actually from my second workshop, at Google Launchpad during the Demystifying AI Event hosted by Accel.AI)

My First Mastering Mindset Workshop

The first time I got up in front of a room full of AI engineers, I felt like I was talking to people from an entirely different culture who spoke an entirely different language. In fact, most of them did understand multiple languages, ones they learned through tedious practice — the languages of coding.

My own language, based in a soft science from my years as an undergrad studying alternative medicine and then completing a master’s in anthropology and social change, felt like it provided a rough translation at best. They didn’t laugh at my jokes, but no one left the room, and everyone engaged with the exercises I gave them and with the research I presented. They listened. It is an interesting sensation when people start listening to you. Since that first workshop, I have learned and continue to learn about the culture of AI engineering.

Why would an anthropologist of social change want to hold mindset training workshops for AI engineers? Because I want to see real change happen in the world. And the fastest way to make change is through technology and especially artificial intelligence, which is moving at an exponential pace.

This rapid advancement is not without side effects. There is a lot that is broken in the world, and these systemic issues are being built, over and over, into new AI systems, largely under the radar of the engineers creating and testing them.

Bias, particularly from systemic racism, has already been built into algorithms we trust to make some pretty serious decisions. In a recent study on recidivism algorithms, ProPublica uncovered concerning trends in who receives longer prison sentences, showing that sentencing informed by these risk scores is heavily imbalanced by race. Cathy O’Neil also addresses issues like this in her recent book, Weapons of Math Destruction. She observed that, “Unless we specifically make sure that the models do not unfairly punish poor people or black people, we will end up with systems that do. And that is what we are seeing.”

This is something that we need to join forces on and find a way to do better.
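One concrete place to start is simply measuring the disparity. Below is a minimal sketch, in Python, of how a team might audit a classifier’s error rates across demographic groups; the DataFrame, the column names (group, predicted_high_risk, reoffended), and the values are hypothetical stand-ins for illustration, not ProPublica’s actual data or code.

```python
import pandas as pd

# Hypothetical audit table: one row per person, with the model's prediction
# and the observed outcome. All values below are made up for illustration.
df = pd.DataFrame({
    "group":               ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_high_risk": [1,   1,   0,   0,   1,   0,   0,   0],
    "reoffended":          [0,   1,   0,   1,   0,   1,   0,   0],
})

def error_rates(sub):
    """False positive and false negative rates for one demographic group."""
    did_not_reoffend = sub[sub["reoffended"] == 0]
    did_reoffend = sub[sub["reoffended"] == 1]
    return pd.Series({
        # flagged as high risk, but did not reoffend
        "false_positive_rate": (did_not_reoffend["predicted_high_risk"] == 1).mean(),
        # not flagged, but did reoffend
        "false_negative_rate": (did_reoffend["predicted_high_risk"] == 0).mean(),
    })

# A large gap between groups is exactly the kind of imbalance ProPublica reported.
print(df.groupby("group").apply(error_rates))
```

Even a check this simple, run before a model ships, makes the conversation about fairness concrete rather than abstract.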

What’s in the Workshops

Learning AI is not just about the technological aspects, but about the social implications, the emotional and personal impact, and the responsibility of wielding this power. What I can do is provide AI engineers with insight, research, and the tools that will help them to see the importance and the impact their work can and will have. Here is a small snapshot of the workshops I lead:

We start with Carol Dweck’s work on growth mindset: what it is, how to apply it to AI engineering, and how to recognize where we are stuck and push through as a team. We then turn to Barbara Oakley’s work on learning how to learn and strengthening the mind, and explore neuroplasticity and how the brain actually learns. We touch on imposter syndrome, prevalent among engineers even though we don’t like to talk about it. We look to Cathy O’Neil’s work on what she calls ‘weapons of math destruction’ to understand the serious implications of well-meaning algorithms that cause real harm to the underprivileged. We talk about the values that shape AI. We set goals for our own careers and start taking steps toward them. And of course we go deep into mindfulness and how it affects our work, our bodies, and our lives.

In my workshops I present research to support and encourage not only learning but compassionate humanity in the humans creating AI, as we understand how AI is affecting our shared world. What drives this all home is that we get up and do several group exercises and writing exercises to combine theory with practice, then talk about how this works and the dialectic of logic and emotion.

Through the workshops, I pose the question: Can we use this knowledge paired with practice to not only work for harm reduction, but towards something better?

(Another shot from my second Mindset workshop of engineers participating in a group exercise)

I Believe that People Care, and Through Care, Intention, and Hard Work, We Can Change the World.

Through these mindset trainings, I support people who care about our future and who want to work toward equality for all people. I work to prove that we can put that care into action with the tools and skills available in AI engineering, using well-researched methods for developing change and growth in mindset, while getting clear on the values and goals we hold and finding compassion around our differences.

We are in exciting times. There are many passionate and motivated people at the wheel, steering the way to the future, and AI is a powerful tool that will help get us there.

There is nothing moving faster than the development of AI. If we can catch that train and take over the engine room, can we steer it in the direction of equality and connection?

This is a call to action. How do we move from theory into practice? How do we create the change that we know needs to happen, now? We have the tools, the fire is built, all it needs is a little spark. I am offering that spark.

Check out my Mastering Mindset workshops. Contact me if you’d like to hold one for your group or company, and keep an eye on Accel.AI to see when workshops are happening, both for mindset training and technical AI training. Let’s combine forces, learn from each other, and work together towards a better world for everyone.


Working Together to Create Social Change through Deep Learning and AI

TL;DR: We are striving to create deep, fundamental change in how we learn and perceive ourselves and the world around us, so that we can keep up with exponential technologies, relate to our pasts in ways that help us grow rather than damage us, and work together to create a better world that works for everyone. To get plugged in, check out Accel.AI.

Lack of Diversity / Oppression in Tech

Currently we live in a very sick world, with twisted systems that work not for everyone but only for a few, creating a hierarchy of human worth and a categorization and simplification of everything. We are seeing these problems perpetuated through biased models and systems built with artificial intelligence and machine learning techniques, even when the intention is to solve these very issues.

Cathy O’Neil, data scientist and author of Weapons of Math Destruction (2016), is fighting on the frontlines of this issue, along with Joy Buolamwini, a researcher and ‘poet of code’ who explores the intersection of social impact technology and inclusion. Buolamwini, having personally had to put a white mask over her black face to be recognized by a widely used computer facial recognition program, points out that “. . . training sets don’t just materialize out of nowhere, we actually can create them. There’s an opportunity to create full spectrum training sets that reflect a richer portrait of humanity.” ¹ O’Neil adds that “. . . unless we specifically make sure that the models do not unfairly punish poor people or black people, we will end up with systems that do. And that is what we are seeing.” ²
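As a concrete illustration of the “full spectrum training sets” Buolamwini describes, here is a minimal sketch, in Python, of auditing how well each subgroup is represented in a dataset before any model is trained. The metadata fields (skin_type, gender) and the sample records are hypothetical; in practice they would come from your dataset’s label files.

```python
from collections import Counter

# Hypothetical per-image metadata for a face dataset (illustrative values only).
samples = [
    {"id": 1, "skin_type": "lighter", "gender": "female"},
    {"id": 2, "skin_type": "lighter", "gender": "male"},
    {"id": 3, "skin_type": "darker",  "gender": "female"},
    # ...in a real audit, every record in the training set would be listed here...
]

# Count each (skin_type, gender) combination to surface under-represented groups.
counts = Counter((s["skin_type"], s["gender"]) for s in samples)
total = sum(counts.values())
for subgroup, n in sorted(counts.items()):
    print(f"{subgroup}: {n} samples ({n / total:.1%} of the training set)")
```

If any subgroup’s share is far below its share of the population the system will serve, that is a signal to collect more data before training, not after deployment.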

Contradictions in Tech / Diversity Numbers / AI Benefits

I have been hearing people calling each other hypocrites and calling out contradictions, which I think is good to notice. However, we must also recognize that in these times of great change, when we are trying to live in ways quite different from how we were raised, we are all walking contradictions. We are all hypocrites. We must be kind to ourselves and others. We must work together across our differences and complexities.

The use of algorithmic systems to improve efficiency in job placement, prison sentencing, loans and healthcare would be great if they were truly fair. Unfortunately, they are proving not to be, and are often created by private companies who don’t always disclose their process, creating an impenetrable “black box” which may have unfair implications for all of humankind.

For example, ProPublica analyzed a for-profit company, Northpointe, which created algorithms to determine which criminals are more likely to commit future crimes. Guess what? The algorithm is inherently biased: it predicts that black people are more likely to commit crimes than they actually are and, conversely, that white people are less likely to commit crimes than they actually are. Judges around the country have been ill-advisedly using this technology in sentencing, among other things. You can see how problematic this is, and how it perpetuates rather than remedies the disproportionate number of black folks in US prisons. In ProPublica’s article on this, they said that “. . . when a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip.” ³ I don’t like those odds.
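To see why “somewhat more accurate than a coin flip” is such a damning benchmark, here is a minimal sketch, in Python, comparing a risk model’s overall accuracy against a 50% random baseline. The confusion-matrix counts below are invented for illustration; they are not Northpointe’s or ProPublica’s figures.

```python
# Invented confusion-matrix counts for a recidivism prediction model.
true_positives  = 300   # predicted to reoffend, and did
true_negatives  = 350   # predicted not to reoffend, and did not
false_positives = 200   # predicted to reoffend, but did not
false_negatives = 150   # predicted not to reoffend, but did

total = true_positives + true_negatives + false_positives + false_negatives
accuracy = (true_positives + true_negatives) / total   # 0.65 with these counts
coin_flip = 0.5

print(f"Model accuracy:            {accuracy:.1%}")
print(f"Coin-flip baseline:        {coin_flip:.1%}")
print(f"Improvement over guessing: {accuracy - coin_flip:.1%}")
```

A model that clears a coin flip by only a modest margin, while its errors fall disproportionately on one group, is a weak foundation for decisions about someone’s liberty.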

I am not trying to point fingers and call out the ‘real bad guys.’ As I said, we all make mistakes. As painful, embarrassing, and unpleasant as it is, we must admit to mistakes, big and small, and even share them with others, so that we can learn from our own mistakes as well as from the mistakes of our peers. These mistakes affect the lives of way too many people for them to be glazed over.

Growth Mindset as a Path to Learning / Developing Diversity

One super practical approach to learning and developing diversity is found in Carol Dweck’s work on growth mindset. “The growth mindset is the belief that you can cultivate and improve upon your abilities through practice and effort. Someone with a fixed mindset believes these abilities are predetermined and largely unchangeable.” ⁴ Growth mindset is fundamental. It also must be understood that the greater world and culture, what we have learned in school, from parents, and in various social and work environments, is all very focused on fixed mindset. This makes the transition hard, even if you agree with growth mindset. Hence, we need to use growth mindset to apply growth mindset.

It must be further acknowledged that many people face very real oppression, discrimination, and threats of violence, both internal and external, which make this a much harder and more complicated process than simply saying, “just change your mindset.” However, it is possible, and if we work together to change the systems that stem from AI models and biased data sets, we can simultaneously teach the machines to learn as we learn, utilizing growth mindset.

Summary

This stuff is both incredibly simple and utterly complex. That is the beauty of life. When we approach AI, deep learning, and ever-advancing automation, we cannot lose sight of this beauty and complexity. We also cannot forget the past we are coming from; however, I believe that in using these tools, we do not have to be damned by it. There is a serious threat that the oppressive, dominating, and discriminating nature of the systems that have harmed us for far too long will creep its way into AI and perpetuate these abusive and despairing realities. However, there is also an opportunity for us, as learners, to learn how to learn, and to use that understanding to program AI systems with new ways of seeing the world that celebrate diversity, empower those who have been oppressed, and create a more egalitarian existence. To again quote Joy Buolamwini: “We now have the opportunity to unlock greater equality if we make social change a priority and not an afterthought.” ¹ We can do this by building platforms that identify bias, working with diverse teams to catch each other’s blind spots, and “. . . start thinking about how we create more inclusive code and employ inclusive coding practices.” ¹

Basically, let’s all work together to create a world that works for everyone. One of the places we are striving for this is Accel.AI, a career accelerator focused on inclusion that teaches the technical skills needed to enter the AI workforce while creating a holistic environment supporting learners of any background, emphasizing the power of diversity and the importance of not just what we do, but why.
