AI For Good Summit –– Geneva, Switzerland

A blog post by Graduate Research Assistant Iago Bojczuk.

My first year of graduate school at MIT came to an end with my participation in the 3rd edition of the “AI For Good Summit” in Europe. Organized by the International Telecommunication Union (ITU), the United Nations specialized agency for information and communication technology (ICT), in partnership with the XPRIZE Foundation, the Association for Computing Machinery (ACM), and 37 sister United Nations agencies, the AI for Good series is the leading United Nations platform for inclusive dialogue on AI. During my time in Geneva, Switzerland, I had a chance to visit the second-largest office site of the United Nations. Known in French as the Palais des Nations, the building initially hosted the League of Nations in the early 1930s.

Photo of the Allée des Nations, with the flags of the member countries. Photo of the UN main building in Geneva.

The United Nations (UN) International Telecommunication Union (ITU), headquartered in Geneva, Switzerland, is the oldest UN specialized agency. Founded in Paris in 1865 as the International Telegraph Union, it became an official specialized UN agency after World War II. While its initial focus was on the telegraph, the dominant communication and media technology of the time, the ITU now encompasses all ICT-related technologies, with a special interest in global sustainable development and the maintenance of peace as nations develop technologically.

ITU’s reach extends across several sectors, from governments to companies to civil society to educational institutions, and the organization has a strong commitment to strengthening public-private relationships and to setting global standards in telecommunication technology. For example, ITU’s yearly statistical reports are highly useful for researchers and institutions dealing with ICT, and I often turn to its vast database for data on mobile internet and broadband as part of my thesis research on connectivity in Brazil.

As soon as I arrived in Geneva, I was impressed by the robustness of the institutional partnerships behind the summit, which started in 2017 and is now in its 3rd edition. Unlike UN conferences I had previously attended, often marked by the presence of NGOs and representatives of the public sector, the AI For Good Summit brought together more than 300 speakers and 2,000 participants from a multitude of areas, including the creative industries, computer science, engineering, and policymaking, as well as scholars, students, and entrepreneurs.

Also in Geneva, I had the chance to meet an interesting group of graduate students from Mexico and Lebanon who are currently studying at Eidgenössische Technische Hochschule (ETH) in Zürich, Switzerland. I got to spend a lot of time chatting with them and learned more about their work on various projects. 

Research Assistant Iago Bojczuk with other graduate students from ETH Zürich

Moreover, I was able to attend lectures and have discussions with highly esteemed people, such as the creator of Second Life, a French Fields Medal winner, one of the writers of Star Trek: Voyager, high-ranking representatives of the UN, and many other figures.

As highlighted on the first day of the summit, there are three major areas in which the UN has the power to contribute to AI: I) to propose, regulate, and scale the application of AI to global and sustainable development related to the 17 Sustainable Development Goals (SDGs); II) to formulate values around AI and its applications; and III) to create nimble and adaptable multi-stakeholder systems. In order to address these major areas, and to better explore how AI technologies can be implemented to tackle the 17 SDGs, the summit’s lectures, meetings, and activities were divided into five major pillars, which correspond to different tracks based on participants’ interests.

These pillars included AI for Education and Learning; Good Health and Well-Being; AI, Human Dignity and Inclusive Societies; Scaling AI for Good; and AI for Space. Each participant was expected to pick one track and contribute to the discussions of that particular working group. Since I was curious to learn across different fields, most of the panels I attended were about inclusiveness and AI, as well as education and AI in the Global South, all focused on the UN Sustainable Development Goals.

Source: United Nations

Jean-Philippe Courtois, EVP and President, Microsoft Global Sales, Marketing & Operations, gave the opening keynote at the summit, emphasizing two important things. First, he discussed the broader need to establish a strong tech skills ecosystem to fuel the changes we are already experiencing. Second, he encouraged an interdisciplinary approach that would include people from all fields to navigate current technological change. 

While I appreciated that the corporate side acknowledged that no single industry, nation, organization, or entity can do it alone, I feel that more international representation throughout the summit was needed to live up to what Courtois proposed. The critical analysis of global representation, and of the inherent problems around the ‘global village’ discourse, is something I am passionate about as I research media systems, and I think it was missing from the summit, though there were improvements in comparison to its first edition.

I raise this recurring issue of global representation at conferences and summits when discussing ICT4D or ‘tech for good’ because I grew up shaped by a narrative in which we (in the community where I grew up in Brazil) had no option but to be subjected to the “good” that was provided for us, without having our voices heard in the design, development, deployment, or circulation of new media technologies.

While I firmly believe that summits and conferences should be events of high importance for scholars and civil society actors, I also think that, as we tackle more complex issues such as AI, we need to spend more time acknowledging pre-existing societal issues that often get overshadowed by the hype around new corporate-led products, such as the well-known digital divide or the energy sector. While these concerns need not be mutually exclusive, we should at least be aware of the diverse circumstances around the world so that we can better establish partnerships and gather global talent to ensure a more inclusive and participatory scenario for future decades.

Photo of one of the plenary sessions at ITU

As we discuss in the GMTaC lab, there is so much to learn about media technologies beyond the centers of industrial and political power. In fact, as scholars have already pointed out, very little consultation is actually done with the people and communities who live in developing regions and who remain disconnected.

In his book Whose Global Village? Rethinking How Technology Shapes Our World, UCLA Professor Ramesh Srinivasan argues that “such populations are often spoken for rather than listened to.”1 But how can we be inclusive and talk about AI or machine learning when communities in many countries still lack digital infrastructure, media literacy, or even access to electricity? This is food for thought that scholars, practitioners, and communities need to keep working on.

On a more practical and project-oriented side, one case study that was well discussed in the opening remarks was FarmBeats, one of Microsoft’s AI for Earth lighthouse projects. Combining AI and IoT (sensors, drones, data analytics, and connectivity solutions), FarmBeats aims to lower overall costs and reduce the environmental impact of agricultural production, which is arguably critical for developing nations.

 

While I have not looked deeper into the efficacy of projects like those presented by Microsoft, I enjoyed the emphasis given to ethics in AI, which later in the summit was expanded under the title ‘Unexpected Consequences of AI’. I learned a great deal about the ethical principles that guide Microsoft’s AI-enabled projects, detailed below:

Source: https://www.microsoft.com/en-us/ai/our-approach-to-ai
 
For future AI summits, it would be great if we participants could collaborate on and further the external monitoring of the ethics and “good” aspects of such initiatives. By doing so comparatively, we would be able to critically assess how different companies deal with ethical issues and thereby foster an ecosystem where those discussions are valued rather than seen as threats to technological development.

Another interesting moment happened in a panel about government innovation, when Marten Kaevats, National Digital Adviser of Estonia, discussed his country’s digitalization efforts, which involve high-stakes conversations (often dealing with digital IDs, passports, facial recognition technologies, and so forth). When addressing a question concerning privacy and the digitization of personal information, he said that “it is not a political question. It’s an engineering question.” I immediately thought: What about when inequalities are exacerbated through AI biases? What about the legislation and the language we use to deal with technological advancements? What about when police and security forces deploy automated facial recognition systems as a way of identifying criminals and terrorists? Who should be accountable for mistakes, biases, and the reinforcement of prejudices?

Issues like these have already been brought to public discussion by numerous scholars, tech critics, and human rights activists, as we find in books such as Weapons of Math Destruction by mathematician Cathy O’Neil and Algorithms of Oppression by information studies scholar Safiya Noble, as well as in the projects of MIT Media Lab researcher Joy Buolamwini (AI, Ain’t I a Woman?), among others.

However, it did not take long until someone spoke up and corrected him, arguing that whenever a technological matter involves people, communities, and power, it is always a political question. The tense conversation continued to flow as presentations were followed by Q&A. Despite the large size of the summit and the numerous speakers, I appreciated that the smaller sessions, divided among the five pillars I mentioned above, allowed for closer interaction with participants; these occasions often involved disagreements but still worked toward a possible consensus on how to expand the summit’s vision of AI For Good.

Photos of the summit at the ITU headquarters in Geneva, Switzerland

Overall, the three major takeaways from the summit were:

  1. Global diversity is key when discussing the applicability, ethics, and importance of AI for all. In a world where only a few companies operating in urban centers in the so-called “Global North” are at the forefront of technological advancements in AI, it is crucial that we work toward a more inclusive framework that allows global voices to engage in AI-related programs. In doing so, we will also be better equipped to think of AI in a broader and historical sense, weighing the impact of successful policies against the reality of most people, who still lack access to ICT, media literacy, or even electricity.
  2. Interdisciplinarity is a must when thinking of AI applications. From artists to policymakers and from computer engineers to historians, it is essential that interdisciplinary teams work together in the conceptual and technical development of AI technologies. This is because societal breakthroughs often depend on diversity, and unintended consequences are better prevented when working across expertise, interests, and motivations. Given the global impact that AI promises in the coming years, it is essential to include all segments of human creativity and human potential in preparing for them.
  3. Governments should work more closely with corporations to regulate AI research and development. If companies work in isolation during the research and development phases of AI, we are doomed to live under guidelines that work well for companies and for the capitalist system that drives their revenues, and not necessarily for those who stand to benefit from such technologies. That said, it is essential that elected officials be educated in areas of AI so that they can help develop appropriate regulations of AI tools in the public interest. Accordingly, a more actively aware civil society is also crucial, though this depends on how much importance we give not only to tech-related classes but also to the social science dimensions that shape the world we navigate.

As I learned from other participants and speakers at the summit, it would be virtually impossible to foster impact on a global scale without a common enabling infrastructure. Bearing that in mind, the end of the summit gave rise to the ‘AI Commons’, which will comprise “shared knowledge, data, resources and problem-solving approaches to stimulate the development and application of ‘AI for Good’ projects, with the overarching goal to create the new partnerships required to ensure that high-potential ‘AI for Good’ projects achieve an impact on a global scale.”2 Its three major values are the following: open and collaborative, diverse and inclusive, and human-centered.

As described in an ITU press release, the ‘AI Commons’ will provide an open framework for collaboration and a decentralized system to democratize problem-solving with AI. Accordingly, AI adopters will connect with AI specialists and data owners to align incentives for innovation and develop AI solutions to precisely defined problems. AI development and application will therefore build on the state of the art, enabling AI solutions to scale with the help of shared datasets, testing and simulation environments, AI models and associated software, and storage and computing resources.3

“AI will have the greatest impact when everyone can access its benefits. On the other hand, every government, company, university, international institution, civil society organization, and every single one of us should consider how best to work together to ensure AI serves as a positive force for humanity,” said ITU Secretary-General Houlin Zhao. “At the core of this is data. AI and data need to be a shared resource if we are serious about scaling AI for good. The community supporting the summit is creating infrastructure to scale-up their collaboration – to convert the principles underlying the summit into global impact.”

Prof. Lisa Parks had previously written about how the broad reach of AI tools raises questions of justice and global impact, as part of MIT’s Ethics, Computing and AI series.

“Given the power of AI tools to impact human behavior and shape planetary conditions, it is vital that a political, economic, and materialist analysis of the technology’s relation to global trade, governance, natural environments, and culture be conducted. This involves adopting an infrastructural disposition and specifying AI’s constitutive parts, processes, and effects as they take shape across diverse world contexts. Only then can the public understand the technology well enough to democratically deliberate its relation to ethics and policy.”4

Finally, Chaesub Lee, Director of the ITU Telecommunication Standardization Bureau, noted that summits like AI for Good are essential because they bring together a wide range of innovators, investors, data owners, and humanitarian actors. He goes on to explain that “inclusive dialogue has helped them to clarify their respective roles in ensuring that AI fulfills its potential to act as a force for good, clarity that will continue to increase with the development of the ‘AI Commons’, the embodiment of the AI for Good community’s commitment to collaboration.”

To read more about the ‘AI Commons’ initiative and find out how to get involved, click here: https://aicommons.com 

Although my work does not focus on AI, I believe that everyone should be willing to learn more about it, going beyond what we read in the news headlines and trying to understand how these technologies really work, including the infrastructures, stakeholders, and power dynamics that enable them. That is how we will be better prepared to anticipate AI’s [un]intended consequences and gauge its potential for the benefit of communities in different contexts.

Find below a list of ITU resources related to the AI For Good Summit to explore:

1 Srinivasan, R. (2017). Whose Global Village? Rethinking How Technology Shapes Our World. New York University Press.

2 AI Commons, aicommons.com/about-us/.

3 “3rd AI for Good Global Summit Gives Rise to ‘AI Commons.’” ITU Press Release, www.itu.int/en/mediacentre/Pages/2019-PR10.aspx.

4 “Addressing the Societal Implications of AI.” MIT SHASS News, 2019, Ethics and AI: AI’s Impact on Society, Lisa Parks, shass.mit.edu/news/news-2019-ethics-and-ai-ais-impact-society-lisa-parks.