Artificial Intelligence is a reflection of society

A critical view on discussions at IGF 2019

By Avik Majumdar

Responsible artificial intelligence (AI), obedience to human-centric laws, and mechanisms that tie emerging technologies to human need: these three core principles represent a consensus within the Internet governance community.

With these core principles in mind, the second day of the Internet Governance Forum (IGF) in Berlin brought together a panel of high-level experts on the role and future of human rights in AI, addressing concerns that have echoed ever since machine learning and AI first emerged.

Peggy Hicks, Director of Human Rights at the United Nations, stated: “For me responsible AI is something my grandmother understands, my teacher can explain and I’m not worried about it being applied to my children. That means it has to be understandable, transparent and accountable.”

Yoichi Iida of Japan’s Ministry of Internal Affairs and Communications grounded his view of responsible AI in inclusivity, transparency, robustness and accountability, a view confirmed by Mina Hanna, co-chair of the IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems. Hanna lauded the UN’s Guiding Principles on Business and Human Rights (UNGPs), calling them the pillars on which responsible AI should be developed, alongside the advancement of human rights and the agency of political self-determination.

Caroline Nguyen, Director of Technology Policy at Microsoft, called for a discussion of the data that shapes AI, and for empathy, problem-solving training and education on AI.

Incentives to support responsible AI development and usage by corporations were discussed as part of universal regulations that could ensure AI serves these three core principles.

However, the highlight of the session was an ‘intelligent’ robot called Koala that had been silently listening to the conversation onstage. At the end of the deliberations, Koala articulately echoed what had been discussed. In a way, Koala embodied what the entire discussion had centered on: inclusivity.

Critical remarks from the audience also pointed to the lack of representation of “common global citizens” in the development of the human rights guidelines. In addition, one audience member raised the plight of people with disabilities in the dialogue on AI development.

Another critical view concerned data, the essence of AI: if viable data ecosystems existed in developing nations, AI could benefit less powerful groups. As it stands, however, the people behind AI development were said to be 80% white males from Silicon Valley, so AI may cater only to the elite. The data sets those developers employ may thus target only an audience like themselves, overlooking “the downtrodden and the biased”.

On a critical side note, using French and English as examples of “opposite ends of the language spectrum” to demonstrate inclusion may not have been the best choice. And from the perspective of the marginalized communities that IGF 2019 explicitly intends to include, Koala’s seat might have been better occupied by a human being with a perspective distinctly different from those offered by the panel.
