Research & Reports

UN Reports

Report of the Special Rapporteur to the Human Rights Council on the use of encryption and anonymity to exercise the rights to freedom of opinion and expression in the digital age (A/HRC/29/32, 2015)

In the present report, submitted in accordance with Human Rights Council resolution 25/2, the Special Rapporteur addresses the use of encryption and anonymity in digital communications. Drawing from research on international and national norms and jurisprudence, and the input of States and civil society, the report concludes that encryption and anonymity enable individuals to exercise their rights to freedom of opinion and expression in the digital age and, as such, deserve strong protection.

(Summary)

Promotion and protection of the right to freedom of opinion and expression (as it relates to artificial intelligence) (A/73/348, 2018)

The present report does not pretend to be the last word on artificial intelligence (AI) and human rights. Rather, it tries to do three things: define key terms essential to a human rights discussion about AI; identify the human rights legal framework relevant to AI; and present some preliminary recommendations to ensure that, as the technologies comprising AI evolve, human rights considerations are baked into that process. The report should be read as a companion to my most recent report to the Human Rights Council (A/HRC/38/35), in which a human rights approach to online content moderation was presented.

(Intro)

Steering AI and Advanced ICTs for Knowledge Societies: A Rights, Openness, Access, and Multi-stakeholder Perspective (UNESCO)

UNESCO’s mandate to build inclusive knowledge societies is centered on its efforts to promote freedom of expression and access to information, alongside quality education and respect for cultural and linguistic diversity. The digital transformation underway in society is touching all spheres of human activity, and it is timely to reflect on the key challenges and opportunities created by digital technologies like artificial intelligence (AI).

The title of this publication is a call for ‘Steering AI and Advanced ICTs for Knowledge Societies’ from the perspective of human Rights, Openness, Access and Multi-stakeholder governance (the ROAM principles). Such steering should also support gender equality and Africa, the two global priorities of UNESCO. Technological change and advancement are important for sustainable development, yet belief in technological determinism risks neglecting social, economic and other drivers. Instead, the challenge is to harness human agency to shape the trajectory of AI and related information and communication technologies (ICTs).

While there is no single definition of ‘artificial intelligence’, this publication focuses on what UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) describes as “machines capable of imitating certain functionalities of human intelligence, including such features as perception, learning, reasoning, problem solving, language interaction, and even producing creative work” (COMEST, 2019). AI and its constitutive elements of data, algorithms, hardware, connectivity and storage exponentially increase the power of ICT. This is a major opportunity for sustainable development, with concomitant risks that also need to be addressed. To steer AI accordingly, we need to recognize the uneven but dynamic distribution of AI power across multiple and dispersed centres within governments, the private sector, the technical community, civil society and other stakeholders worldwide. It is for this reason that multi-stakeholder engagement around AI is vital. This perspective aligns with the approach to ICT governance as per the World Summit on the Information Society (WSIS) principles and processes that are led by the United Nations (UN).

Spotlight on Artificial Intelligence and Freedom of Expression (OSCE)

This paper, building on the initial work of the OSCE Representative on Freedom of the Media, maps the key challenges to freedom of expression presented by AI across the OSCE region, in light of international and regional standards on human rights and AI. It identifies a number of overarching problems that AI poses to freedom of expression and human rights in general, in particular:

  • The limited understanding of the implications for freedom of expression caused by AI, in particular machine learning;

  • Lack of respect for freedom of expression in content moderation and curation;

  • State and non-State actors circumventing due process and rule of law in AI-powered content moderation;

  • Lack of transparency regarding the entire process of AI design, deployment and implementation;

  • Lack of accountability and independent oversight over AI systems;

  • Lack of effective remedies for violation of the right to freedom of expression in relation to AI.

This paper observes that these problems became more pronounced in the first months of 2020, when the COVID-19 pandemic incentivized States and the private sector to use AI even more as part of measures introduced in response to the pandemic. It notes a tendency to revert to technocratic solutions, including AI-powered tools, without adequate societal debate or democratic scrutiny.

New powers, new responsibilities: A global survey of journalism and artificial intelligence (LSE)

Artificial intelligence (AI) is a significant part of journalism already, but it is unevenly distributed. The technologies that come under the umbrella term of AI range from everyday functions such as search, to complex algorithms drawing upon deep learning to create text or videos. AI technologies are rapidly developing, alongside other radical changes in media production and business models. The future impact of AI is uncertain, but it has the potential for wide-ranging and profound influence on how journalism is made and consumed. Using AI is not as dramatic a step as when news organisations first went online. It has more in common with the adoption of social media as a source, a production tool, and a distribution and engagement vehicle for journalism.

AI has the potential to enhance journalism throughout the process in significant ways that may, over the next few years, have structural effects. However, even the newsrooms we surveyed that are furthest ahead in the adoption of AI described it as additional, supplementary and catalytic, not yet transformational. When fully integrated, pervasive, and operating at scale, AI could have high value in certain areas such as audience engagement, story discovery, and labour efficiency. Some of the possible uses, such as automated translation and text generation, may enable leaps forward into new areas of journalism, marketing and product development. But overall, our respondents, while often enthusiastic, said that they don’t expect an immediate revolution at scale through AI, compared to the advances that might occur in other fields such as security, retail, or health. This is partly because of the special nature and needs of journalism but also because of the relative lack of resources for research and development.

Mixed Messages? The Limits of Automated Social Media Content Analysis (CDT)

Governments and companies are turning to automated tools to make sense of what people post on social media, for everything ranging from hate speech detection to law enforcement investigations. Policymakers routinely call for social media companies to identify and take down hate speech, terrorist propaganda, harassment, “fake news” or disinformation, and other forms of problematic speech. Other policy proposals have focused on mining social media to inform law enforcement and immigration decisions. But these proposals wrongly assume that automated technology can accomplish on a large scale the kind of nuanced analysis that humans can accomplish on a small scale.

Governance with teeth: How human rights can strengthen FAT and ethics initiatives on artificial intelligence

As artificial intelligence (AI) is increasingly integrated into societies, its potential impact on democracy and society has given rise to important debates about how AI systems should be governed. Some stakeholders have put their focus on building normative ethical principles, while others have gravitated towards a technical discussion of how to build fair, accountable, and transparent AI systems. A third approach has been to apply existing legal human rights frameworks to guide the development of AI that is human rights-respecting through design, development and deployment.

In this paper, ARTICLE 19 considers the ethical and technical approaches in the field so far. We identify the contours and limitations of these parallel discussions, and then propose a human rights-based approach to each of them. The intention behind this paper is to explore how a human rights-based approach can constructively inform and enhance these efforts and present key recommendations to stakeholders.

GISWatch 2019 – Artificial intelligence: Human rights, social justice and development (APC)

Much has been written about the ways in which artificial intelligence (AI) systems have a part to play in our societies, today and in the future. Given access to huge amounts of data, affordable computational power, and investment in the technology, AI systems can produce decisions, predictions and classifications across a range of sectors. This profoundly affects (positively and negatively) economic development, social justice and the exercise of human rights.

Contrary to popular belief that AI is neutral, infallible and efficient, it is a socio-technical system with significant limitations, and can be flawed. One possible explanation is that the data used to train these systems emerges from a world that is discriminatory and unfair, and so what the algorithm learns as ground truth is problematic to begin with. Another explanation is that the humans building these systems have their own unique biases and train systems in flawed ways. A third possible explanation is that there is no true understanding of why and how some systems are flawed – some algorithms are inherently inscrutable and opaque, and/or operate on spurious correlations that make no sense to an observer. But there is a fourth, cross-cutting explanation that concerns the global power relations in which these systems are built. AI systems, and the deliberations surrounding AI, are flawed because they amplify some voices at the expense of others, and are built by a few people and imposed on others. In other words, the design, development, deployment and deliberation around AI systems are profoundly political.

The 2019 edition of GISWatch seeks to engage at the core of this issue: what does the use of AI systems promise in jurisdictions across the world, what do these systems deliver, and what evidence do we have of their actual impact? Given the subjectivity that pervades this field, we focus on jurisdictions that have been hitherto excluded from mainstream conversations and deliberations around this technology, in the hope that we can work towards a well-informed, nuanced and truly global conversation.

Human Rights in the Age of Artificial Intelligence (Access Now)

As artificial intelligence continues to find its way into our daily lives, its propensity to interfere with human rights only gets more severe. With this in mind, and noting that the technology is still in its infant stages, Access Now conducts this preliminary study to scope the potential range of human rights issues that may be raised today or in the near future. Many of the issues that arise in examinations of this area are not new, but they are greatly exacerbated by the scale, proliferation, and real-life impact that artificial intelligence facilitates. Because of this, the potential of artificial intelligence to both help and harm people is much greater than that of the technologies that came before. While we have already seen some of these consequences, the impacts will only continue to grow in severity and scope. However, by starting now to examine what safeguards and structures are necessary to address problems and abuses, the worst harms, including those that disproportionately impact marginalized people, may be prevented and mitigated.

There are several lenses through which experts examine artificial intelligence. The use of international human rights law and its well-developed standards and institutions to examine artificial intelligence systems can contribute to the conversations already happening, and provide a universal vocabulary and forums established to address power differentials. Additionally, human rights laws contribute a framework for solutions, which we provide here in the form of recommendations. Our recommendations fall within four general categories: data protection rules to protect rights in the data sets used to develop and feed artificial intelligence systems; special safeguards for government uses of artificial intelligence; safeguards for private sector uses of artificial intelligence systems; and investment in more research to continue to examine the future of artificial intelligence and its potential interferences with human rights.

Automating Society: Taking Stock of Automated Decision-Making in the EU (AlgorithmWatch)

Imagine you’re looking for a job. The company you are applying to says you can have a much easier application process if you provide them with your username and password for your personal email account. They can then just scan all your emails and develop a personality profile based on the result. No need to waste time filling out a boring questionnaire and, because it’s much harder to manipulate all your past emails than to try to give the ‘correct’ answers to a questionnaire, the results of the email scan will be much more accurate and truthful than any conventional personality profiling. Wouldn’t that be great? Everyone wins—the company looking for new personnel, because they can recruit people on the basis of more accurate profiles, you, because you save time and effort and don’t end up in a job you don’t like, and the company offering the profiling service because they have a cool new business model.

This report focuses on four different issues:

  • How is society discussing automated decision-making? Here we look at the debates initiated by governments and legislators on the one hand, like AI strategies, parliamentary commissions and the like, while on the other hand we list civil society organisations that engage in the debate, outlining their positions with regard to ADM.

  • What regulatory proposals exist? Here, we include the full range of possible governance measures, not just laws. So we ask whether there are ideas for self-regulation floating around, a code of conduct being developed, technical standards to address the issue, and of course whether there is legislation in place or proposed to deal with automated decision-making.

  • What oversight institutions and mechanisms are in place? Oversight is seen as a key factor in the democratic control of automated decision-making systems. At the same time, many existing oversight bodies are still trying to work out what sectors and processes they are responsible for and how to approach the task. We looked for examples of those who took the initiative.

  • Last but not least: What ADM systems are already in use? We call this section ADM in Action to highlight the many examples of automated decision-making already at work all around us. Here, we tried to make sure that we looked in all directions: do we see cases where automation poses more of a risk, or more of an opportunity? Is the system developed and used by the public sector, or by private companies?

Artificial Intelligence: The Promises and the Threats (UNESCO Courier)

More than ushering in a Fourth Industrial Revolution, AI is provoking a cultural revolution. It is undeniably destined to transform our future, but we don’t know exactly how yet. This is why it inspires both fascination and fear. In this issue, the Courier presents its investigation to the reader, elaborating on several aspects of this cutting-edge technology at the frontiers of computer science, engineering and philosophy. It sets the record straight on a number of points along the way. Because, let’s be clear: as things stand, AI cannot think. And we are very far from being able to download all the components of a human being into a computer! A robot obeys a set of routines that allows it to interact with us humans, but outside the very precise framework within which it is supposed to interact, it cannot forge a genuine social relationship. Even so, some of AI’s applications are already questionable: data collection that intrudes on privacy, facial recognition algorithms that are supposed to identify hostile behaviour or are imbued with racial prejudice, military drones and autonomous lethal weapons, etc. The ethical problems that AI raises, and will undoubtedly continue to raise tomorrow with greater gravity, are numerous. While research is moving full speed ahead on the technical side of AI, not much headway has been made on the ethical front. Though many researchers have expressed concern about this, and some countries are starting to give it serious thought, there is no legal framework to guide future research on ethics on a global scale.

Artificial Intelligence: Practice and Implications for Journalism (Tow Center for Digital Journalism)

The increasing presence of artificial intelligence and automated technology is changing journalism. While the term artificial intelligence dates back to the 1950s, and has since acquired several meanings, there is a general consensus around the nature of AI as the theory and development of computer systems able to perform tasks normally requiring human intelligence. Since many of the AI tools journalists are now using come from other disciplines (computer science, statistics, and engineering, for example), they tend to be general purpose. Now that journalists are using AI in the newsroom, what must they know about these technologies, and what must technologists know about journalistic standards when building them? On June 13, 2017, the Tow Center for Digital Journalism and the Brown Institute for Media Innovation convened a policy exchange forum of technologists and journalists to consider how artificial intelligence is impacting newsrooms and how it can be better adapted to the field of journalism. The gathering explored questions like: How can journalists use AI to assist the reporting process? Which newsroom roles might AI replace? What are some areas of AI that news organizations have yet to capitalize on? Will AI eventually be a part of the presentation of every news story?

AI and Human Judgement (Kenan Malik)

This essay investigates the difference between human and artificial intelligence. It was published on 17 May 2020 under the headline ‘For all its sophistication, AI isn’t fit to make life-or-death decisions’.
