Articles

Why algorithms can be racist and sexist (VOX/Recode)

That’s not to say there aren’t technical efforts to “de-bias” flawed artificial intelligence, but it’s important to keep in mind that the technology won’t be a solution to fundamental challenges of fairness and discrimination. And, as the examples we’ve gone through indicate, there’s no guarantee companies building or using this tech will make sure it’s not discriminatory, especially without a legal mandate to do so. It would seem it’s up to us, collectively, to push the government to rein in the tech and to make sure it helps us more than it might already be harming us.

Regulating social media content: Why AI alone cannot solve the problem (ARTICLE 19)

Over-broad restrictions on freedom of expression arising from regulation of speech online have to be challenged. And the use of technological tools to deal with complex problems like fake news, hate speech and misinformation falls far short of the standards required to protect freedom of expression.

This is in part because of the way the problems are being defined, and the approach being taken to address vague concepts such as fake news, hate speech and misinformation. These concepts are too broad and susceptible to arbitrary interpretation, and they become particularly dangerous when State actors assume responsibility for how the terms are interpreted. For example, Malaysia’s government introduced a “fake news” bill in March 2018 that sought to criminalise speech that criticises government conduct or expresses political opposition.

A more significant challenge is posed by the way attention has been focused on technological tools like bots and algorithms to filter content. While useful for rudimentary sentiment analysis and pattern recognition, they alone cannot parse the social intricacies and subjective nature of speech, which are difficult even for humans to grasp. The nature of the development and deployment of AI tools makes the risk to freedom of expression even greater: the presence of human bias in the design of these systems means we are far from datasets that reflect the complexity of tone, context, and sentiment of the diverse cultures and subcultures in which they function.
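
To make the article’s point concrete, here is a deliberately naive sketch of the kind of keyword matching it warns about. The blocklist and example posts are invented for illustration: the filter cannot tell a threat from a sports metaphor, and it misses harmful speech phrased without blocklisted words.

```python
# Illustrative only: a naive keyword-based content filter of the kind the
# article argues falls short. All terms and example posts are hypothetical.

FLAGGED_TERMS = {"attack", "destroy", "invade"}

def naive_filter(post: str) -> bool:
    """Flag a post if it contains any blocklisted term, ignoring all context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)

# A threatening post and a harmless sports metaphor look identical here:
print(naive_filter("We will destroy them"))                     # True
print(naive_filter("Our team will destroy them in the final"))  # True (false positive)
# While genuinely harmful speech that avoids blocklisted words passes:
print(naive_filter("People like that don't belong here"))       # False (false negative)
```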

Privacy and Freedom of Expression in the Age of Artificial Intelligence (ARTICLE 19 & Privacy International)

The aim of the paper is fourfold:

  1. Present key technical definitions to clarify the debate;

  2. Examine key ways in which AI impacts the rights to freedom of expression and privacy and outline key challenges;

  3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and

  4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities.

Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI (Berkman Klein Center for Internet and Society)

The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and human rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these "AI principles," there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends.

To that end, this white paper and its associated data visualization compare the contents of thirty-six prominent AI principles documents side-by-side. This effort uncovered a growing consensus around eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. Beneath this “normative core,” our analysis examined the forty-seven individual principles that make up the themes, detailing notable similarities and differences in interpretation found across the documents. In sharing these observations, it is our hope that policymakers, advocates, scholars, and others working to maximize the benefits and minimize the harms of AI will be better positioned to build on existing efforts and to push the fractured, global conversation on the future of AI toward consensus.

OECD Principles on AI

The OECD Principles on Artificial Intelligence promote artificial intelligence (AI) that is innovative and trustworthy and that respects human rights and democratic values. They were adopted in May 2019 by OECD member countries when they approved the OECD Council Recommendation on Artificial Intelligence. The OECD AI Principles are the first such principles signed up to by governments. Beyond OECD members, other countries including Argentina, Brazil, Costa Rica, Malta, Peru, Romania and Ukraine have already adhered to the AI Principles, with further adherents welcomed.

The OECD AI Principles set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field. They complement existing OECD standards in areas such as privacy, digital security risk management and responsible business conduct.

In June 2019, the G20 adopted human-centred AI Principles that draw from the OECD AI Principles. A June 2021 report, State of implementation of the OECD AI Principles: Insights from national AI policies, presents a conceptual framework, provides findings, identifies good practices, and examines emerging trends in AI policy, particularly on how countries are implementing the five recommendations to policy makers contained in the OECD AI Principles.

How Innovative Newsrooms Are Using Artificial Intelligence (Open Society Foundations / GIJN)

Many large newsrooms and news agencies have, for some time, relegated sports, weather, stock exchange movements and corporate performance stories to computers. Surprisingly, machines can be more rigorous and comprehensive than some reporters. Unlike many journalists who often single-source stories, software can import data from various sources, recognize trends and patterns and, using Natural Language Processing, put those trends into context, constructing sophisticated sentences with adjectives, metaphors and similes. Robots can now convincingly report on crowd emotions in a tight soccer match.
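
The template-driven reporting described above can be sketched in a few lines. The toy example below, built around an invented match-result record, shows the basic pattern that real newsroom systems elaborate on (structured data in, templated prose out); the data, field names, and wording choices are assumptions, not any outlet’s actual pipeline.

```python
# A minimal sketch of template-based automated sports reporting,
# assuming a hypothetical match-result feed.

match = {
    "home": "Rivertown FC", "away": "Lakeside United",
    "home_goals": 3, "away_goals": 2, "attendance": 41_250,
}

def write_match_report(m: dict) -> str:
    margin = abs(m["home_goals"] - m["away_goals"])
    # Vary the adjective from the data, the way NLG systems modulate tone.
    tone = "a nail-biting" if margin == 1 else "a comfortable"
    winner = m["home"] if m["home_goals"] > m["away_goals"] else m["away"]
    loser = m["away"] if winner == m["home"] else m["home"]
    # (A real system would also handle draws, league context, and more.)
    return (
        f"{winner} won {tone} encounter, beating {loser} "
        f"{max(m['home_goals'], m['away_goals'])}-"
        f"{min(m['home_goals'], m['away_goals'])} "
        f"in front of {m['attendance']:,} fans."
    )

print(write_match_report(match))
```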

These developments are why many in the journalistic profession fear Artificial Intelligence will leave them without a job. But if, instead of fearing it, journalists embrace AI, it could become the savior of the trade, making it possible for them to better cover the increasingly complex, globalized and information-rich world we live in.

Intelligent machines can turbo-charge journalists’ reporting, creativity and ability to engage audiences. Following predictable data patterns and programmed to “learn” variations in those patterns over time, an algorithm can help reporters arrange, sort and produce content at a speed never thought possible. It can systematize data to find a missing link in an investigative story. It can identify trends and spot the outlier among millions of data points that could be the beginning of a great scoop. For example, a media outlet can now continuously feed public procurement data into an algorithm that cross-references it against companies sharing the same address; perfecting such a system could give reporters many clues as to where corruption may be happening in a given country.
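
As a hedged illustration of that cross-referencing idea, the sketch below groups procurement awards by supplier address and flags addresses shared by several distinct companies. The records, field names, and threshold are all hypothetical placeholders, not a real procurement dataset.

```python
# Sketch: flag procurement suppliers that share a registered address,
# a simple signal that reporters might follow up on. Data is invented.

from collections import defaultdict

awards = [
    {"supplier": "Alfa Ltd",  "address": "12 Harbour Rd", "amount": 90_000},
    {"supplier": "Beta LLC",  "address": "12 Harbour Rd", "amount": 120_000},
    {"supplier": "Gamma SA",  "address": "7 Elm St",      "amount": 45_000},
]

def flag_shared_addresses(records, min_companies=2):
    """Return addresses used by at least `min_companies` distinct suppliers."""
    by_address = defaultdict(set)
    for r in records:
        by_address[r["address"]].add(r["supplier"])
    return {addr: suppliers for addr, suppliers in by_address.items()
            if len(suppliers) >= min_companies}

for addr, suppliers in flag_shared_addresses(awards).items():
    print(f"Possible lead at {addr}: {sorted(suppliers)}")
```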

Five reasons why now is the time to be thinking about artificial intelligence in your newsroom (Fathm)

  • Reason #1: Artificial intelligence (AI) is actually about data

  • Reason #2: AI can support human-centred thinking

  • Reason #3: AI can make you a better journalist

  • Reason #4: AI informs your overall tech strategy

  • Reason #5: You’re late… but not too late

Never before have so many countries, including China, moved with such vigor at the same time to limit the power of a single industry.

Every day we generate more data: our schedules, itineraries, preferences, activities and even our relationships are increasingly quantified. What then is the impact of this explosion of data – potentially available for collection and analysis – on the development of new media and on freedom of expression and the press?

To discuss this challenging issue, CIMA (Center for International Media Assistance) organized a panel at the Global Media Forum, an event sponsored by Deutsche Welle in Bonn from 13 to 15 June 2016. The participants included Sumandro Chattapadhyay, from the Centre for Internet and Society (India), Lorena Jaume-Palasí, from the European Dialogue on Internet Governance, and Carlos Affonso Souza, from the Institute for Technology and Society of Rio de Janeiro. The debate was moderated by Mark Nelson, the senior director of CIMA.
