D.6.5 Artificial Intelligence, Human Rights and Civil Liberties
June 2023
DOI: https://doi.org/10.5281/zenodo.11395868

Abstract
The democratic framework rests upon the sovereignty of the people (demos). It relies on majority-based political representation and on fundamental human rights, which vary from country to country according to socio-political circumstances and cultural context. This report offers an empirical overview of the theoretical and practical tension between developing human-like artificial intelligence (AI) technologies for social and security purposes and preserving human freedoms and rights through law. This tension has risen in parallel with the sovereign state's desire to maintain the collective and individual safety of citizens while exploiting digital tools (specifically social networks).

The report focuses on two main concepts emerging from the extensive development of AI technologies: discrimination and bias, which inevitably result in inequality. In line with the D.Rad framework of radicalisation as a process, the social exclusion that derives from inequality may lead to broader feelings of Injustice and Grievance, link to Alienation, and result in Polarisation (the I-GAP spectrum).

In some cases, AI systems can assist governments in keeping citizens safe from online radicalisation that may lead to violence. In many other cases, however, suitable non-biased AI solutions against online radicalisation can only be developed after the fact, by examining users who posted online documentation of offline violent acts.

After a short introduction (1), Section 2 offers background on the legal and constitutional issues raised by the use of human-like technology for everyday social, economic and political needs (e.g. job interviews, health systems, or expediting various legal procedures). It highlights the challenges that arise from the "automatic" design and operation of artificial intelligence, which might promote discriminatory practices, whether intentionally or not (2.1).
The following sub-sections (2.2; 2.3) discuss techniques for embedding non-discrimination in the development of advanced AI systems, including steps developers can take to devise procedures for equality and inclusion.
Section 3 reflects on attempts to design "neutral" machines, which can nonetheless embed an internal bias against minorities, since they rest on the normative assumptions of the socio-political system and on its definition of "objective" representation. Such designs may exclude other groups by replicating biases through the narrow lexical components used in AI systems.
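To make the point about narrow lexical components concrete, the following is a minimal illustrative sketch (not taken from the report; the lexicon and sample posts are invented) of a keyword-based content filter whose lexicon encodes its designers' linguistic assumptions: it flags a benign post that happens to use a listed word while missing a threatening post phrased outside the lexicon.

    # Illustrative sketch only (not from the report): a naive keyword filter
    # whose narrow, designer-chosen lexicon replicates bias. The lexicon and
    # sample posts below are invented for demonstration.

    NARROW_LEXICON = {"attack", "destroy", "eliminate"}

    def flag_post(text: str) -> bool:
        """Flag a post if any token matches the lexicon (naive matching)."""
        tokens = {token.strip(".,!?").lower() for token in text.split()}
        return bool(tokens & NARROW_LEXICON)

    posts = [
        "We will attack this problem together at the community meeting",  # benign, yet flagged
        "They deserve what is coming to them tonight",                    # threatening, yet missed
    ]

    for post in posts:
        print(flag_post(post), "->", post)

Any system trained or configured from such a narrow lexical base will reproduce the same blind spots at scale, which is the kind of replicated bias Section 3 addresses.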
Section 4 discusses the tension between security-oriented legislation and action in cyberspace (as well as the difference between security and civil defence) and initiatives that seek to preserve fundamental freedoms as part of human rights; the two may conflict. It then presents insights from the Israeli case as an example of the connection between "offline" practices and "online" characteristics, and of the ways state laws are bypassed and social media exploited to perform acts of extremism, heightening existing tensions marked by racism, xenophobia and nationalism (4.1). Digital platforms have enabled the emergence of three main threats: incitement, fake news and the copycat syndrome, sometimes aided by digital bots. All three reflect how online inspiration can lead to offline actualisation.
This section addresses the need to keep civilians' online activities safe from harm, which relies on their ability to use cyberspace anonymously, alongside the institutional actions undertaken by the state (4.2) to hold accountable those who abuse the freedoms of these spaces. Further, it considers the perspective of non-governmental organisations (NGOs) and their transformation through social media, which has expanded their ability to promote deradicalisation through activities designed to safeguard human rights (4.3). It highlights that civil society can play a crucial role in receiving and containing data on undetected discrimination and biased attitudes that the state has trouble locating.

Section 5 offers concluding remarks on the relationship between offline acts and online activities that tend to radicalise other users who already experience feelings of exclusion, by exposing them to violent content; this reflects further on the components of the I-GAP spectrum. First, one must consider the challenges posed by the discriminatory and biased elements that may arise while building an artificial intelligence mechanism, particularly in machine learning and training processes and in the textual framework of algorithmic design. Beyond that, cooperation between state security institutions and civil society (which is directly connected to the public) can direct attention to existing state AI systems that produce discrimination, as reported by volunteers and the public, through shared datasets of such reports. In this way, broad transparency can foster an unbiased approach in official institutions that also permeates the unofficial institutions of civil society and the private sector. Receiving and sharing ad hoc data between NGOs and governmental organisations can help eliminate "automatic" violations of human rights, such as discriminatory AI systems that operate "unawarely" yet still need to be addressed, thus avoiding the duplication of a discriminatory discourse that may someday be perceived as "natural".
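As a purely hypothetical illustration of the shared datasets of reports proposed above (the record format, field names and values are assumptions for illustration, not a schema defined in the report), a minimal record that NGOs and governmental organisations could exchange might look like this:

    # Hypothetical sketch: a minimal record for a shared NGO/GO dataset of
    # reported AI discrimination incidents. Field names are illustrative
    # assumptions, not a schema defined in the report.

    from dataclasses import dataclass, asdict
    from datetime import date

    @dataclass
    class DiscriminationReport:
        reported_on: date          # when the incident was reported
        reporter_type: str         # e.g. "NGO", "government", "volunteer"
        system_name: str           # the AI system alleged to discriminate
        affected_group: str        # group affected by the biased outcome
        description: str           # free-text account of the incident

    # Example record, ready to serialise and share across institutions.
    report = DiscriminationReport(
        reported_on=date(2023, 6, 1),
        reporter_type="NGO",
        system_name="job-screening model",
        affected_group="non-native speakers",
        description="CVs written in a minority dialect were consistently rejected.",
    )
    print(asdict(report))

A common, minimal format of this kind is one way the transparency and data-sharing the report calls for could be operationalised across official and unofficial institutions.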
License
Copyright (c) 2023 Ben-Gurion University of the Negev, University of Florence

This work is licensed under a Creative Commons Attribution 4.0 International License.
