
Can there be misuse in research and development of artificial intelligence tools in law enforcement contexts?


This is the question that the PlusEthics team addressed at EUROCRIM 2022, organised in the city of Malaga (Spain) on 21-24 September. The main objective of the research carried out by our team was to develop a framework broad enough, yet precise enough, to answer this question. The starting point was the various documents produced by the European Commission on misuse in research. From these, a number of possible misuses were identified: research that may provide knowledge, materials and technologies that could be channelled towards crime or terrorism; research that could contribute to chemical, biological, radiological or nuclear weapons and the means for their delivery; research that may involve the development of surveillance technologies that could restrict human rights and civil liberties; and, finally, research that may involve minorities or vulnerable groups, or that may develop social, behavioural or genetic profiling technologies that could be misused to stigmatise, discriminate against, harass or intimidate individuals.

Recognising these contexts in which AI research may lead to potential misuse opens the door to developing measures and strategies for reducing and mitigating that misuse. The aim is to develop a set of practical tools that can facilitate solutions to these challenges. To meet this objective, the focus is first placed on three of them: AI ending up in the wrong hands; AI infringing fundamental rights or other freedoms; and AI leading to discriminatory and stigmatising attitudes. After identifying the most relevant misuses, a series of recommendations was established for each of them and, finally, a set of general recommendations was drawn up to minimise the potential risk of misuse in police AI research.

With respect to the first of the identified misuses, the main risk is that these tools, if they fall into the wrong hands, can expose and publicise the current (or potential) vulnerabilities and capabilities of law enforcement authorities. To address this, parameters have been established for the classification of information, together with a policy for incidental findings. Regarding the second potential misuse, technological abuses that could seriously restrict or violate human rights and civil liberties, a series of ethical recommendations has been established to encourage citizen participation in the different phases of AI development, especially in the final stages of implementation and use. In addition to these socio-ethical recommendations, various ethical principles have been added from the High-Level Expert Group on Artificial Intelligence and other scholars in the field. In summary, the principles outlined here are: Traceability, Explainability, Communication, Robustness, Accuracy, Reliability, Assurance and the avoidance of opacity (the "black box" problem).
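As a rough illustration of how principles such as Traceability and Explainability might be operationalised, the sketch below records every AI-assisted decision in a self-describing, tamper-evident audit entry that a human reviewer can inspect later. This is a minimal sketch under stated assumptions: the field names, the model identifier and the explanation text are hypothetical and do not come from the study itself.

```python
# Minimal sketch of an audit record supporting traceability and
# explainability. All field names and values below are illustrative
# assumptions, not part of the research described in this article.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_id, inputs, output, explanation, operator):
    """Build a self-describing, tamper-evident record of one decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,        # which model version produced this
        "inputs": inputs,            # what the model saw
        "output": output,            # what it decided
        "explanation": explanation,  # human-readable rationale
        "operator": operator,        # who ran the tool
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()  # integrity check
    return record

if __name__ == "__main__":
    rec = audit_record(
        model_id="risk-model-v1.3",  # hypothetical identifier
        inputs={"case_id": "A-102", "features": ["time", "location"]},
        output={"risk_score": 0.72},
        explanation="Score driven mainly by location history.",
        operator="analyst-07",
    )
    print(json.dumps(rec, indent=2))
```

Appending such records to a write-once log would let an oversight body reconstruct, after the fact, which model produced which decision and on what basis.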

With regard to the third potential misuse, practices based on the research results that could give rise to patterns of discrimination, stigmatisation, intimidation or harassment, a series of requirements to be taken into account has been proposed, in particular detecting and avoiding biases and elements of discrimination such as gender, race, sex or socio-economic position. In addition, a number of possible attitudes towards the use of the technology have been considered, such as the assumption of technological neutrality or the maintenance of a certain degree of scepticism towards these tools. Finally, a series of general recommendations has been established, which can be summarised as follows: Legitimacy, Respect for the law, Fairness, Human agency, Non-harmfulness, Empirical support and Ethical assessment.
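To make the bias requirement more concrete, the sketch below runs a simple pre-deployment check for disparate impact across protected groups, assuming a binary "flagged" outcome produced by a hypothetical policing tool. The records, group labels and the four-fifths threshold are illustrative assumptions rather than elements of the study.

```python
# Minimal sketch of a disparate-impact check across protected groups
# (e.g. gender or race). Data and thresholds are illustrative only.
from collections import defaultdict

def selection_rates(records, group_key, outcome_key="flagged"):
    """Rate of positive outcomes per protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

if __name__ == "__main__":
    # Hypothetical records; no real data.
    records = [
        {"gender": "F", "flagged": 1}, {"gender": "F", "flagged": 0},
        {"gender": "F", "flagged": 0}, {"gender": "M", "flagged": 1},
        {"gender": "M", "flagged": 1}, {"gender": "M", "flagged": 0},
    ]
    rates = selection_rates(records, "gender")
    ratio = disparate_impact(rates)
    print(f"Selection rates: {rates}")
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # the 'four-fifths' heuristic, used here as an assumption
        print("Warning: potential discriminatory bias across groups")
```

A check like this would be one input to the broader ethical assessment recommended above, not a substitute for it.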