This research brief is an output of the project Disruptive technologies and rights-based resilience. The project has received support from the Geneva Science-Policy Interface in 2021-2022 within its Impact Collaboration Programme.
Fast-paced technological advances, including in artificial intelligence (AI), are increasingly disrupting and transforming our world. Digital technologies pose significant societal challenges, notably regarding human rights. For example, the use of such technologies can contribute to exacerbating ethnic conflict, fuelling hate speech, undermining democratic processes, facilitating mass surveillance, and perpetuating discriminatory narratives and practices.
At the same time, technological innovation also promises to support the promotion and protection of human rights, notably as a means of achieving the Sustainable Development Goals.
The private sector's role in fostering technological innovation is a key driving force of today's data-driven economy. In 2020, the UN Secretary-General called on States 'to place human rights at the centre of regulatory frameworks and legislation on the development and use of digital technologies.' Since then, a variety of regulatory initiatives at the domestic and regional levels have been put forward to address different aspects of rights-respecting business conduct in the technology sector, including, for instance, the regulation of AI technologies and of online harms.
This research brief evaluates how regulatory approaches to business conduct in the technology sector could be better aligned with the UN Guiding Principles on Business and Human Rights (UNGPs). The analysis draws on research carried out at the Geneva Academy of International Humanitarian Law and Human Rights as part of the project Disruptive Technologies and Rights-based Resilience, funded by the Geneva Science-Policy Interface and conducted in partnership with the B-Tech Project of the Office of the UN High Commissioner for Human Rights (OHCHR).