Future-oriented AI risk management
Gotcheva, Nadezhda; Wessberg, Nina
Description
- Title
- Future-oriented AI risk management
- Author(s)
- Gotcheva, Nadezhda
- Wessberg, Nina
- Issue Date
- 2023
- Keyword(s)
- Foresight
- Horizon scanning
- Risk analysis
- Hazard identification
- Emerging technologies
- Abstract
- Previous research has examined the potential of integrating future-oriented technology analysis with risk management methodologies and tools, with the aim of systematically including risk assessment in future-oriented technology analysis (Koivisto et al., 2009). Integrating the foresight and risk assessment traditions has been considered beneficial particularly in studies of new and emerging risks, because this field is closest to foresight exercises. Identifying new and emerging AI risks demands a longer timeframe, since it can take a while for their implications for society to manifest. Risk-informed decision-making related to AI technologies is challenging because capturing the full risk landscape requires an understanding of both “proximate” risks (direct) and “distal” risks (latent, indirect, and sometimes remote). It may be unknown whether a given change poses a latent risk, and what the potential magnitude and impact would be if that risk materialized. Ensuring such comprehensive understanding and competence building requires continuous foresight as well as continuous risk analysis, assessment, and management in organizations. In this paper we propose a new method for continuous horizon scanning of AI technology development and its societal implications, called the Signal post series. The goal is to systematically capture and analyze weak signals and to co-create knowledge for anticipating risks and opportunities related to AI and its various future applications. Signal posts are essentially thematic, future-oriented essays or blog texts containing hyperlinks, written regularly (for example, monthly) on the basis of weak signals identified and interpreted in researchers’ workshops. The method has been developed in the ETAIROS project in Finland, “Ethical AI for the Governance of the Society”, funded by the Strategic Research Council at the Academy of Finland. The ETAIROS project aims to study and develop practical processes and frameworks that help public, private, and third-sector organizations enhance the ethical and social sustainability of applying AI technologies. In terms of impact, Signal posts contribute to strengthening the future-oriented responsibility and resilience of AI systems. To tackle risks and opportunities we ask: 1) What should happen, 2) What happens, 3) What does not happen, and 4) What may happen in systems applying AI technology? Every Signal post includes an ethical consideration covering topics such as humanity, sovereignty, dignity, human rights, data management, safety, property rights, transparency, and sustainability. With this analysis we aim to improve the culture of responsibility in AI systems. This also includes resilience, which enables a system to learn from mistakes and manage future changes in the operating environment. In addition, we seek to foster the acceptability and desirability of AI systems by addressing potential opportunities. There are pressing calls for a harmonized global approach and regulation for assessing and managing AI-related risks, especially with respect to human values, norms, and overall culture. The advantage of the Signal post method for integrating horizon scanning and risk analysis is its strong emphasis on safety, ethics, and responsibility while considering both the risks and the opportunities related to AI technologies. This approach has the potential to enhance capacities across sectors and value chains for co-creating sustainable futures.
- Type of Resource
- text
- Language
- eng
- Handle URL
- https://hdl.handle.net/2142/121845
- Sponsor(s)/Grant Number(s)
- Strategic Research Council at the Academy of Finland
Owning Collections
PSAM 2023 Conference Proceedings