A Human-Centered Approach to Aligning Risk Constructs in Machine Learning and Social Care Practice
Reinmund, Tyler
Permalink
https://hdl.handle.net/2142/121796
Description
- Title
- A Human-Centered Approach to Aligning Risk Constructs in Machine Learning and Social Care Practice
- Author(s)
- Reinmund, Tyler
- Issue Date
- 2023
- Keyword(s)
- Social care
- Predictive risk models (PRM)
- Human-centered machine learning
- Abstract
- Social care organizations increasingly use machine learning-based predictive risk models (ML-based PRMs) for risk-informed decision-making. These systems are used, for example, to identify welfare fraud, children at risk of maltreatment, or service users eligible for preventative care programs. Risk management, as these examples show, is a major component of contemporary social care practice. Yet studies have found that social care practitioners meet ML-based PRMs with resistance. In this research, I explore one reason for this phenomenon: a misalignment between risk constructs. Drawing on a five-month field study of the implementation of an ML-based PRM within a social care organization in England, I address two questions: (1) how does the construct of risk, as operationalized in an ML-based PRM, resemble the way social care practitioners engage with risk in their own practice, and (2) how can designers align these constructions of risk? On the first question, I show a tension between the two conceptions of risk. Social care practitioners are taught to see risk as contextual, subjective, and contingent: respectively, aspects of a person’s environment may either exacerbate or alleviate a risk; the person for whom the risk assessment is conducted should be able to determine what is, and is not, a risk; and risks should be considered in relation to proposed interventions. The PRM, meanwhile, assigns risk generally, treating a given risk as equivalent across different individuals, and externally, with an external agent ascribing risk to an individual without that individual’s input. Why does this dissonance matter? One answer comes from the field of human-computer interaction and its concept of a “mental model”: the understanding a user has of how a system works. When a person holds an incorrect mental model, she will struggle to use a system effectively and to adapt to errors in its behavior. Broadly, designers can employ two families of solutions to support the development of valid mental models: increasing system transparency and basing system design on people’s work practices. The first strategy relays details about system behavior to users through feedback, instructions, or explanations. The second designs a system’s conceptual model (the concepts a system uses, the relationships between them, and their organization) in accordance with people’s professional practices, drawing on user expertise and experience during the design process itself. Overall, this research contributes to the discussion on machine learning in risk-informed decision-making in two ways: first, it provides one explanation for user resistance to ML-based PRMs in the context of social care; second, it outlines concrete, well-established strategies from human-computer interaction that designers can employ to align risk constructs in machine learning and social care practice.
- Type of Resource
- text
- Language
- eng
- Handle URL
- https://hdl.handle.net/2142/121796
Owning Collections
PSAM 2023 Conference Proceedings