Trust and trustworthy artificial intelligence: A research agenda for AI in the environmental sciences
Demands to manage the risks of artificial intelligence (AI) are growing. These demands and the government standards arising from them both call for trustworthy AI. In response, we adopt a convergent approach to review, evaluate, and synthesize research on the trust and trustworthiness of AI in the environmental sciences and propose a research agenda. Evidential and conceptual histories of research on trust and trustworthiness reveal persisting ambiguities and measurement shortcomings related to inconsistent attention to the contextual and social dependencies and dynamics of trust. Potentially underappreciated in the development of trustworthy AI for environmental sciences is the importance of engaging AI users and other stakeholders, which human-AI teaming perspectives on AI development similarly underscore. Co-development strategies may also help reconcile efforts to develop performance-based trustworthiness standards with dynamic and contextual notions of trust. We illustrate the importance of these themes with applied examples and show how insights from research on trust and the communication of risk and uncertainty can help advance the understanding of trust and trustworthiness of AI in the environmental sciences.
document
https://n2t.org/ark:/85065/d7df6wf6
eng
geoscientificInformation
Text
publication
2016-01-01T00:00:00Z
publication
2024-06-01T00:00:00Z
Copyright author(s). This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
OpenSky Support
UCAR/NCAR - Library
PO Box 3000
Boulder
80307-3000
name: homepage
pointOfContact