AI Perceptions
Background
Artificial intelligence and automation are increasingly embedded in public safety and criminal justice systems, reshaping how governments interact with communities. While much of the existing debate focuses on algorithmic bias, less attention has been paid to the socio-technical context in which these tools are deployed and how communities actually experience them. Public perceptions of AI can significantly influence trust, legitimacy, and police–community relationships.
Purpose
This project examines how communities perceive the utility and function of AI-enabled and automated justice practices. Using field observations, qualitative interviews, and document analysis across local governments, the study investigates how criminal justice organizations implement these technologies and how residents experience and interpret them. By engaging both agency stakeholders and community members, the research seeks to understand how AI tools are shaping public trust and public safety processes.
Outcome
By providing one of the first systematic examinations of how communities and justice stakeholders define and respond to AI in criminal justice contexts, this research offers actionable insights for policymakers and practitioners. The findings aim to inform more equitable, transparent, and accountable approaches to AI governance, helping criminal justice agencies deploy AI and automation responsibly, in ways that are community-centered and evidence-based.
