Monday, 27 April 2026

AI is Coming

For the 28th Annual McWilliams Probation Lecture on 14th July 2026, Professor Melissa Hamilton will speak on 'Artificial Intelligence in Probation: Opportunities, Risks, and Responsible Use'.

Melissa Hamilton is a Professor of Law & Criminal Justice at the University of Surrey and a Surrey AI Fellow with the Surrey Institute for People-Centred Artificial Intelligence. She holds a Juris Doctorate (law) and a PhD in Criminology and is a member of the Royal Statistical Society, International Corrections and Prisons Association, American Psychological Association, and the Association of Threat Assessment Professionals.

Her research is interdisciplinary and focuses on the use of AI and related technologies in criminal justice, sentencing practices, interpersonal violence, and trauma-informed approaches to legal and correctional decision-making. Before entering academia, Melissa worked as both a police officer and a prisons officer, experience that continues to inform her research and teaching.

Melissa’s work has been published across law, social science, and criminal justice journals. She also regularly contributes to public and professional discussions through print media, radio and television broadcasts, and online platforms including blogs and podcasts.

The respondent is David A Raho. David is a PhD researcher in Law and Criminology at the Institute of Law and Social Sciences at Sheffield Hallam University, investigating AI Maturity Models, AI Cultural Readiness, and the comparative adoption of artificial intelligence in probation and rehabilitation services across England & Wales, Brazil, and Japan. He has nearly four decades of frontline experience as a probation practitioner and now works as a member of the AI Team at HMPPS HQ. He has contributed to publications on the use of technology in probation for both the CEP and the UN, is a member of the UNESCO expert network on AI, and is a Tutor at the University of Oxford on AI Governance. He is both a Fellow and a Trustee of the Probation Institute and a proud Napo member, having previously served as a Branch Chair in London and as a National Vice Chair.

This event will be held at the Institute of Criminology in the lower ground floor seminar rooms.

Lunch will be served at 1pm. The lecture will begin at 2pm. Tea will be served at 3.45pm.

Please register for in-person attendance here.

Please register for online attendance here.

1 comment:

  1. AI isn’t a future prospect in probation; it’s already here. It’s not hard to see where this leads: keyboards and IT hardware replaced by transcription recordings and voice control, intake points using biometric scans, individuals tracked from entry to exit through linked systems. Electronic tagging evolves into continuous, remote supervision, potentially layered with facial recognition. Risk assessments, licence conditions, even sentencing recommendations: much of this can already be generated at the press of a button.

    And the justification is familiar. We’re told the same story each time: workloads are too high, resources too thin, and this technology will make things more efficient, freeing up time for meaningful work. But in practice, that time rarely materialises as more time in the room with people under supervision. Instead, it’s absorbed elsewhere, redirected into managing the system itself, while the probation officer risks being sidelined and made obsolete.

    The trajectory is clear, but the real question isn’t what AI can do; it’s where human judgment remains non-negotiable. At what point does a practitioner step in, rather than defer? When does a relationship, built through presence, trust, and discretion, stop being central to supervision? Because if decisions are increasingly shaped upstream by automated systems, there’s a risk that practitioners become implementers of outputs rather than authors of them, asking AI what to do before deciding what should be done.

    And underpinning all of this is a quieter, murkier issue: who is actually designing these systems, and whose assumptions and biases are being coded into them? If the logic behind risk, compliance, and intervention is embedded in software, then those choices don’t disappear; they just become harder to see, and harder to challenge.

    See you at the lecture.
