Thanks yet again to ace researcher and contributor 'Getafix, here is a long but incredibly well-researched article from the theConversation.com website looking into risk assessment, and particularly the dreaded OASys so hated by many probation staff. In many people's eyes, including mine, OASys has been the single most significant factor in probation's demise as a useful endeavour, hence here is the complete article:-
Justice systems around the world are using artificial intelligence (AI) to assess people with criminal convictions. These AI technologies rely on machine learning algorithms and their key purpose is to predict the risk of reoffending. They influence decisions made by the courts and prisons and by parole and probation officers.
This kind of tech has been an intrinsic part of the UK justice system since 2001. That was the year a risk assessment tool, known as Oasys (Offender Assessment System), was introduced and began taking over certain tasks from probation officers. Yet in over two decades, scientists outside the government have not been permitted access to the data behind Oasys to independently analyse its workings and assess its accuracy – for example, whether the decisions it influences lead to fewer offences or reconvictions.
Lack of transparency affects AI systems generally. Their complex decision-making processes can evolve into a black box – too obscure to unravel without advanced technical knowledge.
Proponents believe that AI algorithms are more objective scientific tools because they are standardised, which helps to reduce human bias in assessments and decision making. This, supporters claim, makes them useful for public protection.
But critics say that a lack of access to the data, as well as other crucial information required for independent evaluation, raises serious questions of accountability and transparency. It also calls into question what kinds of biases exist in a system that uses data from criminal justice institutions, like the police, which research has repeatedly shown is skewed against ethnic minorities.
However, according to the Ministry of Justice, external evaluation poses data protection implications because it would require access to personal data, including protected characteristics such as race, ethnicity and gender (it is against the law to discriminate against someone because of a protected characteristic).
Oasys introduced
When Oasys was introduced in the UK in 2001 it brought with it sweeping changes to how courts and probation services assessed people convicted of crimes. It meant that algorithms would begin having a huge influence in deciding just how much of a “risk” people involved in the justice system posed to society. These people include those convicted of a crime and awaiting punishment, prisoners and parole applicants.
Before Oasys, a probation officer would interview a defendant to try to get to the bottom of their offending and assess whether they were sorry, regretful or potentially dangerous. But after 2001 this traditional client-based casework approach was cut back and the onus increasingly shifted to algorithmic predictions.
These machine learning predictions inform a host of decisions, such as: granting bail, outcomes of immigration cases, the kinds of sentences people face (community-based, custodial or suspended), prison security classifications and assignments to rehabilitation programmes. They also help decide the conditions on how people convicted of crimes are supervised in the community and whether or not they can be released early from prison.
Some attempts at more rigorous risk assessments predate Oasys. The Parole Board in England and Wales deployed a re-conviction prediction score in 1976 which estimated the probability of a reconviction within a fixed period of two years on release from prison. Then, in the mid-1980s, a staff member with the Cambridgeshire Probation Service developed a simple risk prediction scale to provide more objectivity and consistency in predicting whether probation was an appropriate alternative to a custodial sentence. Both methods were crude, using only a handful of predictors and deploying fairly informal statistical methods.
Harnessing computer power
Around this time, Home Office officials noticed the growing interest among UK and US authorities in developing predictive algorithms that could harness the efficiencies computers offered. These algorithms would support human opinions with scientific evidence about which factors were predictive of reoffending. The idea was to use scarce resources more effectively while protecting the public from people categorised as being at high risk of reoffending and causing serious harm.
The Home Office commissioned its first statistical predictive tool, which was deployed in 1996 across probation offices in England and Wales. This initial risk tool was called the Offender Group Reconviction Scale (OGRS). The OGRS is an actuarial tool in that it uses statistical methods to assess information about a person’s past (such as criminal history) to predict the risk of any type of reoffending.
The OGRS, after several revisions, is still in use today. This simple algorithm has been folded into Oasys, which has since grown to include additional machine learning algorithms predicting different types of reoffending. Reoffending is measured as reconviction within two years of release.
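To make the “actuarial” idea concrete, here is a minimal sketch of how a tool of this kind turns a handful of static predictors into a two-year reconviction probability. To be clear, the predictors and weights below are invented for illustration only; they are not the published OGRS coefficients.

```python
import math

# Purely illustrative actuarial score in the OGRS mould.
# The weights here are invented for this sketch - they are NOT the
# published OGRS coefficients.
def illustrative_two_year_risk(age: int, prior_sanctions: int,
                               years_offending: int) -> float:
    """Return an illustrative probability of reconviction within two years."""
    linear = (0.9                        # hypothetical baseline
              + 0.12 * prior_sanctions   # more criminal history -> higher risk
              - 0.05 * age               # older -> lower risk
              + 0.03 * years_offending)  # longer criminal career -> higher risk
    return 1 / (1 + math.exp(-linear))   # logistic squash into the 0..1 range

# Example: a 24-year-old with six prior sanctions over eight years.
print(f"{illustrative_two_year_risk(24, 6, 8):.0%}")  # -> 66%
```

The point is only that an actuarial tool is, at heart, a fixed formula fitted to historical reconviction data: the same inputs always produce the same score.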
Oasys itself is based on the “what works” approach to risk assessment. Supporters of this method say it relies upon “objective evidence” of what is effective in reducing reoffending. “What works” introduced some basic principles of risk assessment and rehabilitation and it gained currency with governments around the world in the 1990s.
Risk factors can include “criminogenic needs” – these are factors in an offender’s life that are directly related to recidivism. Examples include safe housing, job skills and mental health. The “what works” approach is based on several principles, one of which involves matching appropriate rehabilitation programmes to a person’s criminogenic needs. So, a person convicted of a sex crime, with a history of alcohol abuse, might be given a sentence plan that includes a sex offender treatment programme and alcohol treatment. This is meant to reduce their likelihood of reoffending. Following Home Office pilot studies between 1999 and 2001, Oasys was rolled out nationally and His Majesty’s Prison and Probation Service (HMPPS) has used the technology widely ever since.
What the algos do – scoring ‘risk’
The Offender Group Reconviction Scale and variations of Oasys are frequently modified and some information about how they work is publicly available. The available information suggests that Oasys is calibrated to predict risk. The algorithms consume the data probation officers obtain during interviews and information in self-assessment questionnaires completed by the person in question. That data is then used to score a set of risk factors (criminogenic needs). According to the designers, scientific studies indicate that these needs are linked to risks of reoffending.
The risk factors include static (unchangeable) things such as criminal history and age. But they also comprise dynamic (changeable) factors. In Oasys, dynamic factors include: accommodation, employability, relationships, lifestyle, drugs misuse, alcohol misuse, thinking and behaviour, and attitudes. Different weights are assigned to different risk factors as some factors are said to have greater or lesser predictive ability.
So what type of data is obtained from the person being risk assessed? Oasys has 12 sections. Two sections concern criminal history and the current offence. The other ten address areas related to needs and risk. Probation officers use discretion in scoring many of the dynamic risk factors.
The person becomes a set of numbers
The probation officer may, for example, judge whether the person has “suitable accommodation”, which could require considering such things as safety, difficulties with neighbours, available amenities and whether the space is overcrowded. The officer will determine whether the person has a drinking problem or if impulsivity is an issue. These judgments can increase the person’s “risk profile”. In other words, a probation officer may consider dynamic risk factors like having no fixed address and having a history of drug abuse, and say that the person poses a higher risk of reoffending.
The algorithms assess the probation officers’ entries and produce numeric risk scores: the person becomes a set of numbers. These numbers are then recombined and placed into low-, medium-, high-, and very high-risk categories. The system may also associate the category with a percentage indicating the proportion of people who reoffended in the past.
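As a rough sketch of that pipeline, the following combines officer-scored entries into a weighted total and then bands it. The factor names mirror the Oasys dynamic factors listed above, and the 0-2 entry scale matches publicly described practice, but the weights and band thresholds are invented, since the real calibration is not public.

```python
# Illustrative weighted scoring and banding - the weights and band
# thresholds below are invented, not the real Oasys calibration.
FACTOR_WEIGHTS = {
    "accommodation": 3,
    "employability": 4,
    "relationships": 2,
    "drugs_misuse": 5,
    "alcohol_misuse": 4,
    "thinking_and_behaviour": 6,
    "attitudes": 5,
}

def risk_band(entries):
    """Map officer entries (0 = no problem .. 2 = major problem) to a band."""
    total = sum(FACTOR_WEIGHTS[factor] * score
                for factor, score in entries.items())
    if total < 15:
        return "low"
    if total < 30:
        return "medium"
    if total < 45:
        return "high"
    return "very high"

print(risk_band({"accommodation": 2, "employability": 1, "relationships": 0,
                 "drugs_misuse": 2, "alcohol_misuse": 0,
                 "thinking_and_behaviour": 1, "attitudes": 0}))  # -> medium
```

Note how much turns on the officer’s subjective 0-2 entries: a one-point change on a heavily weighted factor can move a person across a band boundary.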
However, there is simply no specific guidance on how to translate any of the risk of reoffending scores into actual sentencing decisions. Probation officers conduct the assessments and they form part of the pre-sentence report (PSR) they present to the court along with a recommended intervention. But it is left to the court to determine a sentence, in line with the provisions of the Sentencing Council.
There is no dataset available to us that directly links Oasys predictions to the decisions they are meant to inform. Hence, we cannot know what decision-makers are doing with these scores in practice. The situation is muddied further because multiple risk tools can produce different ratings (high, medium or low) for the same individual. That’s because the algorithms predict different offence types (general, violent, contact sexual and indecent images). So a person can collect several different ratings: they might be labelled high risk of any reoffending, medium risk of violent offending, and low risk of both sexual offending types. What is a judge to do with these seemingly disparate pieces of data? Probation officers provide some recommendations but the decision is ultimately left to the judge.
Impact on workloads and risk aversion
Another issue is that probation officers have been known to struggle with completing Oasys assessments considering the significant amount of time it takes for each person. In 2006, researchers spoke to 180 probation officers and asked them about their views on Oasys. One probation officer called it “the worst tax form you’ve ever seen”. In a different study, another probation officer said Oasys was an arduous and time-intensive “box-ticking exercise”.
What can also happen is that risk-aversion becomes entrenched in the system due to the fear of getting it wrong. The backlash can be swift and severe if a person assessed as low risk commits a serious offence - there have been many high-profile media scandals that prove this. In a report for the Prison Reform Trust, one long-term prisoner commented:
They repeatedly go on about ‘risk’ but I realised many years ago that this has nothing to do with risk … it’s all about accountability, they want someone to blame should it all go wrong.

The fear of being blamed is not an idle one. A probation officer was reportedly sacked in 2022 for gross misconduct for rating Damien Bendall as medium risk rather than high risk after a conviction for arson. Bendall was released with a suspended sentence. Within three months, he murdered his pregnant partner and three children. Jordan McSweeney, another convicted murderer, was released from prison in 2022 with an assessment of medium risk. Three days later, he raped and brutally killed a young woman walking home alone. A review of the case determined that he had been incorrectly assessed and should instead have been labelled high risk. But unlike in the Bendall case where an individual probation officer was apparently blamed, the chief inspector of probation, Justin Russell, explained:
Probation staff involved were … experiencing unmanageable workloads made worse by high staff vacancy rates – something we have increasingly seen in our local inspections of services. Prison and probation services didn’t communicate effectively about McSweeney’s risks, leaving the Probation Service with an incomplete picture of someone who was likely to reoffend.
‘Bias in, bias out’
Despite its widespread use there has been no independent audit examining the kind of data Oasys relies on to come to its decisions. And that could be a problem - particularly for people from minority ethnic backgrounds. That’s because Oasys, directly and indirectly, incorporates socio-demographic data into its tools.
AI systems, like Oasys, rely on arrest data as proxies for crime when they could in some cases be proxies for racially biased law enforcement (and there are plenty of examples in the UK and around the world of that). Predicting risks of reoffending on the basis of such data raises serious ethical questions. This is because racially biased policing can permeate the data, ultimately biasing predictions and creating the proverbial “bias in, bias out” problem.
In this way, criminal history records open up avenues for labelling and punishing people according to protected characteristics, like race, giving rise to racially biased outcomes. This could mean, for example, a higher percentage of minorities rated in the higher risk groups than non-minorities.
Another source of bias could stem from the way officers “rate” ethnic minorities when answering Oasys-led questions. Probation officers may assess minority ethnic people differently on questions such as whether they have a temper control problem, are impulsive, hold pro-criminal attitudes, or recognise the impact of their offending on others. Unconscious biases could be at play here, resulting from cultural differences in how various ethnic groups perceive these issues. For instance, people from one cultural background may “see” a bad temper in behaviour that another cultural background would regard as acceptable emotional expression.
In its review of AI in the justice system in 2022, the justice and home affairs committee of the House of Lords noted that there are “concerns about the dangers of human bias contained in the original data being reflected, and further embedded, in decisions made by algorithms”. And it’s not just the UK where such issues have arisen. The problem of racial bias in justice systems has been noted in various countries where risk assessment algorithms similar to Oasys are deployed.
In the US, the Compas and Pattern algorithms are used widely, and the Level of Service family of tools has been taken up in Australia and Canada. The Compas system, for instance, is an AI algorithm used by US judges to make decisions on granting bail and sentencing. An investigation claimed that the system generated “false positives” for black people and “false negatives” for white people. In other words, it suggested that black people would reoffend when, in reality, they did not, and suggested that white people would not reoffend when they actually did. But the developer of the system has challenged these claims.
Studies suggest that such outcomes stem from racially biased decision making embedded in the data which the developers select to represent the risk factors that will determine the algorithm’s predictions. Criminal history data, such as police arrest records, is one example.
Other socio-economic data that developers select to represent risk factors may also be problematic. People will score as being higher risk if they do not have suitable accommodation or are unemployed. In other words, if you are poor or disadvantaged the system is stacked against you. People are also classed as “high risk” for personal circumstances which are sometimes beyond their control. Risk factors include “not having a good relationship with a partner” and “undergoing psychiatric treatment”.
Meanwhile, a report issued by Her Majesty’s Inspectorate of Probation in 2021 alludes to the problem of conscious and unconscious biases which can enter the process via probation officers’ assessments, thereby infecting the outcomes. More transparency could be useful for tracking when and how probation officer discretion has tainted a final assessment. Flawed risk predictions of that kind could result in people being incarcerated unnecessarily or being allocated inappropriate treatment programmes.
For example, the report states:
It is impossible to be free from bias. How we think about the world and consider risk is intrinsically tied up with our emotions, values and tolerance (or otherwise) of risk challenges.
Social engineering?
Miklos Orban, visiting professor at the University of Surrey School of Law, recently engaged with the Ministry of Justice seeking information on Oasys. One of us (Melissa) spoke with Orban about this and he expressed concerns that the system might be a form of social engineering. He said that governmental officials were eliciting personal and sensitive information from defendants who may think they are making these disclosures to get help or sympathy. But the officers may instead use them for another purpose, such as labelling them with a drinking or drugs problem and then requiring them to go on a suitable treatment programme. He said:
As a convict, you know very little of how risk assessment tools work, and I have my doubts as to how well judges and parole officers understand statistical models like Oasys. And that’s my number one concern.

Not much is known about the accuracy of Oasys in relation to gender and ethnicity either. One available study (though a bit dated as it looked at a sample from 2007) shows the non-violent and violent predictive tools are less accurate with women and minority ethnic people. Meanwhile, Justice, a legal reform organisation, recently cited a lack of research on the accuracy of these tools for women and trans prisoners.
In terms of racial bias, an HM Inspectorate of Prisons’ audit found that an Oasys assessment had not been completed or reviewed in the prior year for almost 20% of black and minority ethnic prisoners. This is a serious issue because regular reassessment helps ensure that minority ethnic people are receiving similar treatment and being assigned to helpful programming. It also stops probation officers from simply assuming that the risk status of minority ethnic people is unchangeable, an assumption that can reduce their chances of early release, since Oasys assessments are required to ascertain whether interventions have reduced risks of reoffending.
Researchers with the Inspectorate of Probation encouraged designers of Oasys to expand the ways it can incorporate a person’s personal experiences with discrimination and how it may impact their relationship with the criminal justice system. But, so far, and to the best of our knowledge, this has not been done.
Algorithms affect real people
Oasys results follow a person’s path through the criminal justice system and could influence key decisions from sentencing to parole eligibility. Such serious decisions have huge consequences for people’s lives. Yet officials can decline to disclose Oasys results to the defendant in question if they are thought to contain “sensitive information”. A person can ask to be shown their completed assessment, but they are not guaranteed to see it.
Even if they are given their scores, defendants and their lawyers face significant hurdles in understanding and challenging their assessments. There is no legal obligation to publish information about the system, although the Ministry of Justice has commendably made certain information public. Still, even if more data were released, defence lawyers may not have the scientific skills to examine the assessments with a sufficiently critical eye.
Some prisoners describe additional challenges. They complain that their risk scores do not reflect how they see themselves. Others believe that their scores contain errors, while some feel that Oasys simply mislabels them. In another report compiled by the Prison Reform Trust, one prisoner stated: “Oasys is who I was, not who I am now.” And a man serving a life sentence described the repeated risk assessment when he spoke to a researcher at the University of Birmingham:
I have likened it to a small snowball running downhill. Each turn it picks up more and more snow (inaccurate entries) until eventually you are left with this massive snowball which bears no semblance to the original small ball of snow. In other words, I no longer exist. I have become a construct of their imagination. It is the ultimate act of dehumanisation.

Not all judicial officers are impressed either. When asked about using a risk assessment tool that the state required, a judge in the US said: “Frankly, I pay very little attention to the worksheets. Attorneys argue about them, but I really just look at the guidelines. I also don’t go to psychics.”
There have been relatively few legal challenges to any of the risk assessment algorithms in use across the world. But one case stands as an outlier. In 2018, the Supreme Court of Canada ruled in the case of Ewert v Canada that it was unlawful for the prison system to use a predictive algorithm (not Oasys) on Indigenous inmates.
Ewert was an Indigenous Canadian serving time in prison for murder and attempted murder. He challenged the prison system’s use of an AI tool to assess his risk of recidivism. The problem was the lack of evidence that the particular tool was sufficiently accurate when applied to the Indigenous population in Canada. In other words, the tool had never been tested on Indigenous Canadians.
The court understood that there might be risk-relevant differences between Indigenous and non-Indigenous peoples as to why they commit crimes. But since the algorithm had not been tested on Indigenous people, its accuracy for that population was not known. Therefore, using the tool to assess their risks violated the legal requirement that information about an offender must be accurate before it can be used for decision making. The court also noted that the over-representation of Indigenous people in the Canadian justice system was in part attributable to discriminatory policies.
Individual vs group risk
The feeling that the scores produced by risk assessment algorithms such as Oasys may not be properly personalised or contextualised finds merit when considering how predictive algorithms in general work. Assessing people to produce risk scores has a longer history in business. The lending industry uses algorithms to assess the creditworthiness of customers. Insurance companies deploy algorithms to generate quotes for car insurance. The insurance algorithms often use driving records, age and gender to determine the likelihood of claiming against the policy.
But an all too common and mistaken assumption is that algorithms can provide a prediction about the specific person. On the contrary, publicly available information shows that the algorithms rely upon statistical groups. What does this mean? As we said earlier, they compare the circumstances and attributes of the person being risk assessed with risk factors and scores associated with criminal justice populations – or groups.
For example, what if “John” is placed in the medium-risk category, which is associated with a reoffending likelihood of 30%? This does not mean there is a 30% chance that John will reoffend. Instead, it means that about 30% of those assigned medium risk are forecast to reoffend, based on the observation that 30% of past medium-risk cases were reconvicted.
This number cannot be directly assigned to any individual within that medium-risk group. John may, individually, have a 1% chance of reoffending. The scales are not individualised in this way and so John, himself, cannot be assigned specifically with a number. The reason for this is that the predictive factors are not causal in nature. They are correlated, meaning there may be some relationship between the factors and reoffending. Oasys uses male gender as one of the predictive factors of reoffending. But being male does not cause reoffending. The relationship as perceived by Oasys merely suggests that males are more likely to commit crimes than females.
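The distinction between a group rate and an individual probability is easy to demonstrate with a toy follow-up dataset. All of the numbers here are invented:

```python
# band: (number of people assigned to the band in a past cohort,
#        number of them reconvicted within two years) - invented data
past_cohort = {
    "low":    (1000, 80),
    "medium": (1000, 300),
    "high":   (1000, 550),
}

for band, (assigned, reconvicted) in past_cohort.items():
    print(f"{band}: {reconvicted / assigned:.0%} of the group reconvicted")

# medium -> 30%: three in ten of the *group* were reconvicted. Nothing in
# this calculation measures John's own probability; everyone placed in the
# band inherits the same figure regardless of individual circumstances.
```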
There are important consequences to this. The individual can thereby be seen as being punished, not for what he or she is personally predicted to do. They face imprisonment because of what others – who share a similar risk score – have done. This is why more transparency of predictive algorithms is needed.
But even if we know what the inputs are, the weighting system is often obscure as well. And developers are frequently changing the algorithms for a host of reasons. The purposes may be valid. It could be that predictors of reoffending change over time in connection with societal shifts. Or it could be that new scientific knowledge suggests a modification is necessary.
Nevertheless, we have been unable to discover much about how well the Oasys system, or its components, performs. The Ministry of Justice has, to our knowledge, only released retroactive results. Those statistics cannot tell us about the predictive performance of the tool for assessments made today, or how accurate those assessments will prove when the same people are followed up in two years. Frequent retrospective results are needed to provide up-to-date information on the performance of the algorithms.
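The kind of rolling retrospective check this would require is straightforward in principle: score a release cohort, wait out the two-year window, then compare predictions with observed reconvictions. Here is a minimal sketch, with invented scores and outcomes:

```python
# Compare predicted risk with observed two-year outcomes for a cohort.
# The cohort data below is invented for illustration.
def hit_rates(cohort, threshold):
    """Return (true positive rate, false positive rate) at a score threshold."""
    flagged = [(score, outcome) for score, outcome in cohort if score >= threshold]
    positives = sum(1 for _, outcome in cohort if outcome)
    negatives = len(cohort) - positives
    true_positive_rate = sum(1 for _, outcome in flagged if outcome) / positives
    false_positive_rate = sum(1 for _, outcome in flagged if not outcome) / negatives
    return true_positive_rate, false_positive_rate

# (predicted probability at release, reconvicted within two years?)
cohort = [(0.7, True), (0.6, False), (0.4, True),
          (0.3, False), (0.2, False), (0.1, False)]
print(hit_rates(cohort, threshold=0.5))  # -> (0.5, 0.25)
```

Published routinely, figures like these would show whether the tool’s ability to separate reoffenders from non-reoffenders is holding up over time.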
Independent evaluation
To the best of our knowledge (and to the knowledge of other experts in the field), Oasys has not been independently evaluated. There is a clear need for more information on the effectiveness and accuracy of these tools and their impact on gender, race, disability and other protected characteristics. Without these sources it is not possible to fully understand the prospects and challenges of the system.
We acknowledge that the lack of transparency surrounding Oasys is a common, though not universal, denominator uniting these types of algorithms deployed by justice systems and other sectors across the world. A court case in the state of Wisconsin, challenging the use of a risk assessment tool whose workings the developer claimed were confidential, succeeded only up to a point.
The defendant, convicted of charges related to a drive-by shooting, claimed that it was unfair to use a tool which used a private algorithm because it prevented him from challenging its scientific credentials. The US court ruled that the government did not have to reveal the underlying algorithm. However, it required authorities to issue warnings when the tool was used.
These warnings included:
- the fact that failure to disclose meant it was not possible to tell how scores were determined
- the algorithms were group-based assessments incapable of individualised predictions
- there could be biases toward minority ethnic people
- the tool had not been tested for use in the state of Wisconsin.
Opening up the black box
Problems such as AI bias and lack of transparency are not peculiar to Oasys. They affect many other data-driven technologies deployed by public sector agencies. In response, UK government agencies, such as the Central Digital and Data Office and the Centre for Data Ethics and Innovation (CDEI) have recognised the need for ethical approaches to algorithm design and implementation and have introduced remedial strategies. A recent example is the Algorithmic Transparency Recording Standard Hub which offers public sector organisations the opportunity to provide information about their algorithms.
A relatively recent report published by the CDEI also discussed bias-limitation measures, such as reducing the significance of things like arrest history as they have been shown to be negative proxies for race. A post-prediction remedy in the CDEI report requires practitioners to lower the risk classification allocated to people belonging to a group known to be consistently vulnerable to higher risk AI scores than others.
More generally, researchers and civil society organisations have proposed pre- and post-implementation audits to test, detect and resolve AI problems of the kind associated with Oasys. The need for appropriate regulation of AI systems, including those deployed for risk assessment, has also been recognised by key regulatory bodies in the UK and around the world, such as Ofcom, the Information Commissioner’s Office and the Competition and Markets Authority.
When we put these issues to the MOJ, it said the system had been subject to external review, but it was not specific on the data. It said it has been making data available externally through the Data First programme and that the next dataset to be shared with the programme will be “based on” the Oasys database and released “within 12 months”.
An MOJ spokesperson added: “The Oasys system has been subject to external review and scrutiny by the appropriate bodies. For obvious reasons, granting external access to sensitive offender information is a complex process, which is why we’ve set up Data First which allows accredited researchers to access our information in an ethical and responsible way.”
In the end, we recognise that algorithmic systems are here to stay and we acknowledge the ongoing efforts to reduce problems with accuracy and bias. Better access to, and input from, external experts to evaluate these systems and put forward solutions would be a useful step towards making them fairer. The justice system is vast and complex and technology is needed to manage it. But it is important to remember that there are people behind the numbers.
--oo00oo--
The article generated some interesting responses:-
Not an AI in 2001. The article goes on to refer to an algorithm, which is likely correct. An AI is not an algorithm. I’d imagine the one being used by Oasys probably started out like something used by actuaries to calculate risk; the two fields look like there’s considerable overlap. Also like the calculations done to produce a credit score. I doubt it’s an AI even now.
Nitpicky stuff out of the way, it does illustrate the problems with black box systems and people’s reactions to them. First, you can’t easily work out why the results are what they are. Secondly, people tend to blindly accept the output even when it clearly diverges from reality; remember when people just followed satnav systems into fields or rivers? Generative systems are already fairly opaque, but it’s going to get worse.
*****
“I have likened it to a small snowball running downhill. Each turn it picks up more and more snow (inaccurate entries) until eventually you are left with this massive snowball which bears no semblance to the original small ball of snow. In other words, I no longer exist. I have become a construct of their imagination. It is the ultimate act of de-humanisation”. The person quoted is maybe not aware just how accurate they are. It used to be my practice to share what their files said with the person I was assessing for a DV or a Sex Offenders group. In every single instance the range of inaccuracies was enormous, from those with insignificant impact to those with monumental impact. Clearly, just like Chinese whispers, the initial mistake is made, repeated by the second worker with another couple added in and so on and so on.
With AI taking over so much, and humans more and more cut out of the equation in gaining information based on relationship (and hopefully a highly skilled interviewer aware of their own inherent biases), God knows where all this might end up. Whilst outcomes for this AI system have not been analysed, outcomes for the What Works Approach (imported here into Western Australia) and for Sex Offender Treatment programs have both been analysed and found to be significantly flawed, with positive outcomes seriously over-estimated. We still use these patently flawed systems in Western Australia.
When I was first employed by the (in)Justice system I was one of about 15 Senior Programmes Officers delivering these awful programs, with only one worker focusing on prisoners' education and job seeking stuff. It needed to be the other way around. And any money saved on the fairly useless work we were doing (and I was a very highly skilled and experienced practitioner) would have been better spent on dentists, tattoo removals, and secure accommodation post release. What would AI make of that I wonder?
*****
I have no doubt that OASys has many flaws and would benefit from rigorous academic assessment of its accuracy and biases, but it remains a risk assessment tool and is not AI. Now, if a computer was tasked with analysing videos of defendants in court and generating assessments based on their behaviour, that would be dangerous reliance on AI. On the other hand, analysing OASys outcomes and suggesting improvements in the algorithm would be, if properly managed and evaluated, a positive use of AI.
Someone ought to track down & chew over the numbers with the main protagonist in the oasys debacle, prison psychologist Danny Clark.
http://probationmatters.blogspot.com/2015/01/the-final-insult.html
https://assets.publishing.service.gov.uk/media/5a7f676fed915d74e33f6380/research-analysis-offender-assessment-system.pdf
https://www.crimeandjustice.org.uk/sites/crimeandjustice.org.uk/files/09627250308553480.pdf
https://www.researchgate.net/profile/Danny-Clark
"This study considers the applicability of the Level of Service Inventory-Revised (LSI-R) with an English prison population. After slight modification of the LSI-R for use in England, several items were added to amplify it for use in prisons. As data from an English prison population have not previously been published, full details are presented. Comparison with data from a Canadian prison population suggests that the LSI-R functions in a similar manner in assessing needs for both populations. The calculation of test-retest change scores over the duration of the sentence, based on the dynamic risk items, represents a new use of the LSI-R. This study precedes another study presently under way using this data set to search for relationships between LSI-R scores and recidivism. Such relationships, if reliably established, would have several applications within the prison service in terms of sentence planning and risk assessment."
Thanks for another reminder and this is 8 years ago :-
The Final Insult
Anyone who cares to go digging back into the early days of this blog will discover just how often the topic of OASys, the 'world-class' offender assessment system, crops up. Developed by the prison service and foisted upon us, I'm convinced that when historians come to write about the demise of probation in England and Wales, it's OASys that will be highlighted as one of the prime causes.
This utterly useless and tedious development ensured all probation staff would be effectively chained to their computers for the best part of every day, whilst undertaking the input of data that invariably would be of little or no use to man nor beast. Our productivity dropped like a stone overnight and drove many a good officer to despair.
I'm clear that it was OASys that 'did' for us and it's no surprise at all that the new owners of the CRC's will ditch it as soon as possible as completely inefficient, ineffective and not at all conducive to making money. And now we have the final insult with the prison service quietly beginning the process of ditching it too. Why? Because it takes too bloody long to fill in; is crap and invariably tells you at the end what you damned well knew before you started.
It seems incredible I know, but before OASys, PO's were pretty good at weighing up who the risky clients were and for what reasons - we didn't need a shite computer system to tell us what we'd already worked out. Well it looks like the penny has at last dropped down at NOMS HQ that, try as they might to feed the OASys monster, it's just a waste of time and effort, so here's the prison service instructing all staff not to bother too much with the ones that aren't that risky!
There's to be a 'review' obviously, but who'd like to place any money on OASys still being around in a year or two's time?
Two comments at the time stick out:-
Delete"This post did make me chuckle because it has been clear to any offender who has ever undergone the ridiculous OASys assessment process that it is a complete waste of time, effort and money. The level of risk that the system regurgitates at the end of this fruitless exercise bears little resemblance to anyone's actual risk level because of the way the questions are structured and the fact that the really important questions are not asked and offender input into these assessments is negligible at best. Every single OASys I ever had done by four different OS's and OM's had zero input from me despite the PI's and good practice making it clear that they all should have had my active input. The fact that I am a human being and not a crime statistic and the only person who really knows what my risk of reoffending is (the OS's and OM's clearly don't as none of them having bothered to get to know me as a human being) is completely ignored so the end result of any of these stupid OASys analyses bears no resemblance to my actual risk level making the whole exercise completely pointless."
"Agree completely. I was one of the handful of prison & probation staff who (thinking that a single comprehensive risk assessment tool was a good idea in principle) voluntarily embarked on piloting the original OASys, an unwieldy paper exercise of 48+ pages with tables to convert a series of numbers into scores - a bit like Jackie comic used to do when telling girls what sort of man they should marry. I recall being at a final meeting in London (on the day of the total eclipse 11 Aug 1999) watching Danny Clark & other NOMS staff totally ignore or rubbish the feedback from frontline probation and prison staff - they already knew what they wanted OASys to do. Still, Mr Clark was well rewarded. In a 2010 book entitled "Psychological Therapy in Prisons and Other Secure Settings" there's this acknowledgment:
"Danny Clark OBE is Head of Substance Misuse, Cognitive Skills and Motivational Interventions at NOMS. He was previously the Head of the Attitudes, Thinking and Behaviour Interventions Unit at NOMS. He was responsible for the research on and development of the Offender Assessment System."
Being a bit of a compulsive obsessive (hoarder, pedant & all round nerd) I've probably still got some of the original guidelines in the roof-space."
gentle correction: they weren't 'noms' staff (it didn't exist then), they were prison service staff, a mix of senior civil servants & prison psychologists. The day was held at Abell House - danny clark, sue mann & others ran the day, but yes, the feedback was either rubbished or ignored & anyone trying to raise issues was closed down.
Anon 11:23 Sue Hall WY CPO always reckoned to have been involved in the early development stages. Is this correct and does the name ring a bell?
Not immediately, but there were maybe 40-or-so staff from probation & prisons around Eng & Wales who had undertaken up to 5 x paper-based oasys each (there was a handsome bounty for each completed assessment) over the previous 6-month period, and the meeting in Abell House was to share feedback (so say). I've fought to keep a filing cabinet (aye, with real paper in it) & might still have some info tucked away at the back. I'll have a look in the next few days.
Anon 12:57 "I've fought to keep a filing cabinet (aye, with real paper in it)" Oh well done you! I failed miserably and I had 3 in my heyday. It would be good if you could ferret something out. I once challenged Sue at a big meeting if she knew how long it took to fill the damned thing in? "Oh No Jim - how long?" came the surprising response!
well, well... found a folder marked "OASys Training July 1999" - sadly not much relevant in there (and nothing with names on).
There is a version 3 paper copy of Joint Risk/Needs parts A-N watermarked "Commercial in Confidence" [40 pages]:
A - 14 sections
B - 6 sections, but 40 questions in total
C - 7 sections
D - 13 sections
E - 12 sections
F - 9 sections
G - 10 sections
H - 13 sections
I - 16 sections
J - 9 sections
K - 9 sections
L - 13 sections
M - 7 sections
N - 7 sections
180 questions to answer + 14 boxes for evidence/additional information
"Scoring the Joint Offender Assessment Inventory ... items are scored on a three point scale 0-2. These are items where the assessor is required to make a jidgement about how satisfactory or appropriate the situation is at the present time... The rating scale below should be adhered to:
N/A = not applicable
0 = no immediate need for improvement
1 = a need for some improvement
2 = major problems & considerable need for improvement"
(I asked if that wasn't actually a four-point scale - you can imagine the response)
"A score of '0' does not mean the situation could not be improved but that it meets a minimum level to be judged satisfactory."
I recall challenging the oasys team about the language, specifically the subjective nature of "satisfactory", whilst I could see how "appropriate" might be valid but that it was still wide open to variations of interpretation.
I was effectively told to shut up.
There's also a first draft paper copy of the Risk of Serious Harm Initial Assessment [6 pages, 35 questions] which seems to be an amalgamation of the risk assessments that were variously pre-existing around the country in 1999. My handwritten notes show I had more questions:
"Previous Offences - Assess whether Risk Analysis required" - my notes say "surely RA MUST be completed in all cases?"
"Significant Events - Assess whether Risk Analysis required" - my notes again "surely RA MUST be completed?"
And that seems to be all that's possible to share from the paper bits & bobs.
Anon 11:43 Thanks very much for digging that out - I have a very early manual somewhere - I'll try and find it.
OASyS Vastly Simplified:
Offender Profile:
Mad = Very High RoSH
Bad = High RoSH
Sad = Medium RoSH
Had = Low RoSH
I find this useful.
Oasys is a complete waste of time and causes much damage to individuals. It also fundamentally alters the way police and probation think. I'm sure my risk is perceived to be elevated as I treat police and probation with precisely the same contempt they have always shown me.
It always starts with an outgroup, in your case sex offenders, then spreads everywhere. Everything revolves around that one event and not even decades of good character make an iota of difference. There is a base refusal to look for exculpatory evidence; it's simply ignored in a drive for more prosecutions and ludicrous levels of control.
I wonder what the oasys report for Andy Malkinson said? GIGO: Garbage In Garbage Out.
Wikipedia:-
Upon his name being cleared, Malkinson stated that he felt he was "forcibly kidnapped ... by the state". Greater Manchester Police apologised although this apology was not accepted by Malkinson, who called it "meaningless". The Independent Office for Police Conduct opened a review into the GMP's handling of Malkinson's complaints.
Edward Garnier, a former Solicitor General, called for a public inquiry and criticised the justice system's handling of the case, and particularly the conduct of the CCRC, saying that the decision to reject Malkinson's 2009 appeal on cost–benefit grounds despite the lead of the unknown man's DNA had, in fact, led to significant costs both to Malkinson and to the state in compensation to be paid; further, he suggested that exemplary damages may be due "because of the oppressive and arbitrary behaviour of agents of the state". Former Director of Public Prosecutions Ken Macdonald and barrister Michael Mansfield also called for an inquiry.
The Criminal Cases Review Commission announced on 17 August 2023 that it had appointed an external KC to conduct a review into its actions in relation to the case. On 24 August, the Justice Secretary, Alex Chalk, announced the launch of a non-statutory inquiry to investigate the role of the Crown Prosecution Service, Greater Manchester Police and the Criminal Cases Review Commission.
On 13 September 2023, the Independent Office for Police Conduct announced it would investigate Greater Manchester Police's handling of the Malkinson case.
An inquiry into the wrongful conviction, led by Judge Sarah Munro KC, began on 26 October 2023. The hearing will examine the original investigation by Greater Manchester Police and why it took so long for the conviction to be overturned. Munro said she would be "fearless" in seeking the truth.
Let's not forget the unique selling point of the original eoasys, namely that it would write the busy Probation Officer's pre-sentence report with no extra effort, as if by magic... it doesn't now and never did.
Yes - the magic 'create report' button! Result? Completely useless - I regularly threw it away and 'created' a report freestyle - often commended by magistrates, unlike the formulaic and endlessly repetitive machine produced efforts.
If OASyS was such a great predictor, surely bookmakers would adapt it and apply it to horse racing?
'Getafix
9:22 yes JB, but you do not appear to appreciate that a PO or PSO today cannot do anything other than OASys for the electronic records. Most critically, a PO today has low autonomy and, whether they have the literacy skills or not, simply could not write a report for court themselves. The construction or process for it is a lost professional function.
From Twitter:-
Delete"Never quite worked like that. Needed huge editing. Although though I did like the ability to save paragraphs to reuse in my conclusions - although there was always the risk of becoming a bit lazy."
OASys "remains a risk management tool and is not AI"
It may not be AI but it is so skewed with bias it's not fit for purpose.
It's not a risk management tool of any use: a list of contact numbers, including next of kin, doctors, etc., would be handy if they were reliably recorded and in the one place, but the wordy copy-and-pasted screeds in the current RMP are there to pacify the inspector. Like all the clunky mishmash of processes, databases and apps we are saddled with, they are not there to serve the practitioner; they are there to feed the neurotic needs of management.
I was going to say that OASys is quite a good risk assessment tool, and then caught myself tripping up on my own bias. It is 100% not fit for purpose as a risk assessment tool, given that we all have a copy-and-paste line or two to point out that the static tools embedded in it do not reliably reflect risk where domestic violence is either the index offence or a factor in the case. I won't rehash the statistics about femicide, but the bias of the criminal justice system to minimise this issue is embedded in OASys, which is why most of the DV cases went over to the CRCs as medium risk - medium risk, where SFOs occur in the main.
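That static-tool blind spot can be sketched in a few lines - hypothetical thresholds, not the published OGRS or OASys weightings - to show why a predictor driven mainly by conviction counts under-scores domestic abuse, where so much of the harm never reaches a conviction:

```python
# Hypothetical conviction-count-driven static predictor -
# illustrative only, not the real OGRS/OASys formula.

def static_band(convictions: int) -> str:
    # A static tool can only see what the record shows.
    if convictions >= 10:
        return "high"
    if convictions >= 3:
        return "medium"
    return "low"

print(static_band(14))  # prolific acquisitive offender -> "high"
print(static_band(3))   # DV perpetrator with a thin record -> "medium";
                        # years of unreported abuse are invisible to the
                        # tool, landing the case in the very band where,
                        # as noted above, SFOs occur
```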
Not fit for purpose as a risk management, or even assessment, tool then. OASys is also meant to be a sentence planning tool; that is even more risible. And now we are saddled with increasing pressure to populate that section with lots of prescribed copy-and-pasted wordage, which again serves neither client nor practitioner. The entirety of the preceding sections is geared to risk assessment, even if inaccurately, and not strengths-based: rehearse all the problems and ignore any opportunities. All our newly required wordage is there to achieve two functions: 1. satisfy the inspector and feed management neurosis; 2. require the referral of the client to the menu of outsourced interventions, thereby squaring the circle of centralised commissioning. Don't get me started on the commissioning of outsourced services. Nothing remotely dynamic about that rigid, incompetent framework.
Time for a complete overhaul. And that will require wresting the service from HMPPS, where those in power are so deeply invested in the industry of OASys and centralised commissioning, centralised everything, that it needs a clean break to clear up the mess.
So in conclusion, the OASys risk assessment is flawed, it discriminates particularly against black males, whom it disproportionately assesses as higher risk, and it provides statistical analysis probation officers don't know what to do with but may get sacked if what they do is wrong? Great!
ReplyDelete"They repeatedly go on about ‘risk’ but I realised many years ago that this has nothing to do with risk … it’s all about accountability, they want someone to blame should it all go wrong."
Jim, have you had anyone raise concerns about a corrupt Occ Health process? Very unwell staff forced back to work, failing, and then dismissed for incapability, or dying. The provider works on tick-box completions in favour of the employer - paid outcomes. Is this worth exploring for staff opinions/experience please?
The answer to both is yes! It's obviously a very difficult subject area for obvious reasons - but something that really does need exposing.
Great subject - yes to this. I expect an explosion of replies, J.B.
I am now retired from the Probation Service. In 2001, when the training/indoctrination re. Oasys commenced, I somehow realised that this was the Trojan horse inside the edifice of traditional probation work. How right I was! Probation work is no better since then - probably worse.
I remember many years ago being at an Oral Hearing when an ex-Trust Chief Officer was on the panel. She expressed her own opinion that OASys wasn't fit for purpose and that she was interested in my opinion much more than in any statistics produced by the system. Years later, many years later, nothing has changed. Officers now spend 7.5 hours of an 8-hour day populating data in OASys and other applications, resulting in little or no time to work with offenders who are in desperate need of help, support and guidance. OASys is not fit for purpose, and neither is the Probation Service.
From Twitter:-
ReplyDelete"I was a PO when OASys was introduced. Over time it went from being 'tool' to the main part of our work with PSRs gradually edged out and oddly- time with the person the OASys was about being of much less importance than the actual document."
From Twitter:-
ReplyDelete"When I undertook my OASYs training we were told it was the be all and end all. Judges would use it in court for sentencing, and direct which prison the Offender would be taken to from court. Personnel officers would use it on the wings."
Is OASys actually just a tool anymore?
If it dictates the offender's journey through the CJS, it must then also significantly diminish the autonomy of the probation officer to work on an individual basis with the offender and use their professional skill set.
OASys 'IS' the modern-day probation officer to that effect; it's a dictator that makes all of the decisions from both sides of the desk.
'Getafix
Spot on Getafix, as ever. Somewhere buried deep in my foundations is a complete distrust of being controlled and pigeon-holed. After various teenage and onwards adventures, I found myself working in the Probation Service, which I discovered was an organisation happy to work against the grain a bit and one that really cared for the individual. That is no longer the case. We are being ruthlessly pigeon-holed, both practitioner and client. Probation as I knew it has left me, so I am leaving it, soon. I do squeeze in the work - as I understand it - with my current clients, and I will miss that, and them, dreadfully.
It's what prompted my pen-name: there are a grand number of people I have worked with whom I will always remember with fondness, and a small number of people whom I was privileged to accompany on a journey to a much better life. So proud of them, and of the little I did to make that happen.
Pearly Gates
From Twitter:-
Delete"I use it as just that-a tick box exercise. I work with my people as they should be worked, as people. I address their needs (which I, not oasys) have highlighted with THEM. However, the amount of effort said tock box exercise requires is ridiculous, easily 8-10 hours work 2b good."
From Twitter:-
Delete"OASys if used properly I assume can be a good tool. However the information on mine was cut and pasted from newspaper articles ! Utter tripe and factually incorrect by try and tell my offender supervisor that!"
From Twitter:-
Delete"It can be a useful tool and when it was introduced we didn't really have anywhere to put that sort of detail where it could be accessed easily so it helped. But garbage in = garbage out and QA didn't address that just checks ticks in the right box."
From Twitter:-
Delete"Whilst OASys absolutely has its flaws, I do find the actual process of writing them helpful. In so much as you look into the various aspects of their lives, previous behaviours, frailties and risks that may not be linked the index offence."
OASys will be replaced with a new risk assessment and sentence planning tool in the coming weeks/months.
Anon 14:36 There must have been trials - where are the details?
Gov.UK Digital Marketplace - worth a look re. contracts tendered for.
The MOJ has a bold strategic vision for the delivery of digital, data & technology in Her Majesty's Prison and Probation Service (HMPPS), which includes:
● Replacing prison legacy systems with simpler, clearer, faster digital services
● Capturing, storing and sharing high quality data across services leading to better and faster decision-making
The MOJ has made progress in developing user-centred design services, transforming the lives of those using and interacting with them. Many of the products and services developed in the last few years are, however, constrained by the requirement to return data to legacy systems such as NOMIS, Delius and OASys. MOJ is now at a critical stage where we need to replace more substantial elements of the legacy systems so that we are a) able to switch off larger pieces of the estate and b) able to pursue more transformational development which is free from those constraints.
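The "return data to legacy systems" constraint described above is, in effect, a dual-write problem. A minimal sketch - hypothetical service and function names, not MOJ's actual code - of why each new service stays shackled until the legacy estate can be switched off:

```python
# Hypothetical dual-write sketch - all names invented for illustration.

def write_to_new_service(assessment: dict) -> None:
    # The "simpler, clearer, faster" path.
    print("POST /assessments ->", assessment)

def write_back_to_legacy(assessment: dict) -> None:
    # Until this call can be deleted, the new service inherits the legacy
    # data model - which is why the MOJ wants to "switch off larger pieces
    # of the estate" before attempting anything genuinely transformational.
    print("legacy write ->", assessment)

def save_assessment(assessment: dict) -> None:
    write_to_new_service(assessment)
    write_back_to_legacy(assessment)  # required while NOMIS/Delius/OASys
                                      # remain the systems of record

save_assessment({"case_id": "X123", "risk_band": "medium"})
```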
Wonder if this could be in the mix?
https://www.cep-probation.org/risk-assessment-the-dutch-way-a-scalable-easy-to-use-tool-for-probation-reports/
'Getafix
I love the way the MoJ presents itself: bold, transformational, user-centred, etc. It sounds utterly astonishing on paper, but in reality it's never quite what it seems. What usually happens is that they can't quite afford the really good stuff, so they buy in out-of-date applications that are then cobbled onto the network. This has happened on so many occasions across government that their claims are laughable. In reality there will be problems linking the entire estate. This will then create workarounds that are issued on a monthly basis as they begin to realise that they purchased a load of crap that isn't fit for purpose. This will then lead to wasting millions of pounds trying to make it work until someone realises it was all pissing in the wind. Some time down the line a critical report will be issued pointing out a litany of failures (the Universal Credit disaster), hands will be washed, and another transformational strategy will repeat all the earlier mistakes. The only thing transformational about the Criminal Justice System is that it's been transformed into a creaking mess that's not fit for purpose. Still, at least more careers will be launched at a strategic level. God help us…
PQiP - oh dear, I will need to talk to someone like a human if the tick-box goes.
I personally find it astounding that someone can qualify through PQiP and be promoted to senior probation officer in three years!
https://www.prospects.ac.uk/jobs-and-work-experience/job-sectors/social-care/life-as-a-probation-officer
'Getafix
How did you get your job?
I got my job through a sponsored ad on Instagram. I applied via the ad and then went for an assessment day, which was different to any other interview process I had encountered before.
I was then offered a position to join the PQiP programme and spent the next 15 months training to be a probation officer in Basildon, Essex. This involved shadowing the work of other qualified staff and learning on the job, as well as having my own caseload and completing the required formal qualification.
I had a bumpy ride as a PQiP. My position on the programme was interrupted due to being diagnosed with dyslexia. As a result I was given more support, not only from my team but also from assistive technology. A practice tutor assessor, with whom I built a great relationship, also supported me.
Life as a trainee was stressful and at times I felt I had to balance so many things at once. However, I was surrounded by a great team in South Essex, who shared their wealth of knowledge and supported me through it all. When I qualified I was posted to North Essex, as part of a team in Chelmsford. I remained as a probation officer for three years before being promoted to a senior probation officer based back in the team that I trained with.
From Twitter:-
ReplyDelete"There's an article in the Probation Journal calling it the 'worst tax form you've ever seen' by George Mair. It's also worth reading Hazel Kemshall's Inspectorate report on bias and error in risk assessment - it's quite short and was published just 2 years ago."