Sunday, 23 June 2013

Dodgy Figures?

A few days ago I highlighted a blog post written by Jon Harvey in response to the Ministry of Justice's publication of statistics relating to the two PbR pilot projects at HMP Peterborough and HMP Doncaster.

These are extremely important trials and must provide favourable results if Chris Grayling is to argue that PbR works and can therefore be rolled out as part of the privatisation plans for probation. Given his obvious failure with PbR at the DWP, anything less will get him into serious trouble with HM Treasury, bring the whole privatisation plan into doubt and probably terminate a promising political career.

Now with this much at stake, it wouldn't take much of a cynic to assume that there might be a wee temptation to play with the figures a little in order to put PbR in as good a light as possible. I mean, it's not as if politicians haven't got a bit of form where this sort of thing is concerned, is it?

Wasn't this why the UK Statistics Authority was set up in the first place, to try and stop politicians misusing figures? A bit like Grant Shapps, now Conservative Party Chairman, getting caught out 'misrepresenting' benefit figures and earning a rebuke from the UKSA in the process. Jeremy Hunt, Iain Duncan Smith, and even David Cameron have all been caught out before, so how do we know for sure that Chris Grayling and his department aren't indulging in a little dissembling of their own?

Well, most of us have no idea at all and have to hope that those in authority can be trusted. A ridiculous idea I know, especially given the unfolding scandal at the Care Quality Commission and the 'buried' critical report, but this is where whistleblowing, or in our case a twitching nose, comes to the fore. Jon Harvey says he was 'irritated' by the MoJ statistics and knows enough to ask some very pointed questions of the author by means of a Freedom of Information request.

I thought it would be instructive to quote the letter containing the questions in full and hope Mr Harvey doesn't mind. I know little of statistics, but I think they're belters! The FOI clock is ticking, with some kind of response required within 20 working days. We all wait with interest.

Dear Mike Elkins,

I have just read through your publication. I have a number of questions and I would be most grateful for your thoughts:
1.       The pilots began on 9 September 2010 and 1 October 2011 (Peterborough and Doncaster respectively). Please can you qualify “began”?
2.       Given that “the next Proven Reoffending Statistics quarterly bulletin will not be published until 25 July 2013”, why did you publish your results today rather than a few weeks from now?
3.       I understand that “the interim re-conviction figures being published in this statistical bulletin are based on periods half the length of those that will be used for the final results” – daft question I am sure, but presumably this applies to both the ‘experimental’ subject averages and the national comparators?
4.       You say that these “interim 6 month re-conviction figures are available for almost all of Peterborough cohort 1 (around 850 offenders) and half of Doncaster cohort 1 (around 700 offenders)”, please can you explain what has happened to the other portions of the cohorts and why they are not included?
5.       In terms of methodology, you say “offenders enter the PbR pilots after their first eligible release from the prison within the cohort period”, please can you explain “eligible” in this context and whether the national comparator figures also cover the same “eligible” group?
6.       You explain that the key difference is that “reconvictions only count offences for which the offender was convicted at court, whereas the National Statistics proven re-offending measure also includes out of court disposals (cautions)” and “Additionally, there are a number of other differences between the pilots and the National Statistics proven re-offending measure in terms of which offenders are counted within the cohort”. Are you able to say what difference these differences might make to the figures? For example, what number of offenders per hundred are usually subject to a caution (or similar disposal) as opposed to a court conviction?
7.       Again I assume that, given that the “Peterborough pilot includes offenders released from custodial sentences of less than 12 months, whereas the Doncaster pilot includes all offenders released from custody regardless of sentence length”, the national comparisons are on a like-for-like basis?
8.       You explain that the “success of each Peterborough cohort will be determined by comparison with a control group (of comparable offenders from across the country)”. How will this ‘control’ group be selected to ensure there is no inadvertent or unknown bias? Indeed, was there (will there be) any form of randomised control trial element to either of these two trials (and extensions)? If not, what is your considered professional judgement as a statistician as to the validity of these results to guide future practice?
9.       For Doncaster, success “will be determined by comparison with the reconviction rate in the baseline year of 2009”. How will this accommodate national and/or local trends in (say) sentencing practice or levels of crime?
10.   Given that reconviction rates are normally measured on a 12 month basis and these interim results are measured on a 6 month one, how much is that likely (based on past data) to have depressed the reconviction rates?
11.   You say “Whereas in this publication, to eliminate the risk of seasonality and enable a consistent comparison over time, all figures relate to offenders released in the 6 month period from October to March”. I may well be missing something here, but by only using the six winter months, are you not likely to increase the risk of a seasonal effect in the data? Please explain further.
12.   Given that the Peterborough cohort finished on 1/7/12, and allowing for the 6 months plus 3 (for court delays), this takes us up to March 2013. So on this basis, why have the last three months of data (April, May and June 2012) been excluded? (As far as I can see there is no explanation of this decision, but forgive me if I have overlooked it.)
13.   Given that data is, I assume, ordinarily collected on a quarterly basis, it would have been helpful to have presented your data in a similar way so that trends could be spotted over time, rather than use the fairly arbitrary 19 month period to show the data. Why did you present it this way? Please could I have the data on a quarterly basis.
14.   Given that you must have the data for Peterborough for the missing 19 month period (September 08 to March 11), and acknowledging that this overlaps with the pilot beginning, please could I have this data nonetheless.
15.   Likewise, please could I have the data for the quarter beginning April 2012.
16.   You say “Nationally the equivalent figures show a rise of 16% from 69 to 79 re-conviction events per 100 offenders”. How do you get 16%? I can see a rise of 10 ‘points’ or a rise of (10/69*100) 14.5%.
17.   (As an aside, this is quite a large rise nationally in re-conviction rates, comparing the period just before the last election to the period after. Have national rates continued to rise or have they levelled off now?)
18.   You say “these interim figures show a fall in the frequency of re-conviction events at Peterborough”, which is a drop from 41.6% to 39.2%. At what threshold of probability is this statistically significant?
19.   Please can you confirm that the OGRS scores cited relate to the cohort groups in both Peterborough and Doncaster (rather than all offenders who were released)?
20.   Why do the national re-conviction scores given next to the Doncaster data (which average 32.9%) differ from the scores given next to the Peterborough data (average 37.9%)? I know the period is different and there is some missing data, but this still seems like a large difference…

I look forward to your thoughts

Many thanks
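For what it's worth, the arithmetic query in the letter — whether a rise from 69 to 79 re-conviction events per 100 offenders really amounts to 16% — can be checked in a couple of lines. This is just an illustrative sketch using the figures quoted from the MoJ bulletin:

```python
# Check the rise from 69 to 79 re-conviction events per 100 offenders,
# the figures quoted from the MoJ bulletin.
before, after = 69, 79

absolute_rise = after - before                    # rise in 'points'
relative_rise = (after - before) / before * 100   # percentage rise

print(absolute_rise)            # 10 points
print(round(relative_rise, 1))  # 14.5
```

Neither reading gives the 16% claimed, which is presumably why the question was asked.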

3 comments:

  1. Brilliant!!!!

  2. Just a thought from an outsider. My understanding is that if an offender fails to turn up for an arranged appointment, their offender manager notifies the police, who then have to find the offender and hand them over to the relevant agency, i.e. the courts or prison.
    The bit that troubles me with this, in a world where the private sector is responsible for offender management, is that the public sector, i.e. the police, will have the job of mopping up afterwards. You may say that's what happens now anyway, but now it's a public service providing a service for the public. In the PbR world it will be a profit-orientated private enterprise that is asking the police to make an intervention on their failing. It seems a bit seedy and frankly just wrong to me if a public service like the police are to be part of the support structure for private enterprise. Maybe my council tax will be reduced, eh?

    Replies
    1. Well there are lots of missed appointments and it's the job of the PO or PSO to chase clients up, rearrange, allow discretion, issue warnings either official or unofficial and basically try and keep the person engaged.

      Ultimately breach action could be triggered, in which case a summons to court will be issued, but the police are only involved if a warrant without bail is issued by the court due to the client not answering the summons. Prosecutions are normally undertaken by probation staff, but it's not entirely clear who will be doing the breaches of the privatised work.

      In the case of licences, again there might be some degree of discretion allowed for missed appointments in low or medium risk cases, but recall would trigger the issuing of a warrant for the police to execute. Again, in the new world it's not clear who has responsibility for initiating this.

      Thanks for raising this,

      Cheers,

      Jim
