Monday, 25 March 2019

Some Difficult Questions

Thanks go to the reader for drawing my attention to a piece in Private Eye regarding forthcoming inquests. Right from the beginning of the TR omnishambles it was highlighted how the proposed split would increase risks to the public, and the NPS will shortly have to find answers in investigations brought under Article 2 of the Human Rights Act, the 'Right to Life', which can examine the circumstances surrounding a death and identify any failings by state agencies.

I suspect this is going to be a particularly unhappy process for all involved, not least because of reports that the secretive 'command and control' Civil Service management ethos now becoming prevalent within NPS is gaining an unhealthy reputation for a desire to 'throw the staff under the bus' in such circumstances, rather than admit to any systemic and structural failings borne of TR.   



15 comments:

  1. A fair few of Grayling's 'elite professionals' who were allocated to NPS duties are revelling in their new environment - one that rewards passive-aggressive bullies, sneaks & lickspittles.

    Sad to say, the service provision for those directed to use NPS services is way off the mark, and there ain't no sign of fair treatment for frontline staff.

    But that's just how Spurr designed it - comfy for those in the circle of trust, shyte with the odd condescending crumb of comfort for everyone else.

    No surprise then that the Tories want to junk Human Rights legislation.

  2. Whilst probation occupies a public protection space, somebody is always going to go under the bus when an SFO happens.
    As with all organisations with a hierarchy, the blame will always filter down to those at the coalface. Those higher up can't be seen to be making bad decisions, it would threaten the whole model of the organisation.
    It's not fair, and it's unjust, particularly as the modern day probation officer has little or no autonomy in their decision making. It's a prescriptive process designed to maximise outcomes.
    Staff would be much safer if probation was an organisation that worked to make communities safer. That's not absolute and allows for a degree of error.
    Leave public protection to the caped crusaders.
    I think too the use of AI in predicting behavioural patterns and risk is something that needs to be seriously looked at. My personal view is it's all crap, and better suited to dating agencies for finding your perfect partner. But it's here and more of it is coming. I really don't see how human perception, knowledge and training can be interfaced with a computer system, especially when the computer system directs what approach human intervention must take.
    Maybe the machines should face more disciplinary hearings? If they're part of the process they must be part of the problem when things go wrong?

    https://www.independent.co.uk/voices/artificial-intelligence-ethics-police-bias-healthcare-a8837286.html

    'Getafix

    Replies
    1. There is growing enthusiasm for Artificial Intelligence (AI) and its capacity to drastically transform business performance and streamline outcomes in public services.

      As great as that hunger for innovation sounds, however, in reality, pivots towards AI are typically coupled with a serious lack of understanding of the dangers and limitations of the new technology.

      Authorities especially are beginning to get carried away with the potential of AI. But are they considering and introducing sufficient measures to avoid harm and injustice?

      Organisations across the globe have been falling over themselves to introduce AI to projects and products. From facial and object recognition in China to machines that can diagnose diseases more accurately than doctors in America, AI has reached the UK’s shores and grown exponentially in the past few years.

      Predictably, this new era of technological innovation, exciting as it is, also raises serious ethical questions, especially when applied to the most vulnerable in society.

      My own PhD research project involves developing a system of early detection of depressive disorders in prisoners, as well as analysing the ethical implications of using algorithms to diagnose something as sensitive as mental health issues in a vulnerable group. Essentially, I am asking two questions: “can I do it?” and “should I do it?”

      Most engineers and data scientists have been working with a powerful tool called machine learning – which offers fancier and more accurate predictions than simple statistical projections. These are a commonly used type of algorithm – like the one Netflix employs to recommend shows to its users, or the ones that make you see “relevant” ads wherever you go online. More sophisticated systems such as computer vision, used in facial recognition, and natural language processing, used in virtual assistants like Alexa and Siri, are also being developed and tested at a fast pace.

      Slowly but surely, machine learning has also been creeping into and helping to shape public policy – in healthcare, policing, probation services and other areas. But are crucial questions being asked about the ethics of using this technology on the general population?

      Imagine the potential cost of being a “false positive” in a machine’s prediction about a key aspect of life. Imagine being wrongly earmarked by a police force as someone likely to commit a crime based on an algorithm’s learned outlook of a reality it doesn’t really “understand”. Those are risks we might all be exposed to sooner than we think.
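      To make the false-positive point concrete, here is a minimal sketch of the kind of prediction being described (my own illustration, using made-up data and the scikit-learn library, not anything from the article). Even a model that looks accurate overall will flag people as 'likely to reoffend' who never would have:

        # Toy sketch only: hypothetical features and labels, not real probation data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(0)

        # Hypothetical features, e.g. previous convictions, age band, missed appointments
        X = rng.integers(0, 10, size=(500, 3)).astype(float)
        # Hypothetical "reoffended" labels, loosely related to the first feature only
        y = (X[:, 0] + rng.normal(0, 2, size=500) > 6).astype(int)

        model = LogisticRegression().fit(X, y)
        predicted = model.predict(X)

        tn, fp, fn, tp = confusion_matrix(y, predicted).ravel()
        print(f"{fp} people wrongly flagged as likely to reoffend (false positives)")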

    2. For instance, West Midlands Police recently announced the development of a system called NAS (National Analytics Solution): a predictive model to "guess" the likelihood of someone committing a crime.

      This initiative fits into the National Police Chiefs’ Council’s push to introduce data-driven policing, as set out in their plan for the next 10 years, Policing Vision 2025. Despite concerns expressed by an ethics panel from the Alan Turing Institute in a recent report, which include warnings about "surveillance and autonomy and the potential reversal of the presumption of innocence," West Midlands Police are pressing on with the system.

      Similarly, the National Offender Management Service’s (NOMS) OASys tool, used to assess the risk of recidivism in offenders, has been increasingly relying on automation for its decisions, although human input still takes precedence.

      The trend, however, as seen in the American justice system, is to move away from requiring human insight and to allow machines to make decisions unaided. But can data – raw, dry, technical information about a human being’s behaviour – be the sole indicator used to predict future behaviour?

      A number of machine learning academics and practitioners have recently raised the issue of bias in algorithms’ “decisions,” and rightly so. If the only data available to “teach” machines about reoffending consistently points to offenders from particular ethnicities, for instance, being more likely to enter the criminal justice system, and to stay in it, it is possible that a machine would calculate that as a universal truth to be applied to any individual who fits the demographic, regardless of context and circumstances.
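      As a rough sketch of my own (again made-up data and scikit-learn, not the author's work), the mechanism is easy to reproduce: if one group is over-represented in the historical 'reoffended' records, a model learns group membership itself as a risk factor and then applies it to every individual from that group, whatever their actual circumstances:

        # Toy sketch only: how skewed historical records teach a model to use
        # group membership as a proxy for risk.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 1000

        group = rng.integers(0, 2, size=n)          # hypothetical demographic flag
        risk_factor = rng.normal(0, 1, size=n)      # one genuine underlying risk factor

        # Biased labels: group 1 was policed more heavily, so for the same underlying
        # risk it appears in the "reoffended" records far more often.
        label = (risk_factor + 1.5 * group + rng.normal(0, 1, size=n) > 1).astype(int)

        X = np.column_stack([group, risk_factor])
        model = LogisticRegression().fit(X, label)

        # Two individuals with identical risk factors, differing only by group:
        same_risk = np.array([[0, 0.0], [1, 0.0]])
        print(model.predict_proba(same_risk)[:, 1])  # group 1 is scored far higher

      Simply removing the group column doesn't fix this either, since other features recorded in the data can act as proxies for it.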

      The lack of accountability is another conundrum afflicting the industry, since there is no known way for humans to analyse the logic behind an algorithm’s decision – a phenomenon known as “black box” – so “tracing” a possible mistake in a machine’s prediction and correcting it is difficult.
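      A small illustration of my own of that 'black box' contrast: a simple linear model's logic can at least be read off its weights, whereas a more complex model spreads its 'reasoning' across so many internal parameters that there is nothing equivalent to trace a single decision back through:

        # Toy sketch only: an inspectable model versus a "black box".
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(2)
        X = rng.normal(size=(300, 4))
        y = (X[:, 0] - X[:, 2] + rng.normal(0, 0.5, size=300) > 0).astype(int)

        linear = LogisticRegression().fit(X, y)
        print("weight per feature:", linear.coef_[0])   # the logic is at least readable

        forest = RandomForestClassifier(n_estimators=100).fit(X, y)
        # The forest's "reasoning" is spread across 100 trees and thousands of split
        # rules; there is no single set of weights to point to when a prediction is wrong.
        print("trees in the forest:", len(forest.estimators_))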

      It is clear that algorithms cannot as yet act as a reliable substitute for human insight, and are also subject to human bias at the data collection and processing stages. Even though machine learning has been used successfully in healthcare, for example, where algorithms are capable of quickly analysing heaps of data, spotting hidden patterns and diagnosing diseases more accurately than humans, machines lack the insight and contextual knowledge to predict human behaviour.

      It is key that the ethical implications of using AI are not overlooked by industry and government alike. As they rush off to enter the global AI race as serious players, they must not ignore the potential human cost of bad science.

      Thais Portilho is a postgraduate researcher in criminology and computer science at the University of Leicester

    3. AI doesn't sit across the table, it doesn't interact, AI can't do what people can do. AI is total bollocks.

  3. Interesting to see how difficult questions were being answered in 2004.
    Quite refreshing.

    https://www.telegraph.co.uk/news/uknews/1457768/Probation-boss-takes-the-blame-for-errors-that-led-to-killing-of-Pc.html

    Replies
    1. Hancock was an intriguing Chief; a hybrid of sharp-suited civil servant & man-with-a-heart. He was the hatchet man brought in by "the centre" after Colin Edwards' spirited fight to keep probation independent failed, but it seems Hancock was so impressed with what Edwards had achieved that he began to adopt a similar approach.

      "The Probation and After-Care Service is not a department of the local authority; nor is it administered by the central government. Probation officers are employed by local committees of magistrates within areas corresponding to those of local government, and these committees are funded both from from central and local government sources. This arrangement gives probation officers independence from the administrative arm of government, enabling the courts to receive impartial information and advice." [Jarvis]

    2. The head of a probation service that was blamed for failing properly to monitor a drug addict who killed a policeman while out of prison on licence said yesterday that he took full responsibility for its errors.

      But David Hancock, the chief officer of Nottinghamshire Probation Area, said he would not be disciplining any of the staff involved and would not be resigning.

      He blamed the failures that allowed David Parfitt to be free when he killed Pc Ged Walker on his staff's heavy workload.

      Pc Walker's widow, Tracy, said if they had not learned from their mistakes, as "they don't appear to have done", they faced having a similar tragedy on their hands again.

      Parfitt, 26, was jailed for 12 years last December for Pc Walker's manslaughter after driving 100 yards down a road in a taxi he was stealing with the officer clinging on. He flung Pc Walker into a bollard, causing fatal head injuries.

      The report by Rod Morgan, HM Chief Inspector of Probation, highlighted serious problems with the probation service and its monitoring of Parfitt who, at the time of Pc Walker's death, was out of prison on licence, subject to drug testing.

      He ruled that Parfitt should have been returned to prison long before the incident which resulted in Pc Walker's death. In the weeks following his release in September 2002, Parfitt regularly breached his licence conditions by failing 10 drugs tests and missing a number of appointments.

      His probation officer failed to report him or recommend that his licence be revoked because she was unaware of what she was meant to do.

      In November, Parfitt's licence was finally revoked but by then his period of curfew had ended and the police, who tried to arrest him, had no idea where he had gone. He was still at large in January 2003 when he dragged Pc Walker to his death when the officer tried to arrest him.

      Mr Hancock said he accepted all the criticisms in the report. He was aware that his staff were overworked.

      He said Parfitt's original probation officer had been adequately briefed on national guidelines to deal with his drug testing programme, but she had "failed to absorb" the finer detail of the requirements.

      He refused to say why feeling ill after taking heroin was deemed an acceptable excuse not to turn up for a drugs test, or why the officer was allowed for so long to apply her own criteria as to what constituted a breach of licence.

      He said she should not face any action because she was overworked and that was the management's fault.

      Mr Hancock also blamed a raft of Home Office initiatives imposed on his service, which had confused staff.

      He said: "We are a small office dealing with a huge amount of crime." He said they had a staff of 90 taking on 5,000 new cases per year.

      Steps taken since Pc Walker's death included the Nottinghamshire office being reorganised. The time between the revocation of a prison licence and an offender's arrest had been cut from 32 days to an average 1.5, and enforcement targets had improved.

      The Home Office also published a six-point action plan which includes a review of the way offenders are recalled to court on breach of their licence, a more cohesive, multi-agency approach to victims of crime and better training for probation staff.

      Mrs Walker said Mr Hancock's response amounted to little. She said: "He doesn't address why it went wrong.

      "Being busy isn't an excuse. If the probation service had done its job my husband would still be alive."

      Steve Green, the Chief Constable of Nottinghamshire, said the drug pilot scheme that Parfitt was on had "let them down" and was in jeopardy "unless a lot of work is done on it".

  4. Another Private Eye story - "AWOL AMEY
    Amey has pulled out of the privatised probation service, increasing pressure on justice secretary David Gauke to abandon plans to re-privatise the service."

    Could this be true?

    Replies
    1. This from FT in Dec'18 may help?

      "UK outsourcer Amey is expected to be sold to a private equity firm in the new year, in another sign of the pressures bearing down on the sector in Britain.

      Ferrovial, the Spanish infrastructure group and part-owner of Heathrow airport, is in talks with PAI Partners and Greybull Capital, which presided over the collapse of Monarch Airlines last year, over a potential sale of the business, according to people close to the vendors. Ferrovial bought Amey in 2003 for £81m.

      Amey, which provides construction and other support services, is the most troubled part of Ferrovial’s portfolio. Its Spanish owners “may need to package it with other parts of the business to find a buyer”, said another source."

      https://www.ft.com/content/8257cb26-fdfb-11e8-aebf-99e208d3e521

    2. Update Feb 2019, again in FT:

      "Ferrovial, the Spanish infrastructure group and part owner of Heathrow airport, has taken a €774m writedown on its UK subsidiary Amey, in another sign of the pressures faced by the construction and support services sector in Britain.

      The listed but family-owned construction company confirmed on Wednesday that it is seeking to sell Amey as part of a wider sell-off of its support services business, including divisions in Spain, UK, Australia, Chile, US, Canada and Poland.

      Ferrovial’s UK business Amey, which provides construction and other support services, employs about 19,000 people in the UK, providing everything from maintenance for the Ministry of Defence, transport for prisoners for the Ministry of Justice, road and water pipe maintenance and private finance initiative projects for schools.

      It is the most troubled part of Ferrovial’s portfolio after being hit by a series of legal disputes and lossmaking contracts."

      https://www.ft.com/content/792b4eb2-3ab7-11e9-b72b-2c7f526ca5d0

    3. http://www.mtcnovo.co.uk/partners/

      There's a very obvious gap where the Amey logo should be...

  5. Interesting programme on Panorama tonight on school academies run by private companies - yet another government privatisation failure, with taxpayers' money being thrown at them

  6. https://www.governmenteuropa.eu/ai-and-public-standards/92688/

    Replies
    1. The UK’s Committee on Standards in Public Life intends to conduct a review into the efficacy of existing frameworks and standards governing artificial intelligence.

      As technology becomes more widely used as a factor in decision making throughout the public sector, the committee’s report, due for publication in early 2020, aims to analyse the ethical implications of artificial intelligence (AI) deployment in the public sector and gauge whether the high standards of conduct expected from public sector workers are followed in AI-based decision making.

      Lord Evans, Chair of the Committee on Standards in Public Life, said: “Honesty, integrity, objectivity, openness, leadership, selflessness and accountability were first outlined by Lord Nolan as the standards expected of those who act on the public’s behalf. These principles have stood the test of time and are deeply embedded across much of the public sector – from the Civil Service and NHS bodies to local councils and schools. The increasing development and use of data and data-enabled technologies in our public services can potentially bring huge advantages in terms of pace and scale of service delivery, but there are some major ethical and practical challenges about what this means for accountability, objectivity and the other Nolan principles.”

      The “seven principles of public life”, also known as the “Nolan principles”, were first drawn up by Lord Nolan, the founding chair of the committee, in 1995. They are designed to apply to all public officials and employees, including elected officials; civil servants; members of the police service; employees of courts and probation services; and local government employees. The seven principles are:

      Selflessness – acting solely in the public interest;
      Integrity – avoiding conflicts of interest and nepotism;
      Objectivity – acting without bias or partiality;
      Accountability – submitting to appropriate scrutiny to remain accountable to the public;
      Openness – open and transparent decision making processes;
      Honesty; and
      Leadership – holders of public office must exhibit the seven principles in their own behaviour to set an example and promote these values to others.

      Lord Evans added: “As the committee celebrates its 25th year as an advisory body conducting broad reviews of key ethical issues, we want to look at what the future holds for public services and help ensure that high standards of conduct continue to be ‘built in’ to new ways of making decisions on the public’s behalf. We are keen to hear from individuals and organisations who are developing policy, systems or safeguards on the use of AI as we gather evidence for this review.”
