Garbage In: Garbage Out? The use of predictive algorithms in decisions about child protection

Machine learning is becoming a methodological substrate for knowledge and action. But machine learning is not ethically neutral. It is skewed by data and obfuscated by nature…

Dan McQuillan ‘People’s Councils for Ethical Machine Learning’

The use of Artificial Intelligence (‘AI’) and reliance on algorithms to determine which children are at risk of harm have been in the news of late. The word ‘algorithm’ is an old one, imported into English from the name of a ninth-century mathematician called al-Khwarizmi. Originally it meant simply what is now called the ‘Arabic’ system of numbers (as opposed to Roman numerals), but it later took on a more particular meaning and is now defined as ‘a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer’.

James Hind describes AI in this way:

It’s the great fashion in recent years that everyone gets into AI, which usually means either they have something that automates a system, or it is a pattern recognition tool that pulls conclusions out of big data fed to a network, which acts on the conclusion….There is an obsession with big data, which always has to be cleaned up by low paid humans in places like India to be useable in a pattern recognition system.  These pattern recognition systems such as neural networks operate according to hundreds and thousands of data points, building up through statistics a model upon which conclusions and decisions are made. These models and processes are so complex that not even the designers know how they come to their conclusions, what is called a black box situation.
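
For readers who, like me, struggle to picture what that means in practice, here is a minimal, hypothetical sketch in Python of the kind of pattern recognition system Hind describes. Everything in it is invented for illustration (the ‘data points’, the outcome label and the model itself); it is not the code of the Allegheny tool or of any real child protection system.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# 1,000 hypothetical referrals, each described by 50 invented "data points"
# (previous referrals, housing flags, benefits flags and so on).
X = rng.random((1000, 50))

# An invented outcome label the model is trained on
# (1 = the referral was later substantiated, 0 = it was not).
y = (X[:, :5].sum(axis=1) + rng.normal(0, 0.5, 1000) > 2.5).astype(int)

# A small neural network: the "pattern recognition system" Hind describes.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
model.fit(X, y)

# For a new referral the model produces a probability-style "risk score"...
new_referral = rng.random((1, 50))
print("Risk score:", round(model.predict_proba(new_referral)[0, 1], 3))

# ...but its thousands of learned weights give no human-readable account of
# WHY that score was produced: the "black box" problem.
print("Learned weights:", sum(w.size for w in model.coefs_))

Even in this toy version, the only way to see ‘why’ a particular family scored highly would be to dissect thousands of numerical weights, which is exactly the opacity that worries the critics quoted below.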

A positive case is made for better and more efficient identification, as early as possible, of the children who are most in need. In the context of austerity and reduced spending on services, such early identification is necessary in order to prevent harm to children’s development. Early identification could also help children avoid more intensive and intrusive child protection services – the end result of which may be removal from their families. There is also a view that decisions about child protection can be too subjective – it can only be a good thing to bring some more ‘objective science’ to such important and often life-changing decisions.

But there are many who have voiced serious concerns about the efficacy and assumed benign consequences of using artificial intelligence to determine if children are at risk.

The history of predictive analytics in child protection – Allegheny County

Discussions about child protection have long centred on the question of what is better – removing children quickly from risk, or trying to support families to manage better? This discussion has gradually enlarged to consider how we can best identify the families most at risk and make sure that increasingly scarce services are targeted effectively.

A ‘predictive analytics algorithm’ is basically a sophisticated kind of pattern recognition, commonly used in credit reports and automated buying and selling in financial markets. Its application to decisions about risk in child protection services is not a ‘new thing’ but its applications to date have been fairly limited.

The social scientists Emily Putnam-Hornstein, of the University of Southern California, and Rhema Vaithianathan, of Auckland University of Technology in New Zealand, were asked to help investigate how predictive analytics could improve the handling of maltreatment allegations in the USA.

Allegheny County, in the southwest of the U.S. state of Pennsylvania (with a population of 1,225,365 in 2016), experienced a tragic series of children dying after being ‘screened out’ as low risk by human call handlers dealing with telephone referrals about children who the caller worried were being mistreated. In 2016 Allegheny County became the first jurisdiction anywhere in the world to use a ‘predictive-analytics algorithm’ to try to do a better job of identifying the families most in need of intervention. The algorithm was built from 76,964 allegations of maltreatment made between April 2010 and April 2014.

What’s the problem?

The New York Times commented that the use of the algorithm appeared to be having a positive impact on child protection in Allegheny County:

In December, 16 months after the Allegheny Family Screening Tool was first used, Cherna’s team shared preliminary data with me on how the predictive-analytics program was affecting screening decisions. So far, they had found that black and white families were being treated more consistently, based on their risk scores, than they were before the program’s introduction. And the percentage of low-risk cases being recommended for investigation had dropped — from nearly half, in the years before the program began, to around one-third. That meant caseworkers were spending less time investigating well-functioning families, who in turn were not being hassled by an intrusive government agency. At the same time, high-risk calls were being screened in more often. Not by much — just a few percentage points. But in the world of child welfare, that represented progress.

However, it is important to note that the algorithm used in Allegheny County was there to help decide who got a home visit – NOT to make the far more intrusive decision about removing a child.

Follow the money

Another important and positive distinction is that the workings of the algorithm in Allegheny County are public and transparent; the local community are involved and able to ask questions. Dan McQuillan commented in May 2018 on the repercussions of imposing ‘machine learning’ and possible ways of challenging it via ‘People’s Councils’:

Unconstrained machine learning enables and delimits our knowledge of the world in particular ways: the abstractions and operations of machine learning produce a “view from above” whose consequences for both ethics and legality parallel the dilemmas of drone warfare. The family of machine learning methods is not somehow inherently bad or dangerous, nor does implementing them signal any intent to cause harm. Nevertheless, the machine learning assemblage produces a targeting gaze whose algorithms obfuscate the legality of its judgments, and whose iterations threaten to create both specific injustices and broader states of exception. Given the urgent need to provide some kind of balance before machine learning becomes embedded everywhere, this article proposes people’s councils as a way to contest machinic judgments and reassert openness and discourse.

When matters are not discussed openly and transparently, the concerns increase. As the New York Times commented, secrecy around algorithms marketed and guarded by private profit-making firms raises very serious questions:

That’s a chief objection lodged against two Florida companies: Eckerd Connects, a nonprofit, and its for-profit partner, MindShare Technology. Their predictive-analytics package, called Rapid Safety Feedback, is now being used, the companies say, by child-welfare agencies in Connecticut, Louisiana, Maine, Oklahoma and Tennessee. Early last month, the Illinois Department of Children and Family Services announced that it would stop using the program, for which it had already been billed $366,000 — in part because Eckerd and MindShare refused to reveal details about what goes into their formula, even after the deaths of children whose cases had not been flagged as high risk.

It is very disturbing to read that Hackney rejected a recent FOI request about its screening profile on this basis:

London Borough of Hackney is working with Xantura as a development partner. Because of this, we believe that it would be damaging to Xantura’s commercial interests to have the financial details of our agreement made public. We believe that the public benefit of knowing the financial details is in this case outweighed by the need to protect their interests and, by extension, those of Hackney in developing the project. We therefore exempt this part of your request under Section 43 of the Freedom of Information Act.

The concerns are not simply about where the money goes. There are serious worries about how data is collected and analysed, and about what the repercussions could be of taking predictive analytics into fields far beyond call screening.

The political scientist and technologist Virginia Eubanks argues that automated decision-making has far-reaching consequences, particularly for the poor. Louise Russell-Prywata, reviewing Eubanks’ work, commented:

The story of Indiana’s welfare reform contains all the key elements of an automation bogeyman: an explicit aim to reduce costs and move people off benefits; a whiff of dodginess about the award process for a $1.3 billion contract to privatise a state service; widespread tech failure upon implementation; the inability to effectively hold the corporate contractor to account for this failure; the removal of human connections; and pressure on community services such as food banks to deal with the consequences.

Garbage In: Garbage Out

Emily Keddell and Tony Stanley discussed the concerns about predictive algorithms used by certain local authorities, such as Hackney, in an article for Community Care in March 2018.

They identify a number of concerns. Some are easy for me to understand. For example, how is consent obtained to use people’s data to inform these systems? There are serious worries about the actual accuracy of such tools, and the risk of false positives is high – one tool developed in New Zealand was just 25% accurate at the top decile of risk over five years, meaning there were no findings of actual harm for 75% of those identified by the tool as high risk.
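
To put that 25% figure in concrete terms, here is a back-of-the-envelope sketch. The 10,000 children scored are a number I have invented purely for illustration; only the 25% comes from the research described above.

# Hypothetical cohort size, invented for illustration only.
children_scored = 10_000
top_decile = children_scored // 10            # 1,000 children flagged as "high risk"
precision_at_top_decile = 0.25                # the reported accuracy at the top decile

true_positives = int(top_decile * precision_at_top_decile)   # later findings of harm
false_positives = top_decile - true_positives                # flagged, but no finding of harm

print(f"Flagged as high risk: {top_decile}")
print(f"Later findings of actual harm: {true_positives}")
print(f"False positives: {false_positives} ({false_positives / top_decile:.0%})")

On those assumed numbers, 750 of the 1,000 families flagged as ‘high risk’ would have had no finding of actual harm over the five years.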

Some concerns, however, reveal the depth of my ignorance about how such systems work. Which is a worry. If I don’t understand it, how can I – a lawyer often acting for parents – ever hope to challenge it? The authors comment in the following terms:

The source and quality of the predictive variables, the quality of data linkage, the type of statistical methods used, the outcome the algorithm is trained on and the accuracy of the algorithm all require examination.

I think this translates to the famous phrase ‘Garbage In: Garbage Out’, i.e. systems that manipulate data to produce likely outcomes are only as good as the data they are fed. What happens if someone makes a false allegation about you? Is that ‘data’ that will be recorded to inform your future risk? How do you know what ‘data’ is stored about you, and how do you challenge it?
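
As a purely hypothetical illustration of that point, here is a toy scoring rule I have made up (it is not the formula of any real tool): if the only input the system sees is a count of recorded referrals, a false allegation pushes the score up exactly as a substantiated one would.

def risk_score(previous_referrals: int, other_flags: int) -> float:
    """A toy, invented linear scoring rule; not any real tool's formula."""
    return 0.15 * previous_referrals + 0.05 * other_flags

family = {"previous_referrals": 1, "other_flags": 2}
print("Score before the allegation:", risk_score(**family))

# A false or malicious allegation is recorded as just another referral...
family["previous_referrals"] += 1

# ...and the score rises, because the data says nothing about whether the
# allegation was ever investigated, withdrawn or disproved.
print("Score after the allegation: ", risk_score(**family))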

The authors comment:

The big problem in an algorithm drawing on administrative data is that it will contain bias relating to poverty and deprivation. Where council housing data is used, for example, those who don’t need council housing will be absent. Those caught up in criminal justice systems and social services of any kind lead to an oversampling of the poor.

Big datasets such as these make some people invisible, while others become super visible, caught in the glare of the many data points that the council or government holds about them. Where such processes occur under the veil of commercial sensitivity, even the most basic of ethical or data checks are difficult to undertake.
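
A back-of-the-envelope sketch of that oversampling, with every number invented for illustration: suppose a fifth of families in a borough have ever appeared in council housing or social services records, but, because the training data is drawn from those records, the great majority of what the model learns from concerns that group.

# All figures are hypothetical, chosen only to illustrate the point.
share_of_population_known_to_services = 0.20
share_of_training_data_known_to_services = 0.85

over_representation = (share_of_training_data_known_to_services
                       / share_of_population_known_to_services)

print(f"Families known to council services are {over_representation:.1f}x "
      "over-represented in the data the model learns from.")

# Families who never appear in those records are close to invisible to the
# model; those who do are "super visible", and every pattern it learns is,
# overwhelmingly, a pattern about them.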

Dr Patrick Brown, Associate Professor at the Amsterdam Institute of Social Science Research, University of Amsterdam, and editor of Health, Risk and Society, commented in a letter to the Guardian on September 19th 2018:

Our own research into child protection notes a weak evidence base for interventions, with social workers falling back on crude assumptions. Stereotypes discriminate against some families and lead to the overlooking of risk in other cases, yet may become entrenched and legitimised when incorporated into technology. Research is needed into whether these technologies enhance decision-making or whether they become uncritically relied on by pressured professionals with burgeoning caseloads. Enticed by software-driven solutions, our overstretched and decentralised child-protection system may lack the capacity for a robust ethical and evidence-based reflection on these technologies.

James Hind puts it this way:

If the reader has coded anything, they will learn that bad code and inputs result in bad outputs.  For example, if I dumped into an AI system voting intentions of a large sample of voters in Clacton UK, and used this to predict how the UK will vote in an overall general election, it might suggest UKIP would form the next government, but when the prediction is tested in real life, UKIP will if they are lucky only have control of the Clacton seat in Parliament. In a rising number of cases it has been discovered that the models built on big data are faulty, biased against certain groups, and are unable to handle unique situations.  People are forced to conform to a narrow set of categories to access services or be on the good side of a statistical artificial computer model that has no relation to reality.

It is a tragedy that for reasons of money, faith in a flawed technology, and a lack of trust of the wisdom and knowledge of human beings with decades of experience in their fields, the AI has replaced the human with tragic consequences for individuals and society.  Families wrongly suffer their children being taken into care, or being imprisoned because the computer judged according to its model this was the right outcome, and nobody can challenge the system data model, because nobody understands how it came to the conclusion.

Conclusion

Even from my brief investigation and reading, there are clearly a number of serious ethical and practical concerns, which makes it worrying that the use of AI to identify children at risk appears to be something that is being enthusiastically touted by senior figures in the social work profession. I was glad to see Professor Lauren Devine of the University of the West of England tweeting today (September 24th) that she is concerned about the use of AI and will commence research in 2019, funded by the Economic and Social Research Council, into the ‘risk of risk’. I will be very interested in her findings.

I will leave the last word to Tina Shaw, who also commented in a letter to the Guardian:

Why are cash-strapped councils wasting money on predictive software telling us what we already know? It’s not rocket science. Poverty, addictions, poor health, school exclusions etc, have always been predictors of potential difficulties for children. They should be spending what little money there is on preventive services, Sure Start nurseries, youth clubs and teaching assistants.

EDIT September 26th – further comments

Some interesting discussion followed on Twitter. I have added additional resources to the list of further reading below and note the key concerns raised by those commenting:

  • Sophie Ayres emphasised the issue of legality of sharing information to feed the algorithm without the consent of the data subject: ‘how does a Children Services team have the right to information such as school attendance. Usually at the start of a social work assessment – consent forms are signed by parents to say sw can contact other agencies. If parents to not consent at CIN stage – SW cannot seek info’.
  • Lack of accountability concerned Professor Devine: ‘also the content of their algorithm? These things are cheap to put together, unaccountable and sold for huge profit’.
  • SocialWhatNow echoed the concern about lack of accountability and wanted to know what the SWs using these systems thought about them: ‘Clarity needed. Some data not used in final models. Problem is it’s all under the radar. Embedded w/out consultation or discussion w/ the public or those who use it. Which leads me to ask, what do the social workers who use these systems think? Where are they?’
  • Dan McQuillan touched on the far reaching consequences of use of AI: ‘that’s symbolic of two other qualities of ai that affect services as well; the fragility of the algorithm and the thoughtlessness it can produce. the systemic effects may not be so obvious but are likely to be more far reaching’

EDIT November 17th 2019

Community Care report that Hackney has abandoned its venture into algorithms after it did ‘not realise the expected benefits’.

Further reading

London councils are using data analytics to predict which children are at risk for neglect and abuse, 18th September 2017 Jack Graham Apolitical

Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor, Virginia Eubanks, 2018

Can an Algorithm tell when kids are in danger? The New York Times 2nd January 2018

A Child Abuse Prediction Model Fails Poor Families 15th January 2018 Wired

21 Fairness Definitions and their politics 1st March 2018 Arvind Narayanan. Computer scientists and statisticians have devised numerous mathematical criteria to define what it means for a classifier or a model to be fair. The proliferation of these definitions represents an attempt to make technical sense of the complex, shifting social understanding of fairness.

Artificial intelligence in children’s services: the ethical and practical issues Community Care March 29th 2018

People’s Councils for Ethical Machine Learning 2nd May 2018 Dan McQuillan

Councils use 377,000 people’s data in efforts to predict child abuse 16th September 2018 The Guardian

Don’t trust algorithms to predict child-abuse risk: Letters to the Guardian 19th September 2018

Government, Big Data and Child Protection 20th September 2018 Researching Reform.

New Algorithms perpetuate old biases in child protection cases Elizabeth Brico 20th September 2018

Documents relating to the Children’s Safeguarding Profiling System – Freedom of Information request made to Hackney – request refused as damaging to commercial interests.

Social Workers and AI 25th September 2018 Jo Fox

How fair is an algorithm? A comment on the Algorithm Assessment Report 7th December 2018 Emily Keddell

Predictive analytics and the What Works Centre for Children’s Social Care — Connecting some dots the old fashioned way 11 February 2019 Social What Now

3 thoughts on “Garbage In: Garbage Out? The use of predictive algorithms in decisions about child protection”

  1. Angelo Granda

    A Parent’s View.

    For anyone interested, we touched on the use of computers, algorithms and predictions based on data on the CPR earlier this year, and these were my thoughts, put briefly, then.

    If we test the ethics of data analytics for use as a means of predicting the future difficulties of troubled families, with the aim of making a risk assessment and supplying support to a family in order to lessen the risk of significant harm to a child, then I don’t suppose too many citizens would object. After all, everything might be significant, I suppose; for example, if it can be established that a parent shops at Harrods rather than the Co-op or the local convenience store, or that a family is politically correct and cuts down on junk food, it may mean a child is in less danger. If a family eats fish and chips, pizza etc. every day and eats between meals, or purchases the wrong type of baked beans, or does not buy clothes from Topshop or wherever, it may be a bad sign. But prediction, guesswork and data such as that must never be used in a factual matrix for use in our Family Court system.

    Ethically it would not be all that different from what we have already though, would it? Currently, the Court is supplied with a matrix based largely on prediction and other data disguised as facts. Data actually means facts literally, as it happens, but computer input is not genuine fact, although it is looked upon as a database. Wrongly, many SWs rely on these databases as some sort of oracle and use the notes therein as an EVIDENCE BASE for a factual matrix, without making any further in-depth investigations in order to vouch for the information and intelligence they glean from it. The evidence is not factual.

    With the right aims and computer programme, in theory we could do away with the Courts and judicial discretion, saving a mountain of expense, if we just entered the informational matrix into a computer and relied on robotics to analyse every case and come to realistic appraisals. Ideally, the computer would not be biased and would have no conflict of interests; fairness would be programmed in, and where procedures and safeguards are not followed, or where SW assessments are irrational etc., it would be spotted automatically and the no-order principle implemented. Likewise if court orders are flouted or statements and reports lodged late.
    I suppose a computer could be programmed to read and understand medical assessments too, thus biased professional judgments submitted by the LA itself would not hold too much sway.
    However, if we were to rely on a robotic system, naturally there would have to be limits on the power it is given. Support plans would be focussed on solely, and there would be no need to go to Court at all in most cases, thus much more money to spend on family preservation in accordance with the Children Act. More money all round to pay for advocacy too.
    Yes, in my view, ethically it would not be too much different to what we already have, but possibly better in a way! Really the power of the present family courts should be limited too. Presently, the matrix is very brief and almost always one-sided; also, professional evidence is preferred to that of respondents. Not proportional, and not fair to order such draconian sanctions.
    Data analytics calls to my mind psephology, i.e. the study of electoral systems. Psephos means the stones or grit used in a matrix of concrete. Every stone and particle of sand, cement and water in the matrix provided to a court or entered into a computer must be counted and taken into account for it to mean anything at all. In ancient history, each man would be given a number of stones depending on his status, and for voting purposes the stones would be gathered up and counted. Unless they were all counted, the poll would be an unfair one, and the same applies to a Court matrix in this day and age. There must be a full and fair matrix and every stone must be given equal value and counted, or the decisions are not fair or proportionate.
    Were the Children Act focused on and power limited to family preservation and support only there would be no real problem with the use of computers.
    These are my initial thoughts re- the ethics and I look forward to all comments etc.

  2. Angelo Granda

    I anticipate this is where the future lies. Cases will use templates just as they do now, and lawyers will merely have to click a mouse once or twice to access a ready-made case. The predictions will all be done for them if they enter the name, address, post-code and a few details of the family. When the system becomes reality, it will be essential that strict controls are in place and followed scrupulously, and I suggest a limit placed on sanctions issued by the family court, as above.

    CaseLines is used daily for Children Act cases in the Family Courts, including applications for child protection orders. The service allows secure sharing of sensitive data between care professionals, including care workers and health experts.

    CaseLines: Cloud-Based Legal Evidence Management Platform
    http://caselines.com/
