Three weeks before the pre-election period for the last general election, Communities Secretary Eric Pickles heralded the Programme as “a triumph…[that will] turn around the lives [of] 120,000 of this country’s hardest to help families…[saving] the taxpayer over a billion pounds”[i]. A few weeks later, Prime Minister Cameron announced that the success rate had been 99%, which had “saved as much as £1.2 billion in the process”[ii].
But a year later, in October 2016, the Government finally published the independent evaluation of the programme, prompting front-page headlines alleging £1.3 billion “wasted on the vanity of politicians” on a programme which “had no significant impact” (Sky News, The Telegraph, Daily Mail, The Independent).
As I said in my blog[iii] in the Guardian four years ago, done well, the TFP evaluation could revolutionise our understanding of how to turn around these families. Unfortunately, the immediate impact in October was to generate more heat than light.
There is no doubt that the TFP had positive impacts and has certainly changed how services for these families are delivered. The independent evaluation finds it has mainstreamed “whole-family” approaches, stimulated local multi-agency working, opened up previously impossible data sharing and made employment support more responsive.
Families on the programme feel (and told the researchers) that it’s made a big difference to their lives. Almost seven in ten say that they are confident that their worst problems are behind them, compared with barely half of similar families not on the programme. A similar proportion say that they “feel in control” and “feel positive about the future” – both significantly more than the families not on the programme.
The figures local authorities submitted about the changes in families who were classified as “troubled” (out of school, out of work, committing crime, etc.) are audited and truthful – they do represent actual changes in people’s circumstances.
And yet, it’s a bit more complicated than that. Because one of the questions the DCLG bravely asked its independent evaluators was: “what would have happened if we didn’t have the TFP?”
As Permanent Secretary Melanie Dawes told the Public Accounts Committee in October, this is a question government seldom asks itself – especially of the £772 billion it spends each year on the “current ways of doing things”.
In measured academic language the independent evaluation report concluded:
“we were unable to find consistent evidence that the Troubled Families Programme had any significant or systematic impact” (p.69)
So what can we learn from the TFP evaluation? There are important issues around evaluation timing, data quality, access to critical information, and so on, which I’ll be exploring in detail over the coming weeks at the Inlogov blog (https://inlogov.com/).
But one of the most important learning points is this: to work well, evaluation has to be planned early as an integral part of designing a programme and has to be sensitive to local differences.
The TFP was developed around a limited evidence base. Often for excellent reasons, it has been implemented differently in different local areas. So it is highly likely that the approaches adopted in some areas are more, or less, effective than those adopted elsewhere.
For example, some areas developed a dedicated team to support the identified families whereas others embedded the work into existing services (and some chose a hybrid approach). LAs also adopted different professional roles; different approaches to staff recruitment, supervision and development; different ways of assessing and planning support for families, and so on.
This was a complex but potentially hugely informative mix of (in effect) social experiments. Unfortunately, by taking only a national-level perspective, the evaluation has not assessed these different approaches to see what works for different types of “troubled families” in different local contexts. Instead we have a bland averaging of positives and negatives providing no “consistent” evidence.
As in so many areas, taking a more devolved and localised approach offers much richer insight, learning and support for improvement. DCLG and the evaluators should take the opportunity of the extended programme to work with a group of local areas in depth, to learn in much more granular detail what is and isn’t working in the different local contexts.
DCLG’s national independent evaluation was a brave and useful step forward; we now need to build on that by developing much better local insights.
If we do that well, in three years’ time we should have a really strong understanding of how to improve outcomes and reduce public expenditure costs for this important group of people.