We know we need to do things differently. Not only are we aiming to support system change at a local level, but we are also part of the systems that we are trying to influence.
The Tackling Child Exploitation (TCE) Support Programme was announced in May, led by Research in Practice with consortium partners The Children’s Society and the University of Bedfordshire. This Department for Education-funded programme aims to support local areas to develop an effective strategic response to child exploitation and extra-familial harm. Support will be delivered through small-scale, time-limited projects.
Mindful of the need to maximise the impact of this investment in a crowded field, we know we need to do things differently. We are conscious that not only are we aiming to support system change at a local level, but also that we are part of the systems we are trying to influence. This requires us to think and behave in ways that are congruent with how we hope to help local areas think and behave, and to disrupt unhelpful patterns or dynamics where we can.
Two concepts have been resonating with me recently in relation to this programme and beyond: that of ‘parallel processes’ (something that Danielle Turney and Gillian Ruch, in particular, have often talked about in relation to social work) and the ‘Human Learning Systems’ approach of Toby Lowe et al in their work on commissioning in the context of complexity. The former encourages us to consider how our actions filter up and down each level of a system; the latter urges us to re-evaluate the usefulness of traditional public management approaches when working in complex systems. Toby and others challenge the assumption that metrics and key performance indicators create accountability, arguing that accountability involves working collaboratively towards a shared purpose, wherein learning and adaptation (not outputs) are the prize. These messages have felt particularly pertinent as we design the application process for the TCE Support Programme.
Whilst any investment in this area is warmly welcomed, there can be unintended consequences to the multitude of initiatives and funding streams relating to child exploitation and extra-familial harm. One clear message from the scoping activity we undertook between May and September is that local areas can feel overwhelmed by the amount of effort required to bid for opportunities. Some colleagues were concerned that support goes to those who can ‘write the best bids’. Furthermore, funding is sometimes bound by narrow criteria that don’t reflect the complex and emergent issues local areas are grappling with.
We faced a dilemma here: with finite resource available to deliver TCE projects some sort of application process is necessary, but how to design something that doesn’t exacerbate pressures within the system? How to ensure that we don’t mirror the very thing that we are seeking to disrupt at national and local level?
The established wisdom is to have a scored assessment approach. We would need to weight certain answers and develop a scoring matrix. In terms of transparency, it would be important to publish the scoring matrix…and in our efforts to be co-productive we would need to consult on the scoring matrix…and in the interests of rigour we would need to seek feedback from a representative group, and their feedback would also need to be weighted. Before you know it, we’ve spent three months developing selection criteria and have created another machine for local areas to feed.
The assumption is that scoring applications ensures objectivity. As with scored risk assessment toolkits, they provide something firm and measurable – an attractive prospect when faced with messy human issues. Professor Sarah Brown’s work on child sexual exploitation risk assessment helpfully highlights the problems with this approach (see the EIF study and CSA Centre study). Scoring can imply scientific rigour, but in fact it is often a highly subjective process, applied inconsistently. The tendency is to think that scored assessment processes can insulate us from criticism. But – recognising parallel processes – we are trying to model defensible, not defensive, practice. Just as the practitioner working with a child at risk must apply professional judgement, not simply rely on a simplistic tool, we too need to be able to defend our thinking and decision-making without relying on arbitrary scores.
So, in what feels like an almost radical move, we won’t be scoring applications at all. We will be discussing them as a group, challenging each other’s views, and using our judgement about which areas we can most usefully help. Of course, we will then look over each cycle of applications to try to ensure we have a diverse mix of local areas.
We don’t want perfectly crafted bids. We don’t want colleagues spending their weekends trying to present their ‘best side’. What matters is that in the applications:
- We can see you’ve had a meaningful multi-agency conversation before applying and have shared goals for the project.
- You are able to articulate the change you are aiming for, and have a realistic sense of how the TCE Support Programme can contribute to this.
- Senior leaders are supportive of the work.
- The local area partnership is ready and able to make the most of TCE support – these projects will involve effort on the part of local areas.
- You are open with us about your challenges; these are small projects and we need to understand your circumstances so we can hit the ground running.
And, in the spirit of learning and adaptation, if this approach doesn’t work and local areas don’t find it helpful, then we’ll invite you to help us change it. It feels exciting to be learning together, and doing differently, with the sector.