Using integrated data to inform design, reach, quality and performance

Published: 02/07/2025

The Lambeth Early Action Partnership (LEAP) team talk us through 10 Years of LEAP in their presentation 'Developing an Outcomes Framework for Early Years Services: Using integrated data to inform design, reach, quality and performance'.

Sarah Rothera, a Research in Practice Associate, recently engaged with the Lambeth Early Action Partnership (LEAP) and their 10-year journey to transform the lives of babies and toddlers and their families.

The Children’s Information Project Learning Network is funded by the Nuffield Strategic Fund and hosted by the University of Oxford in partnership with the University of Sussex, the London School of Economics and four Local Authority partners.

In this podcast episode, Claire Dunne and David Wood talk us through:

  • The LEAP project.

  • The opportunities and challenges of developing a shared system of measurement, including:
    • Aligning priorities and vision across a diverse and distributed network of people.
    • The importance of leadership culture for bringing people together and achieving a shared vision.
  • The opportunities and challenges of measuring soft outcomes.

[Intro] 

Sarah: So, hello and welcome to the Children's Information Project Podcast. My name is Sarah Rothera, a Research in Practice Associate, and today we'll be taking a closer look at the Lambeth Early Action Partnership, or the LEAP Project. Today, we will hear from David Wood and Claire Dunne from the National Children's Bureau. David was involved in working alongside services and setting up a shared measurement system, and Claire led on evaluation and research for the LEAP project. Together, they bring a wealth of knowledge about what it takes to bring people together to agree and measure outcomes across a diverse and distributed network of people. Welcome Claire and David, thank you very much for joining us today. 

David: Hi.

Claire: Hello.

[What is LEAP?] 

Sarah: Claire, I wonder if you can start off by telling us a little bit about LEAP? 

Claire: Great, absolutely. So, LEAP was one of five local partnerships that formed A Better Start, which was a ten-year programme funded by the National Lottery Community Fund. So, the whole programme ran between 2015 and 2025, and the overall aim of the programme was to improve the life chances of babies, very young children and families in some of England's most economically deprived areas. So, the five sites included Lambeth, Bradford, Blackpool, Nottingham and Southend. So, the LEAP Project was the Lambeth partnership, and our programme focused on four areas within Lambeth: Stockwell, Myatt's Field, North Brixton and Tulse Hill. LEAP was hosted by the National Children's Bureau, so we worked with a wide range of children, families, practitioners and organisations across Lambeth. And, over the ten years, we funded and supported over twenty local services to meet the needs of families, from pregnancy through to the early years of childhood. And, overall the programme worked to give thousands of children in the LEAP area, aged between zero and three, a better start in life. And, how the programme worked was, LEAP was a collective impact initiative, which meant that all of our services and activities linked together, and we all shared a goal, a shared aim, to improve outcomes for very young children.

[The data platform] 

Sarah: It was a huge project, and as part of that work you built a data platform. David, maybe you could tell us a little bit about the platform that you built? 

David: So, basically the premise behind the platform, when we first started, you know, we kind of shied away from it slightly, because it was such a big task, and I don't think we realised initially how much data would be involved and how much of an important role it would play in all of our work. When we first set up, it was very much around implementing services. But then, it was evident that, you know, not all services had databases, so it was a bit of a snowball effect into realising the importance of data across the programme. So, then we realised that if we did get sets of data, it would all be siloed data, because it wouldn't be joined up. So, we needed something to show our programme reach, so that's how many people were involved within the programme, not how many different services they might've been involved in. And, we couldn't even tell which services the same people were involved in. So, that was the initial driver, but then as Claire said, with the collective impact, it seemed like a good opportunity to not only have that reach figure, but also have a better understanding of who is accessing which services and what's the collective impact of multiple service involvement. So, ideally, you know, the theory being that outcomes should improve the more services you access within a programme or set area, so that was the idea and it was very complex, I'm not going to lie. So, a lot of investment went into it, and we had to come up with a proof of concept initially around the platform, and what did we want out of it, so a business case really around what it would do. And then, we realised it had to focus around three areas, which would be user data, so that's your characteristics, contact details, etc. Although, that wouldn't be part of the platform itself because there would be a lot of consent involved around that. And then, you'd have engagement data and then you'd have outcome data.

And, of course, what Claire is talking about, collective impact, is focusing primarily around outcome data, because that's the difference it's made, that's the impact, the important bit if you like. But, the rest of the stuff was really important, so it was really important to understand where people are engaging, which services they're engaging with. Some services were lighter touch, like community engagement type things, so that would be the inroads into other services, through other referral pathways, etc. So, we wanted to collect as much of that information as possible, so that meant setting up each of the different providers. So, we had, like, 20, sometimes 30 projects on the go at one time, so it was all these different sets of data coming into one place and making sure that we could have a read across to say, 'Ok, we know that person has attended that service. And, through a pseudonymisation approach, we are able to identify that person in other services as well.'

So, I'll just talk about the pseudonymisation approach to get it out of the way, because it's a bit technical. So, you have different criteria of data, so ours would've been, say, the gender of a child. It could be the parents' phone number, or the parents' email address, and then you might have the first three letters of a child's name, because you get twins. What's really annoying is when the twins are named very similarly to each other, because it doesn't really work. But, generally yes, so that'd be the focus. Once all of these bits of criteria are lined up in the same kind of way, with the same format - so, whether it's upper case or lower case. You know, you have to have lots of validation areas within each cell, to make sure that when they come through this kind of sausage maker, or pseudonymisation software, what comes out the other side is exactly the same code.

So, as long as those same bits of information at the start are the same and in the same format, when it comes through, it should always come through with the same code. You can't reverse engineer this, so that's what helped us to get around some real consent, or data governance issues, or concerns, especially from Caldicott Guardians. So, NHS providers are probably the toughest people, along with refuges for obvious reasons, to make sure that their data is kept safe and you only take what's needed for your analysis or evaluation. So, that was the idea and then we were setting that up within a platform. So, lots of programming went into it initially, and testing. But, the important thing is around the user data sets, so the characteristics, and those would be the formation of the pseudonymised bits of information or code, that would then throw out this, kind of, 60-digit bit of code, which just looks like nonsense to other people, but that code matches across services and then you can tell how many people you have reached throughout your programme. So, that's the premise of it, and that was the format that we undertook with the platform. So then, it was the challenge to get all of these services set up, using the right software. If they weren't very technical, then we had a team that could help them with their data. So, you have to be really careful with that as well, so sometimes it meant my team working on their laptop to get the spreadsheet in the right place, to make sure it's pseudonymised before it comes through to us. Or, sometimes they would send it through without the pseudonymisation, because we had a data sharing agreement for that. Sometimes that would be, kind of, a Children's Centre database, and because the platform was housed within the council, that wasn't so much of a risk. So, it's council data within a council data environment. Or, sometimes there would be smaller charities that didn't have that capability, so they're out there to engage with the community, not to set up pseudonymisation approaches to their data.

And, getting them to collect data sometimes was a challenge anyway. So, yes, kind of small challenges that built up. It was quite a beast to manage, but we managed to get it done in the end, and it did work. So, yes, it was a long journey getting there, it took, like, three years to get that set up and in place and working efficiently.
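For readers who want a concrete picture of the pseudonymisation David describes, a minimal sketch might look like the following. This is an illustrative assumption, not the software LEAP actually used: the key idea is simply that every provider normalises the same fields in the same way (case, spacing, format) and passes them through a one-way hash, so the same family always produces the same code and the code cannot be reversed to recover the original details.

```python
import hashlib

def pseudonymise(child_first_name: str, child_gender: str,
                 parent_phone: str, parent_email: str) -> str:
    """Derive a stable, non-reversible code from a fixed set of fields.

    Every provider has to apply exactly the same normalisation; otherwise
    the same family will come out with different codes in different services.
    """
    # Normalise each field: lower-case, strip whitespace, first three
    # letters of the child's name only, digits only for the phone number.
    name_part = child_first_name.strip().lower()[:3]
    gender_part = child_gender.strip().lower()
    phone_part = "".join(ch for ch in parent_phone if ch.isdigit())
    email_part = parent_email.strip().lower()

    combined = "|".join([name_part, gender_part, phone_part, email_part])
    # A one-way hash (here SHA-256, giving a 64-character code) cannot be
    # reverse engineered back to the underlying details.
    return hashlib.sha256(combined.encode("utf-8")).hexdigest()

# The same person always produces the same code, however the details were typed:
code_a = pseudonymise("Amelia", "F", "07700 900123", "Parent@Example.org")
code_b = pseudonymise(" amelia", "f", "07700900123", "parent@example.org")
assert code_a == code_b
```

In practice a keyed hash (for example, HMAC with a secret shared only between the partners) would usually be preferred over a plain hash, so that someone who happens to hold the same details cannot regenerate and match the codes; the sketch above is only meant to show why consistent formatting across providers matters so much.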

[The process] 

Sarah: So, what you've described there is a distributed network of people who are at different levels of data literacy and have different data standards, and who have competing priorities as well. Talk us through a little bit what that process looked like in terms of getting people together to think about the way forward, to think about what data was going to be collected, how it was going to be collected, and to help prioritise what was most important. 

David: We had some consultants that we worked with who had previously worked with Lambeth Council, which worked for us, because they knew the local council systems, and that helped with networks and getting the right people in the room. Can't overemphasise that enough. But basically, we would've had, I suppose, sessions, focus groups around what data people were already collecting, what wasn't being collected, what it would take if we wanted to get them to start collecting, say, outcome measures that they weren't collecting, or even engagement measures. And then, how would they record that. And then, beneath that, it was around the time of GDPR [General Data Protection Regulation] fortunately, so we really made sure that all of our documents were GDPR proofed, so the consent processes, the right to withdraw consent, all of these things that were a little bit shoddy before then, we'd actually really sharpened them up, so we were really on it with regards to data management, data security and also data sharing agreements. We subsequently got everybody that participated in the programme to sign those, as in each organisation, and then as a result of that we could focus more on what the outcomes were that we wanted to collect. And, we set up an outcomes database for some of those outcomes; it didn't always work for some of the outcome areas, so we just set up a different on-boarding route to accept other outcomes too. So, it was tailoring to each organisation's ability, or what was already available. And then, when it came to deciding what outcomes were to be collected, I'm going to hand over to Claire, because she was pivotal to that process.

Claire: Great, thanks David. So, yes, it was definitely quite a complex and involved process of deciding what were the outcomes to collect, and also we were led very much by deciding what was proportionate to the services that were being delivered. So, it was a long process. Essentially we started with commissioning an evidence review of early childhood development outcomes and place-based approaches, to understand the previous ten years of published literature, what really works to improve early childhood development outcomes. So that we were really focused on the entire programme's theory of change. So, the idea was that we would have a really solid, evidence-informed programme theory of change, that would inform those long-term outcomes that would then be shared and mapped across all the services that were being delivered. So, there were some criteria from the funders, so the National Lottery Community Fund, that specified a few strands that we had to focus on in terms of outcomes for children. So, that was social and emotional development, communication and language development, diet and nutrition, and systems change. And then, also a maternity strand which we included. So, those were the overarching areas, and there were active services working to improve outcomes in those areas. But, what hadn't necessarily happened as the services grew and developed, is that those outcomes weren't necessarily aligned, so it might be that a couple of services were aiming to improve attachment between the parent and infant, but they might be measuring that in a slightly different way. So, what was really important was to try and seek alignment across the 20, 30 services that were being delivered. And, the way of doing that was to have this overarching programme theory of change, so that we could see the direction of travel that people were aiming to work towards. Like, that ultimate shared goal of improving outcomes for all children, and narrowing inequalities in outcomes.

So, essentially trying to support those at greatest risk of poor outcomes. So, essentially, the programme's theory of change was looking at the various points at which services were supporting families, practitioners and children, so looking at the outcomes along the way that were being supported. So, in order to reach that decision and make sure that the outcomes were acceptable and pragmatic and practical for practitioners and families, we had conversations with the practitioners, with services, with local families, with senior managers. So, looking at doing basically an audit of the existing outcomes and measures, and then looking at where there was alignment. And then, we would have a process of then reshaping the theory of change and our suggested measures, again based on evidence around outcome-based measurement tools. And then, we'd have consultations with local families and senior managers and some national measurement experts, to make sure that, where it was possible for the longer-term outcomes, we had standardised measures. So, that was kind of our gold standard; if and when it was possible to include those standardised measures, that's what we were hoping for. But, for some services that were much lighter touch - like, say, for example, there was a breastfeeding service, and that breastfeeding service could be more intensive, but for some mothers who came to the service and were looking for support, it might've been a 30 minute conversation, or meeting with a practitioner to support the breastfeeding, and that would've been successful. So, it wouldn't have been proportionate to then ask them to complete a measure or feedback form, or whatever it was. So, we had to really tailor the way that we were implementing those measures. But, the goal was that all of the measures were aligned to the outcomes that services were aiming to achieve.

[Leadership culture] 

Sarah: What you've described there, it sounds like you brought together a lot of different services and a lot of different people who were working on the same thing, but there's a lot of overlap in there. But, there are also a lot of competing priorities as well, in terms of people and their own outcomes; they were working towards their own particular goals and measures. You mentioned there that there was some work that needed to be done around aligning the priorities, to be able to build the outcomes framework. And, it sounds like leadership culture played a big role in that. Tell us a little bit more about the leadership culture, what that looked like and how the leaders actually came together to drive some of this forward, and agree on what matters most and what has the greatest impact. 

Claire: So, definitely, yes, when we first started this work, I think the LEAP programme was maybe in its third or fourth year of running, and so there were already lots of services happening, lots of practice, lots of data collection. And so, we really took stock initially of what was happening first of all, and also then realised that it was in services' interest to evaluate and monitor their impact. Because, they obviously want to see whether what they're delivering is effective and to what extent it's effective and for who it's effective. So, there was this, like, inherent interest in services in measuring and monitoring and documenting their learning, but there was, like, a huge variety in the ways that that was happening. So, in my initial role in evaluation and research, people were asking for support and they wanted bespoke evaluations of their service. And, you know, they were really keen to learn and document and evidence what they were doing, but essentially, at that point in 2018, it was a small core team that we had that was supporting all of these services. And so, we kind of had to say we want to assess the impact of the programme as a whole, and that will show what services are contributing. But, we need to evidence the whole programme collectively. So, there was a huge piece of work around hearts and minds, to engage, you know, service leads as you were saying, Sarah. It was definitely this sense of trying to bring a more cohesive culture around measurement and learning, because there were these two competing priorities: individual services wanted to evidence their impact, but then the LEAP programme and our funding, you know, required us to show the impact as a whole. So, there was a renewed strategy around measuring what matters the most, and the way that we tried to communicate with practitioners and service leads was that they were collecting a lot of data at this point.

A lot of key performance indicators and monitoring data, and what we were trying to emphasise to them was a slight shift towards looking at outcomes and impact data. So, we were saying, you know, hopefully by tailoring what we're measuring, this will reduce the burden of data collection for them, and that was a key message that we were trying to deliver: that by having this, kind of, alignment across the programme, there should hopefully be more support centrally. So, like David said earlier about his team supporting people with data collection, that was something that we were then better placed to offer. Because, we were using the same measures, there was consistency, there were shared databases. Whereas, I think initially in the programme, it was a lot more organic the way it developed. And, our director, Laura, always referred to the programme as building the boat while you're sailing it, kind of thing. Like, a lot was happening at once, but also we were trying to get these systems and processes in place so that we could actively support practitioners and service leads. And, I think one message that we were using was that we wanted to make really evidence-informed decisions, so that we could best support practitioners to measure what mattered, and so that we could align that also to national data sets that were being collected. So that ultimately, when we came to do our final programme evaluation, we could compare the LEAP area and the impact that we were having with the more national data, the administrative data sets that were being collected. So, that's why it was really important to make that alignment, and that was how we were trying to influence. But, it took a long time and I think we did factor time in for that, and we definitely did lots of consultation and lots of the practitioner forums that, again, David mentioned earlier; they were really powerful and informative, but there was some resistance.

I think you mentioned it earlier Sarah, this idea of the variety of data literacy and standards and practices. That was one of the bigger challenges that we had: you're working with, on the one hand, psychotherapists who are used to delivering standardised measures, you know, 50 times a day, and then, on the other, voluntary sector organisations who just don't have that requirement, or experience necessarily. And, David's team was instrumental in supporting people no matter what stage they were at. This was the initial work around, like, hearts and minds, but then we did a lot of work on the implementation of the shared measurement system, in terms of offering training and frameworks.

David: Also as well, don't forget, there were some measures that were already being collected. So, if you've got, like, a national organisation like HENRY, and you're commissioning them, they will want to use their own measures, because they want a national view of the impact of their programme. So, they want like-for-like as well. So, sometimes it was a bit of a battle. Empowering people, empowering communities was another one where we didn't quite get what we wanted, and I think that comes down partly to the commissioning process, and clarity around what you want from your shared measurement system, if you're going to commission services to be part of that. So, that's a learning point for us.

[Information governance] 

Sarah: Yes, and there are a lot of practical steps to be taken to develop a shared measurement system in that way, and you've spoken to some of those just now. Build on that picture for us a little bit: walk us through some of the information governance steps that you took, as well as thinking about some of the types of data that you were collecting and why they were meaningful to the project? 

David: We had to come up with data sharing agreements for all the organisations that we were working with. Obviously, the easiest sets of data were ones where NCB [National Children’s Bureau] were managing them, because they didn't need that. But, there were challenges, so sometimes we had to pay for the adaptations for the data that we wanted. Charities don't always have finance for that kind of research and evaluation. Most of the time, it could be a real struggle with health partners, because even though we might have the data sharing agreement signed off, the people that still held the data - getting them to send their data was painfully slow. So, that was a problem; we had to go very senior to get what we wanted in the end. So, it's horrible to have to do that, but needs must, and your evaluation is dependent on it. But, most organisations were quite compliant; you know, there's something about levels of maturity and understanding of data protection within some organisations. So, we were very, very clear, very clear around our privacy statements and the tools that we were using to collect the data, and where those could be electronic monitoring systems, that was even better. And, as we found out after COVID, you know, a lot of the lighter touch activities, or even some parenting programmes, they would be set up so that you could book your place online, and then you could ask all the questions that we would ask, generally. Because people just want to attend, so, like, when they turn up somewhere and they've got to fill out this form, you might get a name and a number and that's it. Not the ethnicity or age group, etc. So, that worked well for us, putting everything online, so that would be another top tip. Get everything electronic as soon as you can, because it saves a lot of paperwork and processing. So, yes, the processes could be difficult at certain stages. So, another example would be, because we were using a new piece of software, which was such a simple file, which was never going to be corrupt, etc.

Getting that through NHS, kind of, testing processes would be just the same as if you were going to introduce a really big database that was case management. So, they would treat it on a par with that, and then we had to explain why it was a very, very low priority for them, and a very low amount of resource was needed to test that. So, once you got that across the line, that made it a lot easier too. So, there were some real challenges - you're not part of the day job in these kinds of programmes. So, if you were, then you'd be adhered to a lot quicker within certain management structures. So that's always a challenge, in local authority as well as NHS systems, but generally, yes, when you're paying people money to do stuff, then it does eventually get done. There were some areas where that wasn't always possible, because we might be asking childcare settings that are struggling for information without giving them anything for it. We did try and pay them for data sometimes, because we were quite a cash rich programme, so we had that privilege, but you wouldn't get that if you worked in a local authority. So, there were opportunities for us, but there were still challenges around that. So, if you're paying somebody to deliver a service, you can say to the staff in that service, 'We need you to do this.' And, generally they'll comply. If you're doing something with a childcare setting and you're saying, 'I need you to collect 15 welcome measures this week.' You know, they might have other priorities, so they might not do that. And also, we're not paying them any money towards that, so there is a little bit of incentive needed sometimes, and then probably that comes back to the leadership within the setting. If they really believe in it, and see the value, then they'll do it. Part of our process, once we'd collected the data, would be to present it back to them, their monitoring essentially. So, it was a bit embarrassing: for the first couple of years, we couldn't really do that, because we didn't have the facility or the resource within the programme.

So, there was a reliance on the project manager, when they were sending us their data, to have kind of a count of, I don't know, how many people from a particular background, or what age group was the most prolific within their service. Then, because we had the data as part of the platform, we were able to feed that back to them with these quarterly service reports. And then, our quarterly service meetings would be really richly informed by that. So, it was a real quick win for us in the sense that suddenly we could identify people who were not engaging in that cohort of the community. Or, geographically, you know, with that community engagement programme, we could suddenly see through a heat map, 'Oh, there's a big gap there, why aren't we…' And you know, ‘We know that there's an area of deprivation there that could really benefit from the services that we're offering.’ So, it was about overlaying different sets of data to give us different perceptions or understandings of levels of engagement, you know, the type of the community that were engaging with us. So, through kind of ethnicity, age, where we knew there were children as well. So, it did give us richer information, which fed back into our engagement strategies, and we could challenge people then as well, when it came to service reviews, saying, 'Why aren't you engaging more people from black and global majority backgrounds?' You know, we know that we've identified that they're a bigger cohort in the IMD [Index of Multiple Deprivation] quintiles one and two. So, we had all that information to hand and we could show that back to them. Now, we wouldn't always find the answer to why, or money wouldn't be able to resolve the issue, because some people just weren't attracted to certain services, even though we tried to find out why that was the case. So, yes, those were quick wins. And then, when you get to the outcomes element and you get to see the impacts that you're having, that helps frame it all into place. Feeding that information back to them was a really useful tool, partly for motivation. You know, you can see the difference that you're making and they know that as practitioners. But, there's a very strong evidence base there of the difference that you're making to people that are accessing your services, and the linking of that was the kind of icing on the cake.
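To make 'overlaying different sets of data' a little more concrete, here is a small, hypothetical sketch of the kind of analysis described above, once every provider's extract carries the same pseudonymised code. The file names, column names and the pandas approach are assumptions for illustration, not LEAP's actual reporting pipeline.

```python
import pandas as pd

# Each provider sends a pseudonymised extract: one row per attendance, with
# the shared code plus the characteristics agreed in the data sharing agreement.
# Assumed columns: pseudonym, service_id, ethnicity, imd_quintile
extracts = [
    pd.read_csv("service_a_engagements.csv"),
    pd.read_csv("service_b_engagements.csv"),
]
engagements = pd.concat(extracts, ignore_index=True)

# Programme reach: distinct people, not a count of attendances.
reach = engagements["pseudonym"].nunique()

# Multiple-service involvement: how many different services each person touched.
services_per_person = engagements.groupby("pseudonym")["service_id"].nunique()

# Who is (and is not) being reached: break distinct people down by ethnicity
# and IMD quintile, to compare against what is known about the local population.
profile = (
    engagements.drop_duplicates(subset="pseudonym")
    .groupby(["ethnicity", "imd_quintile"])["pseudonym"]
    .count()
    .rename("people_reached")
)

print(f"Programme reach: {reach} individuals")
print(services_per_person.value_counts().sort_index())
print(profile)
```

A gap between an engagement profile like this and what is known about the local population (for example, the cohorts in IMD quintiles one and two mentioned above) is the sort of finding that fed into the engagement strategies and service reviews David describes.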

[Measuring soft outcomes] 

Sarah: What struck me about the LEAP shared measurement system is that it was a really good combination of hard outcomes and soft outcomes. And, there was data in there that was speaking to social and emotional development, and things like parents' knowledge and skills and confidence. Measuring soft outcomes is really challenging, particularly when you're working with a diverse group of services in the way that you were, where there are all kinds of challenges in terms of reliability of collecting that information, how it's reported, how it's stored, all of those sorts of things. Could you talk us through some of the opportunities and challenges you had with measuring soft outcomes? 

Claire: Yes, definitely. I think, yes, this is something that came up when we worked with the consultant on our final decision-making round, like, what measures to… which outcomes to measure, and how to measure them. So, we did a review of outcome measures in this field, so looking at the strands that I mentioned, but also generally when working with children and families. So, there were a couple of potential measures that we looked at, I think the STAR outcome was one of them.

Sarah: The Outcomes Star. 

Claire: The Outcomes Star, that's it. Yes, which I think is used quite routinely in, like, social work, yes. We grappled with this quite a lot, around what would be the best way of measuring. Again, it was this idea of measuring progress towards the ultimate long-term outcome of each service individually and the programme overall. So, what we did was, we met with every service lead and looked at their individual service's theory of change. And, a lot of what we realised there was that the medium-term outcome that would need to be achieved, or at least worked towards, before they achieved their long-term outcome, is that there would have to be a behavioural shift. So, the way that we decided to measure that was around knowledge and confidence and skills. So, whether it was the knowledge, confidence and skills of parents, or of practitioners, that was kind of the focus that we took. Then, in terms of the monitoring, evaluation and learning frameworks that were developed for every service, based on these conversations with service leads, ultimately we developed an in-house form. Which, again, you know, has got lots of questions, and it definitely wasn't perfect in terms of the reliability and the objectivity of it. But, it was what we were really hoping it would be used for. So, it would be for any service where there were multiple time points at which practitioners would meet with families, or with practitioners. The goal was that there would be a sense of the direction of travel. So, if it was that parents were flagging that they didn't feel confident, didn't feel like they had improved skills or confidence, then it would be an indication to the practitioners leading that service that more work needed to be done with that parent. So, it definitely wasn't, you know, a standardised, scientific-level measure, but it was something that was standardised across the programme.

So, there was a requirement for all services to complete that, to have that data collection at the midpoint of their service, which varied for every single service. It might have been one week in for some, it might've been a month in for others, but there was that requirement that people engaging with the service would complete that form, and then it would be an indication of the extent to which people were gaining skills, gaining confidence, gaining knowledge in whichever area the service was targeting. Another system that we set up internally, in terms of monitoring and learning, was the service reviews that David mentioned just then. They were, like, regular meetings between a core team at LEAP and the public health team at LEAP and the service leads. So, we developed these quarterly service reviews, and the data that was shared with us would then show the measures that were being collected in that quarter, to see the extent to which parents or practitioners were saying that they were gaining skills, gaining knowledge, gaining confidence. And again, it was not a perfect science, but better than nothing, ultimately. And, I think just having that consistency across the programme was useful to inform our learning. And actually, we did two annual learning reports that looked at all of the data that we were collecting across LEAP, and our first annual learning report looked at those measures of knowledge and confidence and skills that were being collected. So, again, it was just a good indication of direction of travel, rather than this perfect sense of, yes, those soft outcomes, which are really difficult to collect. And also, particularly when for some of the services you were looking at very sensitive or vulnerable areas, such as attachment between a parent and a child, or a domestic abuse service. And so, again, like, it just kept going back to this idea of proportionality and making sure that, for the data we were asking services to collect, we could ethically justify that the questions we were asking were for a valid reason. And, that the data we were collecting was going to be beneficial, ultimately, for the service improvement and for the learning as well. So, yes, it was a tricky balance and, yes, one where I think we definitely did our best. But, there was a lot of learning around how to collect those outcomes.

David: Yes, another thing that we did to reinforce that behaviour change, I suppose, or to keep it in practitioners' minds, was around how we monitored. So, there were lots of iterations to the way that we would've monitored: very basically, initially, they would fill out an Excel file, to eventually having something that was quite slick. So, they would send us their data, in an Excel file generally, or a CSV format, and then on top of that, they would send a narrative report. And, the structure of the narrative report was very much focused around the theory of change. So, the five elements of our theory of change process, and we got them to report on each of those different elements. So, sometimes I think when you get to monitoring programmes, people are kind of looking at their… well, the KPI [Key Performance Indicator] should reflect what's on the theory of change. But, it doesn't always pick up on the nuances of the theory of change. So, we were very keen to have the theory of change at the heart of all of our monitoring, understanding how well services were doing. So, that for me seemed a bit of a no brainer, following the work of setting up these very, kind of, narrative-driven [unclear 31.20]. It wasn't just a diagram, it was all embedded with the evidence. So much time went into that with the public health team and the evaluation team which Claire led. So, that was key as well for softer outcomes, to help people focus: 'Ok, so what did you really learn from that?' You know, ‘What was a great case study this quarter dealing with that soft outcome where you saw an improvement?’ Or, where it didn't work as well; it's really important to know when you're not doing a good job, because you can course correct, hopefully, or adapt your service to improve.

[Learning from the LEAP journey] 

Sarah: Yes, and there's something in there around how you make sense of the information that you collect as well. So, it's recognising that there are a number of different factors that will influence soft outcomes; it could be that someone has woken up on the wrong side of the bed. It could be the way that the information was collected, or just how things have been going that day. You know, and when it comes to working with children, or being a parent, you know, children go through different developmental stages, and it takes time to adjust to that, so your confidence might dip. So, there are so many factors that influence soft outcomes, and that piece around how you interpret the information is probably just as important as how you collect it and what you're collecting. Sort of, more broadly on that topic of making use of the information that you're collecting: listeners can go to the main presentation and have a look at the outcome measures and all of the different outcomes and indicators and the measures that were used as part of the programme, to get a bit of a sense of the scale of the information that was being collected. While being proportionate, it's very organised, it's very structured and it's quite comprehensive in terms of working with children and families. Let's move to the wrap up: ten years of LEAP, you've learned a lot of lessons along the way, you've brought a lot of people together to build a very impressive tool and measure lots of different outcomes. Could you share some of your learning from your journey and tell us what advice you would give to someone who is starting out on a similar project? 

David: I think initially, if you're going to build something like the platform, the business case has to be about what your intentions are. Sometimes, we weren't really sure about our business case until, say, year four or five, when we went out to tender. And then, you know, get professionals involved at an early stage as well. So, we had a data architect that could quality assure every process that we set up, you know, every step of the way. He had a lot of IT [Information Technology] background too, so we wouldn't have had the platform working the way that it did without this guy. And, that made it really reassuring for me, so if you're project managing something where there are lots of technical things and you don't have that background, you need that person there, available for you to ask the thornier questions before you challenge the providers who are meant to be doing the work for you. So, that was something that I learned was a really vital asset for us.

Claire: Yes, definitely. I think, on top of that, in hindsight there's definitely something around doing a bit of an audit around the, kind of, data that's already being collected and mapping that, and working with, in our case, many providers including the local authority and several NHS trusts, to kind of have a sense of what data is already being collected, and the data that is non-negotiable from their commissioning perspective. Understanding that, and understanding as well, kind of, going hand-in-hand with what data is being collected, what skills there are. Because I think at LEAP, in the sense that we were working with NHS practitioners but also voluntary sector organisations, there was a real mix of skills and confidence around collecting data, inputting data, cleaning data, all of that kind of stuff. And, I think it would have helped had we known that a bit earlier on, in terms of what support we needed to offer internally. You know, we ended up having a bit of a restructure internally, midway through the programme, to support all of this work, because we recognised that we needed a data collection manager. You know, David needed a bigger team, I needed a bigger team - we realised that midway through. And, I think had we been a bit more ahead of that curve and done that audit a bit earlier and got that expertise on board, that process might've happened a bit quicker and a bit smoother.

David: You know, it's about taking people with you, so I said about coming up with a business case. And, you know, for the SMS [Shared Measurement System], really you need senior buy-in from your key partners, and also for data protection and setting up data sharing agreements, you need access to the people that have got a million and one things to do, especially in the NHS. So, we were lucky that we had an obstetrician who was a big fan of ours; we funded a project that was one of his passions, I suppose, around reducing maternal obesity in pregnant women. So, that really worked in our favour, so we suddenly got access to people quite high up in IG [Information Governance] within, say, Guy's or St Thomas', as a result of this guy. If we didn't have that, we would've been at the back of the queue. So, there is something about knowing the right people in the right places and knowing that you can pull on them when you need to. But only as a last resort - you don't want to pull on that power for everything. But, you know, it just really helps to have those people in your back pocket. Because, when you did need a quick decision, they were the people to unblock things, basically. And, there was a very, very strong feeling of partnership within LEAP. As I said, we didn't really have hard-nosed commissioning, we had a partnership approach. And, sometimes that has drawbacks as well, because you can't… I suppose you can't take the hard-nosed commissioning approach when people are underperforming. It's much more of a gentle approach, to say do better, I suppose, or improve, otherwise your programme will have to end. But, you know, it's a much more gentle approach; with hard-nosed commissioning, you could make horrible decisions, I suppose, in some people's eyes. Whereas, because we had a partnership approach, it was a bit softer, but it kind of worked, using that soft power to get what you needed done.

Sarah: Fantastic, thank you so much for joining us today. It has been very interesting to hear about your journey through LEAP. 

Claire: Thanks Sarah.

David: Thank you.

[Outro] 

Thanks for listening to this Research in Practice podcast. We hope you've enjoyed it. Why not share with your colleagues and let us know your thoughts on LinkedIn @researchIP and X (formerly Twitter).

The LEAP team talk us through 10 Years of LEAP in their presentation 'Developing an Outcomes Framework for Early Years Services: Using integrated data to inform design, reach, quality and performance'. The presentation provides an overview of the LEAP programme, and responds to some key questions including:

  • What is LEAP?

  • How did we establish a programme-wide outcomes framework?

  • What data does LEAP collect?

  • How do we link the data?

  • How do we use the data?

  • Analysis and evaluation of integrated data.

Length: 21 minutes.

Professional Standards

PCF - Knowledge