Outcomes not outputs
This is the eighth post in a blog series looking at the lessons I’ve learned from a recent review of the payment by results literature. Perhaps the most common reason to commission by PbR is to ensure that a service makes a real difference. PbR schemes focus on outcomes — for instance the number of people getting and keeping jobs — rather than outputs, the number of CVs and job search plans. This post examines what the literature says about defining and quantifying outcomes.
[button-blue url="https://www.russellwebster.com/Lessons%20from%20the%20Payment%20by%20Results%20literature%20Russell%20Webster%202016.pdf" target="_self" position="left"]You can download the full literature review here[/button-blue]
Defining and quantifying outcomes
There is a very strong consensus within the literature that the outcomes set in a PbR contract (and the related incentives, which we will examine in a few weeks' time) have a very strong influence on the way in which the service is designed and delivered.
Although there is no agreement on the best approach to structuring outcomes, a number of clear themes emerge from the literature about the factors which influence the appropriateness of the outcomes set.
Unsurprisingly, many commentators prize clarity in outcome measures, stressing in particular the importance of ensuring that measures are meaningful to providers and, ideally, capable of being understood and monitored using existing data recording systems.
There is much less agreement in the literature about how simple or complex outcome measures should be.
Several government departments and researchers have placed emphasis on keeping outcomes simple and understandable, arguing that the more complex measurement becomes, the less helpful it is, leading to a focus on the numbers themselves rather than on the purpose of the service.
Conversely, other commentators point out that many PbR schemes are designed to tackle entrenched social problems with end-users often requiring co-ordinated and extensive interventions from a range of providers, making it hard to define outcome measures which are both simple and accurate.
Some argue that in this sort of scheme, outcomes need to take into account “distance travelled” – how much progress individual service users have made towards their goals. The risk here is that providers then inevitably start to focus on these milestones or targets, rather than on the original outcomes.
A good illustration of the debate around whether outcome measures should be simple or complex comes from the reducing reoffending sector. Commissioners have sometimes proposed that reoffending should be measured using a binary approach – paying providers on the simple fact of whether individuals reoffended or not. However, most offenders give up crime over a period of time – reducing the frequency and seriousness of their offending – rather than simply stopping offending on a specific date. The danger of the binary approach is that as soon as an individual reoffends, there is no incentive for the provider to continue to provide them with a service, since they can no longer earn any income from that individual.
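The incentive problem with the binary measure can be made concrete with a small sketch. This is purely illustrative – the function names, tariff and offence counts are all hypothetical, not drawn from any actual PbR contract – but it shows how a binary measure pays nothing for a substantial reduction in offending, while a frequency-based measure rewards progress:

```python
def binary_payment(offences_after, tariff=1000):
    """Binary measure: pay the full tariff only if there is no reoffending at all."""
    return tariff if offences_after == 0 else 0

def frequency_payment(offences_before, offences_after, tariff=1000):
    """Frequency measure: pay in proportion to the reduction in offending."""
    if offences_before == 0:
        return 0  # no baseline offending, so nothing to reduce
    reduction = max(offences_before - offences_after, 0) / offences_before
    return round(tariff * reduction)

# An individual who goes from 6 offences a year to 1 has made real progress,
# but under the binary measure the provider earns nothing for them.
print(binary_payment(1))           # 0
print(frequency_payment(6, 1))     # 833
```

Once that individual reoffends for the first time, the binary scheme pays zero whatever the provider does next, whereas the frequency scheme still rewards keeping further offending down.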
Although there is no agreement in the literature about exactly how to define an appropriate outcome measure, there are a number of recommendations:
- Even though simple measures may be desirable, it is often necessary to go through quite a sophisticated and complex process analysing current and likely outcomes before arriving at a clear definition of an appropriate outcome measure.
- Measures which are found fit for purpose are often co-produced by commissioners, providers and, sometimes, service users; or, if not co-produced, are often the product of lengthy discussions and negotiations.
- For these discussions to be honest, and therefore productive, it is best that they take place pre-procurement and involve a number of potential providers.
When setting the number of desired outcomes, it is important to base predictions on existing service provision and to recognise that performance improvements will take place over time, not immediately on contract award. When historical baselines are not available, performance can be measured by "yardstick comparison", comparing the outcomes achieved by different providers (as in the Work Programme).
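A yardstick comparison of this kind can be sketched very simply. The figures and provider names below are invented for illustration; the point is only that, absent a historical baseline, each provider's outcome rate is judged against the rates its peers achieve on comparable cohorts:

```python
def yardstick_benchmark(outcome_rates, provider):
    """Benchmark one provider against the mean outcome rate of all the others."""
    others = [rate for name, rate in outcome_rates.items() if name != provider]
    return sum(others) / len(others)

# Hypothetical job-outcome rates for three providers on comparable cohorts.
rates = {"Provider A": 0.32, "Provider B": 0.28, "Provider C": 0.36}

benchmark = yardstick_benchmark(rates, "Provider A")
print(round(benchmark, 2))                     # 0.32
print(rates["Provider A"] >= benchmark)        # True
```

The obvious caveat, which the literature also raises, is that yardstick comparison only works if providers' cohorts really are comparable; otherwise the benchmark penalises whoever takes on the hardest cases.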
Commissioners will want to take into account "deadweight", a straightforward technical term which refers to outcomes which would have happened anyway, without any publicly funded intervention. By way of illustration, a proportion of offenders will go straight without the help of the probation service, and a number of long-term unemployed people will find work without any official information, advice and guidance.
Therefore, the research recommends that commissioners ensure that they factor deadweight into the outcome measures they set to avoid paying for achievements which would have occurred anyway.
In any PbR contract, providers will invariably focus intensively on the specified outcomes in order to ensure that they get paid. Not only do outcomes therefore have to be accurate and realistic, but commissioners need to recognise that providers will quite reasonably de-prioritise work which is not governed by an outcome. Providers, for their part, need to be confident they can meet the specified outcomes before deciding to tender.
Discussions between commissioners and potential providers prior to the procurement process can be invaluable.
Next week's post turns our attention to how to measure and verify outcomes, striking a balance between reliability and affordability.
I reviewed the literature as part of a project funded by the Oak Foundation to develop an interactive tool to assist commissioners and providers to decide whether a payment by results approach might be an effective approach to commissioning a particular service.
The tool is now live – please check it out at: www.PbR.russellwebster.com