The Missing Middle Way: How Management by Results can help us not just measure, but also improve outcomes

Joy MacKeith argues that Payment by Results can cause as many problems as it addresses.  Management by Results, which supports ongoing learning and collaboration, is the missing middle way between ignoring outcomes on the one hand, and linking them to financial incentives on the other.

In early September I was privileged to participate in the fifth Social Outcomes Conference, organised by the Government Outcomes Lab at Oxford University. Contributions from both academics and practitioners from all over the world made for a very rich debate in which everyone had their eye on the prize of improving social outcomes.

The debate got me thinking about the limitations of Payment by Results and an alternative – an approach I am calling Management by Results.  This blogpost explains the difference between the two and how Management by Results has the potential to unlock performance improvement.

Why I am a fan of an outcomes approach

In the old days we didn’t measure outcomes.  We counted inputs and outputs.  We collected case studies.  Occasionally we commissioned evaluations or user surveys.  Then came the outcomes revolution.  I have been part of that revolution, spending much of the last 20 years helping organisations to measure their outcomes.

I am a fan because I have seen that defining, measuring, and managing outcomes enables service providers to create services with a clarity of purpose, identify issues and gaps, and ultimately improve what they deliver for service users. It undoubtedly is a good thing for organisations to focus on outcomes.

But what happens when financial imperatives are introduced into the equation?  What happens when a project or organisation’s survival becomes dependent on evidencing that they have achieved certain outcomes?

Why I’m wary of linking outcomes with financial incentives

In the employment sector, where Payment by Results (PbR) has been in operation for some time, the consequences are quite well documented (Hudson, Phillips, Ray, Vegeris & Davidson, 2010[1]). Organisations can be incentivised to focus narrowly on the specific targets that are linked to payment and to ignore everything else.

This can lead to a narrowing of their work with individuals (for example, just making sure they get a job rather than working on longer-term issues such as addiction or mental health problems that are likely to affect their ability to keep it). It can lead to short-termism, with less focus on long-term impact and sustainability. And it can lead to ‘cherry picking’ of clients who are most likely to achieve the target (also called ‘creaming’) and not ‘wasting resources’ on those who are unlikely to achieve it within the timescale of the project (also known as ‘parking’).

The fact that there are widely used terms for these kinds of gaming practices reflects how widely these perverse incentives are recognised and understood. In the financial sector, Goodhart’s Law[2] – that any financial indicator chosen by government as a means of regulation becomes unreliable – is well accepted. In the words of the anthropologist Marilyn Strathern, “When a measure becomes a target, it ceases to be a good measure”.[3]

In addition to this, there are other more subtle but nevertheless powerful impacts.  In Triangle’s work helping organisations to measure their outcomes we have seen time and again that when the impetus for this measurement is commissioner requirement, the organisation is likely to see outcomes as something that is done for the commissioner rather than something they own.

The result is that the quality of the data collected is poorer and the service provider just passes it on to the commissioner rather than mining this outcomes gold for learning and service development.  This is very unfortunate because sending outcomes information to commissioners doesn’t improve outcomes, whereas using it to better understand delivery does.

Another impact of PbR is that it focuses attention on the work of the service provider in isolation, as opposed to looking at how the service delivery system as a whole is working. In practice it is often the network of service provision that achieves the outcome, rather than a single provider.

Finally, in the market for social outcomes, providers find themselves in competitive rather than collaborative relationships, which can make system-wide cooperation and information sharing more difficult.

The missing middle way
There were several speakers at the recent GoLab conference who argued that financial incentives can work – if they are done well.  I am writing primarily from personal experience rather than extensive research, and I trust that what they say is true.  I am also aware of PbR contracts and Social Impact Bonds that have been sensitively implemented, with all parties understanding the risks and the funding mechanisms carefully designed to build the right incentives.

My concern is that too often the approach isn’t done well, and that the alternative of Management by Results (MbR) is not recognised or considered.  In our enthusiasm to embrace outcomes we have gone from one extreme of not talking about or measuring outcomes at all to the other extreme of linking payment to outcomes.  Between these two poles there is a middle ground – a third way which can unlock the potential of outcome measurement without so many of the downsides.

So what does Management by Results look like and how is it different from Payment by Results?

The Management by Results mindset
Both MbR and PbR involve identifying and measuring outcomes.  But in MbR the emphasis is on the service provider using this information in the management of the service, to identify strengths, weaknesses and issues to be addressed.  In PbR, by contrast, the emphasis for the service provider is on using the information to secure the funding the organisation needs to survive.

For commissioners MbR means requiring the service provider to measure their outcomes and then drawing on that information to assess their performance.  But crucially in MbR the commissioner draws on other information as well and has room for judgement.  PbR is black and white.  Target achieved = good, payment made. Target not achieved = bad, no payment made.

MbR allows for greater subtlety and a more rounded assessment.  The commissioner looks at the data, but they also look at the organisation’s narrative about the data.  Is it a coherent narrative? Are they learning from their data and using the lessons to improve service delivery?  What do others say about the service?  What do you see if you visit and what do service users have to say?

The commissioner draws on all this information to make their assessment.  Of course, life would be a lot easier if you didn’t have to do this and could reduce a project’s effectiveness to a few numbers.

But you can’t.

There is always a wider picture – in the employment sector, for example: what is happening in the service user’s personal life, what is happening in the local economy, what other services the person is receiving and what impact they are having. The numbers have a part to play but they are never the whole answer.

How Management by Results changes the questions and supports learning
An organisation that is managing by results will take a systematic approach to collecting and analysing outcomes data and will then use that data for learning and accountability.  The job of the manager is to ask: “Why did this work – what good practice can we share?”  and “Why didn’t this work, what do we need to change and where can we learn from others?”

The job of the commissioner or investor is to assess “Is this organisation taking a sensible and systematic approach to measuring its outcomes? And is it learning from its measurement and continually changing and improving what it does?” PbR encourages hiding of poor results and exaggeration of positive results as well as the creaming and parking described above.  This positively hinders learning and obscures what is really happening.

MbR encourages collaboration between service provider and commissioner in identifying and achieving their shared goals.  PbR obscures these shared interests by incentivising service delivery organisations to prioritise their own survival.

The table below summarises the differences:

Payment by Results: A black and white approach. Achieving the target is assumed to equate to success.
Management by Results: Recognises the complexity of service delivery and that success must be interpreted in context.

Payment by Results: Payment is linked to achievement of targets. There is no room for skilled judgement or for considering wider contextual information.
Management by Results: Outcomes information is placed in a wider context. There is room for skilled judgement.

Payment by Results: Obscures the shared goals of commissioner and service provider and encourages service providers to focus on organisational survival.
Management by Results: Emphasises the shared goals of service provider and commissioner and encourages the provider to focus on achieving intended outcomes.

Payment by Results: Encourages a gaming culture, because service providers are assessed on whether they have met the target.
Management by Results: Encourages a learning culture, because service providers are assessed on whether they are using outcome measurement to address issues and improve services.

Payment by Results: Service providers are incentivised to withhold information from commissioners and even falsify data.
Management by Results: Service providers are incentivised to share information and learning with commissioners and to problem-solve together for the benefit of clients.

Management by Results is not easy but it is worth the effort

Management by Results is not easy.  At Triangle we support organisations to implement the Outcomes Star, and in practice this means that we are supporting them to build an MbR approach.  This involves forging new habits, behaviours and organisational processes, and creating new interdepartmental links, new reports and new software.

It isn’t easy and it takes time, even for the most willing and able.  But we also see the benefits for those that stick with it – managers with a much better handle on what is happening in their services, who can pinpoint and address issues and share good practice as well as evidence achievements.

I believe that if the sector put more energy, funding and research into supporting organisations to manage by results, it would really start to unlock the potential to not only measure, but also improve outcomes.

What do you think?

[1]Hudson, M., Phillips, J., Ray, K., Vegeris, S., & Davidson, R. (2010). The influence of outcome-based contracting on Provider-led Pathways to Work (Vol. 638). Department for Work and Pensions.

[2] Goodhart, C.A.E. (1975). “Problems of Monetary Management: The U.K. Experience”. Papers in Monetary Economics. Reserve Bank of Australia.

[3] http://www.atm.damtp.cam.ac.uk/mcintyre/papers/LHCE/goodhart.html

***

Triangle is the social enterprise behind the Outcomes Star™. Triangle exists to help service providers transform lives by creating engaging tools and promoting enabling approaches. To talk to Joy MacKeith or another member of the Triangle team, or for any other information, please email info@triangleconsulting.co.uk.

September newsletter round-up

Our September newsletter included updates on the Star Online system. We also introduced two Outcomes Stars for mental health – the My Mind Star™ and the new and improved edition of the Recovery Star™ – as well as updated research for the new edition of the Recovery Star™.

Find out more

  • My Mind Star™ is an Outcomes Star for young people’s mental health and well-being, designed to build and track well-being and resilience in young people.
  • The new and improved edition of the Recovery Star™: the Outcomes Star for mental health and well-being. This Star has been designed to support and measure progress towards recovery for adults experiencing mental health issues and contains changes to make the Star more appropriate, accessible and effective.
  • The updated Psychometric validation of the Recovery Star.

Read the full newsletter here.

Contact Triangle at info@triangleconsulting.co.uk or
+44(0) 20 7272 8765 for more information on our Outcomes Stars or to
find out more about how the Star can empower service users and
keyworkers to make and measure positive change. Sign up for our newsletter here.

The interplay of internal and external factors in creating change

In this blog Joy MacKeith, one of Triangle’s founding Directors, explores the interplay of internal and external factors, particularly in the context of austerity in the UK, and sets out Triangle’s approach to the Outcomes Star. 

The context of austerity
In the 12 years since the first version of the Outcomes Star was published, the political climate in the UK has changed dramatically; funding for services to support vulnerable people has been cut and both employment and housing are increasingly insecure. 

The Outcomes Star is a suite of tools designed to help service users and service providers work in constructive partnership towards greater well-being and self-determination.  The tools are designed to be used in the context of an adequately funded service delivered by well-trained staff.  They are also rooted in the assumption that decent housing and employment are available.

Increasingly in the UK these foundations are not in place, and that has understandably led to anger on the part of some service users and service providers.  Whilst we believe that the Outcomes Star tools continue to have an important role to play in helping people to deal with the challenges they face, we also recognise that personal change on its own is not enough, and that addressing the structural issues of poverty, poor housing, insecure employment and rising inequality is essential to creating a society in which everyone can thrive and contribute.

The agency of the individual
In a climate of cuts and reduced services, some may be sceptical about the benefit of services and tools that focus on the agency of the individual.  At worst it could feel like people are being asked to ‘pull themselves up by their bootstraps’, or even blamed for their difficulties, without adequate recognition of the very real challenges that they face and the sense of despair that can build when the odds are stacked against you (Johnson and Pleace, 2016; Friedli and Stearn, 2015).  So how do skills, habits and attitudes – the main focus of the Star – interact with opportunity and life situation?  I think the Cycle of Action model presented by NESTA and OSCA in their report ‘Good and bad help’ helps to answer that question.

Their model draws on the ‘COM-B model’ developed by Susan Michie, Director of the Centre for Behaviour Change, University College London.  It shows these internal characteristics in a dynamic interaction with a person’s life circumstances: not only do difficult circumstances decrease confidence, purpose and ability to act, but these internal states can also impact on outer circumstances.

The implication is that those disadvantaged by difficult life circumstances such as physical or mental health issues, poverty, discrimination or homelessness are likely also to experience less confidence and ability to act, and that ‘good help’ which builds these aspects is likely to positively impact on life circumstances.  The report is at pains to stress that practical help is also necessary, because the external barriers are very real.  But the right kind of help – that which builds confidence, sense of purpose and ability to take action – can create a positive, reinforcing virtuous cycle of change, whilst bad help does the reverse.


The Outcomes Stars have always focused on the agency of the service user.  For many of the original Stars, including those for use in the homelessness, mental health and family sectors, the model of change revolves around a shift from an external locus of control (“things happen to me and there’s nothing I can do”) to an internal locus of control (“I want things to be different and there are things I can do to make that happen”). 

However, as time has gone on, two things have happened. The first is that we have developed versions of the Star for service user groups who have less control over their circumstances, such as children or those with profound learning disabilities.  The second is that the service delivery climate has become more challenging and therefore the external barriers faced by many service users have increased.

For these reasons we increasingly recognise the importance of acknowledging and recording the external barriers, as well as supporting the motivation and capability of the service user to overcome them. We do this in a number of ways, including our guidance for workers on how to use the Star and the introductions to the User Guides which are used directly with service users. The version of the Star that recognises the importance of external factors most comprehensively within the scales themselves is My Star – the Outcomes Star for children – in which some scales measure the child’s progress towards resilience and some measure the extent to which those caring for the child are providing what they need to thrive.

Although our intention is to keep the focus of the Stars on supporting agency and ability to act, new editions of existing Stars and new versions are developed with a heightened awareness of the importance of acknowledging the external barriers.

As the NESTA report (page 20) states:

 “‘Good help’ is not a substitute for addressing in-work poverty, structural inequalities or discrimination, but it has an important role to play in supporting people to manage the elements that are within their control.”

Our aim is that the Outcomes Star suite of tools enables good help, supports people to manage the elements of life that are within their control, and helps service providers to point to the barriers that are getting in the way, so that commissioners and policy makers can play their part in making change possible.

*****

Sources

Friedli, L. and Stearn, R. (2015) “Positive affect as coercive strategy: conditionality, activation and the role of psychology in UK government workfare programmes”. Critical Medical Humanities. This article does not mention the Outcomes Star, but the authors are linked to Recovery in the Bin, who have published the ‘Unrecovery Star’ (https://recoveryinthebin.org/unrecovery-star-2/).

Johnson, G. and Pleace, N. (2016) “How Do We Measure Success in Homelessness Services?: Critically Assessing the Rise of the Homelessness Outcomes Star”. European Journal of Homelessness. The focus of this article is a critique of the Homelessness Outcomes Star.

Wilson, R., Cornwell, C., Flanagan, E., Nielsen, N. and Khan, H. (2018) “Good and bad help: how confidence and purpose transform lives”. NESTA and OSCA.

Psychometric testing of the Outcomes Star

The Outcomes Star has been tested psychometrically. A new set of psychometric factsheets demonstrates the validity of the Outcomes Star and shows how the Star can produce informative and valuable outcomes data for commissioners, funders and organisations.

Psychometric testing tells us how confident we can be in the data produced by a measurement tool including whether it measures what it claims to measure and produces consistent scores.

Triangle has published a set of factsheets to demonstrate the psychometric properties of every version of the Star. We are also in the process of having an article validating the Family Star Plus published in a peer-reviewed journal. Dr Anna Good has produced a psychometric factsheet for each of the Outcomes Stars, presenting the findings from a number of these tests. Below she explains a bit more about the process and the importance of ensuring the Stars are tested psychometrically.

“At its essence, validity means that the information yielded by a test is appropriate, meaningful, and useful for decision making” (Osterlind, 2010, p. 89).

Psychometric validation has been used in some form for over a hundred years. It involves tests of validity (usefulness and meaningfulness) and reliability (consistency), for example:

  • expert opinion about the content of the measure
  • clustering of ‘items’ or questions into underlying constructs
  • consistency across the readings produced by each item
  • consistency across ‘raters’ using a tool
  • sensitivity to detect change over time
  • correlation with, or prediction of, other relevant outcomes
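To make the internal-consistency idea in the list above concrete, here is a minimal sketch of Cronbach’s alpha, a standard statistic for “consistency across the readings produced by each item”. The scores below are invented for illustration and are not real Outcomes Star readings; Triangle’s own factsheets report a fuller range of statistics.

```python
# Illustrative sketch of Cronbach's alpha (internal consistency).
# The sample data is invented, not real Outcomes Star data.
from statistics import pvariance

def cronbach_alpha(readings):
    """readings: one list per person, one score per item."""
    n_items = len(readings[0])
    # variance of each item's scores across people
    item_vars = [pvariance([person[i] for person in readings])
                 for i in range(n_items)]
    # variance of each person's total score
    total_var = pvariance([sum(person) for person in readings])
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Five hypothetical people scored on three 1-10 scales
scores = [[3, 4, 3], [6, 5, 6], [8, 8, 7], [2, 3, 2], [9, 9, 8]]
print(round(cronbach_alpha(scores), 2))  # → 0.99, i.e. highly consistent items
```

Values of alpha near 1 indicate that the items move together and plausibly measure one underlying construct; very low values suggest the items do not belong on the same scale.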

Why is it important to test the Star psychometrically? What are the benefits of testing the Outcomes Star?  What’s the background to the research?
Triangle recognises the importance of having ‘evidence and theory support the interpretations of test scores’ (AERA, APA & NCME, 1999, p.9), both because we are committed to creating scientifically sound and useful tools and because policy advisors, commissioners and managers require validated outcomes measures and want assurance of a rigorous process of development and testing.

The validation process is an important part of the development of new versions of the Star – we need to know whether the outcome areas hang together coherently, whether any outcome areas are unnecessary because of overlap with other areas, and whether any have readings that cluster at one end of the Journey of Change.

Once there is sufficient data, we also conduct more extensive psychometric testing using data routinely collected using the published version of the Star. This is beneficial for demonstrating that the Star is responsive to change and that Star readings relate to other outcome measures, which is important both within Triangle and for evidencing the value of our tools externally.

What was involved in producing the psychometric factsheets?
The initial validation work for new Stars is conducted using data from collaborators working with Triangle during the Star development and piloting process. It involves collecting Star readings and asking service users and keyworkers to complete questionnaires about the acceptability of the Star and how well it captures service users’ situations and needs.

The further testing of the published version uses a larger sample size of routinely collected Star data and assesses the sensitivity of the Star in detecting change occurring during engagement with services. Whenever possible, we collaborate with organisations to assess the relationship between Star readings and validated measures or ‘hard outcome measures’ such as school attendance.
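As a rough illustration of what assessing “sensitivity in detecting change” involves, here is a minimal sketch comparing paired first and latest readings. All of the numbers are invented for illustration, and real analyses would use larger samples and more formal tests.

```python
# Illustrative sketch: sensitivity to change via paired first/latest readings.
# All values are invented, not real Star data.
from statistics import mean, stdev

first  = [2, 3, 4, 2, 5, 3, 4, 2]   # hypothetical first readings (1-10 scale)
latest = [4, 3, 6, 3, 7, 4, 5, 2]   # hypothetical latest readings, same people

changes = [l - f for f, l in zip(first, latest)]
effect_size = mean(changes) / stdev(changes)   # standardised mean change
print(f"mean change: {mean(changes):.2f}, effect size: {effect_size:.2f}")
```

A tool that is responsive to change should show a clear positive mean change, with an effect size large enough to distinguish genuine progress from noise.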

We have also been working to assess consistency in workers’ understanding of the scales using a case study method. This method is described fully in an article published in Housing, Care and Support (MacKeith, 2014), but essentially involves working with organisations using the Star to develop an anonymised case study or ‘service user profile’, and comparing the readings assigned by trained workers with those agreed by a panel of Star experts. The findings tell us how consistent and accurate workers are in applying the Star scales when given the same information.
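The comparison at the heart of this case-study method can be sketched very simply. Everything below is an invented illustration – the panel reading, the worker readings and the ‘within one point’ threshold are assumptions, not Triangle’s actual scoring rules.

```python
# Illustrative sketch: agreement between workers' readings on a shared
# case study and a panel-agreed reading. All values are invented.
panel_reading = 4                            # hypothetical panel-agreed score
worker_readings = [4, 5, 4, 3, 4, 6, 4, 5]   # hypothetical trained workers' scores

n = len(worker_readings)
exact = sum(r == panel_reading for r in worker_readings) / n
close = sum(abs(r - panel_reading) <= 1 for r in worker_readings) / n
print(f"exact agreement: {exact:.0%}, within one point: {close:.0%}")
```

High agreement suggests workers interpret the scale descriptions consistently; low agreement points to scale wording or training that needs attention.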

Conclusion: An evidence-based tool
The Outcomes Star is an evidence-based tool. The development of new Stars follows a standardised and systematic process of evidence gathering through literature reviews, focus groups, refinement, initial psychometric analyses and full psychometric testing using routinely collected data.

Psychometric validation is useful in the development of new Stars and provides evidence that the Outcomes Star can produce data that meaningfully reflects the constructs it is designed to measure.

Organisations can use Triangle’s psychometric factsheets alongside peer reviewed articles to demonstrate the validity of the Outcomes Star to funders and commissioners, and to have confidence that provided it is implemented well, the Star can produce informative and useful data.

Interested in finding out more about psychometric testing and the validity of the Star?
Take a look at our research library. For more information on the key terms and to read the psychometric factsheets please read the Psychometrics Overview or visit www.outcomesstar.org.uk/about-the-star/evidence-and-research/star-psychometrics. Contact Triangle at info@triangleconsulting.co.uk for more information.

*****

References:

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing (4th ed.). Washington, DC: American Educational Research Association. 

MacKeith, J. (2014). Assessing the reliability of the Outcomes Star in research and practice. Housing, Care and Support, 17(4), 188-197.

Osterlind, S. J. (2010). Modern measurement: Theory, principles, and applications of mental appraisal (2nd ed.). Boston, MA: Pearson Education.

*****

Dr Anna Good: Dr Anna Good is a Research Analyst at Triangle: a large part of her role involves testing the psychometric properties of the Star, conducting research and supporting organisations to make the best use of Star data. After completing an MSc and a PhD in Psychology with specialisms in behaviour change interventions and psychological research methods, Anna spent a number of years as a post-doctoral researcher, including two years as principal investigator on a prestigious grant examining health behaviour change.

For more information on evidence and research into the Star please visit our Research Library or contact us: email Triangle at info@triangleconsulting.co.uk, or call +44 (0)20 7272 8765.

Interested in collaborating with Triangle on Star research?

Triangle is looking for organisations to collaborate with on research into versions of the Outcomes Star.

We are particularly interested in carrying out psychometric analysis to evidence that the Stars are both valid and reliable measurement tools, but are open to designing the research with you and adding other research questions that are of particular interest to you.

This is an opportunity to be part of original research on the Star, contributing to the growing body of knowledge about the use of different versions of the Star and to debates surrounding outcomes measurement more generally. We will aim to write up and publish the research either as a Triangle document or in a peer reviewed journal.

In return for research collaboration, Triangle can produce a Key Findings Report for your organisation and provide an in-depth analysis of your Star data.  Linked to this, collaborators will have access to the new Scale Checker tools and support with Star implementation. 

If you are interested in collaborating with Triangle on a research project, please contact us for an initial discussion, or get in touch directly with our Research Co-ordinator, Emily Lamont – emily@triangleconsulting.co.uk.