We are sometimes asked whether changes in Star readings actually reflect the changes that occur during service provision. We have three responses to this. The first two are based on the practices we have in place for developing new Stars and for training and implementation of the Star. The third is to present the research evidence that Star readings can be applied accurately and that they correlate with other measures in the expected way.
New versions of the Star are created alongside managers and practitioners to ensure the Journey of Change captures key changes occurring for those using services.
Pilot data is statistically analysed to check that the scales are sensitive enough to detect change.
Service users and practitioners provide end-of-pilot feedback about the extent to which the Star captures the changes made.
Training and Implementation
For the Star to accurately reflect change, practitioners should be well trained, with ongoing support to continue using it well. This is why training is mandatory, why we provide free CPD for licensed trainers, and why we encourage refresher training, regular supervision and auditing.
Convergent validity: Star readings have been shown to correlate with other validated measures in our own, as well as in external peer-reviewed research.
Predictive validity: Star readings, and change in Star readings, predict hard outcomes such as securing accommodation, employment and school attendance.
Inter-rater reliability: different practitioners are able to assign Star readings consistently.
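To make the convergent-validity idea concrete, here is a minimal sketch (with invented numbers, not real Star data) of the kind of calculation involved: correlating Star readings with scores on another validated measure taken at the same time.

```python
# Illustrative sketch only: convergent validity is typically assessed by
# correlating readings on one measure with scores on another validated
# measure for the same people. All figures below are invented.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical paired observations: a Star outcome-area reading (1-10)
# and a score on an established wellbeing scale for the same person.
star_readings = [2, 3, 5, 6, 7, 8, 4, 9]
wellbeing_scores = [11, 14, 20, 22, 27, 30, 18, 33]

r = pearson_r(star_readings, wellbeing_scores)
print(f"Convergent validity (Pearson r) = {r:.2f}")
```

A strong positive correlation between the two sets of scores is what "correlate in the expected way" means in practice.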
We are keen to conduct further analyses of the relationship between Star readings and other measures, so please get in touch if you have linked data and are interested in us exploring it.
To read our three-page briefing, which provides a more detailed version of the above, please download it here (PDF).
As the creators of a suite of measures capturing distance travelled towards ‘hard outcomes’, we are sometimes asked whether there is evidence that Star readings correlate with or predict outcomes such as offending or employment. In some cases, we hear there is resistance to using the Star, and that commissioners, managers or funders are instead only interested in how many service users have ticked the box of meeting these hard outcomes. This misses important achievements, ignores the role of internal change in maintaining concrete achievements and disincentivises working with those most in need of support.
This briefing describes some of the evidence we have of the ‘predictive validity’ of the Star – that it does in fact predict outcomes such as school attendance, employment, training and accommodation status. This includes findings reported in two articles recently published in peer-reviewed journals.
In it, we also explain the value of the Outcomes Star in measuring the full journey leading up to and including changes in behaviour or circumstances.
The author of this briefing, Dr Anna Good, draws on her expertise in behaviour change theory to summarise the strong evidence base supporting the importance of the changes assessed by the Star. It is clear from the research literature (and our extensive experience of working with service providers), that early steps on the Star’s ‘Journey of Change’ such as acknowledging problems and accepting help are often essential to subsequent change in hard outcomes. Moreover, change in skills, confidence and beliefs are often key factors in the maintenance of life-changing improvements.
The Outcomes Star is well established as a tool for supporting effective keywork and demonstrating achievements. Here, Triangle’s Research Analyst, Dr Anna Good, discusses a third benefit: the opportunity for internal learning. This new briefing describes how Star data can be used to improve service delivery.
Learning from Star data at all levels of the organisation
Over three-quarters of Outcomes Star users in our client survey said Star data reports were ‘useful for learning how their service was doing’ and ‘helpful in managing or developing the service’. Indeed, Star data can provide meaningful management information at all levels, from a service manager reviewing a single worker’s caseload to a senior management team reviewing data aggregated across services.
Alongside other data (e.g. satisfaction surveys, output and process data), Star data reports, such as those available from our upgraded Star Online System, allow organisations to ask increasingly focused questions about what is happening with the people they support.
Managers can gain essential insights by looking at differences in starting points and change across outcome areas, client groups, and service settings. Because these insights are likely to be greatest when compared against prior expectations, Triangle has produced resources to support Star data ‘forecasting’.
Learning from initial Star readings
The distribution of first Star readings provides a valuable overview of people’s needs coming into the service. Star readings can be compared against expectations to ensure that service users are entering the service appropriately and are offered suitable interventions.
An excellent example of the use of first readings is in Staffordshire County Council, where they look at start readings to see if the families are at the right level of service. In our interview with the Commissioning Manager at the time, she told us that “if we have families in our Family Intervention service that have readings of five, I look a bit deeper to see if we’re really using our resources correctly”.
Learning from change in Star readings
Movement in Star readings for each outcome area also provides an opportunity to learn where things are going well and when further exploration of service delivery may be warranted.
For example, if one service shows different outcomes to another service, this is a starting point for further investigation:
Is there other evidence that one service facilitates better outcomes than another?
Are there reasons why one service might be supporting people better than another?
Is the service user profile different in the different services?
Is practice significantly different in that service, and might there be lessons for other services?
A more in-depth analysis of the movement from each Journey of Change stage is also possible, offering more significant potential for learning than typical numerical outcome scales. Managers can explore which stage transitions are happening frequently and where there may be blockages to making other transitions. For example, a service may be very good at helping service users to begin accepting help but struggle more with moving them towards greater self-reliance, limiting the progress currently being made. Specific changes to service delivery might then need to be developed.
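As a rough illustration of the stage-transition analysis described above, the sketch below counts movements between Journey of Change stages from first to last reading. The mapping of 1-10 readings onto stages, and all the data, are invented for illustration; each Star defines its own Journey of Change.

```python
from collections import Counter

# Hypothetical stage names and boundaries, invented for illustration.
STAGES = ["stuck", "accepting help", "believing", "learning", "self-reliance"]

def stage(reading):
    """Map a 1-10 reading to a (hypothetical) Journey of Change stage."""
    return STAGES[min((reading - 1) // 2, len(STAGES) - 1)]

# Hypothetical (first_reading, last_reading) pairs for one outcome area.
pairs = [(1, 4), (2, 3), (3, 6), (2, 5), (5, 6), (6, 7), (3, 4), (1, 2)]

# Count how often each stage-to-stage transition occurs.
transitions = Counter((stage(first), stage(last)) for first, last in pairs)
for (start, end), count in sorted(transitions.items()):
    print(f"{start} -> {end}: {count}")
```

A table like this makes visible which transitions happen frequently and where progress appears to stall, which is the starting point for the service-delivery questions discussed above.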
Want to know how to create charts and data-driven reports to support funding bids and measure your impact?
What will it cover?
This short session is an introduction to the main report dashboards available on the Star Online. It will cover key functions, including filters and engagements, and show how to create instant, engaging charts to support funding bids and reports, clearly illustrate the progress made by service users, and show how the Stars are being used across the service.
Who is it for?
This short session is designed to support managers and other staff who use the Star Online.
When is it?
This session will be held online, via Zoom, on 14 September 2021 at 10:00am (London time).
In a follow-up to our webinar introducing the Integration Star, research analyst Dr Anna Good tells the story of how the new Star for refugees came into being.
Help for refugees to integrate into this country has long been under-resourced and patchy. Specialist refugee organisations are doing brilliant work, but many other services struggle to work out how best to support refugees. And until recently, there’s been little in the way of solid outcomes data that can help shape service delivery.
It’s this context that spurred the creation of the Integration Star – a tool for services working with refugees that enables both better conversations and better outcomes.
“We had often felt that there was more we as an organisation could do to demonstrate a consistent way of measuring an individual’s progression as a result of our support”
Andrew Lawton, Head of Integration, Refugee Council
The new Star has come out of an exciting and timely meeting of minds. For some years, Triangle had been interested in developing a Star for refugees. “It was on our radar, and several refugee organisations had said it would be great to have an Outcomes Star,” says Triangle director Sara Burns. “I could see it could really work. But because refugee support services tend to be small organisations and quite poorly funded, there was never the support necessary for the collaboration.”
“So I was delighted when in 2018 the Refugee Council approached us and said they wanted to collaborate on a Star. They’d just received a tranche of European funding for a refugee integration programme, and as part of that they had undertaken a commitment to collaboratively create a tool for refugee integration.”
The wider integration and employment programme, New Roots, was led by the Refugee Council in partnership with organisations in Yorkshire and Humberside, and has supported some 2,700 refugees, often with complex and multiple needs. In our recent webinar, Better Conversations, Better Outcomes, Refugee Council head of integration Andrew Lawton explains that this programme gave the organisation an excellent opportunity “to consider how we assessed the impact of our services, not just for the Refugee Council but also for its clients and for others working in the same space”.
“We had often felt that there was more we as an organisation could do to demonstrate a consistent way of measuring an individual’s progression as a result of our support,” he says.
At the time, the Home Office was working on a new framework to support its integration strategy, Indicators of Integration. However, that didn’t include a practical tool for service delivery organisations to measure outcomes. So the participants in the New Roots programme decided to collaborate on a tool that could work for people providing help on the ground, aligned with the Home Office Indicators of Integration.
“We wanted to work towards a set of outcomes that could be used across a range of front line services and that could be shared with other services doing similar work,” says Andrew Lawton.
The Refugee Council was already aware of the Outcomes Stars and approached Triangle about a new Star for refugees. And so the collaboration – between Triangle, the Refugee Council, four New Roots partners and ten refugee community organisations – was born.
These organisations formed the expert committee that helped develop the outcome areas and Journey of Change for the Integration Star. As research analyst at Triangle, I carried out an initial literature review around important outcome areas for working with refugees and mapped these onto the domains in the Home Office’s framework. This research was used to inform Triangle’s tried and tested iterative process of working closely with managers, practitioners and service users to draft and refine the new version of the Star.
The result? “An evaluation tool that places the beneficiary at the centre of their own journey.”
Throughout the process we were careful to make sure that the new Star could work both for refugees arriving through a government resettlement programme and for those who enter the asylum process after arrival. While resettlement refugees receive a package of support that starts with meeting them at the airport and encompasses finding accommodation and providing day-to-day integration casework, the same specialist support doesn’t exist for other refugees. “It’s left to refugee support organisations and the wider voluntary sector to intervene depending on capacity, funding and services they have available,” says Andrew Lawton.
The result of the collaboration? In Andrew Lawton’s words, “an evaluation tool that places the beneficiary at the centre of their own journey, providing them with a tool that is visual, that helps them recognise their own achievements, and really track their own progress with the support of an adviser”.
Following extensive testing and revision, the final version of the Integration Star was published in autumn last year.
“It was a long time coming,” says Sara Burns. “But we’re delighted it happened – it’s a really important tool for the refugee sector.”
Collaborators in developing the Integration Star: The Refugee Council; RETAS (Refugee Education Training Advice Service), Leeds; PATH Yorkshire; Humber Community Advice Services (H-CAS); Goodwin Development Trust.
Ten refugee community organisations: Leeds Refugee Forum, Refugee Action Kingston, Iranian Association, Diversity Living Services, Bahar Women’s Association, Action for Community Development, West Yorkshire Somali Association, DAMASQ, Stepping Stone 4 and Leeds Swahili Community.
It is an exciting time to be part of the world of measurement and evaluation. Having attended three conferences this autumn, I can see clearly that those with a critique of the traditional ways of doing things are finding a voice and being given a platform. In the wake of Black Lives Matter, everyone seems more open to looking deeper into the implicit assumptions that we make about each other and, along with that, into the power dynamics of measurement and evaluation.
NPC Ignites was one of these events, and it was the session “Rebalancing data for the 21st century” that really captured my attention. Jara Dean-Coffey, Director of the Equitable Evaluation Initiative, presented a five-year plan she is leading to change the way funders in the United States think about evaluation. Bonnie Chiu of The Social Investment Consultancy is leading an initiative bringing together people of colour working in evaluation. Here are some of their key messages:
Co-create knowledge rather than extract data
Traditional approaches to evaluation involve experts collecting data and taking it away to analyse and draw conclusions. The subjects of the evaluation are passive in the process. Bonnie described this as like using research as a tool of ‘command and control’. Jara argued, like several others I have heard this year, that we learn more when knowledge is co-created – researcher and subject bringing together their very different expertise to build a more complete and informed picture. This is one way to challenge the power relationships in evaluation and promote greater equity. The Outcomes Star’s collaborative approach to measurement brings these ideas alive in day-to-day service delivery.
Jara made the case that although funders who commission evaluations want certainty and yes/no answers, the complex reality of service provision can’t be reduced to a few numbers. Funders and evaluators need to embrace the complexity that comes from working in open systems, where it isn’t possible to control all the variables and come up with answers that are always true no matter what the context. Bonnie also made the point that top-down, funder-driven monitoring and evaluation frameworks can perpetuate power imbalances. It is difficult for funded organisations to raise these issues because of their dependence on the funders, so it is important that evaluators use their influence. This very much echoes points we have been raising at Triangle for some time: data is helpful but must be interpreted in context, and the numbers help to focus our questions rather than providing definitive yes/no answers.
Bonnie Chiu argued that we need to ‘decolonise’ evidence and ensure that people of colour are both reached by research and represented in the research and evaluation community. Jara is promoting multicultural validity alongside statistical validity, a point which chimes with issues Triangle has raised about moving beyond traditional formulations of what makes a ‘good’ tool (keep an eye on our homepage for a blog on this coming out soon).
Both presenters made the case that evaluation is a human process. Those doing the evaluation have to do their own personal work to understand their own implicit biases as well as those that are hardwired into the context in which they are working. The biases identified were racial ones as well as foundational ideas such as the preference for doing over being and our belief in scarcity rather than abundance.
I found it very inspiring to hear an analysis connecting up racism, core orientations towards life and the way services are valued and measured. I can’t do it all justice here, so I recommend that you take a look at the recording of the session.
Triangle is the social enterprise behind the Outcomes Star™. Triangle exists to help service providers transform lives by creating engaging tools and promoting enabling approaches. To talk to Joy MacKeith or another member of the Triangle team, or for any other information, please email email@example.com.
Joy MacKeith argues that Payment by Results can cause as many problems as it addresses. Management by Results, which supports ongoing learning and collaboration, is the missing middle way between ignoring outcomes on the one hand, and linking them to financial incentives on the other.
In early September I was privileged to participate in the fifth Social Outcomes Conference, organised by the Government Outcomes Lab at Oxford University. Contributions from both academics and practitioners from all over the world made for a very rich debate in which everyone had their eye on the prize of improving social outcomes.
The debate got me thinking about the limitations of Payment by Results and an alternative – an approach I am calling Management by Results. This blogpost explains the difference between the two and how Management by Results has the potential to unlock performance improvement.
Why I am a fan of an outcomes approach
In the old days we didn’t measure outcomes. We counted inputs and outputs. We collected case studies. Occasionally we commissioned evaluations or user surveys. Then came the outcomes revolution. I have been part of that revolution, spending much of the last 20 years helping organisations to measure their outcomes.
I am a fan because I have seen that defining, measuring, and managing outcomes enables service providers to create services with a clarity of purpose, identify issues and gaps, and ultimately improve what they deliver for service users. It undoubtedly is a good thing for organisations to focus on outcomes.
But what happens when financial imperatives are introduced into the equation? What happens when a project or organisation’s survival becomes dependent on evidencing that they have achieved certain outcomes?
Why I’m wary of linking outcomes with financial incentives
In the employment sector, where Payment by Results (PbR) has been in operation for some time, the consequences are quite well documented (Hudson, Phillips, Ray, Vegeris & Davidson, 2010). Organisations can be incentivised to focus narrowly on the specific targets which are linked to payment and ignore everything else.
This can lead to a narrowing of their work with individuals (just making sure they get a job rather than working on longer-term issues such as addiction or mental health problems that are likely to impact on their ability to keep the job for example). It can lead to short-termism with less focus on long-term impact and sustainability. It can lead to ‘cherry picking’ of clients who are most likely to achieve the target (also called ‘creaming’) and not ‘wasting resources’ on those who are not likely to achieve the target within the timescale of the project (also known as ‘parking’).
The fact that there are widely used terms for these kinds of gaming practices reflects how widely these perverse incentives are recognised and understood. In the financial sector, Goodhart’s Law, which holds that any financial indicator chosen by government as a means of regulation becomes unreliable, is well accepted (Goodhart, 1975). In the words of the anthropologist Marilyn Strathern, “When a measure becomes a target, it ceases to be a good measure”.
In addition to this, there are other more subtle but nevertheless powerful impacts. In Triangle’s work helping organisations to measure their outcomes we have seen time and again that when the impetus for this measurement is commissioner requirement, the organisation is likely to see outcomes as something that is done for the commissioner rather than something they own.
The result is that the quality of the data collected is poorer and the service provider just passes it on to the commissioner rather than mining this outcomes gold for learning and service development. This is very unfortunate because sending outcomes information to commissioners doesn’t improve outcomes, whereas using it to better understand delivery does.
Another impact of PbR is that it focuses attention on the work of the service provider in isolation as opposed to looking at how the service delivery system as a whole is working. In practice often it is the network of service provision that achieves the outcome rather than a single provider.
Finally, in the market for social outcomes, providers find themselves in competitive rather than collaborative relationships, which can make system-wide cooperation and information sharing more difficult.
The missing middle way

There were several speakers at the recent GoLab conference who argued that financial incentives can work – if they are done well. I am writing primarily from personal experience rather than extensive research, and I trust that what they say is true. I am also aware of PbR contracts and Social Impact Bonds that have been sensitively implemented, with all parties understanding the risks and the funding mechanisms carefully designed to build the right incentives.
My concern is that too often the approach isn’t done well and also that the alternative of MbR is not recognised and considered. In our enthusiasm to embrace outcomes we have gone from one extreme of not talking about or measuring outcomes at all, to the other extreme of linking payment to outcomes. Between these two poles there is a middle ground – a third way which can unlock the potential of outcome measurement without so many of the downsides.
So what does Management by Results look like and how is it different from Payment by Results?
The Management by Results mindset

Both MbR and PbR involve identifying and measuring outcomes. But in MbR the emphasis is on the service provider using this information in the management of the service to identify strengths, weaknesses and issues to be addressed, whereas in PbR the emphasis for the service provider is on using the information to secure the funding the organisation needs to survive.
For commissioners MbR means requiring the service provider to measure their outcomes and then drawing on that information to assess their performance. But crucially in MbR the commissioner draws on other information as well and has room for judgement. PbR is black and white. Target achieved = good, payment made. Target not achieved = bad, no payment made.
MbR allows for greater subtlety and a more rounded assessment. The commissioner looks at the data, but they also look at the organisation’s narrative about the data. Is it a coherent narrative? Are they learning from their data and using the lessons to improve service delivery? What do others say about the service? What do you see if you visit and what do service users have to say?
The commissioner draws on all this information to make their assessment. Of course, life would be a lot easier if you didn’t have to do this and could reduce a project’s effectiveness to a few numbers.
But you can’t.
There is always a wider picture. In the employment sector, for example: what is happening in the service user’s personal life, what is happening in the local economy, what other services the person is receiving and what impact they are having. The numbers have a part to play, but they are never the whole answer.
How Management by Results changes the questions and supports learning

An organisation that is managing by results will take a systematic approach to collecting and analysing outcomes data and will then use that data for learning and accountability. The job of the manager is to ask: “Why did this work – what good practice can we share?” and “Why didn’t this work – what do we need to change, and where can we learn from others?”
The job of the commissioner or investor is to assess “Is this organisation taking a sensible and systematic approach to measuring its outcomes? And is it learning from its measurement and continually changing and improving what it does?” PbR encourages hiding of poor results and exaggeration of positive results as well as the creaming and parking described above. This positively hinders learning and obscures what is really happening.
MbR encourages collaboration between service provider and commissioner in identifying and achieving their shared goals. PbR obscures these shared interests by incentivising service delivery organisations to prioritise their own survival.
The differences can be summarised as follows:

Payment by Results is a black-and-white approach: achieving the target is assumed to equate to success. Management by Results recognises the complexity of service delivery and that success must be interpreted in context.

Under PbR, payment is linked to achievement of targets, with no room for skilled judgement or for considering wider contextual information. Under MbR, outcomes information is placed in a wider context and there is room for skilled judgement.

PbR obscures the shared goals of commissioner and service provider and encourages service providers to focus on organisational survival. MbR emphasises those shared goals and encourages the provider to focus on achieving intended outcomes.

PbR encourages a gaming culture, because service providers are assessed on whether they have met the target. MbR encourages a learning culture, because providers are assessed on whether they are using outcome measurement to address issues and improve services.

Under PbR, service providers are incentivised to withhold information from commissioners and even falsify data. Under MbR, they are incentivised to share information and learning with commissioners and to problem-solve together for the benefit of clients.
Management by Results is not easy but it is worth the effort
Management by Results is not easy. At Triangle we support organisations to implement the Outcomes Star and in practice this means that we are supporting them to build a MbR approach. This involves forging new habits, behaviours and organisational processes, creating new interdepartmental links, new reports and new software.
It isn’t easy and it takes time, even for the most willing and able. But we also see the benefits for those that stick with it – managers with a much better handle on what is happening in their services, who can pinpoint and address issues and share good practice as well as evidence achievements.
I believe that if the sector put more energy, funding and research into supporting organisations to manage by results, it would really start to unlock the potential to not only measure, but also improve outcomes.
What do you think?
Hudson, M., Phillips, J., Ray, K., Vegeris, S., & Davidson, R. (2010). The influence of outcome-based contracting on Provider-led Pathways to Work (Vol. 638). Department for Work and Pensions.
Goodhart, C. A. E. (1975). Problems of monetary management: The U.K. experience. Papers in Monetary Economics. Reserve Bank of Australia.
The Outcomes Star has been tested psychometrically. A new set of psychometric factsheets demonstrates the validity of the Outcomes Star and shows how the Star can produce informative and valuable outcomes data for commissioners, funders and organisations.
Psychometric testing tells us how confident we can be in the data produced by a measurement tool, including whether it measures what it claims to measure and whether it produces consistent scores.
Triangle has published a set of factsheets to demonstrate the psychometric properties of every version of the Star. We are also in the process of having an article validating the Family Star Plus published in a peer-reviewed journal. Dr Anna Good has produced a psychometric factsheet for each of the Outcomes Stars, providing the findings from a number of these tests. Here she explains a bit more about the process, and the importance of ensuring the Stars are tested psychometrically.
“At its essence, validity means that the information yielded by a test is appropriate, meaningful, and useful for decision making” (Osterlind, 2010, p. 89).
Psychometric validation has been used in some form for over a hundred years. It involves tests of validity (usefulness and meaningfulness) and reliability (consistency), for example:
expert opinion about the content of the measure
clustering of ‘items’ or questions into underlying constructs
consistency across the readings produced by each item
consistency across ‘raters’ using a tool
sensitivity to detect change over time
correlation with, or prediction of, other relevant outcomes
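As an illustration of the ‘consistency across the readings produced by each item’ check, the sketch below computes Cronbach’s alpha, a standard internal-consistency statistic. This is a generic psychometric calculation, not Triangle’s specific procedure, and the readings are invented: each row is one person’s readings across the outcome areas of a hypothetical Star.

```python
# Illustrative sketch: Cronbach's alpha on invented Star readings.

def variance(values):
    """Population variance of a list of numbers."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def cronbach_alpha(rows):
    """rows: list of per-person lists of item readings (equal lengths)."""
    k = len(rows[0])                      # number of items (outcome areas)
    items = list(zip(*rows))              # transpose to per-item columns
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

readings = [
    [2, 3, 2, 3],
    [5, 5, 6, 5],
    [7, 8, 7, 7],
    [3, 4, 3, 4],
    [9, 8, 9, 9],
]

alpha = cronbach_alpha(readings)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Values closer to 1 indicate that the items move together, i.e. they appear to measure related aspects of one underlying construct.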
Why is it important to test the Star psychometrically? What are the benefits of testing the Outcomes Star? What’s the background to the research?

Triangle recognises the importance of having ‘evidence and theory support the interpretations of test scores’ (AERA, APA & NCME, 1999, p. 9), both because we are committed to creating scientifically sound and useful tools and because policy advisors, commissioners and managers require validated outcomes measures and want assurance of a rigorous process of development and testing.
The validation process is an important part of the development of new versions of the Star: we need to know whether the outcome areas hang together coherently, and whether any outcome areas are unnecessary because they overlap with other areas or have readings that cluster at one end of the Journey of Change.
Once there is sufficient data, we also conduct more extensive psychometric testing using data routinely collected using the published version of the Star. This is beneficial for demonstrating that the Star is responsive to change and that Star readings relate to other outcome measures, which is important both within Triangle and for evidencing the value of our tools externally.
What was involved in producing the psychometric factsheets?

The initial validation work for new Stars is conducted using data from collaborators working with Triangle during the Star development and piloting process. It involves collecting Star readings and asking service users and keyworkers to complete questionnaires about the Star’s acceptability and how well it captures service users’ situations and needs.
The further testing of the published version uses a larger sample size of routinely collected Star data and assesses the sensitivity of the Star in detecting change occurring during engagement with services. Whenever possible, we collaborate with organisations to assess the relationship between Star readings and validated measures or ‘hard outcome measures’ such as school attendance.
We have also been working to assess consistency in workers’ understanding of the scales using a case study method. This method is described fully in an article published in Housing, Care and Support (MacKeith, 2014), but essentially involves working with organisations using the Star to develop an anonymised case study or ‘service user profile’, and comparing the readings assigned by trained workers with those agreed by a panel of Star experts. The findings tell us how consistent and accurate workers are in applying the Star scales when given the same information.
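The comparison at the heart of this case-study method can be sketched very simply: the share of trained workers whose reading for the profile matches, or comes within one point of, the panel-agreed reading. All numbers below are invented for illustration.

```python
# Illustrative sketch of a case-study reliability check: compare
# workers' readings for one anonymised profile against the reading
# agreed by a panel of experts. Figures are invented.

expert_reading = 4                       # panel-agreed reading for one scale
worker_readings = [4, 4, 3, 5, 4, 4, 6, 4]

exact = sum(1 for r in worker_readings if r == expert_reading)
within_one = sum(1 for r in worker_readings if abs(r - expert_reading) <= 1)

print(f"Exact agreement: {exact}/{len(worker_readings)}")
print(f"Agreement within one point: {within_one}/{len(worker_readings)}")
```

Reporting both exact agreement and agreement within one point gives a fuller picture than a single pass/fail figure, since adjacent readings often reflect the same stage of the Journey of Change.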
Conclusion: An evidence-based tool
The Outcomes Star is an evidence-based tool. The development of new Stars follows a standardised and systematic process of evidence gathering through literature reviews, focus groups, refinement, initial psychometric analyses and full psychometric testing using routinely collected data.
Psychometric validation is useful both in the development of new Stars and in providing evidence that the Outcomes Star can produce data that meaningfully reflects the constructs it is designed to measure.
Organisations can use Triangle’s psychometric factsheets alongside peer-reviewed articles to demonstrate the validity of the Outcomes Star to funders and commissioners, and to have confidence that, provided it is implemented well, the Star can produce informative and useful data.
References
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing (4th ed.). Washington, DC: American Educational Research Association.
MacKeith, J. (2014). Assessing the reliability of the Outcomes Star in research and practice. Housing, Care and Support, 17(4), 188-197.
Osterlind, S. J. (2010). Modern measurement: Theory, principles, and applications of mental appraisal (2nd ed.). Boston, MA: Pearson Education.
Dr Anna Good
Dr Anna Good is a Research Analyst at Triangle: a large part of her role involves testing the psychometric properties of the Star, conducting research and supporting organisations to make the best use of Star data. After completing an MSc and a PhD in Psychology with specialisms in behaviour change interventions and psychological research methods, Anna spent a number of years as a post-doctoral researcher, including two years as principal investigator on a prestigious grant examining health behaviour change.
The Outcomes Star Scale Checkers are tools developed by Triangle for organisations using the Outcomes Star, to evaluate how well members of staff understand the Journey of Change and scales for the Star they use.
They are designed to be completed by workers who use the Star, hold Star licences and have completed core Star training. They are free for licensed Star users.
Assessing how well staff members understand the Journey of Change provides a key aspect of quality assurance for your Star data. Completing the Scale Checker can help keyworkers and managers to identify where further training is required, either for individuals, for particular outcome areas or for particular stages of the Journey of Change.
The Scale Checker also allows Triangle to gather data to test the inter-rater reliability of each of the Stars. By testing the inter-rater reliability of the Stars, Triangle is able to contribute to evidence showing that the Stars are reliable and valid outcomes measurement tools.
The Scale Checker is simple to use. It consists of a fictional profile of a service user tailored to a specific version of the Outcomes Star; staff members are asked to decide where the service user is on the Journey of Change for each outcome area. The answers given are then analysed by Triangle and a report is provided to the service manager.
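By way of illustration only (the outcome areas, threshold and data below are hypothetical, not Triangle's actual reporting format), a per-area agreement summary of this kind could look like:

```python
# Hypothetical Scale Checker responses: for each outcome area, the
# stage chosen by each of four workers, alongside the panel's agreed
# stage. All names and figures are invented for illustration.
panel_stage = {"Motivation": 3, "Living skills": 2, "Social networks": 4}
responses = {
    "Motivation": [3, 3, 2, 3],
    "Living skills": [2, 1, 1, 2],
    "Social networks": [4, 4, 4, 3],
}

# Percentage of workers matching the panel, per outcome area;
# low-agreement areas suggest where refresher training may help.
for area, stages in responses.items():
    agree = sum(s == panel_stage[area] for s in stages) / len(stages)
    flag = "  <- review in training" if agree < 0.75 else ""
    print(f"{area}: {agree:.0%} agreement{flag}")
```

A report built on this idea would let a manager see at a glance which outcome areas, or which stages of the Journey of Change, are being read inconsistently.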
Currently we have Scale Checkers available for four versions of the Outcomes Star, with profiles in development and coming soon for all other versions:
“Don” for the Carers Star
“Pete” for the Drug & Alcohol Star
“Paula” for the Family Star Plus
“Tamsin” for the Youth Star
If you would like to use the Scale Checker in your service or have any questions, please contact us for more information.