The Outcomes Star has been tested psychometrically. A new set of psychometric factsheets demonstrates the validity of the Outcomes Star and shows how the Star can produce informative, valuable outcomes data for commissioners, funders and organisations.
Psychometric testing tells us how confident we can be in the data produced by a measurement tool, including whether it measures what it claims to measure and produces consistent scores.
Triangle has published a set of factsheets to demonstrate the psychometric properties of every version of the Star. We are also in the process of having an article validating the Family Star Plus published in a peer-reviewed journal. Dr Anna Good has produced a psychometric factsheet for each of the Outcomes Stars, presenting the findings from a number of these tests. She explains a little more about the process and the importance of ensuring the Stars are tested psychometrically.
“At its essence, validity means that the information yielded by a test is appropriate, meaningful, and useful for decision making” (Osterlind, 2010, p. 89).
Psychometric validation has been used in some form for over a hundred years. It involves tests of validity (usefulness and meaningfulness) and reliability (consistency), for example:
- expert opinion about the content of the measure
- clustering of ‘items’ or questions into underlying constructs
- consistency across the readings produced by each item
- consistency across ‘raters’ using a tool
- sensitivity to detect change over time
- correlation with, or prediction of, other relevant outcomes
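To make the "consistency across the readings produced by each item" test concrete, internal consistency is commonly summarised with Cronbach's alpha. The sketch below uses invented Star-style readings (rows are service users, columns are outcome areas) purely for illustration; it is not Triangle's actual analysis or data.

```python
# Minimal sketch of an internal-consistency check (Cronbach's alpha).
# The readings are invented for illustration, not real Star data.

def cronbach_alpha(readings):
    """Cronbach's alpha for a list of score rows (one row per person)."""
    n_items = len(readings[0])

    def var(xs):  # population variance helper
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Variance of each item (column) across people
    item_vars = [var([row[i] for row in readings]) for i in range(n_items)]
    # Variance of each person's total score
    total_var = var([sum(row) for row in readings])
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical readings: 4 service users scored on 4 outcome areas
example = [
    [3, 4, 3, 5],
    [6, 7, 6, 8],
    [2, 3, 2, 4],
    [8, 9, 8, 9],
]
print(f"alpha = {cronbach_alpha(example):.2f}")
```

Values closer to 1 indicate that the items move together and plausibly reflect a single underlying construct; in practice this would be computed with established statistical software on a full dataset.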
Why is it important to test the Star psychometrically? What are the benefits of testing the Outcomes Star? What’s the background to the research?
Triangle recognises the importance of having ‘evidence and theory support the interpretations of test scores’ (AERA, APA & NCME, 1999, p.9), both because we are committed to creating scientifically sound and useful tools and because policy advisors, commissioners and managers require validated outcomes measures and want assurance of a rigorous process of development and testing.
The validation process is an important part of the development of new versions of the Star – we need to know whether the outcome areas hang together coherently, whether any outcome areas are unnecessary because they overlap with other areas, and whether any have readings that cluster at one end of the Journey of Change.
Once there is sufficient data, we also conduct more extensive psychometric testing using data routinely collected using the published version of the Star. This is beneficial for demonstrating that the Star is responsive to change and that Star readings relate to other outcome measures, which is important both within Triangle and for evidencing the value of our tools externally.
What was involved in producing the psychometric factsheets?
The initial validation work for new Stars is conducted using data from collaborators working with Triangle during the Star development and piloting process. It involves collecting Star readings and asking service users and keyworkers to complete questionnaires about the Star's acceptability and how well it captures service users' situations and needs.
The further testing of the published version uses a larger sample size of routinely collected Star data and assesses the sensitivity of the Star in detecting change occurring during engagement with services. Whenever possible, we collaborate with organisations to assess the relationship between Star readings and validated measures or ‘hard outcome measures’ such as school attendance.
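A common way to assess the sensitivity described above is a paired comparison of each service user's initial and latest readings. The sketch below uses invented readings and a hand-rolled paired t-statistic purely for illustration; a real analysis would use routinely collected Star data and established statistical software.

```python
# Sketch of a responsiveness-to-change check: compare initial and latest
# readings on one outcome area with a paired t-statistic.
# All readings here are invented for illustration.
import math

def paired_t(first, latest):
    """Paired t-statistic for before/after scores (df = n - 1)."""
    diffs = [b - a for a, b in zip(first, latest)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n))

# Hypothetical first and latest readings for eight service users
first_readings = [3, 4, 2, 5, 3, 4, 2, 3]
latest_readings = [5, 6, 3, 7, 4, 6, 4, 5]
t = paired_t(first_readings, latest_readings)
print(f"t = {t:.2f} (positive t indicates improvement)")
```

A large positive t-statistic suggests the tool is detecting genuine movement during engagement with services rather than random fluctuation.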
We have also been working to assess consistency in workers' understanding of the scales using a case study method. This method is described fully in an article published in Housing, Care and Support (MacKeith, 2014), but essentially involves working with organisations using the Star to develop an anonymised case study or ‘service user profile’, and comparing the readings assigned by trained workers with those agreed by a panel of Star experts. The findings tell us how consistently and accurately workers apply the Star scales when given the same information.
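The comparison step in that case study method can be sketched as follows. The scale, tolerance, and all readings below are invented for illustration only; the published method (MacKeith, 2014) should be consulted for the actual procedure.

```python
# Sketch of the case-study comparison: trained workers each score the same
# anonymised service-user profile, and their readings are compared with the
# reading agreed by an expert panel. All numbers are invented.

def agreement_stats(worker_readings, expert_reading, tolerance=1):
    """Share of workers exactly matching the expert reading, and the
    share within +/- tolerance points on the scale."""
    n = len(worker_readings)
    exact = sum(1 for r in worker_readings if r == expert_reading) / n
    close = sum(1 for r in worker_readings
                if abs(r - expert_reading) <= tolerance) / n
    return exact, close

# Hypothetical readings for one outcome area on a ten-point scale
workers = [6, 7, 6, 5, 6, 8, 6, 7]
expert = 6

exact, close = agreement_stats(workers, expert)
print(f"exact agreement: {exact:.0%}, within 1 point: {close:.0%}")
```

High agreement, both exact and within a point, would indicate that trained workers interpret the scale descriptions consistently when given the same information.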
Conclusion: An evidence-based tool
The Outcomes Star is an evidence-based tool. The development of new Stars follows a standardised and systematic process of evidence gathering through literature reviews, focus groups, refinement, initial psychometric analyses and full psychometric testing using routinely collected data.
Psychometric validation is useful in the development of new Stars and provides evidence that the Outcomes Star can produce data that meaningfully reflects the constructs it is designed to measure.
Organisations can use Triangle’s psychometric factsheets alongside peer-reviewed articles to demonstrate the validity of the Outcomes Star to funders and commissioners, and to have confidence that, provided it is implemented well, the Star can produce informative and useful data.
Interested in finding out more about psychometric testing and the validity of the Star?
Take a look at our research library. For more information on the key terms and to read the psychometric factsheets please read the Psychometrics Overview or visit www.outcomesstar.org.uk/about-the-star/evidence-and-research/star-psychometrics. Contact Triangle at firstname.lastname@example.org for more information.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing (4th ed.). Washington, DC: American Educational Research Association.
MacKeith, J. (2014). Assessing the reliability of the Outcomes Star in research and practice. Housing, Care and Support, 17(4), 188-197.
Osterlind, S. J. (2010). Modern measurement: Theory, principles, and applications of mental appraisal (2nd ed.). Boston, MA: Pearson Education.
Dr Anna Good: Dr Anna Good is a Research Analyst at Triangle: a large part of her role involves testing the psychometric properties of the Star, conducting research and supporting organisations to make the best use of Star data. After completing an MSc and a PhD in Psychology with specialisms in behaviour change interventions and psychological research methods, Anna spent a number of years as a post-doctoral researcher, including two years as principal investigator on a prestigious grant examining health behaviour change.