Best Practice for Using the CMIP REF Output

Understanding what REF results do, and do not, tell us

What is this guidance about? 

This document provides practical guidance on how to interpret and use outputs from the CMIP Rapid Evaluation Framework (REF) in a careful and responsible way. The REF is a community-developed tool that offers standardized diagnostics, performance scores, and visualizations to help users compare climate models more efficiently. 

As the use of CMIP model evaluations has expanded from research into areas such as IPCC assessments, national climate risk analyses, adaptation planning, and climate services, REF outputs are increasingly being used to support real-world decisions. This guidance helps users understand what REF results can and cannot tell us, and how they can be applied appropriately.

Why is it needed? 

REF diagnostics are powerful, but they can be misleading if taken out of context. In particular, summary performance scores may be incorrectly treated as definitive rankings of model quality, rather than what they are meant to be: relative, context-dependent indicators of model behavior for specific variables, regions, or processes.

The use of REF outputs has grown faster than the development of shared norms and interpretive guidance. This document responds to that gap by outlining key principles, cautions, and questions that users should keep in mind before drawing conclusions from REF results. 

Who is this guidance for? 

This guidance is intended for a broad range of users, including: 

  • Climate scientists and model developers 
  • Researchers using CMIP outputs for analysis or synthesis 
  • IPCC authors and assessment teams 
  • Practitioners involved in climate risk assessment, adaptation, and resilience planning 
  • Climate service providers  
  • Other users of model evaluation tools

It is especially relevant for users who may not be directly involved in developing model evaluation frameworks but are using REF outputs to inform scientific or decision-relevant work. 

What does the guidance provide? 

The document sets out high-level principles for responsible use of REF outputs, along with reflective considerations and simple checklists to support informed judgment. Key topics include: 

  • The intended role of the REF as a screening and comparison tool, not a final arbiter of model quality 
  • How summary scores relate to the underlying diagnostics they summarize 
  • The importance of accounting for uncertainties in reference datasets 
  • Trade-offs in model performance across different phenomena and processes 

Rather than prescribing specific workflows or applications, the guidance aims to help users ask the right questions, understand limitations, and avoid overinterpretation. 

What is the scope, and what comes next? 

This guidance represents a first step toward more comprehensive support for the responsible use of REF outputs. Future work will build on these foundations by providing concrete examples and use cases, including how REF results can serve as an initial step in assessing whether models are fit for a particular purpose. 

How was this resource developed? 

This document was developed collaboratively through the RIfS–CMIP Responsible Data Use Task Team, the Model Benchmarking Task Team, and contributors involved in the development of the REF itself. It draws on established best practices in climate model evaluation and the broader scientific literature, with input from the CMIP Steering Committee and user communities such as IPCC authors. 

Please cite the document as:

Morrison, M. A., Daba, D., Abrha, H., Chung, C., Samanta, D., Hoffman, F. M., Swaminathan, R., Brands, S., and Bonou, F. (2026), Best Practice for Using the CMIP Rapid Evaluation Framework Output. https://doi.org/10.5281/zenodo.18775782.