Researchfish reflection on the Stern Review

August 18, 2016

Following the recent recommendations of a committee led by Nicholas Stern on how to improve the UK's Research Excellence Framework (REF), we thought we would focus our minds on metrics and research impact assessment.

 

The REF

 

The Research Excellence Framework (REF) is the system for assessing the quality of research at UK universities (1). The United Kingdom's first Research Assessment Exercise was held in 1986, and since then the concept of a national evaluation of publicly funded research has spread to many other countries.

 

Successive research assessment processes have helped to drive up the quality of research undertaken in the UK, and the Stern Review concluded that an assessment of the quality of research was vital (2).

 

Metrics here refers to the quantitative analysis of research outputs (publications and beyond!) and their impacts. "Metrics include a variety of measures and statistical methods for assessing the quality and broader impact of scientific and scholarly research" (3). Metrics need to be comprehensive and wide-ranging, assessing elements including the process, reach and impact of research.

 

In line with findings from The Metric Tide (3), which recommended that BIS identify ways of linking data gathered from research-related platforms (including Gateway to Research, Researchfish and the REF), Research Councils UK's priorities for the future include "the continued development of the Researchfish approach. This will ensure that entering data is made as simple as possible, that exchange of data with other systems is increased, and that use of the information to support evaluation and strategy development is maximised".

 

Research Impact Assessment

 

Research Impact Assessment has always been fundamental to monitoring and evidencing the success of investment in research, but the metrics adopted for measurement, and their role, can differ greatly, and it is often down to individual funders to set the metrics themselves. It is important to emphasise that this process is not easy, and currently there are few well-defined markers of progress.

 

To date, most of the work on the "measurement" of research and research impact has tended to focus on publication and citation data, largely because of the availability of international databases of bibliographic, patent and citation information. This has left a visible gap in structured information on outcomes along the pathways to impact, meaning that many important steps along the path go unrecorded.
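To see why publication and citation data dominate, consider how little effort it takes to look up a citation count from an open bibliographic database such as Crossref. The short Python sketch below is illustrative only (the DOI is an arbitrary example, not one discussed here); there is no comparably convenient public source for the structured outcome data further along the pathway to impact.

```python
import requests

def citation_count(doi: str) -> int:
    """Look up how many times a work has been cited, according to Crossref."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    # Crossref reports incoming citations in the "is-referenced-by-count" field.
    return resp.json()["message"].get("is-referenced-by-count", 0)

if __name__ == "__main__":
    # Any DOI will do; this one is purely an example.
    print(citation_count("10.1038/nature12373"))
```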

 

Impact is clearly one of the success stories of REF2014, with a resulting database of case studies providing a unique and valuable source of information on the impact of UK research.

 

It is, therefore, in the interests of researchers and institutions to track the impact of their research. An obvious place to start is with the impact of research outputs. Researchers can then begin to use metrics as indicators of impact, which could become the basis for demonstrating impact in a case study. Researchers could also track who is reusing their data, which may uncover opportunities for collaboration and identify communities with an interest in the data, even if they were not the audience for whom the information was originally intended.

 

A more holistic view

 

Research Councils UK wanted a more holistic picture of research, including collaborations within the user community, further funding, and influence on policy and practice, and wanted to avoid capturing only a simplified final report from researchers. To do this, they adopted Researchfish to capture this information using the Common Question Set. Interestingly, around 49% of the data captured in the system is on publications, while the other 15 output types make up the remaining 51%.

 

Adhering to the Common Question Set means that data can be readily compared, making it easier to analyse across research disciplines, and enables better sharing and benchmarking of the captured information.
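To illustrate why a shared question set matters, here is a minimal sketch in Python using a hypothetical, simplified outcome record (this is not the actual Researchfish schema, and the funder names and award references are made up). Because every funder asks the same questions, records from different funders can be pooled and benchmarked directly.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Outcome:
    """One reported outcome in a simplified common format (illustrative only)."""
    funder: str        # hypothetical funder label
    award_ref: str     # grant reference the outcome is attributed to
    output_type: str   # e.g. "Publication", "Further Funding", "Policy Influence"
    year: int

def share_by_type(outcomes: list[Outcome]) -> dict[str, float]:
    """Proportion of reported outcomes per output type, pooled across funders."""
    counts = Counter(o.output_type for o in outcomes)
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}

# Records from different funders share the same structure, so they can be
# compared and benchmarked without any per-funder translation step.
sample = [
    Outcome("Funder A", "A-0001", "Publication", 2015),
    Outcome("Funder B", "B-0002", "Further Funding", 2015),
    Outcome("Funder A", "A-0003", "Publication", 2016),
    Outcome("Funder B", "B-0004", "Policy Influence", 2016),
]
print(share_by_type(sample))
# {'Publication': 0.5, 'Further Funding': 0.25, 'Policy Influence': 0.25}
```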

 

As the Stern Review states in Recommendation 10 (2): "Where possible, REF data and metrics should be open, standardised and combinable with other research funders' data collection processes in order to streamline data collection requirements and reduce the cost of compiling and submitting information". For many funders and research organisations, Researchfish enables them to capture their data in just such a standardised way via the Question Set, and members now also have access to a more enriched dataset as we support Open Access through metadata.

 

Researchfish also includes an integration with ORCID that allows researchers to associate their ORCID and Researchfish accounts and choose to move publications between the two.
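For readers curious what working with ORCID data looks like in practice, here is a minimal sketch of reading the public works attached to an ORCID iD via the ORCID public API (v3.0 shown here). It is purely illustrative and is not Researchfish's own integration code; the iD used is the well-known example record from ORCID's documentation.

```python
import requests

ORCID_PUBLIC_API = "https://pub.orcid.org/v3.0"  # read-only public ORCID API

def fetch_work_titles(orcid_id: str) -> list[str]:
    """Fetch the titles of the public works attached to an ORCID iD."""
    resp = requests.get(
        f"{ORCID_PUBLIC_API}/{orcid_id}/works",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    titles = []
    # Works are grouped by external identifier; each group holds one or more summaries.
    for group in resp.json().get("group", []):
        for summary in group.get("work-summary", []):
            title = summary.get("title", {}).get("title", {}).get("value")
            if title:
                titles.append(title)
    return titles

if __name__ == "__main__":
    for t in fetch_work_titles("0000-0002-1825-0097"):
        print(t)
```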

 

Simplification

 

One of the biggest proposals made in the Stern Review appears to be around reducing the burden on researchers and research staff, simplifying the REF, and cutting the time and cost involved.

 

Researchers have always been asked to provide information on the outputs of their research, usually as large final reports, but increasingly funders, research organisations and third parties have asked for ongoing and more structured information. As the information requested by different stakeholders can be very similar, this has led to calls for those gathering the information to make collection as easy as possible and to find ways to reuse and share existing information wherever possible.

 

Researchfish has a longstanding commitment to reducing the burden of reporting for researchers, whilst improving the quality of the data collected and ensuring that it can be easily reused. As such, we have an interoperability project that aims to understand and solve the technical challenges of exchanging data between university systems and Researchfish; further information is available on our website.

 

Conclusion

 

When it comes to acting upon the outcomes of research assessment, funders have vastly differing viewpoints. The one issue on which they tend to agree is that any worthwhile evaluation of research — whether it be for informing future funding allocation or for encouraging excellence — needs to be based on a range of measures, not just the quantity of publications and how often they are cited by others.


(1) Calling science to account, Nature, 24 July 2014. http://www.nature.com/articles/511S57a

(2) An Independent Review of the Research Excellence Framework, July 2016. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/541338/ind-16-9-ref-stern-review.pdf

(3) The Metric Tide, July 2015. http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/The,Metric,Tide/2015_metric_tide.pdf
