The 2014 Research Excellence Framework evaluated the impact of UK research for the first time, so it is not surprising that much commentary focused on this aspect of the exercise.
HEFCE presented the results thus:
“For the first time, the assessment provides evidence of the impact of UK research. Impressive impacts were found in all disciplines, and from many diverse UK universities with submissions of all sizes.
On average across all submissions, 44% of impacts were judged outstanding (4*) by over 250 external users of research, working jointly with the academic panel members. A further 40% were judged very considerable (3*).
Outstanding impacts on the economy, society, culture, public policy and services, health, the environment and quality of life – within the UK and internationally – were found. These reflect universities’ productive engagements with a very wide range of public, private and third sector organisations, and engagement directly with the public.”
As Jonathan Wolff points out, “[M]eritocratic hiring, vibrant research environments, impactful research, and open-access publishing […]” could not happen without assessment. More frequent assessments could spread the workload, make impact easier to track, and reduce the stigma attached to non-participation.
Commentators have nonetheless cautioned against overinterpreting the REF2014 results. Simon Marginson argues that ‘game-playing’ biases the assessment of research. Yet it is hard to imagine an evaluation in which the requirements are not made known to researchers in advance, and a certain degree of game-playing is a factor in many professional situations.
Moreover, impact may be easier to demonstrate in certain subjects than in others. This may be why most articles in an upcoming literature review for the STEaPP impact project are from health disciplines.
Jack Stilgoe warns against an over-reliance on the concept of “excellence”. He claims this makes for a blinkered view of how science and technology operate in society. The proportion of “world leading” or “internationally excellent” research in REF2014 is so high as to suggest inflation.
To coincide with the publication of the REF results, we carried out an analysis of case studies from UCL BEAMS. Our focus was on identifying where and how impact occurred, and whom it affected.
Most of the impact of BEAMS research had businesses and public policy as beneficiaries. Most took place within the UK, though several case studies also reported international impacts. The predominance of UK-based impact may partly reflect the fact that researchers do not always know exactly when and where their research has been used abroad.
By not relying on published REF results for our analysis, we distanced ourselves from what Chris Shore has termed “the audit culture”. Instead, we focused on how a subset of case studies described impact, and the evidence used.
We must remain careful in our interpretation of the impact results. The “reach and significance” of research beyond academia depends not only on the research itself, but also on its context. Individual case studies paint a nuanced picture of the wealth of impact activities; what matters is that impact does not lend itself to comparison in league tables and rankings.