In the run-up to REF 2014, there is much talk about the ‘impact agenda’ and about whether, and how, the impact of research can be measured. This is an oft-expressed concern in our interviews for the STEaPP impact project.
Given that impact is likely to be of increasing importance for REF 2020, it may now be worth revisiting the debate about impact measurement.
Altmetrics promise to “help discover and share the full impact of research”. Their rise in recent years may reflect some academics’ embrace of blogging and tweeting as ways to share research. Yet altmetrics have their disadvantages. Criticism centers on how they incentivize sharing research articles on the basis of catchy press releases rather than scientific merit. This is nowhere more true than for studies of nutrition and health.
Some have called for the replacement of peer review in the REF by some combination of metrics. While this may reduce bureaucracy (and associated costs), it is unlikely to capture how impact happens.
A point often made by participants in our interview study is that impact is about “people, not papers”. Networks and timing (“being in the right place at the right time”) are crucial for ensuring research is used outside academia. Yet these often serendipitous processes are difficult to document in the four pages of a REF impact case study.
If metrics replaced case studies, the problem would become worse. In our report, we identified knowledge transfer and collaboration as the main “pathways to impact”. These are umbrella terms covering a host of not necessarily measurable activities.
The way forward would be to recognize that research impact is a process, not a product. Processes can be non-linear, serendipitous and fragmented. They involve conversations, emails, telephone calls, tweets, Facebook messages or even non-verbal communication. It is these things that form the context in which products can be used, and, in the case of impact, context is key.