Snowball Metrics: a university-driven, bottom-up initiative
- Published: Thursday, 28 May 2015 13:48
- Written by Imperial College London
Case Study by Elsevier
John Green, then Chief Coordinating Officer at Imperial College London, was asked to merge five independent medical schools to achieve efficiency in clinical services. He chose to develop an evidence-based model that was agreed to, and supported, by the faculty, but ran into problems when trying to use academic CVs, because these are, of course, customised to the strengths of the individual academic.
Green asked the academics to define a range of criteria, appropriate to each specialty, against which they would be assessed, and then gathered the information needed to evaluate these criteria systematically in a single system, so that his evaluation would be consistent, robust and evidence-based.
Green then led a joint study conducted by Imperial and Elsevier, funded by JISC(1), which found that research institutions across England were also frustrated by the absence of an agreed way to look at performance. Funders’ requests for performance indicators tended to be similar, but different enough that new work had to be done each time a request was made, so that institutional systems were driven by funders’, rather than institutional, needs.
“The formats of research-related information HEIs are required to submit to external bodies are rarely harmonised, with even minor differences between requirements enforcing additional administrative effort. The information generated may be of benefit to the requiring agency though is not necessarily aligned to how institutions would wish to be assessed.
"With eight research intensives devising the Snowball Metrics, both in terms of the scope and the details of their calculation, institutions can be confident that the metrics created are useful, reliable and where appropriate, generated using existing data.”
Ian McArdle, Research Systems and Information Manager, Imperial College London
The lack of consensus also made it impossible for institutions to benchmark themselves against each other using metrics based on their own data.
Snowball Metrics(2) was initiated by eight leading UK research-intensive universities, to address these shortfalls. Snowball Metrics is owned by universities around the globe, which agree on “recipes” - methodologies that are robustly and clearly defined - so that the metrics they describe enable the confident comparison of apples with apples. The recipes are available free-of-charge to be used by any organization, and the resulting benchmarks provide reliable information to help understand research strengths and weaknesses, and thus to establish and monitor institutional strategies.
“We are approaching benchmarking in a very pragmatic way, by first organising our internal research data to create a rounded picture of performance at a variety of levels. This has been achieved through our implementation of Elsevier’s Pure.
"Currently we are working on a new project to aggregate this data strategically and monitor our progress against external comparators using Snowball Metrics. The Snowball framework has helped us to prioritise the base data we need to create meaningful metrics and to ensure that we collect and analyse data in line with agreed sector definitions.”
Scott Rutherford, Director of Research and Enterprise, Queen’s University Belfast
Snowball Metrics are data source- and system-agnostic, meaning that they are not tied to any particular provider of data or tools. The institutional project partners were keen that the recipes should be feasibility-tested on real data, drawn from a range of institutional systems, before they were published, and approached Elsevier to conduct this testing.
“If it was not for the involvement of Elsevier, and their project management skills, we’d probably still be talking about the definition of the first metric. They absolutely sat around the table deciding on the metrics, but they have not had any casting vote. They have provided technical expertise in terms of feasibility, as well as project managing the initiative.”
Malcolm Edwards, Head of Planning and Resource Allocation, University of Cambridge
Elsevier also project-manages the initiative and supports the communication of its outputs.
“Our participation in Snowball Metrics is in line with Elsevier’s mission to advance science, technology and health. We have significant skills in house in handling large data sources, communicating globally, and in program management. We would be able to learn an enormous amount from the universities involved, and use this input to help build the best research intelligence systems, tools and services.”
Nick Fowler, Managing Director Research Management, Elsevier
The aspiration is for Snowball Metrics to become global standards that are endorsed by research institutions internationally, and to cover the entire spectrum of research activities. The institutional project partners first tackled the “low-hanging fruit” metrics to work out a best practice of reaching consensus and feasibility testing; the next phase focused on some of the less obvious metrics, including those in the area of knowledge transfer(3).
“Different countries have varying knowledge transfer objectives, and this often makes international comparisons difficult. Nevertheless, University of Bristol is one of the leading universities in the world and it is important to benchmark knowledge exchange internationally alongside our research, and to learn from best practice wherever it takes place. We appreciate the international Snowball Metrics perspective alongside the UK view.”
Sue Sundstrom, Head of Commercialisation & Impact Development, University of Bristol
In the UK, these metrics were partly inspired by the HEBCI survey, since one of the principles of Snowball Metrics is to reuse existing definitions as a starting point where sensible. They cover industry collaborations, whether consultancy or contract research, and commercial outputs: IP and spin-offs, by both volume and income.
“At Imperial College London, we used robust metrics for many reasons: to evaluate staff performance, to indicate relative strengths of groups, to benchmark against our competitors and so on. They are not the only evidence one uses in any particular scenario but they form part of the diagnostic tools to understand one’s business.”
John Green, Life Fellow of Queens’ College, University of Cambridge