The latest and third iteration of the Knowledge Exchange Framework (KEF), published today, marks another key milestone in the development of the performance framework. It offers the first opportunity to delve into a fuller suite of its capabilities for drawing more meaningful comparisons of university knowledge exchange (KE) performance.

The KEF has been instrumental in pushing forward the quality and use of KE data and evidence since its first publication in March 2021 – by stimulating more strategic approaches to the internal collection of university data, providing more meaningful performance comparisons, and highlighting areas where we need to improve metrics in the long term. The KEF also provides a scoping description of KE, shedding light on where we already have good metrics, where metrics need improvement – and the areas needing focused expert attention before they can be populated with any metrics at all (the work of our national centre development, working with the Higher Education Statistics Agency (HESA), UCI[1] and others).

Why KEF3 is important to help us shift the dial in KE policy and funding

The first two iterations of the KEF provided novel ways to compare the performance of a Higher Education Provider (HEP) with that of its peers, but the latest, more settled, KEF3 allows us to compare performance across multiple years. This is an important step: it lets us examine dynamic change over time and consider contexts and drivers. It may give us tools to describe where improvements are accelerating, prompting discussion of why – and of why improvement is not happening elsewhere. The whys could relate to a range of factors: positive or negative drivers arising from national policy and funding, from places or partners, or from the improvement policies and practices of HEPs themselves.

We can do this through a number of lenses, including across types of HEP (by cluster) and across types of knowledge activity (by KEF perspective). Over time we will build up a more robust time series with which to examine a huge variety of potential improvements. We have started to trial some analysis, which I set out for the first time in this blog.

What are we learning from KEF3?

Delving into some of the details coming to light, KEF3 shows us where there have been differences from KEF2 by type of university. Notably, in the ‘working with business’ perspective, large universities with a broad discipline portfolio (cluster E) increased their performance on average relative to the rest of the sector, whilst the average performance of very large, highly research-intensive universities decreased slightly relative to the rest of the sector.

Conversely, STEM specialist institutions’ performance in ‘research partnerships’ increased, while the number of research partnerships in cluster E HEPs decreased in comparison with KEF2. Overall, however, in the other KEF perspectives the relative performance of different types of HEPs remained broadly consistent with KEF2.

We are interested in hearing whether the differences described above resonate with those working in HE KE on the ground – along with observations on their drivers and likely future trends.

These initial pieces of analysis point to a deeper set of analyses that may be possible with future iterations of the KEF, building a richer toolset with which to understand KE performance and inform policy development. This, combined with our ambition for new and improved metrics through our national centre, will enable us to exploit the potential of the KEF and advance towards KEF-informed funding design in the long term. We would also welcome comment on the types of analyses we might trial in future iterations of the KEF.

[1] University Commercialisation and Innovation Policy Evidence Unit, at the University of Cambridge