Fair and Robust Assessment of Teaching Performance

It would probably surprise many outside HE how small a role teaching has traditionally played in academic recruitment and promotion.  The rise of tuition fees has brought with it a change of focus, and institutions are now attempting a more rigorous approach to assessing teaching performance.  But with staff setting and marking their own assessments, and in the absence of any external assessment of teaching, there is no obvious ‘gold standard’ to use.

The current KPI-based system in use at Swansea has received widespread criticism.  In attempting to avoid subjective assessments from other academics, we instead face the tyranny of bad data.  As a theoretical physicist, I know that my own predictions stand or fall in the face of experimental data, but that data must be rigorously measured and analysed, with all biases fully accounted for.  There is a reason why scientific results are presented in papers that describe the experiment and the analysis, rather than simply stating a number.  Unfortunately, the current KPI system has none of these checks and balances: bias in student feedback is well established; response rates can be minimal; and even the rudimentary statistical information recorded at module level is stripped out, leaving a bare number bereft of context or statistical basis.
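
To make the point about response rates concrete, here is a minimal illustrative sketch (the scores and cohort sizes below are invented, not Swansea data): the same mean feedback score, reported without its sample size, hides the fact that a handful of responses carries a very wide margin of error.

```python
import math
import statistics

def mean_with_ci(scores, z=1.96):
    """Mean of 1-5 feedback scores with an approximate 95% confidence
    half-width (normal approximation; crude for very small samples)."""
    n = len(scores)
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores) if n > 1 else 0.0  # sample standard deviation
    return mean, z * sd / math.sqrt(n)

# Invented figures: similar underlying satisfaction, very different response rates.
low_response = [4, 5, 2, 4, 3, 5]              # 6 replies from a cohort of ~120
high_response = [4, 4, 5, 3, 4, 5, 4, 3] * 10  # 80 replies from the same cohort

for label, scores in [("6 responses", low_response), ("80 responses", high_response)]:
    m, hw = mean_with_ci(scores)
    print(f"{label}: mean = {m:.2f} +/- {hw:.2f}")
```

A bare module average strips out exactly this uncertainty, which is why the number alone tells a panel very little.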

Following representations from the local UCU committee, the University asked for UCU input into reforming the system.  As discussed at the UCU General Meeting on May 26th, a working group was established to make recommendations for a new system.  Details of the working group’s response can be found here.

Whilst we recognise the importance of the student voice, it is vital that all data is presented in context.  To this end we propose a system in which subject experts undertake peer review of teaching and produce an agreed narrative statement that supplies context and addresses the broader picture of teaching contributions.  This approach is similar to those used elsewhere and provides a far more nuanced, knowledge-based assessment than the current system.  All KPIs provide promotion panels with sticks with which to beat applicants if they so wish; it seems to be at the panel’s discretion whether or not to take a ‘computer says’ approach.  This does not seem to us a good way to take this matter forward.