A Reading of Glassick, Huber, and Maeroff.

Boyer’s 1990 Scholarship Reconsidered made the case for carefully broadening the range of activities recognised as a good use of a scholar’s time (previous blogpost). Glassick, Huber, and Maeroff wrote Scholarship Assessed in 1997 to deal with some of the difficulties which had apparently arisen in the interim. Specifically, they seem to focus on how these very different activities can be assessed, particularly within the tenure system.

Their core thesis is that excellent examples of all of Boyer’s Scholarships share a common structure:

  1. Clear goals

  2. Adequate preparation

  3. Appropriate methods

  4. Significant results

  5. Effective presentation

  6. Reflective critique

Assessing scholarship then becomes a process of seeing whether the scholar can evidence these six qualities in their activities. They go on to discuss how this approach will fail utterly if the institution’s internal processes lead to a lack of trust in the system.

There is a lot to like here. Pushing away from a 1990s USA model of publication counting, the framework is broad enough to embrace most things which conform to a project model. Taught a course? Tell me what your goals were. Served on a committee? Tell me what significant results you accomplished. Developed an interdisciplinary project? Tell me how you presented your findings.

The obvious objection is that there is a huge gap between the vague six-point framework and any specific piece of scholarship. This gap leaves significant scope for interpretation on the part of the assessor. That is not catastrophic in itself, but it does open the door to perceptions of unfairness (which the authors emphasise are catastrophic).

My own concern is with the Scholarship of Teaching and Learning. As I’ve discussed before, Boyer saw SoTL as a way of valuing time spent teaching. Not researching teaching (though this is another worthy activity), not administering teaching (though this, too, is A Good Thing). The actual, coal-face process of supporting student learning.

“Significant results” is the really interesting item. The authors point to the over-use of student evaluations of teaching to address this criterion, and note how deeply onerous it can be to design and gather more robust evaluations.

What is a “significant” piece of teaching? One that helps a cohort to reach a good average mark? One that changes a life? One that changes 10% of lives in the cohort? One that allows students to engage with subsequent courses? One that students enjoy? One that allows all students to grasp the core concepts? One that allows a few students to gain true mastery of a topic? One that improves students’ self-efficacy? One that students use to secure their first job? One that they realise the value of in ten years’ time?

I sort of wonder where the answer to this question should come from. Do I say “I wanted students to learn how to construct a Frost diagram, and when I tested them on it they could do it”? Does my institution say “we want our students to learn about ways low-carbon technologies can solve global challenges”? Does the RSC say “we want Chemistry graduates to be Problem Solvers”? Should students have a say? Should the question be asked of an individual lecture or a single exam paper or a whole degree programme?

I think the Scholarship Assessed model is impracticable until the issue of significance is resolved, but it’s also spot-on for articulating the question which needs answering.