Re-thinking assessment – a programme leader’s guide

Chris Rust argues that student achievement could be improved, and many of the persistent problems with assessment in universities overcome, if a strategic approach is taken to the design of the course programme and its assessment. Here is an advance view of a chapter to be published in the book “Designing undergraduate curriculum”, edited by Eeva Leinenon and Romy Lawson, in 2018.

“A grade can be regarded only as an inadequate report of an inaccurate judgement by a biased and variable judge of the extent to which a student has attained an undefined level of mastery of an unknown proportion of an indefinite amount of material”

(Dressel, 1957 p6)

Despite this wonderful summary of the problems with assessment, distilled into just one sentence, sixty years later assessment practices that lack reliability and validity remain the norm rather than the exception in universities across the sector and around the world. It is the intention of this chapter to offer guidance to anyone, but especially programme leaders and their teams, who wants to take a fresh look at their assessment practice, offering practical ideas based on the relevant assessment literature.

Why this is important can perhaps best be summarised in the following observation by David Boud: “Students can, with difficulty, escape from the effects of poor teaching, they cannot (by definition if they want to graduate) escape the effects of poor assessment” (Boud, 1995 p35).

And as to what is actually wrong with many of our practices, a more detailed, and therefore more useful, thesis than that offered by Dressel can be found in the work of the PASS project in the UK, which argues that there are six common problems with the assessment of student programmes in universities today (‘programme’ meaning the combination of modules or units that comprises the total student course experience).

  1. Failure to ensure the assessment of the espoused programme outcomes.

Because students may have choices between modules and the route they take through the programme, and because, even where that is not the case, the assessment of module learning outcomes is often aggregated (so not all outcomes have to be passed in order to pass the module), it cannot be guaranteed that students graduating from the programme have met all the stated module learning outcomes, let alone the programme outcomes.

  2. Atomisation of assessment: focused, at the micro-level, on what is easy to assess; failure to integrate and assess complex, higher-order learning; the sum of parts not making the intended whole.

Even if all module learning outcomes do have to be met in order to pass, for the above reasons, they may not actually add up to the espoused programme outcomes.

  3. Students and staff failing to see the links/coherence of the programme.

Even if the previous two issues have been satisfactorily addressed, and the module assessments do link together to ensure that learning from different modules is integrated, and the programme outcomes are met, do the faculty understand the linkages and structure of the programme? And even if they do now, will they after three or four years when there has been some turnover in the staffing?  And even if faculty do understand, do they ensure that the linkages and structure are communicated to the students sufficiently that the students understand how the programme is intended to fit together?

  4. Modules are too short to focus and provide feedback on slowly learnt literacies and/or complex learning.

Modules can often be just a term or a semester long. Can every topic be reduced and taught in such short periods of time? What about weaker students who may need longer? What about more complex parts of the curriculum?

  5. Students and staff adopting a ‘tick-box’ mentality, focused on marks, engendering a surface approach to learning that can ‘encourage’ plagiarism and ‘game-playing’.

Again, even if the module assessments do link together to ensure that learning from different modules is integrated and the programme outcomes are met, does the apparently compartmentalised structure of modularity, coupled with the importance of the marks awarded and their effect on the student’s final result for the programme, encourage a surface approach focussed on gaining marks rather than on learning?

  6. Too much summative assessment, leading to overworked staff, not enough formative assessment and inability to ‘see the wood for the trees’ in the accumulated results.

Compared with traditional, linear courses, modularity inevitably increases the amount of summative assessment. Each module has to be assessed, the argument goes, if it is to count – and very quickly one assessment is considered insufficient and it becomes two, or three, or more. The challenge of assessing a diverse range of learning outcomes is a major problem here. The purposes of summative assessment – making decisions on student progression, on whether students should ultimately pass and, in some cases, on fitness to practise – could be served with far fewer assessments, and probably more accurately. It is also arguable that, with the increases in summative assessment caused by modularity, plus the significant increase in class and cohort size caused by the general ‘massification’ of higher education, the activity that staff have given up in order to cope is formative assessment. And formative assessment is vital in offering opportunities for students to undertake assessment tasks that don’t count for marks, where they can take risks and learn from their mistakes.

(From the PASS Project Position Paper – http://www.pass.brad.ac.uk/position-paper.pdf)

I would suggest that any programme leader, or Head of School/Department, who seriously wants to improve their assessment practice should start with a detailed consideration of these six general problems, identifying the degree to which they apply to, and are issues on, their particular programme/s.

As for the solutions, they can be summarised thus:

  • Less, but better, summative assessment – it should explicitly link to learning outcomes, especially programme learning outcomes, and to the assessment of integrated learning (what the PASS project called programme-focussed assessment).

  • Reconsider how best to record the summative assessment of learning outcomes, and consider ways of moving to simple grading (e.g. pass/fail or pass/merit/distinction) as opposed to marks or percentages.

  • More formative assessment, and actively develop students’ assessment literacy within a community of assessment practice.

The rest of this chapter will consider each of these in more detail.

 

Programme-focussed assessment

Assessment should be “specifically designed to address major programme outcomes rather than very specific or isolated components of the course. It follows then that such assessment is integrative in nature, trying to bring together understanding and skills in ways which represent key programme aims [valid]. As a result, the assessment is likely to be more authentic and meaningful [relevant] to students, staff and external stakeholders.”

[From the PASS Project Position Paper – http://www.pass.brad.ac.uk/position-paper.pdf – my emphasis, and additions in brackets]

The first basic step in achieving this is for the programme team to ensure that the programme is designed according to Biggs’ principle of ‘constructive alignment’ (Biggs, 1999). “The fundamental principle of constructive alignment is that a good teaching system aligns teaching method and assessment to the learning activities stated in the objectives so that all aspects of this system are in accord in supporting appropriate student learning” (ibid.). This principle seems to be widely accepted across much of the sector, but almost solely at the level of the module. It should also be noted that while Biggs refers to ‘objectives’, the sector has generally preferred the notion of ‘outcomes’, which is arguably more helpful.

The concomitant three-stage design model – the essence of ‘constructive alignment’ – is:

  1. Identify the “desired” outcomes.

  2. Identify appropriate teaching methods that are likely to require students to behave in ways that will achieve those outcomes.

  3. Identify assessment tasks that will show whether the outcomes achieved by the student match those that were intended or desired.

Before considering individual modules, this approach needs to be applied first at the level of the programme which, by necessity, means bringing the ‘team’ of module leaders together at this initial design stage, and starting with the identification of the desired programme learning outcomes – obviously taking account of professional body requirements, national subject benchmark statements, and the like, where appropriate. Often module leaders do not feel that they are part of a team, but if a programme is to be successful it needs to be seen by all involved as a team enterprise – and this initial meeting is vital to initiating that view. In addition to agreeing the programme outcomes, it is also helpful to identify specific outcomes for each year of the programme – what would you expect a successful student to know and be able to do by the end of their first year, second year, etc.?

Through the process of working to help programme teams take a programme-focussed approach to course design, a group of developers from Dublin Institute of Technology and University College Dublin identified the importance of what they call “curriculum sequencing” which, they argue, has three vital elements (O’Neill, Donnelly & Fitzmaurice, 2014):

  1. Develop a collective philosophy – what do you want to be distinctive about your course and your graduates? What is the course designed to achieve? Why teach it in this particular way?

  2. Communicate sequencing to students and staff – ensure that all staff teaching on the programme understand the relationships between modules and how the programme is meant to come together and be seen as an integrated ‘whole’ (which may well need to be repeated as staff change), and that this is then further communicated to the students.

  3. Develop strong building blocks – which are very likely to be linked to ensuring the ‘delivery’ and assessment of the programme outcomes.

In relation to the first two elements, it is interesting to note that they are strongly supported by work done by Anton Havnes some ten years ago. In a detailed study of what might explain the differences between programmes with higher than average student results and other programmes in the same institution with below average outcomes, the only clear, significant difference he could find was the degree to which staff and students had a sense of how the modules fitted together and what the overall programme was trying to deliver. His conclusion was that where there is a greater sense of the holistic programme, students are likely to achieve higher standards than on more fragmented programmes (Havnes, 2008).

And if programme coherence cannot be achieved because it is a multidisciplinary programme and staff cannot work together sufficiently (perhaps they are not geographically co-located, or are in very diverse disciplines), then the team should at least make clear the differences between the different parts of the programme, especially their different epistemic frameworks, and decide how these differences will be communicated to the students.

So far as the notion of ‘strong building blocks’ is concerned, it may be useful to consider the following three practical examples of what that might look like:

An Australian university has considered including in its assessment policy a requirement that all programmes must identify capstone and cornerstone (small capstone) modules – a capstone module being one that is explicitly intended to draw together and integrate, and to assess that integration of, learning from preceding modules. There would be no prescription as to where such modules should sit or how many there should be, just that the programme must have and identify them. This is a simple but incredibly innovative and useful idea.

A European Business School has a course structure for each semester which contains three parallel modules for the first ten weeks followed by one integrating module for the final five weeks, in which students are put into groups and given a task specifically designed to integrate the learning from the preceding three parallel modules. In the Australian university’s terms, this final integrating module would be an excellent example of a cornerstone module.

The first year of a UK automotive engineering course has one substantial year-long module to build a working go-cart. In each of the three terms, there are also smaller term-long modules covering different subjects vital to that central purpose and therefore contributing directly to that core module. I’m not sure whether, in the Australian university’s terms, that central module would be deemed a cornerstone or a capstone, but it would certainly be one or the other.

Reconsider how best to record summative assessment of learning outcomes

In my view, one of the greatest scandals in university education around the world is the perpetuation of indefensible, unscholarly marking practices and assessment systems, primarily based on the statistically illiterate misuse of numbers (Rust, 2007; 2011). The logic of an outcomes-based approach to course design requires that the assessment should test whether the outcomes have been achieved, and therefore the sole criterion necessary should be “has the outcome been met?” Yet most assessment systems around the world persist in grading or, even worse, percentage marking, suggesting a level of precision that is not humanly possible. They also often tend to use criteria that bear little direct relation to the espoused outcomes. Then they aggregate these results, despite the fact that the separate scores assessed different things and their ranges will have been different, thus obscuring the different types of learning outcome those scores purportedly represent.
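As a deliberately simple illustration of the aggregation problem – the marks here are invented for this sketch, not taken from any of the studies cited – suppose a module has two learning outcomes, each marked out of 100 and combined by a straight average:

$$\text{module mark} = \frac{82 + 38}{2} = 60$$

The record shows a comfortable pass at 60%, yet the second outcome, taken on its own, was clearly failed. The aggregated figure reveals nothing about which outcomes were actually met – which is precisely the information an outcomes-based approach is supposed to provide.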

There is considerable research evidence showing how unreliable marking can be (e.g. Laming, 1990; Newstead & Dennis, 1994; Newstead, 2002), and not just between markers: even individual markers will often give a different mark to the same piece of work if it is marked again after a gap of time (Hartog & Rhodes, 1935; Hanlon et al, 2004). Essentially, “Grades are inherently ambiguous evaluations of performance with no absolute connection to educational achievement” (Felton & Koper, 2005 p562).

But the higher education sector stubbornly continues to ignore this research on the unreliability of marking and persists with these unscholarly marking and grading practices, even though this comes at great cost, particularly in terms of staff time. The result is vast quantities of unreliable and unhelpful data, which are nevertheless treated as objective truth and affect students’ lives and careers.

In terms of the primary purpose of summative assessment – making decisions about progression, qualification, licence to practise, etc. – we don’t need anywhere near the amount of data most programmes produce, and arguably this surfeit unhelpfully creates a ‘wood-for-the-trees’ situation. Far fewer, but more rigorous, assessment results would enable better decisions and save considerable amounts of staff time. Trying to decide whether a piece of work deserves 63 or 64 is not time well spent!

My recommendation to any programme team is to seriously consider whether summative assessment decisions need to be anything more than pass/fail, or possibly pass/merit/distinction if some kind of ranking is deemed desirable. And if this is seen as just too radical a step to take, at least consider the following compromise. If you have accepted the earlier arguments for ‘programme-focussed assessment’, ‘curriculum sequencing’ and the need for ‘building-block’ modules, why not at least restrict summative grading to the cornerstone and capstone modules, where the focus is on programme outcomes and the assessment of integrated learning is taking place (in the UK they could be classed as ‘honours’ modules), and make all the other modules essentially formative by simply assessing them pass/fail?

Make summative assessment tasks more effective

Once the programme team has dealt with the big picture, ensuring constructive alignment and curriculum sequencing across the programme, attention should then be given to the quality of the summative assessment tasks to be used. There are arguably three linked, but different, qualities that can help to make the task effective:

  • Validity – does the task truly assess what you claim it assesses? Writing a 500-word essay on “How to give safe injections” would not actually assess whether a nursing student could give safe injections.

  • Authenticity – is it a ‘real-world’ task? Does it look like something someone might ever be expected to do in a setting outside the university? Such tasks can be even more effective if they can actually be undertaken in a ‘real-world’ setting such as a placement, or through a ‘live’ project (so literally doing it ‘for real’).

  • Relevance – can the student see why this topic is important? Why they need to know this? How it fits with the rest of the subject and the bigger picture? It is even better if there can be personal relevance – if it can be something which the student is personally interested in, and wants to know more about or be able to do – so can there be elements of choice about the task/s undertaken?

Both authenticity and relevance should make the activity meaningful to the student, and there is strong research evidence that this should therefore increase the likelihood that the student will be motivated to engage with the task (the exact antithesis of what common parlance means by an ‘academic exercise’, namely something that is essentially pointless!).

Creating a community of assessment practice

For faculty

In order to achieve any degree of marker reliability, it is necessary to bring the markers into a community of assessment practice.  In the UK, this was recognised by the Higher Education Quality Council (the precursor to the Quality Assurance Agency) twenty years ago:

“Consistent assessment decisions among assessors are the product of interactions over time, the internalisation of exemplars, and of inclusive networks.  Written instructions, mark schemes and criteria, even when used with scrupulous care, cannot substitute for these”   (HEQC, 1997)

It is, therefore, slightly surprising to observe that much of the emphasis of the QAA and quality assurance since then has in fact been on the latter activities, with ever greater requirements for instructions, marking schemes and criteria to be in writing. Recently, however, especially in exploratory work in Australia, there seems to have been a growing recognition of the need for the social processes highlighted by HEQC and for markers’ judgements to be ‘calibrated’ (Sadler, 2013; Watty et al, 2014). If programme teams are to ensure the best possible reliability of their assessment decisions, it can only be achieved by the programme instituting internal, faculty calibration processes.

For students

But it is also vital that the students are brought into this community of assessment practice – specifically, as a key aspect of the wider disciplinary community of practice – for a number of linked but distinct reasons.

Firstly, there is the fairly basic argument of Sadler that an indispensable condition for improvement in student learning is that “the student comes to hold a concept of quality roughly similar to that held by the teacher” (Sadler, 1989). The sooner the student understands what counts as ‘good’ work the better, as they will then be able to produce better work. And this acculturation regarding standards is best developed through participation because “participation, as a way of learning, enables the student to both absorb, and be absorbed in the culture of practice” (Elwood & Klenowski, 2002, p. 246).

Secondly, this is arguably the key to solving the feedback dilemma currently plaguing higher education. The dilemma is that all the research evidence suggests that feedback potentially has a crucial role to play in the assessment cycle in supporting and developing students’ learning, but in reality students are hugely critical of our feedback practices, and the quality of the feedback they receive, and therefore tend not to engage with it (Price et al, 2010; O’Donovan et al, 2015). A growing number of studies show that the passive receipt of feedback has little effect on future performance (e.g. Fritz et al, 2000) and that dialogue and participatory relationships are key elements of engaging students with assessment feedback (e.g. ESwAF FDTL, 2007). But sadly, many attempts by institutions to address the feedback dilemma have ignored this research, choosing instead to try to improve the existing practices. This could be characterised by the mantra ‘more feedback, quicker!’ But it completely ignores David Boud’s apposite observation that feedback that has no ‘effect’ cannot be seen as feedback – it is simply input (Boud and Molloy, 2012). Given the evidence that much current feedback is not engaged with by the students, is largely ignored and has no discernible effect on the students’ subsequent work (e.g. Hounsell, 1987; Fritz et al, 2000; MacLellan, 2001), more of what we are doing, however quick, is not going to improve feedback.

Why the ‘more, quicker’ approach is ineffective and almost certainly doomed to only very limited success, if not failure, is also eloquently addressed by David Nicol when he says that it is not sufficient to make feedback a better ‘monologue’ – feedback must be seen as a dialogue (Nicol, 2010).  And he goes on to say that it is not pedantry to point out that, technically, a dialogue is not just two people (that’s a duologue) – it is three or more.  He is not arguing simply to turn feedback into a conversation between student and tutor – it needs to go much wider than that, including student with student, in the wider community of practice.

Finally, there is powerful research evidence that there is a direct correlation between student involvement and engagement in interactions at all levels with others in the programme (both faculty and students), and their academic success.  Based on the huge database provided by the US National Survey of Student Engagement, Astin has shown that the most significant factor in student academic success is student involvement, fostered by student/staff interactions and student/student interactions (both formal and informal) (Astin, 1997).  And a study by Graham Gibbs, trying to identify what departments in elite universities around the world, rated highly for both their teaching and their research, had in common, concluded that the only identifiable similarity was high levels of student involvement (Gibbs et al, 2008).

I would argue that a major benefit to the student, gained through interaction and involvement, which helps to explain their subsequent academic success, is the acquisition of ‘assessment literacy’. If students are to reach their true potential in terms of their assessed performance, the cultivated development of their assessment literacy should therefore be of prime importance to the programme team in planning a programme’s assessment.

Assessment literacy encompasses:

  • an appreciation of assessment’s relationship to learning;

  • a conceptual understanding of assessment (i.e. understanding of the basic principles of valid assessment and feedback practice, including the terminology used);

  • understanding of the nature, meaning and level of assessment criteria and standards;

  • skills in self and peer assessment;

  • familiarity with technical approaches to assessment (i.e. familiarity with pertinent assessment and feedback skills, techniques, and methods, including their purpose and efficacy); and

  • possession of the intellectual ability to select and apply appropriate approaches and techniques to assessed tasks (not only does one have the requisite skills, but one is also able to judge which skill to use when, for which task).

(Price et al, 2012 pp10-11)

It should be noted that students undertaking multidisciplinary degrees may well be compelled to negotiate between the different assessment literacies bound up in the different disciplinary cultures and practices, and this needs to be explicitly recognised by the faculty involved.

Central to the development of assessment literacy is the involvement of students in self and peer assessment, and the opportunities this offers for all-important dialogue about assessment. Involving students in the assessment process is not a new idea, and has been widely introduced and researched (e.g. Falchikov, 2004). What is needed is a conceptual shift whereby a commitment is made, at the programme level, to the planned development of student assessment literacy (or literacies in a multidisciplinary programme). This will, in turn, require the programme team to make strategic decisions about the use of self- and peer-assessment at the programme planning and design stage. Such an approach pre-empts a reliance on the interest, enthusiasm or skill of individual module leaders who may, if permitted to make module design decisions in isolation from considerations of the student experience and programme design, choose to use or reject self- and peer-assessment as potential ways of varying the design of assessment tasks.

There is also a strong connection between assessment literacy and the employability agenda [and what David Boud, in Australia, has termed “sustainable assessment” – see https://www.uts.edu.au/sites/default/files/davidboudKeynote.pdf ].  A key aspect of employability is critical self-awareness – the “students’ awareness of the [graduate] attributes and their understanding of their own personal development of the attributes” (Rust and Froud, 2016 p9).  Put simply, it is surely an essential attribute of a graduate and/or a professional to be able to assess the quality of their own work and also the quality of the work of their colleagues and peers?

More formative assessment and improving the effectiveness of feedback

If we are to maximise the potential of feedback to support and improve student learning, there needs to be an increase in formative assessment opportunities, where students can learn from their mistakes, and possibly take risks, without jeopardising their final grades. While, as previously stated, it is unquestionably true that ‘we assess too much’ when it comes to summative assessment, given that formative assessment is primarily focussed on learning, there can’t really be too much formative assessment – that would be like saying ‘we have too much learning’. This, of course, comes with the caveat that the overall work demand on the students (i.e. expected total student learning hours) must be realistic. And if the potential benefit of formative assessment tasks is to be maximised, the students will need effective feedback, whether that comes from the tutor, themselves and/or their peers.

Val Shute, in the US, memorably argues that if feedback is to be effective, you need ‘MOM’ – Motive, Opportunity, and Means (Shute, 2008). To engage with feedback, students need to be motivated, and that is much more likely if they have an opportunity, in the foreseeable future, to put the feedback into practice. This can be in the form of having another go, either redoing the same piece of work or undertaking a new but similar task. Even if this is achieved, however, she argues that the feedback may well not be effective in helping to improve the student’s work. If, for example, your feedback tells them that their analysis is not very good, this may help to explain any indicative grade given, but it doesn’t help the student analyse better. Their analysis may well not be very good because they don’t understand exactly what you mean by analysis, or what good analysis looks like. In order to undertake better analysis they need the ‘means’ to help them. And depending on context, this could take all manner of forms – ‘come and see me for a tutorial’, ‘I suggest you re-read chapter three of the textbook’, ‘go and look at x on the course website’, etc.

David Nicol (University of Strathclyde) advises that when you start designing a course, as well as considering the constructive alignment of the summative assessment, early consideration should be given to designing what preceding formative assessment will be undertaken, and that explicit feedback loops should be built in. [For detailed, practical examples of ways to increase formative assessment and to improve the effectiveness of feedback, look at the REAP project at Strathclyde – www.reap.ac.uk – and also O’Donovan et al, 2015]

Given that tutor feedback, if it is to be done well, can be expensive, particularly in terms of time, a review of resource allocation is recommended, with a view to focussing resources on high-value feedback where it can have the most impact on student learning. The points at which such feedback is of most value are likely to be where the students are most challenged – points of ‘troublesome knowledge’, where they may be confronted by epistemological jumps and/or where there are changes in the levels of support and autonomy (O’Donovan, 2010).

Summary

This chapter argues that student achievement could be improved, and many of the persistent problems with assessment in universities overcome, if a strategic approach is taken to the design of the course programme and its assessment.

In particular, this approach should:

  • develop a common philosophy

  • ensure both that the programme is constructively aligned, and the curriculum ‘sequenced’ with ‘building block’ modules, identifying explicitly where the programme’s outcomes and integrated learning are assessed

  • ensure that staff and students understand the programme’s philosophy, and the raison d’être of the programme structure

  • reduce the summative assessment points, focussing them explicitly on the assessment of the programme outcomes, and improve the rigour and quality of those assessments

  • adopt as simple a grade-based marking system as your institution will allow

  • increase the opportunities for formative assessment, and actively engage students in the assessment process, especially through dialogue, with the specific intention of developing their assessment literacy and improving the effectiveness of assessment feedback

  • explicitly recognise the ability to self and peer assess as an essential graduate outcome

Arguably, none of this should actually be difficult. And, ridiculously, probably the hardest requirement for many programmes will be the initial bringing together of the faculty concerned, getting them to see themselves as a programme team, and getting them to accept the idea of jointly and collaboratively designing the programme together. And, if anything, this is arguably getting harder as staff become more isolated in narrowly focused departmental structures built around research rather than teaching demands (Macfarlane, 2011).

Bibliography

Astin, A. (1997) What matters in college? Four critical years revisited. San Francisco: Jossey Bass.

Biggs, J. (1999) Teaching for quality learning at university, Buckingham: SRHE & Open University Press

Boud, D. (1995) Assessment and learning: Contradictory or complementary? In P. Knight (Ed) Assessment for Learning in Higher Education, 35-48, London: Kogan Page)

Boud, D. & Molloy, E. (2012) Rethinking models of feedback for learning: the challenge of design, Assessment & Evaluation in Higher Education, 38 (6)

Dressel, P.L. (1957) Facts and fancy in assigning grades, Basic College Quarterly, Winter 6-12

Elwood, J. and Klenowski, V. (2002) Creating communities of shared practice: the challenges of assessment use in learning and teaching, Assessment and Evaluation in Higher Education, 27, 243-256.

ESwAF FDTL (2007), ‘Final Report’. Available online at: https://mw.brookes.ac.uk/display/eswaf/Home

Falchikov, N. (2004) Improving Assessment Through Student Involvement: Practical Solutions for Higher and Further Education Teaching and Learning, London: Routledge Falmer

Felton, J. & Koper, P.T. (2005) Nominal GPA and real GPA: a simple adjustment that compensates for grade inflation, Assessment and Evaluation in Higher Education, 30 (6), 561-69

Fritz, C.O., Morris, P.E., Bjork, R.A., Gelman, R. and Wickens, T.D. (2000) When further learning fails: stability and change following repeated presentation of text, British Journal of Psychology, 91, 493-511.

Gibbs, G. et al (2008) Disciplinary and contextually appropriate approaches to leadership of teaching in research-intensive academic departments in higher education, Higher Education Quarterly, 62 (4), 416–436.

Hanlon, J., Jefferson, M., Molan, M. and Mitchell, B (2004) An examination of the incidence of ‘error variation’ in the grading of law assessments, United Kingdom Centre for Legal Education, Retrieved from http://www.ukcle.ac.uk/projects/past-projects/mitchell/

Hartog, P. & Rhodes, E. C. (1935) An Examination of Examinations, London: Macmillan

Havnes, A (2008). There is a bigger story behind. An analysis of mark average variation across programmes. European Association for Research into Learning and Instruction Assessment Conference, University of Northumbria.

Higher Education Quality Council (1997) Assessment in higher education and the role of ‘graduateness’, London: HEQC.

Hounsell, D. (1987) Essay Writing and the Quality of Feedback. In Student Learning: Research in Education and Cognitive Psychology, edited by J. T. E. Richardson, M. W. Eysenck, and D. Warren-Piper, 101–108. Milton Keynes: Open University Press.

Laming, D. (1990) ‘The reliability of a certain university examination compared with the precision of absolute judgments’, Quarterly Journal of Experimental Psychology, vol. 42 pp. 239-254

Macfarlane, B. (2011).  The Morphing of Academic Practice: Unbundling and the Rise of the Para-academic, Higher Education Quarterly, 65 (1), 59–73.

MacLellan, E. (2001) Assessment for Learning: The Differing Perceptions of Tutors and Students, Assessment and Evaluation in Higher Education 26 (4), 307–318.

Newstead, S.E. & Dennis, I. (1994) Examiners examined: the reliability of exam marking in psychology. The Psychologist, 7, 216-219.

Newstead, S. (2002) Examining the examiners: why are we so bad at assessing students? Psychology Learning and Teaching, 2 (2), 70-75

Nicol, D (2010) From monologue to dialogue: improving written feedback in mass higher education, Assessment and Evaluation in Higher Education, 35 (5), pp 501-517.

O’Donovan, B. (2010) Filling the Pail or Lighting a Fire? The Intellectual Development of Management Students, International Journal of Management Education 9 (1): 1–10.

O’Donovan, B., Rust, C. & Price, M. (2015) A scholarly approach to solving the feedback dilemma in practice, Assessment & Evaluation in Higher Education, DOI: 10.1080/02602938.2015.1052774

O’Neill, G., Donnelly, R., & Fitzmaurice, M. (2014) Supporting Programme Teams to Develop Sequencing in Higher Education Curricula, International Journal of Academic Development, 19 (4) Available at http://www.tandfonline.com/doi/full/10.1080/1360144X.2013.867266

Price, M., Handley, K., Millar, J., & O’Donovan, B. (2010) Feedback: all that effort, but what is the effect?. Assessment & Evaluation in Higher Education, 35 (3), 277-289.

Price, M., Rust, C., O’Donovan, B., Handley, K. with Bryant, R. (2012) Assessment literacy: the foundation for improving student learning, Oxford: Oxford Centre for Staff and Learning Development

Rust, C. (2007) ‘Towards a scholarship of assessment’, Assessment and Evaluation in Higher Education, Vol. 32 (2), 229-237

Rust, C. (2011) The unscholarly use of numbers in our assessment practices; what will make us change? International Journal for the Scholarship of Teaching and Learning, 5 (1), January 2011. Available at http://academics.georgiasouthern.edu/ijsotl/v5n1/invited_essays/PDFs/_Rust.pdf

Rust, C., & Froud, L. (2016) “Shifting the focus from skills to ‘graduateness’”, Phoenix, Issue 148, June, 8-9. Available at: http://viewer.zmags.com/publication/1a21a329#/1a21a329/2

Sadler, D. R. (1989) Formative assessment and the design of instructional systems, Instructional Science, 18, 119-144.

Sadler, D.R. (2013) Assuring academic achievement standards: From moderation to calibration. Assessment in Education: Principles, Policy and Practice, 20 (1), 5-19

Shute, V. 2008. Focus on Formative Feedback, Review of Educational Research 78 (1), 153–189.

Watty, K., Freeman, M., Howieson, B., Hancock, P., O’Connell, B., de Lange, P., and Abraham, A. (2014) Social moderation, assessment and assuring standards for accounting graduates, Assessment & Evaluation in Higher Education 39 (4). DOI: 10.1080/02602938.2013.848336

About the author

Chris Rust

Emeritus Professor of Higher Education, Oxford Brookes University. Before retiring in September 2014, after over 25 years at Brookes, Chris had been Associate Dean (Academic Policy). Previously, for ten years, he was Head of the Oxford Centre for Staff and Learning Development (OCSLD) and Deputy Director of the Human Resource Directorate. Between 2005 and 2010 he was also a Deputy Director for two Centres for Excellence in Teaching and Learning – ASKe (Assessment Standards Knowledge Exchange) and the Reinvention Centre for undergraduate research (led by Warwick University).

In OCSLD, with thirteen colleagues, he helped to provide both staff and educational development support to the University’s academic Faculties and support Directorates for 23 years. For six years he was Course Leader for the University’s initial training course for new teaching staff.

He achieved a PhD by publication in 2003 and became a professor in March, 2010.

He has researched and published on a range of issues including:

  • the experiences of new teachers in HE
  • the positive effects of supplemental instruction
  • ways of diversifying assessment
  • improving student performance through engagement in the marking process
  • the effectiveness of workshops as a method of staff development.

Mostly he has focused on researching and writing about assessment, including:  improving student learning through active engagement with assessment feedback, and the significance of both explicit articulation and socialisation processes in improving students’ understanding of assessment requirements and assessment feedback.

He is also interested in the design, development and use of social learning space in universities, as well as the development of research-based learning in the undergraduate curriculum, including its potential effect on university organization.

In the 90s he contributed to the design and delivery of a national programme of staff development in higher education on the issue of teaching more students and over the years has run numerous workshops around the country and internationally on a range of issues including teaching large classes, developing assessment strategies, and engaging students with assessment and feedback.

Most recently he has been involved in a research project into the effectiveness of the external examiner system and how it might be improved.

He has been a Fellow of the RSA, a Senior Fellow of SEDA (Staff and Educational Development Association) and was one of the first fourteen Senior Fellows of the UK Higher Education Academy, for whom he was also an accreditor.
