What is it?
In recent years, educators have expressed concern about falling attendance (Marburger, 2006; Edwards and Clinton, 2019), and post-pandemic, in an online context, many students are perceived to be failing to engage in seminars through a lack of preparation, an unwillingness to speak, or even a reluctance to turn on cameras. Giving a small percentage-weighted grade to summatively assess the participation of individuals in class or synchronous online events has been mooted as a solution to this issue. This can take the form of homework worksheets etc. to be handed in, but more commonly participation is assessed through the way students contribute to the discussion or activities in the class.
Summative assessment incentivises students to attend, turn on cameras, speak and so on, because students prioritise that which is assessed (Sambell et al., 2013). However, should we do it?
Why might I use it?
- Participation in a group environment is a real-world skill and therefore forms a learning outcome of many programmes or modules. To align learning outcomes with assessment (Biggs and Tang, 2011), and if there is no other space to assess this on a module, it might make sense to assess it continually through seminar participation.
- In larger modules, students are often more reluctant to participate and have fewer opportunities to do so, and having this as a summative assessment encourages them to engage with this skill. Moreover, it sends messages to students about how this behaviour is valued in higher education (Bean and Peterson, n.d.). Because the assessment is summative, it provides opportunities to give feedforward to students about issues such as turn-taking, justifying an argument, cultural competency, inclusion and pedagogies of empathy (Gilbert, 2018).
How has it been used?
- LSE http://blogs.lse.ac.uk/education/2016/05/26/assessing-class-participation-report-on-an-lse-pilot/
- University of New South Wales https://teaching.unsw.edu.au/assessing-classroom-participation
Known issues and things to consider:
Research shows the act of participation is the source of the learning gain (Allsop, 2002; Chen and Lin, 2008). The assessment of it, however, is more controversial, and research points to problems which can negatively impact those benefits.
- Many educational theorists have cautioned against using assessment to elicit desired behaviour (Harland et al., 2015; Jessop, 2016; Boud, 2000; Carless, 2006; Torrance, 2015). Assessment is a 'moral act' (Brown and Knight, 2013) and there are ethical implications of using it as a punitive measure to address falling attendance or lack of engagement.
It does not follow that assessing participation is necessary if your reasoning is simply to 'make' your students active. Student activity is an outcome of being motivated, and preferably that means them being intrinsically motivated. A necessary function of the facilitative teaching role in the active classroom is to develop intrinsic motivation by designing interesting and meaningful activities that drive the curious student to respond. Assessment is an extrinsic force; it comes from without rather than from within. (Sheffield Hallam University: https://blogs.shu.ac.uk/learningspaces/assessing-classroom-participation-in-the-active-learning-classroom/)
- Assessing engagement, participation etc. is often used as part of a regime of continuous assessment. While there are many benefits to this approach, it has been called into question, given that summative assessment is the measurement of achievement (of learning outcomes) and the fidelity of the assessment can be misrepresented if student learning is considered to be cumulative rather than a collection of what is done at different points during the course (Sadler, 2009). This is why portfolio assessment, for example, is more than a collection of pieces of work: it is a coherent whole that represents student learning by the end of a course of study.
- Most research acknowledges the issues of reliability with this type of assessment, as it is based on a subjective judgement, usually by one member of staff (often a GTA), with a variety of independent variables. Marking, even with double-marking/moderation procedures, has been shown to be unreliable (Bloxham et al., 2016; McConlogue, 2010). Student perceptions of subjectivity and unfairness can cause resistance to this type of assessment, which can lead to further disengagement (Jacobs and Chase, 1992).
- There are many reasons why students might not choose to 'participate' and it might be worth considering these:
- cultural expectations and values (for example, students familiar with more didactic approaches to learning that put less value on student participation);
- lack of confidence (for example, concerns about saying something that makes them look foolish in front of their peers);
- language issues (for example, students believing that their English isn't good enough, or needing more time to formulate a response in a language that is not their own);
- learning preferences (for example, students preferring to learn by listening to others).
In online settings there can be additional factors such as:
- Technical issues: students may be working from computers or phones with inadequate functionality or bandwidth issues.
- Situational issues: students may be participating from spaces in which others are living or working that makes participation more challenging.
Assessing participation could add further stress to students in these situations and there are other more constructive ways to address them.
In addition, if we think about how we can ensure more student participation amongst increasingly diverse student cohorts, we have to consider what type of participation we want, and potentially question the notion of participation and engagement itself. Gourlay (2015) talks about the tendency to privilege Western ways of learning centred on talking, something she calls the 'tyranny of participation'. She argues that
"in expecting a certain type of student participation that is 'communicative, recordable, public, observable and often communal', institutions are imposing upon students 'restrictive, culturally specific and normative notions of what constitutes "acceptable" student practice'."
We might, for reasons such as welfare monitoring, fostering a learning community and building a sense of belonging, want students to be 'observable' and visible to us and to each other. However, for the reasons above, assessment is unlikely to be the best way to do this and could exacerbate welfare and related issues in the longer term – even if on the surface our students display the behaviour we want once we start grading!
- Seminars are often where the learning happens (see a meta-analysis by Czekanski and Wolf, 2013). Students build communities and relationships, offer opinions, experiment with ideas, explore their understandings and build skills. This should be a formative experience of learning through engaging with ideas, knowledge and concepts in a comfortable, low-stakes environment. If students are not participating, it could be a good opportunity to question and examine your learning environment, whether in class or in asynchronous activities. Dyment et al. (2020) have a great paper on when setting online work for student 'engagement' becomes 'busy work'!
How can I assess ‘participation’ if I still want to?
Explore the rest of Assessment for Learning at King's for examples of 'engagement' assessment which has meaningful learning at its heart (quizzes, drafting, critical incident questionnaires, peer assessment, self-assessment, exit tickets and more).