Interviews for evaluating teaching

Focus group by the Smart Chicago Collaborative

This is an evaluation guide.

What is it?

An interview is a discussion, individually or in a group, prompted by a series of questions. Interview questions tend to be open-ended (if you need to ask closed questions, use a questionnaire instead) to explore complex or abstract phenomena by eliciting participants’ experiences, acts, decisions, reasons, opinions and perceptions. Most interviews are semi-structured: the wording of questions is determined in advance, but there is leeway for the interviewer to prompt participants to elaborate or clarify. Individual interviews create space for deep discussion about particularities, and a degree of anonymity which may draw out participants on sensitive matters. Group interviews are more efficient with time and effort and allow participants to respond to each other’s perspectives, but may discourage individual participants from speaking freely (Cohen et al., 2017, p527). Either way, interviews are far more resource-intensive than questionnaires – but they can be used to inform questionnaire questions and to give insights into questionnaire data and into naturalistic research such as observations.

There’s a lot to know about interviews – though don’t let that deter you from getting started. This guide aims to introduce some basic and less-obvious principles and refers you to some education-specific handbooks for further information.

What can it tell me about teaching?

Here are some examples of work colleagues at King’s have carried out. Francis and colleagues (2019) conducted six focus groups with Geography students to explore their understanding and experiences of assessment feedback and what they felt could be improved. They gained insights into students’ high, unmet expectations and perceptions of a lack of agency, which they published in the Journal of Geography in Higher Education. Mountford-Zimdars and colleagues (2015) used interviews in a mixed-methods project on attainment gaps in higher education which yielded a framework for addressing differential outcomes; among their publications is a special issue of the Higher Education Research Network Journal titled ‘Teaching in the context of diversity: reflections and tips from educators at King’s College London’. Dickson and colleagues (2018) ran a focus group as part of a mixed-methods study exploring the potential of peer-assessed oral case presentations in Psychology (a questionnaire enabled them to put some open-ended questions to a larger number of students). The students interviewed said that observing peers present made them feel more able to present themselves, and that although the responsibility of peer assessment was uncomfortable, it brought some benefits.

How could I use it?

Use interviews when you have open questions to ask. Interviews can contribute to formulating questions for questionnaires, and can help you interpret data from observations and questionnaire responses.

How can I collect the data or evidence?

After settling on a specific aim for the evaluation, operationalise your research questions as interview prompts which connect back to your aim. For example, an aim of improving assessment feedback might yield the prompt ‘Tell me about the most recent piece of feedback you received – what did you do with it?’.

Recruit participants. Because interviews are resource-intensive, your sample may not be representative, but aim for representativeness anyway: as far as possible, recruit a sample whose characteristics resemble those of the wider population you are interested in.

Because interviews explore complex, abstract concepts, Kember and Ginns (2012, p61) caution against putting words into participants’ mouths when developing questions, i.e. against asking them directly about the phenomenon you’re interested in. Participants may not be familiar with, say, threshold concepts, feedback for learning or backward design. Instead, focus respondents on concrete acts of study, such as their most recent lecture or assessment task, and ask questions which allow you to make inferences based on what you already know from your research into that phenomenon.

Interviews are almost always audio-recorded. If the subject is sensitive (e.g. assessment feedback), then to obtain full and frank responses the interviewer almost certainly needs to be somebody who does not know the participants and who is not responsible for assessing them. Gain informed consent according to ethical procedures (at King’s, see REMAS), including an information sheet, a signed consent form, explicit licence to stop participating at any time, and contact details. It is often important to students that their identity is treated as confidential, so with group interviews it is a good idea to ask participants to maintain each other’s confidentiality.

When scheduling, ‘piggybacking’ the interview directly after an event students are already attending will help to avoid drop-out.

Cohen et al. (2017, p513), drawing on Arksey and Knight (1999), summarise some principles for composing questions:

  • keeping vocabulary simple;
  • avoiding prejudicial language;
  • avoiding ambiguity and imprecision;
  • considering when it is justifiable to use leading questions;
  • avoiding double-barrelled questions (asking more than one point at a time);
  • avoiding questions which make assumptions e.g. that a participant carries out the required preparatory work for sessions;
  • avoiding hypothetical or speculative questions;
  • considering whether it is justifiable to ask sensitive or personal questions;
  • avoiding assuming that the respondent has the required knowledge / information;
  • considering recall (how easy it will be for the respondent to recall events, etc.).

How can I analyse the data or evidence?

Usually interviews are transcribed, although software such as NVivo does allow raw audio recordings to be analysed directly.

Kember and Ginns (2012, p66) demystify data analysis by decomposing it into a series of elements. In practice, progress through these elements is rarely a straight line but involves, as they say, a lot of to-ing and fro-ing as the investigator develops, refines and reconsiders their ideas. The elements can be summarised as:

  • first pass – exploring the transcripts and making preliminary notes about recurring ideas and themes, commonalities and emerging classifications;
  • theorising about the messages in the data; relating these back to your original purposes and aims; identifying variables (ideas and themes), describing them so that they can be understood and distinguished from each other (p74), and labelling them succinctly so that they can be used to code the data. Some examples of variables are ‘active learning’, ‘lecturing’, ‘source of experience’, and ‘accommodation for students’ characteristics’;
  • classifying variables into different subgroups or categories – you will eventually use these when you code your data. In practice categories can be quite messy – discrete, related, hierarchical or overlapping. For example, the category ‘teaching approach’ may comprise variables along continua, such as ‘content-centred motivation / learning-centred motivation’. The number of categories is a matter of judgment, but should be small enough that you can keep all of them in mind while coding;
  • coding – trying out your classification scheme by going through part of the data and applying the relevant categories (a minimal sketch of this step appears after this list). Here software like NVivo (freely available at King’s via the Software Centre) is invaluable: it lets you highlight a passage of text and click on a category you have specified, or specify a new one on the fly. It is almost inevitable that your classification scheme will change as you discover new categories in the course of your coding, or realise that you aren’t using a category at all. The coding can be as deep (sentence by sentence) or as high-level (case by case) as your purposes suggest and your constraints allow. Often these kinds of project are carried out alone, but if you are working with other coders, carry out comparisons to ensure reliability between coders (p80);
  • drawing overall conclusions about what the data is saying, related to your purposes and aims. Using NVivo in this semi-quantitative way allows you to see which categories are prevalent, to explore associations between variables, and to visualise relationships and causal models. You can represent conclusions in narrative form, as concept maps or tree diagrams, or in any way you find helpful (pp82-83).
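
To make the coding and tallying steps concrete, here is a minimal sketch in Python. It illustrates the mechanics only – real qualitative coding is a human judgement call that software such as NVivo supports rather than automates – and the category keywords and transcript excerpts below are invented for the example.

    # Keyword-assisted coding of transcript passages, then a tally of
    # category prevalence. All names and data below are hypothetical.
    from collections import Counter

    # Classification scheme: category -> indicative keywords
    scheme = {
        "active learning": ["discussion", "group work", "exercise"],
        "lecturing": ["lecture", "slides", "listening"],
        "feedback": ["feedback", "comments", "marks"],
    }

    # Invented transcript excerpts standing in for real data
    passages = [
        "The group work in seminars helped more than just listening to slides.",
        "I never know what to do with the comments on my marks.",
        "Discussion exercises made the lecture content stick.",
    ]

    # 'Code' each passage: attach every category whose keywords appear
    coded = []
    for passage in passages:
        text = passage.lower()
        categories = [cat for cat, words in scheme.items()
                      if any(word in text for word in words)]
        coded.append((passage, categories))

    # Tally category prevalence across the coded data
    tally = Counter(cat for _, cats in coded for cat in cats)
    for category, count in tally.most_common():
        print(f"{category}: {count} passage(s)")

In practice a human coder, not a keyword list, decides which categories apply to a passage; the point of the sketch is the shape of the output (a coded dataset plus a prevalence tally), which is what you then interpret against your aims.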

What else should I know?

For the kind of internal work this guidance mainly supports, you don’t need ethical approval (though we would love you to consider publishing beyond King’s, in which case you do need it). That said, it’s important to adhere to ethical principles of informed consent – in particular, being clear about confidentiality or identifiability, letting participants know what you will do with their data, reassuring them that participation will not affect any other part of their study or assessment, and letting them know they can withdraw at any time without penalty. King’s colleagues can consult Research Ethics guidance or contact King’s Academy.

Since recruiting for interviews often brings self-selected participants, one question that comes up is: how many is enough? Generally the advice is to carry on until no new themes emerge (the point of ‘saturation’). For focus groups, there is some evidence that three to six groups will surface around 90% of themes.
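
A quick way to monitor this as you go is to track how many new themes each successive group adds. A minimal sketch follows, with invented theme sets standing in for real analysis output.

    # Checking for saturation: how many new themes does each successive
    # focus group add? The theme sets below are invented for illustration.
    groups = [
        {"unmet expectations", "lack of agency", "timing of feedback"},
        {"unmet expectations", "legibility of comments"},
        {"lack of agency", "legibility of comments"},
        {"unmet expectations"},  # nothing new: a sign of saturation
    ]

    seen = set()
    for i, themes in enumerate(groups, start=1):
        new = themes - seen
        seen |= themes
        print(f"Group {i}: {len(new)} new theme(s): {sorted(new)}")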

Invaluable, down-to-earth, practical resources for this kind of educational inquiry are:

  • Cohen, L., Manion, L., & Morrison, K. (2017). Research methods in education (8th ed.). London; New York: Routledge (available online through King’s Library).
  • Kember, D., & Ginns, P. (2012). Evaluating teaching and learning: A practical handbook for colleges, universities and the scholarship of teaching. London; New York: Routledge (requested for King’s Library).

References

  • Cohen, L., Manion, L., & Morrison, K. (2017). Research methods in education (8th ed.). London; New York: Routledge.
  • Dickson, H., Harvey, J., & Blackwood, N. (2018). Feedback, feedforward: Evaluating the effectiveness of an oral peer review exercise amongst postgraduate students. Assessment & Evaluation in Higher Education, 1–13. https://doi.org/10.1080/02602938.2018.1528341
  • Francis, R. A., Millington, J. D. A., & Cederlöf, G. (2019). Undergraduate student perceptions of assessment and feedback practice: Fostering agency and dialogue. Journal of Geography in Higher Education, 1–18. https://doi.org/10.1080/03098265.2019.1660867
  • Hutchings, P., & Carnegie Foundation for the Advancement of Teaching (Eds.). (2000). Opening lines: Approaches to the scholarship of teaching and learning. Retrieved from https://eric.ed.gov/?id=ED449157
  • Kember, D., & Ginns, P. (2012). Evaluating teaching and learning: A practical handbook for colleges, universities and the scholarship of teaching. London; New York: Routledge.
  • Mountford-Zimdars, A., Sabri, D., Moore, J., Sanders, J., Jones, S., & Higham, L. (2015). Causes of differences in student outcomes. Retrieved from HEFCE website: https://dera.ioe.ac.uk/23653/1/HEFCE2015_diffout.pdf

Featured image source: Smart Chicago Collaborative, 2016. CUTgroup tester sharing. https://www.flickr.com/photos/smartchicagocollaborative/29695482654. Licensed as CC-BY-NC 2.0.
