

 [2005] 1 Web JCLI 

Just a Minute: The use of a Classroom Assessment Technique in Legal Education

Alisdair A. Gillespie

Principal Lecturer in Law
University of Teesside

[email protected]

Copyright © Alisdair Gillespie 2005
First published in Web Journal of Current Legal Issues


Summary

The use of minute papers to facilitate student learning in law has hitherto been little examined. This article reports a piece of action research into how minute papers can be used to improve the quality of student learning.


Contents

Introduction
Evaluation
Assessing Students or Student Learning
Methodology
Selecting the class
Action-Research
Selecting the CAT
Implementing the Research
Evaluating the Research
Focus Group
Findings
Understanding student learning
Amending the Pilot
Findings
Conclusion

Bibliography


Introduction(1)

This article discusses a piece of action-research that I undertook to examine the use of a Classroom Assessment Technique (CAT) in my teaching. The rationale behind the research was the question of how a lecturer knows whether a student is learning what the lecturer intended them to learn. We know that the purpose of teaching is to facilitate student learning (see, for example, Cannon and Newble (2000) p. 2), but at a conference I attended in 2003 the question was posed: how do we know what our students are learning, or even whether they are learning?(2)

Evaluation

If we believe that teaching helps students to learn, then it could be argued that evaluation of teaching is one method of reassuring ourselves as teachers that our teaching is appropriate. There are a number of ways of undertaking this, but each has its own problems. Perhaps the most common is the annual evaluation. At my own institution, as with many others, each module must be evaluated in the same way, usually within a set timeframe. Biggs (2003) argues that this approach is common in most universities, with each institution developing a standard evaluation questionnaire, often capable of being optically read (p. 277).

 The difficulty with this method, of course, is that neither students nor programmes are the same. It must be questioned whether a science subject can be assessed in the same way as a humanities subject. Surely they are different? We are constantly encouraged to consider our student group when we devise our teaching techniques (see Biggs (2003)) and this must mean that teaching will be sufficiently different to make any homogeneous evaluation pointless.

A problem with annual evaluation is that it tends to come too late for anything to be done about it. What happens if the evaluation shows that the students hated the module, did not understand any of it, thought the lectures and tutorials pointless, and are not confident about the examination? It is highly unlikely that any remedial action could be taken, given the relationship between the timing of the evaluation, the analysis of the results, and the date of the examination. Brown and Race (2002) also question whether students take such evaluations seriously, citing the example of a student who points out that a tutor has consistently failed to achieve good feedback marks and yet still teaches the same subject each year (p. 169).

Another standard method of evaluation is peer-observation of teaching. This is supposed to ensure that the teaching being provided to students is competent (see Brown and Race (2002) p. 179). If it works well it can be a good way of gaining confidence in your teaching and of discussing possible alternative delivery mechanisms with a colleague. However, if the colleague who undertakes the peer-observation is not supportive it can lead to a crisis of confidence (Brown and Race (2002) p. 180), and if neither participant takes it particularly seriously the information obtained is of little or no significance.

The most significant failing of peer-observation of teaching in the context of this paper, and a failing repeated in annual evaluations, is that it concentrates on teaching, not learning. Peer-observation is about how the lecturer comes across in the teaching. Whilst this can be beneficial in terms of getting the observer to watch student behaviour, body-language etc., or the clarity of presentation, it cannot by itself indicate whether students are actually learning what they need to learn. Similarly, the annual evaluation is arguably more to do with who is more personable than with whether students learnt anything, and its questions remain focused more on teaching, and the support offered by teaching, than on actual learning.


Assessing Students or Student Learning

Arguably the ultimate test of student learning is the assessment process, a position Brown and Glasner (1999, p. v) implicitly support when they argue:

 “Assessment matters…It matters to students, the tutors who assess them, the institutions in which they are assessed, the parents, partners and carers who support them, it matters to the employers who would like to offer them jobs on graduation and to the funders and who [sic] pay for higher education and want to see the maintenance of standards and value for money.”

In other words, assessment is the value-judgement of student learning: it allows the examiners to classify students and to decide what level of learning and skill each student possesses. Brown (1999, p. 6) modifies this slightly by differentiating between formative and summative assessment, arguing that the difference is:

  “…formative assessment is primarily characterized by being continuous, involving mainly words and with the prime purpose of helping students improve, summative assessment instead tends to be end point, largely numerical and concerned mainly with making evaluative judgements.”

Brown also argues that whilst summative and formative assessment are frequently referred to as opposites, they are in fact different ends of the same continuum (ibid.). This, it is submitted, is the key point and very pertinent: assessment per se is about making a value judgement as to the ability of a student to perform a task, and the difference between formative and summative lies simply in what one does after this value judgement. Formative assessment does not deny the student's performance; its purpose is to help the student perform better in that task next time. However, it must seriously be doubted whether assessment by itself necessarily helps us understand student learning. It is well known that students adopt varying approaches to learning. Traditionally these were referred to as ‘deep’ and ‘surface’ learning, but Entwistle identified a common third approach, the ‘strategic’ approach, where the student undertakes whatever type of learning is best suited to the assessment (see Entwistle (1997) p. 19). Thus it can be argued that assessment does not of itself help a teacher decide what a student has learned. The implications for a teacher are more profound than this, however. Whilst there is undoubtedly a link between teaching and learning (although Laurillard (2002) argues that defining and expressing the link is not particularly easy, as the psychological evidence appears somewhat confusing and ‘fuzzy’ (pp. 62-78, esp. 62-64)), a poor performance in an assessment does not by itself tell the teacher whether the fault lies in the student's learning or in the teaching. In other words, did the student fail despite the teaching, or because of the teaching?

Gaeddert (2003, p. 48) believes that the problem with assessment and traditional evaluation is quite profound:

 “Few, if any, [lecturers] know how student learning occurs. If professors receive feedback from students it is in the form of an end-of-semester rating.”

Chen and Hoshower (2003, pp. 72-73) agree with this and argue that feedback from students is normally of questionable significance, even when it is formative, because of the timing of the process. Gaeddert instead advocates the use of Classroom Assessment Techniques (CATs). Harwood (1999, p. 52) describes classroom assessment as:

 “[Where] a professor…continually assesses students who are taking his or her class… classroom assessment techniques…identify changes that will improve the learning of current students before the class term ends.”

In other words, the assessment is being used not in a summative or formative manner but in a more diagnostic capacity. Gaeddert (2003, p. 48) implicitly agrees with the notion of identifying changes when he argues that the idea behind CATs is that they provide simple formative feedback to the lecturer; this distinguishes them from traditional formative feedback, which usually flows from lecturer to student rather than vice-versa. Cottell and Harwood (1998, p. 552) state:

 “CATs…create an ongoing student-professor feedback loop…The professor closes the feedback loop by telling students not only what the CAT said about their learning, but what the professor and students can do to improve it.”

The essence of a loop is important, because the idea behind CATs is that they give both the student and the lecturer an opportunity to modify practice (be that the student's approach to learning or the lecturer's approach to teaching) before the summative assessments take place. A CAT is, therefore, a diagnostic aid that helps to remedy the blindness of lecturing staff identified by Gaeddert.

Arguably the leading text on CATs is Angelo and Cross (1993), but it is important to note that this text is not, nor does it purport to be, an authoritative critique of the use of CATs or an original piece of work; rather, it is a collection of CATs presented in a manner that allows the lecturer to understand what CATs are available. Angelo and Cross do, however, sum up the benefits of CATs and postulate that student learning may be enhanced by their use, although this has been questioned, or perhaps more accurately tested, by a number of authors.


Methodology

The use of CATs intrigued me and I wanted to consider their use in my undergraduate teaching, so I decided to undertake a piece of action-research built around one particular type of CAT.

Selecting the class

Before discussing the use of action-research it is worth pausing briefly to note the class in which I used this technique. During the relevant academic year I lectured on two subjects: “English Legal System”, a first-year core subject, and “Criminal Law”, which was in fact two second-year core modules (“Criminal Law 1” and “Criminal Law 2”) taught in consecutive semesters. The action-research was a personal project in that, although the subject group within which I work approved of the research, it would be conducted, initially at least, by me. This in essence meant that the research had to be conducted within the criminal law classes, because criminal law was the only subject in which I delivered all the lectures; I shared the English legal system lectures with another lecturer.

Action-Research

I decided that the methodology for this research would be based on action-research. Kember and Kelly (1993, p. 3) argue:

“Many educators have been concerned with research and theory on the one hand and daily practices of education on the other…”

This is an interesting point and emphasises that research does not have to be a remote inquiry which gathers data to be analysed; it can also be a learning opportunity in itself. Kember and Kelly (p. 3) continue by arguing that action-research permits this perceived gap to close:

“Action researchers try to close this gap between research and practice by creating a situation in which practitioners define research problems and conduct research in such a way that the outcomes are directly useful to classroom or other educational situations.”

In an educational context this means that the academic has the opportunity to mix scholarship and teaching. The research element allows teachers to investigate an issue that interests them with the intent of identifying a scholastic point, akin to ‘traditional’ research. The action aspect, however, allows researchers to be involved rather than to operate in an observational or inquisitorial manner (something that Fox (2003, p. 82) argues is the traditional form of social-science based research).

Action-research is not uncontroversial, however, and Fox (2003, pp. 87-88) argues that its use has declined somewhat recently and become slightly marginalised. Fox (p. 88) also argues that there are different forms of action research.

Whether it is possible to divide action-research into such categories, however, is more controversial, and Kember and Kelly (1993, p. 4) argue that the third category, emancipation-based research, can be difficult and is not always suited to the beginnings of a project. Other writers, most notably McNiff et al (2003, p. 12), adopt a more fluid definition and concentrate on the ability of a person to learn by doing. In this way action-research may be thought of as experiential learning, and it is undoubtedly a species of that learning theory, but, it is submitted, it differs from Kolb-type experiential learning in its change of emphasis.

Arguably the Kolb model concentrates on learning from reflecting on what has happened (see Light & Cox (2001, pp. 52-56) for a useful summary of Kolb's work) and on using these experiences as an incentive to new learning. Some authors, for example Cowan (1998), have adapted the model to form experiential learning loops rather than cycles, to show that the Kolb cycle is not necessarily straightforward, but these adaptations are all based on learning from what has happened before. Although reflection is an important part of action-research (see, for example, Kember and Kelly (1993, p. 4), who argue that action-research attempts to “develop teachers who are not only active practitioners…but also reflective practitioners”), its emphasis is slightly different in that the reflection is not on the past but on the current. McNiff et al (2003, pp. 12-13) argue that it is about reflecting upon what is happening rather than what has happened, and Orland-Barak (2004, p. 38) appears to agree, noting that there is a danger that reflection is misunderstood and becomes the central feature rather than an integral part of a wider process.

Cooper (2000, p. 280) perhaps takes a more pragmatic approach and, in a statement that is somewhat salient for this piece of research, argues:

“Action research is a method of introducing and evaluating innovative ideas in an attempt to resolve practical problems.”

Arguably this is a little simplistic, in that what is undertaken is not merely evaluation but a more substantive learning process, yet it contains an element of truth: evaluation is a crucial element of learning. Laurillard (2002, p. 233) argues that evaluation of teaching, and indeed of the whole course design, is essential as it helps ensure that the teaching and material are appropriate. The problem with evaluation, however, is that it does not go beyond the task: it is not a process of learning from performing an action but a check as to whether something works, so the process and its outcomes differ from those of action-research. If Cooper's point is adapted to say that evaluation is a part of the action-research process, then it is undoubtedly correct.

 One possible problem with the idea of action-research and, indeed, action-learning is put forward by Kivinen and Ristelä (2002, p. 422) who note: 

“People are able to do many things…without being able to explain the rules and principles…upon which this action is based.”

To illustrate this point they use the example of language, pointing out that many people are able to speak and to construct appropriate sentences without necessarily understanding or being able to explain the rules of grammar. What Kivinen and Ristelä appear to be arguing is that it is not enough just to be able to do an action; there must be a context surrounding the doing, so that the researcher is able to demonstrate the requisite theoretical or pragmatic approaches to the action, i.e. to identify not only the result but the process by which it occurred. They do, however, accept that learning by doing is an important part of any work, and they note what they term ‘the researcher's reality shock’, when a young researcher is first faced with practical rather than theoretical research and confronts the reality that model-based theoretical results do not necessarily accurately predict behaviour or real results (p. 427). Their point perhaps returns us to that made by Kember and Kelly (1993) and considered at the beginning of this section: that, done properly, action-research should help bridge the divide between the theoretical and the practical, allowing theory to be tested in real-life situations and using this not as a test or an application but as a learning process in itself. In this way action-research can be considered part of cyclic learning, in that the learning never stops: each cycle generates new potential areas of interest and research.

Selecting the CAT

The term “Classroom Assessment Techniques” is used to describe a wide collection of techniques. Angelo and Cross (1993) list 50 separate techniques and they profess that their book is not supposed to be a comprehensive anthology but rather a collection of the most common types. The first task, therefore, was to consider which CATs to use as part of the pilot. 

An obvious CAT to use was the Minute Paper, not least because this is what had sparked my interest in the first place. Angelo and Cross (1993, p. 148) make the point that of all CATs this is perhaps the most used and most adapted, and they describe it thus:

 “[The Minute Paper] is a versatile technique…provides a quick and extremely simple way to collect written feedback on student learning.”

The Minute Paper appeared to meet my criteria in that it provided a simple way of gauging the student learning experience. The CATs within Angelo and Cross (1993) are categorised according to the estimated levels of time and energy they require to prepare, to administer, and to analyse.

In all of these categories the Minute Paper was presented as involving little time and yet, given how widely it is used, it seemed to me that it must be a valuable resource and one that could help solve the conundrum of measuring student learning.

Although Angelo and Cross (1993, p. 151) note that the Minute Paper is widely adapted, its format always follows a common core: students are asked a small number of open-answer questions that require them to reflect on a learning activity. The question that appears standard to the technique asks the students to write down what they have learnt in the class. Even this standard question varies slightly: Almer et al (1998, p. 487) argue that it should ask for the single biggest thing that has been learnt, whereas Cottell and Harwood (1998, p. 553) and Angelo and Cross (1993, p. 148) believe it is not necessary to restrict the student to a single point. The sheet then typically asks a second question but, again, there is disagreement as to what this should be. Almer et al (1998, p. 487) argue it should ask for the main unanswered question, Cottell & Harwood (1998, p. 553) argue it should ask what questions (i.e. plural) remain, and Chizmar and Ostrosky (1998, p. 4) believe it should ask what the muddiest point was.

The question asked by Chizmar and Ostrosky (1998) is interesting because Angelo and Cross (1993, p. 154) list this as a separate CAT in its own right. Chizmar and Ostrosky (1998, p. 4) argued that the reason why they wanted to use the muddiest point as the second question was: 

“The first question directs students to focus on the big picture, that is, what is being learned, whereas the second seeks to determine how well learning is proceeding.” (see also Angelo and Cross (1993) pp. 154-155).

Before looking at the work of Chizmar and Ostrosky I had already decided to incorporate the muddiest point CAT into my Minute Paper but, unlike them, I intended to introduce it alongside the two standard questions, producing a three-question sheet. My reasoning was that asking students what questions they had seemed materially different from asking them what the muddiest point was. It is certainly conceivable that someone may understand a learning point but want to know more about it, or that someone may understand point x but not be clear how it could be extrapolated to point y.

The three questions on my minute sheet therefore asked what the students had learnt in the topic, what questions remained unanswered for them, and what the muddiest point had been.

This, I believed, was a useful adaptation of the paper. It is worth noting that for the first two questions I followed the approach suggested by Cottell and Harwood and by Angelo and Cross in not restricting the answers to a single point. My reasoning was that this would be more accurate, especially in a subject such as criminal law where a single lecture may well straddle more than one point of law.
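
For illustration only, the sketch below (in Python) shows how a pre-printed sheet of this kind might be generated. The question wording and the module and topic names are illustrative assumptions, not the exact text of the sheets used in the pilot.

    # A sketch of a pre-printed three-question Minute Paper of the kind
    # described above. The question wording, module and topic names are
    # illustrative assumptions, not the exact text used in the pilot.

    QUESTIONS = [
        "1. What are the most important things you have learnt in this topic?",
        "2. What questions about this topic remain unanswered for you?",
        "3. What was the muddiest (least clear) point in this topic?",
    ]

    def render_sheet(module, topic):
        """Return the printable text of one Minute Paper sheet."""
        header = f"{module} - Minute Paper - {topic}"
        lines = [header, "=" * len(header), ""]
        for question in QUESTIONS:
            lines.append(question)
            lines.extend([""] * 4)  # blank lines for a handwritten answer
        lines.append("This sheet is anonymous: please do not write your name.")
        return "\n".join(lines)

    print(render_sheet("Criminal Law 1", "Homicide"))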


Implementing the Research

Once the parameters of the pilot had been established, it was then necessary to identify how to implement it. One of the first factors to consider was when, and in what format, the Minute Paper would be used. Angelo and Cross (1993, pp. 151-152) argued that the minute sheet could operate at either the beginning or end of a class. The timing would depend on what was being asked (p. 151): 

 “If you want to focus on students’ understanding of a lecture, the last few minutes of a class may be the best time. If your focus is on a prior homework assignment, however, the first few minutes may be more appropriate.”

I decided that the Minute Paper should be used at the end of the class, because the intention was to focus on what the students' understanding had been. However, whilst CATs are normally considered to be class-based, i.e. undertaken as part of each class, I decided that my first revision would be not to use the form in every class. My reason was that in law it is not always possible to lecture on a topic in a single week; sometimes a topic must follow through into the subsequent week(s), and quite often the student may not fully understand the topic until they have experienced its full breadth, i.e. until they can see all the pieces of the jigsaw. Accordingly, I decided to use the minute sheet at the end of each topic rather than each class: the Minute Paper would be used at the end of the lecture in which a topic was concluded.

There are different ways of administering the paper. Gaeddert (2003) notes that at its simplest the Minute Paper involves asking the students the questions in the concluding minutes of a lecture, i.e. just asking them to write down their answers and pass them forward (pp. 49-50). Costello et al (2002, p. 25), in an adaptation of the paper known as a ‘reaction card’, altered this by providing a box into which students filing out of the lecture theatre could ‘post’ their answers. However, Harwood (1999, p. 55) makes a good point in saying that pre-printed forms can be beneficial, as they give the students the impression that this is a pre-planned exercise.

I agree with Harwood's argument and had always intended that the students should be fully aware of the Minute Paper. I therefore undertook two pieces of preparation. The first was to have a significant number of sheets pre-printed so that I could hand them out as appropriate. The second was to explain the minute sheet to the students, which I did in two ways. Firstly, I set out an explanation in the module handbooks for the modules where the pilot was to be undertaken; the handbook explained the reason for the pilot and what it hoped to achieve, and showed the students what the forms would look like. Secondly, in the first lecture I expressly referred the students to that section of the handbook and explained orally why I was running the pilot, highlighting that the idea behind CATs was, in part, that they should help the students themselves. Because assessment can cause anxiety (see Light and Cox (2001) pp. 169-171), I emphasised that the exercise was, in part, a way of seeing whether I was communicating effectively, and that the Minute Paper was therefore as much a test of my teaching as of their learning. I also emphasised that the minute paper was completely anonymous.

Anonymity is an interesting issue in this field. Almer et al (1998, p. 488) argue that anonymity is a strength of the Minute Paper, something that Harwood (1999, p. 55) recognises as an important feature in encouraging student honesty, although she questions whether it is necessarily appropriate, arguing that it leads to a situation whereby “professors…tend to follow up only on the most popular or interesting questions” (ibid.). This is an interesting point of view, but I believe it is slightly misplaced. The purpose of CATs is to take a broad-brush approach to assessing student learning, i.e. to examine how the body of students as a whole is doing rather than to focus on individual students (see Angelo and Cross (1993) pp. 4-6).

I considered that the time required to process each individual form, and to respond directly to each individual response, would be disproportionate to the task and could, arguably, defeat the point of the tutorials. However, I also noted the possibility that the Minute Paper might encourage students who would traditionally be wary of asking questions to pose them to the teaching staff. I wanted a mechanism through which such questions could be channelled, and I therefore created a discussion board on the module's “Blackboard” site. The idea was to allow the students to raise questions, debate issues and even answer each other's questions; the teaching staff would monitor the board and reply when appropriate. I allowed anonymous posting on the board, again to encourage ‘nervous’ students to ask honest questions about what they did not understand.

One of the most important aspects of the Minute Paper is the feedback. Cottell and Harwood (1998, p. 552) argued that a useful feature of a CAT is that it creates a feedback loop in which both staff and students feed back to each other. Angelo and Cross (1993, p. 152) agree and suggest that the Minute Paper can be very valuable here, as it need take only a few minutes to feed back the most common features of the minute sheet, although they issue the caveat that time control is very important because, if one is not careful, the feedback can become too long by inviting further questions etc. (p. 153). This latter risk was, to some degree, mitigated by the fact that the exercise was undertaken in lectures: there remains a tendency not to ask questions in a lecture, and a recognition that tutorials are principally focused towards this. I made sure the students were aware that the module tutor, although not undertaking the pilot, was apprised of the results, and that if they wanted further clarification on an issue they could seek it in tutorials or by making an appointment to see either myself or the module tutor.

I decided to feedback to the students in a very easy way. In my teaching I always prepare a link between the sessions by reminding students what we did in the previous week and ending a lecture by explaining what would happen in the next session. It was relatively easy, therefore, to insert an extra slide after this mini-recap to feedback the results of the minute sheet from the previous topic. It was noted above that Harwood (1999, p. 55) questioned whether precise feedback was given from the sheets and the answer to this must be “no”. If a lecture has any point (something which is perhaps the debate most often considered in pedagogic circles) then it must be to help improve student learning and thus it would not be appropriate to spend too much time focusing on the past. I saw the point of the feedback as being to allow the students to know what the major points from the sheet were. I gave very brief answers and directed them to appropriate reading. However I was also conscious of the fact that the minute sheet was not to be considered a wasted opportunity. Angelo and Cross (1993) argue that if it is used too often or too superciliously then the students would treat it as “a gimmick or pro forma exercise in polling.” Whilst I could not dedicate too much class-time to precise feedback I was not constrained in non-contact time. Each module had its own Blackboard site and I was keen to integrate my teaching to the module site. An obvious solution to the problem of feedback was to include fuller details of the minute sheet on the Blackboard module. I did this and the fact that it was a multi-media orientated environment meant that I could similarly direct students to further reading or relevant links at the same time. In this way I believed that the feedback loop would be strong and beneficial to both staff and students.
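
To give a concrete sense of how the most common points might be identified for the recap slide and the fuller Blackboard post, the following sketch assumes (purely for illustration; no such script formed part of the pilot) that each free-text answer has been read and tagged with a short theme, which can then be tallied:

    # A sketch of the tallying stage of the feedback loop. It assumes each
    # free-text answer has been read and tagged with a short theme by the
    # lecturer; the themes below are invented examples, not pilot data.
    from collections import Counter

    muddiest_point_themes = [
        "oblique intention", "causation", "oblique intention",
        "transferred malice", "oblique intention", "causation",
    ]

    def most_common_points(themes, n=3):
        """Return the n most frequently raised themes with their counts."""
        return Counter(themes).most_common(n)

    # The top themes become the recap slide; fuller answers and further
    # reading can then be posted on the Blackboard site.
    for theme, count in most_common_points(muddiest_point_themes):
        print(f"{count} sheets raised: {theme}")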


Evaluating the Research

Before discussing the results of this pilot, it is prudent to explain how I evaluated it. The initial evaluation was conducted through a questionnaire. Clarke (1999, p. 67) argues that the questionnaire is the most common form of evaluation and that it carries the advantage of simplicity. Ramsden (1992, p. 229) agrees that it is the standard evaluation tool but questions why this is so, arguing that it largely produces descriptive results. Whilst this is undoubtedly correct – and indeed Clarke (1999, p. 68) argues that this is precisely the purpose of the questionnaire – it does not necessarily mean that the questionnaire is a flawed tool. Ramsden's argument appears to be that a questionnaire is inappropriate as a means of evaluating the learning experience (a point reinforced by Biggs (2003, pp. 276-279), who suggests that it is perhaps the weakest form of assessment), but that is not the purpose of this evaluation. The evaluation undertaken here is not of the learning experience but of the use of the Minute Sheet. O’Neil and Pennington (1992, p. 25) believe that, for evaluating specific issues of this kind, the questionnaire remains a useful tool.

Burton (2000, p. 335) argues that questionnaires are not as easy to construct as people often think, because the language used in their creation is very important. The principal failing is ambiguity, or the use of conversational-type questions rather than ones that will elicit specific responses (p. 336). The most obvious way of dealing with this is, of course, to use closed questions, since these ensure that the expected range of answers is returned and also make analysis somewhat easier. However, Burton also argues that open-ended questions allow important and unpredictable results to emerge (p. 339). Arguably there is less need for unpredictable answers in the evaluation of a course, but I decided that the evaluation sheet should include at least one open-ended question, giving the students the opportunity to make additional comments after completing the closed questions.

The questionnaire was administered in one of the last compulsory lectures. The idea of using a lecture-slot is that it should ensure that most students are present although, of course, it would be very naïve to suggest that every student would be there. In all, 38 of the 63 students taking the module completed the evaluation; no lecture ever had full attendance, but every student present at that particular lecture completed the evaluation. The 38 responses therefore represent a sample of roughly 60 per cent (38/63 ≈ 60.3 per cent).

Focus Group

I followed up the questionnaire with a focus group because I thought there were some issues raised that could be explored in more depth. Clarke (1999, p. 77) argues that focus groups have become increasingly popular in recent years as a means of measuring group opinions. He cites Powell and Single (1996) who define the focus group as: 

“A group of individuals selected and assembled by researchers to discuss and comment on, from personal experience, the topic that is the subject of research.”

Oates (2000, p. 186) agrees and argues that the distinctive quality of a focus group is that it measures collective feelings and opinions rather than individual ones. In other words, whilst each member of the focus group may have an individual point of view, the real qualitative data arise from the interaction within the group and from the settled view it reaches.

Clarke (1999, p. 77) believes that the focus group as an evaluative tool offers advantages over a questionnaire, presumably because it can be quicker to obtain good qualitative data. However, he also cautions that there can be disadvantages in their use, most notably that there is less anonymity (p. 78). This is certainly true of this exercise, as I decided that I would chair the focus group rather than ask someone else to do it (Oates (2000, pp. 191-192) notes that there are arguments for and against the researcher chairing the group). I thought this could be managed by not constituting the group until after the assessment marks had been released, so that there could be no suspicion that anything said within the group could influence decisions made in respect of its members.

I decided to use only one focus group because of the limited number of students and the fact that they were together as one cohort. I thought that the sample size might be difficult, but the literature appeared to suggest otherwise. Krueger and Casey (2000, p. 72) argue that a group of between 10 and 12 is best, but Clarke (1999, p. 77) argues that there is no fixed size and that a smaller group can be beneficial, something with which Oates (2000, p. 190) agrees, noting that frequently a researcher will have to use whatever group can be empanelled, something often outside their control. Interestingly, Oates also reaffirms that it does not matter whether the members of the group are independent of the researcher or of one another (p. 78). Eventually a panel of six students was brought together.

I recorded the responses of the students by taking contemporaneous notes. Oates (2000, pp. 193-194) argues that the use of a tape-recorder is essential when using focus groups, but I believed that this might cause unease in the group. I also believed that a tape recorder would be needed in more complicated studies, where a number of focus groups examine an issue and the principal investigator may not be the facilitator of each group. I do not believe that taking contemporaneous notes in any way affected the activities of the group.

Findings

The results showed that every student understood why the Minute Paper was being used. This is a welcome result, not least because Angelo and Cross (1993, p. 28) argue that it is important that students understand the purpose of a CAT and their role within it. Gaeddert (2003, p. 49) agrees that it is important that students understand why CATs are being used, but argues that they must also feel that they are an active part of the learning process. Arguably this is an application, or explanation, of Cottell and Harwood's (1998, p. 552) argument that CATs create a feedback loop. The implication, of course, is that a CAT is only as good as its feedback loop; in other words, if the communication becomes one-way instead of two-way the system breaks down. In general the students were happy that a feedback loop was created: 82 per cent of respondents said that I closed the loop after each sheet. The interesting feature of this response is that it demonstrates that CATs quite often deal with student perceptions. I make this comment because I was very careful to ensure that the feedback loop was created, and I know that after every sheet I did feed back the results; that 18 per cent of students believed I did not is therefore quite interesting. The explanation is perhaps that the feedback happened during a lecture, so students who did not attend the lecture in which the feedback occurred may not have realised that the results had been disseminated. A solution emerged with the later sheets, when I used the Blackboard intranet site to respond to the minute sheet rather than relying solely on the lectures; the focus group demonstrated that the students appreciated the use of Blackboard in this way. Thus, rather than a purely cyclic dual-feedback loop, it is quite possible that the feedback loop requires a historical record: the information the lecturer provides should not be transient, spoken only in a lecture, but should be recorded in a document to which the students can refer.

Cottell and Harwood (1998, p. 553) argue that if CATs are used too frequently the students may become somewhat “burnt-out” by them. Angelo and Cross (1993, p. 153) consider this a possible drawback of the Minute Paper, but Harwood (1999) disagrees, arguing that in her study there was no evidence of “burn-out”. Within the student evaluation I asked the students whether the sheet had been used too often: 60 per cent believed that it had not, and only 18 per cent believed that it should have been used more frequently. These statistics appear to suggest that Harwood is correct and that, where the sheet is used appropriately, there is no “burn-out”.

A clear majority of students (82 per cent) found the sheet to be beneficial, with only 5 per cent saying it was not. The remaining 13 per cent did not indicate whether they believed it to be useful or not. It is quite likely that these students were unclear whether it had helped them, and this should not be too much of a surprise: CATs cannot work for everyone, and some students are almost certainly going to react against them. However, the vast majority did find them beneficial to their studies, which appears to support the literature: for example, Almer et al (1998, p. 495) believe that the use of the Minute Paper can help the student learning process, a point also confirmed by Chizmar and Ostrosky (1998), who reached similar conclusions.

This study does not allow me to draw any conclusion as to whether the student learning process actually improved, because to do so would have required a control group and a measurement of student learning, but there is no reason to suggest that the students are wrong. The students were studying at level 2 of an undergraduate course, by which point the academic literature suggests they should have identified their own learning style (see Honey and Mumford (1992)) and, presumably, have an understanding of how they learn. The fact that so many believed the sheet was beneficial would seem to suggest that they cannot all be wrong.

An interesting comment that one student reported on the evaluation sheet was:

 “[They are] very useful – especially for people who are reluctant to ask you afterwards.”

I had not contemplated that this might be an advantage of the Minute Paper, because I was looking at it as a class-wide rather than an individual tool. However, Costello et al (2002, p. 28) reported a similar finding, in that some students felt that they did not have to approach a lecturer directly to have a question answered. This is an interesting point and undoubtedly a useful advantage. Where the sheet is anonymous it is more difficult to respond to individual queries but, where something was obviously wrong, I would address it within the Minute Sheet feedback.

A question that was not placed on the initial evaluation but which I later put to the focus group, and which is slightly linked to the above, is whether there was a need for a pre-printed sheet. It will be remembered that Harwood (1999, p. 55) argued that students respond better to pre-printed sheets because they show planning, yet Gaeddert (2003) believed that this is not necessary and that the system works by simply asking the students to write down their responses on a piece of paper. Costello et al (2002) adapted the technique in a similar way by getting the students to write their responses on a card; these cards were not pre-printed and, whilst the authors do not expressly consider whether pre-printed cards would have been more appropriate, they record no problems with their use. The focus group was divided on this issue. Four of the six members argued that it would not matter whether the sheets were pre-printed, although two of these four qualified this by saying that pre-printed sheets would have been useful for the first couple of occasions but less necessary as the students became accustomed to the exercise. The remaining two students argued that pre-printed sheets were useful and did, as Harwood predicted, show that the pilot was a conscious decision rather than a spontaneous exercise. These two further argued that students might not have reacted as well to a spontaneous exercise, in that they might not have understood the rationale behind it.

Understanding student learning

Perhaps the most crucial question is whether the pilot helped me understand what the students were learning. It will be remembered that the principal reason for choosing this action-research project was doubt as to whether lecturers know what students are learning and, crucially, whether students are learning what the lecturer wants them to learn.

 The answer to the question is “yes”. Using the Minute Paper did allow me to have some confidence in what the students were learning, and an understanding of what they were struggling with. There were occasions when I could inform the tutor of areas of weakness that had arisen, allowing the emphasis of the tutorial to react to this. On another occasion I spent more time in a following lecture on an issue that the students were not clear about. The first question, asking the students what they had learned, also helped to assure the teaching staff that the students had, at least, identified the key aspects of each topic.


Amending the Pilot

The minute sheet ran as a pilot in the way described above throughout the first semester of the academic year 2003/4. After evaluating the pilot, and finding that it had been reasonably successful, I decided that in the second semester I would undertake some comparative work by adapting the sheet. The most obvious adaptation was to alter when, and how, it would be undertaken.

I decided to experiment by turning the Minute Paper from a physical paper into a virtual one, i.e. making the Minute Paper electronic. Chizmar and Ostrosky (1998, p. 4) report that in their study one lecturer did this by mounting the paper on the Internet (although it would appear more probable that it was mounted on an intranet) so that students could complete it electronically. I used the Blackboard teaching environment to create an electronic Minute Paper.

I decided to combine aspects of the minute sheet with an evaluation-type document. It will be remembered that there is an argument that traditional evaluation is of little use because of its timing (see, in particular, Biggs (2003) pp. 277-279). However, if evaluation were used more proactively, I believed it could achieve some of the purposes of the Minute Paper, and that combining it with other, more open-ended questions might produce encouraging results.

The questions that I decided to ask were therefore a mixture of polls and open questions. The polls asked the students to gauge how understandable a topic was, and the open questions allowed them to state what outstanding issues they had with the topic. A particular concern was whether the polls deviated from the purpose of CATs, in that they could be construed as asking not what the students were learning but whether they thought they were learning. However, by including open questions on what they had learnt and what questions remained outstanding, I thought this risk was minimised.
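
The structure of the electronic sheet can be sketched as follows (the wording, the five-point scale and the topic name are illustrative; in practice the sheet was built with Blackboard's own facilities rather than with code of this kind):

    # A sketch of the structure of the electronic minute sheet: closed polls
    # gauging how understandable a topic was, plus open questions. The
    # wording, scale and topic name are illustrative; the pilot used
    # Blackboard's own survey facilities rather than code of this kind.
    electronic_minute_sheet = {
        "topic": "Homicide",
        "polls": [
            {
                "question": "How understandable did you find this topic?",
                "scale": ["1 (not at all)", "2", "3", "4", "5 (fully)"],
            },
        ],
        "open_questions": [
            "What have you learnt in this topic?",
            "What issues remain outstanding for you?",
        ],
    }

    for poll in electronic_minute_sheet["polls"]:
        print(poll["question"])
        print("  scale:", " / ".join(poll["scale"]))
    for question in electronic_minute_sheet["open_questions"]:
        print("Open question:", question)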

I had decided to evaluate the electronic minute sheet purely by the use of the focus group discussed above. 

Findings

The electronic minute sheet was a disaster. Only four students used it, albeit all four engaged with it sensibly. The focus group explained that whilst, in principle, an electronic minute sheet was a useful idea, students were unlikely to go out of their way to complete it: they would have to remember to complete the sheet when they next went online, something they were unlikely to do.

Chizmar and Ostrosky (1998, p. 4) noted that in one of their classes an electronic minute sheet was used and that it did work. However, the key difference between their pilot and my own is that their teaching was delivered electronically too. Their students were already in the electronic learning environment, so electronic submission was little different from the paper-based form used elsewhere.

By adapting the sheet in the way that I did, I lost sight of the fact that one of the biggest strengths of the Minute Paper as a CAT is its immediacy. Indeed, arguably, I forgot the meaning of the ‘C’ in CAT: classroom. Whilst Chizmar and Ostrosky (1998) had success with an electronic Minute Sheet, this was because their classroom was online. Mine was not; my classroom was a lecture theatre, and I then tried to link it to the electronic world. That this was not successful reinforces the importance of proximity between the teaching and the CAT.

Conclusion

This piece of action-research was, to me, very useful. The use of CATs, and in particular the Minute Paper, assisted my teaching. The key to their use would appear to be confidence. Their use allowed me to be confident as to what students were learning, and it can be postulated that part of the reason students found them useful was that they could be confident in their own learning, since others were mentioning similar points. A significant advantage of the Minute Paper is that it helps demonstrate to students that they are not alone in asking certain questions or in finding particular aspects of a topic difficult to follow.

From my point of view it was very useful to be able to obtain a “snapshot” of student learning. It must be recognised that the students were better at telling me what they did not know than what they had learnt, but this is a question of training, and as the pilot continued through the year the detail on the sheets improved. There were occasions when the Minute Paper proved invaluable in allowing me to take remedial action to correct mistaken general impressions, or to identify further reading and exercises for students struggling with a topic.

That is not to say, however, that there are no lessons to be learnt from this piece of action-research; part of the purpose of undertaking action-research is to be informed by its results (see Kember and Kelly (1993) p. 3). Perhaps one of the most interesting results was that produced by the focus group concerning the formality of the sheets. There was at least one occasion when the Minute Sheet did not run as planned because I had forgotten to bring the sheets to the relevant lecture. When I first decided to use the CAT my preference had been for tear-off Minute Papers included in the student module handbook; I thought this would be the easiest way of using the sheet, as the students carry their handbooks to each lecture. However, it was not possible to produce such a handbook, which led me to print individual sheets, and this meant that the exercise relied on me remembering to take the sheets to the appropriate lecture.

In hindsight, however, it is possible to see that I was focusing on the tangible aspect of the Minute Sheet rather than on the process. Both the literature and the student focus group suggest that it is possible to run this CAT without pre-printed sheets. Whilst the focus group on the whole liked the idea of pre-printed sheets, it is also clear that a one-off ad hoc sheet could have worked: rather than abandoning the Minute Paper, I could have asked the students to write their answers on a piece of paper and pass them forward. The actual process would have been identical; the answers simply would not have been on pretty sheets.

The second lesson concerns timing. By asking the students to complete the sheet at the end of a topic, I was asking them to think about problem areas before they had truly finished the topic (i.e. before their own reading and the tutorials). To some degree this carried advantages: if there were particular flaws I could inform the tutors, who could rebalance the tutorial to place added emphasis on the misunderstanding. However, had I asked the questions at the beginning of the next topic then, arguably, I would have obtained a better reflection of the students' understanding, because by then they would (or should) have undertaken their own reading and had the benefit of a tutorial.

If Angelo and Cross (1993) are correct that the timing does not make a significant difference, then the results are still largely valid – and I believe that they probably are: the sheets did show some aspects where extra work was required – but in any future run of the minute sheet I would reverse the current system and use the sheet at the beginning of the next topic, so that I am assessing what the students have learnt rather than what they are learning.

The third lesson I learnt was not to shift focus away from what a CAT is. The electronic minute sheet did not work because I had started to think that it was the assessment technique that was useful, not the classroom assessment technique. CATs are very simple techniques, but they work because of their immediacy within a classroom. When a CAT is lifted out of the classroom, it inevitably begins to weaken.

However, notwithstanding these issues, I strongly believe the action-research was worthwhile and successful. The principal conclusion of this study must be that the Minute Paper allowed me to undertake a quick, simple evaluation of student learning. The use of the CAT has benefited me in my teaching and, I believe, the students in their learning.(3)


Bibliography

Almer, E.D., Jones, K. and Moeckel, C.L. (1998) “The Impact of One-Minute Papers on Learning in an Introductory Accounting Course” 13 Issues in Accounting Education 485.

Angelo, T.A. and Cross, K.P. (1993) Classroom Assessment Techniques (2nd Ed) Jossey-Bass Publishers. San Francisco.

Biggs, J. (2003) Teaching for Quality Learning at University (2nd Ed) Open University Press. Buckingham.

Brown, S. (1999) “Institutional strategies for assessment” in Brown, S. and Glasner, A. (eds) (1999) Assessment Matters in Higher Education SRHE/Open University Press. Buckingham.

Brown, S. and Glasner, A. (1999) Assessment Matters in Higher Education: Choosing and Using Diverse Approaches SRHE/Open University Press. Buckingham.

Brown, S. and Race, P. (2002) Lecturing: A Practical Guide Kogan Page. London.

Burton, D. (2000) “Questionnaire Design” in Burton, D. (ed) (2000).

Burton, D.(ed) (2000) Research Training for Social Scientists SAGE Publications. London.

Cannon, R. and Newble, D. (2000) A Handbook for Teachers in Universities and Colleges (4th Ed) Kogan Page. London.

Chen, Y. and Hoshower, L.B. (2003) “Student Evaluation of Teaching Effectiveness: An Assessment of Student Perception and Motivation” 28 Assessment & Evaluation in Higher Education 71.

Chizmar, J.F. and Ostrosky, A.L. (1998) “The One-Minute Paper: Some Empirical Findings” 29 Journal of Economic Education 3.

Clarke, A. (1999) Evaluation Research Sage Publications. London.

Cooper, N.J. (2000) “Facilitating Learning from Formative Feedback in Level 3 Assessment” 25 Assessment & Evaluation in Higher Education 279.

Costello, M., Weldon, A. and Brunner, P. (2002) “Reaction Cards as a Formative Evaluation Tool: Student perceptions of how their use impacted classes” 27 Assessment & Evaluation in Higher Education 23.

Cottell, P.G. and Harwood, E.M. (1998) “Using Classroom Assessment Techniques to Improve Student Learning in Accounting Classes” 13 Issues in Accounting Education 551.

Fox, N.J. (2003) “Practice-based Evidence: Towards Collaborative and Transgressive Research” 37 Sociology 81.

Gaeddert, B.K. (2003) “Improving Graduate Theological Instruction: Using Classroom Assessment Techniques to Connect Teaching and Learning” 6 Teaching Theology and Religion 48.

Harwood, E.M. (1999) “Student perceptions of the effects of classroom assessment techniques (CATs)” 17 Journal of Accounting Education 51.

Honey, P. and Mumford, A. (1992) The Manual of Learning Styles (3rd Ed) Peter Honey Publications. Maidenhead.

Kember, D. and Kelly, M. (1993) Improving Teaching Through Action Research HERDSA. Adelaide.

Kivinen, O. and Ristelä, P. (2002) “Even Higher Learning Takes Place by Doing: from postmodern critique to pragmatic action” 27 Studies in Higher Education 419.

Krueger, R.A. and Casey, M.A. (2000) Focus Groups (3rd Ed) Sage Publications. London.

Laurillard, D. (2002) Rethinking University Teaching (2nd Ed) Routledge. London.

Light, G. and Cox, R. (2001) Learning & Teaching in Higher Education: The Reflective Professional Sage Publications. London.

McNiff, J., Lomax, P. and Whitehead, J. (2003) You and Your Action Research Project (2nd Ed) Routledge. London.

Oates, C. (2000) “The Use of Focus Groups in Social Science Research” in Burton (ed) (2000)

O’Neil, M. and Pennington, G. (1992) Evaluating Teaching and Courses from an Active Learning Perspective CVCP. Sheffield.

Orland-Barak, L. (2004) “What Have I Learned From All This?” 12 Educational Action Research 33.

Ramsden, P. (1992) Learning to Teach in Higher Education Routledge. London.

 

 



(1) A version of this paper was presented at the 2004 Society of Legal Scholars Annual Conference held in Sheffield.

(2) The 2003 Institute for Learning & Teaching in Higher Education conference, held at the University of Warwick. References to this conference are to the keynote address given by Professor Tom Angelo.

(3) I wish to acknowledge the assistance of Mr Graham Ellis of the Centre for Learning and Quality Enhancement at the University of Teesside in the planning of this research.

