Assessment methods

At present, I am the teacher responsible for three courses (a Bachelor's and a Master's level GIS course, and a Master's level Methods and Data Analysis course). The assessment mode of these courses is a problem/inquiry/project-based approach (Hmelo-Silver et al., 2007). The choice of assessment method is based on the learning objectives of the courses, as well as my own experience, the experiences of other teachers (colleagues and friends) giving similar courses at different institutions, and the literature on teaching and learning (see details in My teaching philosophy). I have tailored different assessment methods to align the assessment strategies with the goals of the course and the TLAs.

Some of the main objectives of my courses are to help the students learn and apply GIS and data-analysis methods, tools and techniques to solve problems related to fisheries and marine-resource governance. The students should also be able to use and integrate relevant knowledge from other courses to design and create their own GIS projects (or data-analysis models/frameworks) to solve problems, and, not least, to communicate their findings and results scientifically and effectively to support decision-making. Therefore, I believe that a take-home exam based on a problem/inquiry-based approach, in which the students work independently to solve a problem, is the best assessment method.

For the Bachelor's level GIS course, I use a combination of take-home exams (individual project work, in which the students work on given problems/tasks) and oral exams (short presentations on a given topic, followed by a short question-and-answer session) as the mode of summative assessment. One week before the oral exam, I provide a general question that relates directly to the learning objectives of the course. The students research materials, prepare a short (seven-minute) presentation, and deliver it before the question-and-answer session. Including the presentation in the appraisal allows an assessment of the students' oral delivery, communication skills, and use of different tools, in addition to the quality of the content. The rationale behind this combination of project work/term paper and oral exam (including a presentation on a given topic) is that it gives the students multiple ways of expressing and demonstrating their mastery of the requisite knowledge, skills, competence and understanding. Before I joined UiT, the assessment of this course was based on school exams (where the students were assessed only on their theoretical understanding). In the autumn of 2017, I used the same method of assessment; however, the students' learning skills and competence, the exam results, and feedback from the students have shown that an assessment method based on a problem-solving approach works considerably better than the traditional school exam. I have also observed increased attendance in my classes and data labs, as well as an increase in the hours students spend in the data labs (or working on their own computers) after the formal class.

In the Master’s level GIS course, the summative assessment is based on individual project work (in the form of a home exam), where students design an independent GIS project, carry out the analysis, and write up a concise, illustrated report. The rationale behind this is that the students can demonstrate the GIS knowledge, abilities and competence that they learned on the course by designing, developing, implementing, and critically evaluating their own GIS analyses. For the Master’s-level Data Processing and Analysis course, we use a combined portfolio assessment (three reports, 50% of the grade) and final home exam (50% of the grade), where students work on given data/cases by designing their own research problem and analysis framework.

Apart from the summative grading, I always include formative feedback elements in my assessments. I provide feedback on the students' final exam work in WISEflow. In the oral exam (undergraduate-level GIS course), we provide brief feedback on both the written reports and the oral exams (presentation and interview) to the student at the end. I provide detailed written feedback for all assignments, both for the work requirement (arbeidskrav) and for voluntary assignments (including data-lab reports), in Canvas (and previously in Fronter). I always schedule a short session in my teaching plan to discuss the feedback on the assignments (both required and voluntary), where I provide general comments, discuss common mistakes and areas for improvement with all the students, and reflect on my expectations for that particular assignment. I also encourage students to discuss and respond to my feedback, and I use peer review for some assignments. At the end of each module (and, in some cases, at the end of a class), I conduct a quiz as a formative assessment, using digital tools. In my classes, I often use formative assessment strategies such as strategic questioning (especially why and how questions), think-pair-(square)-share, classroom polls using digital tools such as Flinga, Kahoot!, Shakespeak, and Socrative, and short feedback surveys (including questions about what the students consider the key take-home messages of the lecture/data lab, and what they found confusing in the session).

The belief behind my choice of assessments, as mentioned above, is 'assessment for learning'. Many students want to put effort only into learning what they think they will be tested on; for them, assessments define the core curriculum. I therefore design problem/inquiry-based final assignments for summative assessment as part of learning. Similarly, I use class exercises, lab reports, different assignments (voluntary, and as part of the mandatory work requirement), and quizzes as part of the formative assessment. This motivates the students to put effort into all of the designed teaching and learning activities, as it helps them identify the areas they need to improve and informs them of their progress. Thus, it empowers them to take the necessary actions to improve their performance and learning outcomes.

References

Hmelo-Silver, C. E., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42(2), 99-107.
