Case Studies

Adapting your assessments for AI


Advances in Artificial Intelligence are already changing the way we think about Higher Education. One key impact will be how we assess student learning. While the longer-term view is that teachers will need to redesign all assessments for an AI-enabled world, the current situation is changing so rapidly that a full redesign feels like a guessing game. So what changes can we make to our assessments in the short term to try to combat AI tools?

In a recent Artificial Intelligence in T&L Community of Practice meeting, Louise Dennis (Computer Science), Clive Dickinson (Physics and Astronomy) and James Brooks (Electrical and Electronic Engineering) shared some simple steps they’d taken to protect their assessments.

Louise Dennis – Department of Computer Science

Can you tell us a bit about your course unit and outline of your assessment?

  • Subject: Year 2 unit on core (and classic) computer science.
  • Number of students: 500 last year, 370 this year.

The assessment involves:

  • programming several solutions to the same problem using different algorithms or data structures.
  • then devising and running an experiment to see if the implementation behaves as predicted by the theory as the input size increases.  
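To make the shape of this assessment concrete, here is a minimal sketch (not the actual assignment) of the kind of experiment described above: implement the same task with two different data structures, then time both as the input size grows to see whether the measurements match the theoretical complexity (here, O(n) for a list scan versus O(1) for a set lookup). The task and function names are illustrative assumptions.

```python
import timeit

def membership_times(sizes):
    """Return {n: (list_time, set_time)} for repeated membership tests.

    Times the same lookup against a list (linear scan) and a set
    (hash lookup) so the growth with n can be compared with theory.
    """
    results = {}
    for n in sizes:
        data_list = list(range(n))
        data_set = set(data_list)
        target = n - 1  # worst case for the linear scan
        t_list = timeit.timeit(lambda: target in data_list, number=200)
        t_set = timeit.timeit(lambda: target in data_set, number=200)
        results[n] = (t_list, t_set)
    return results

if __name__ == "__main__":
    for n, (t_list, t_set) in membership_times([1_000, 10_000, 100_000]).items():
        print(f"n={n:>7}: list {t_list:.5f}s  set {t_set:.5f}s")
```

In a real submission the student would also plot these timings and argue whether the observed growth matches the predicted complexity.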

What were your concerns relating to AI use and what changes did you make to the assessment brief this year?

Plagiarism has always been a concern on this unit because the algorithms, data structures and problem formulations are classic and widely known.  Below is a quick summary of how our assessment has changed:

  • Pre-2022 we used an interview to validate student understanding of their own work.
  • 2022-23 we added a Blackboard quiz involving both interview questions and a requirement for a formal write-up of the report. Students were allowed to use ChatGPT for writing the report, as long as they provided an account of how they used it.
  • 2023-24 we extended the use of generative AI to all aspects of the coursework and required students to include a statement on their use of AI in accordance with a Code of Conduct*. We provide a video guide explaining why students are allowed to use generative AI and walking them through the Code of Conduct.

*I then adapted the Code of Conduct developed by Iliada Eleftheriou in FBMH (original: https://www.iliada-eleftheriou.com/AICodeOfConduct/ ) to include programming.

What was the effect of your changes? What have you learnt? What further changes will you make?

In 2022-23 only a small number of students (7 out of 474 who attempted the exercise) admitted to using ChatGPT. I presented this experience at the Engineering the Future event in MECD and have attached the slides.

In future years I think I might adapt the Code of Conduct a bit further. I don’t think it quite captures how I might expect generative AI to be acknowledged in program code and in short-answer questions.
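For illustration only, one way such an acknowledgment might be expressed inside submitted program code is a structured statement near the top of the file. The field names and format here are my own assumptions, not part of Louise's Code of Conduct:

```python
# Hypothetical AI-use acknowledgment block for submitted code; the field
# names below are illustrative assumptions, not an official format.
AI_USE_STATEMENT = {
    "tool": "ChatGPT (GPT-4)",       # which generative AI tool was used
    "scope": "drafted the first version of one function; rewritten by me",
    "prompts_kept": True,            # whether prompt transcripts were archived
    "verified_by_author": True,      # the author tested and understood the output
}

def format_statement(statement):
    """Render the acknowledgment as a comment-style block for a report."""
    return "\n".join(f"# {key}: {value}" for key, value in statement.items())

if __name__ == "__main__":
    print(format_statement(AI_USE_STATEMENT))
```

A machine-readable statement like this could also make it easier to collate AI-use declarations across a large cohort.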

[Image: Artificial Intelligence and Machine Learning. Image licence: CC 2.0]

Clive Dickinson – Physics and Astronomy

Can you tell us a bit about your course unit and outline of your assessment?

  • Subject: Year 2 core unit for Physics students (changing to a Year 1 unit from next year).
  • Number of students: 360 currently.

The assessment involves:

  • 5 Blackboard quizzes (35%). Quiz questions are randomised, drawn from a large pool, and automatically marked.
  • Programming assignment 1 (15%) (3 weeks to complete)
  • Programming assignment 2 (50%) (4 weeks to complete)

The assignments are submitted to Blackboard and are marked by GTAs along with the course leaders using a common spreadsheet. Each one is double-marked and checked for consistency.

What were your concerns relating to AI use and what changes did you make to the assessment brief this year?

Blackboard quiz:

We’ve not made any specific changes. It is certainly safe against ChatGPT 3.5, but GPT-4 (which can handle different forms of input, including images) might be able to aid students.

Programming assignment 1:

The 1st assignment, in particular, could easily be done by ChatGPT and get close to full marks with the correct input prompt. Nevertheless, there are some style points which GPT doesn’t necessarily get completely right. We change the assignment each year anyway, but this year we made it a little less clear in order to trick AI. In particular, the oscillatory system being modelled has been characterised in a slightly non-standard way: we use a different definition of an oscillation and how it starts, and include a plot to show this (without the input parameters). This is enough to trick ChatGPT. It also makes the students think a little, which is what we want for physicists.

Programming assignment 2:

The 2nd assignment is a little more complex and we believe is safe for now, although we haven’t formally checked (I’m waiting for GPT-4, but they have halted new subscriptions for now!).

What was the effect of your changes? What have you learnt? What further changes will you make?

We’ve only made minor changes to assignment 1 and found the marks are very similar to previous years. Quite a few students did not appreciate the definition and did not get full marks for the calculation part. Some students also complained that the instructions were a bit vague. From looking at dozens of submissions, though, I’m pretty sure I could recognise things that were likely produced (at least partly) by AI. A few submissions had very high plagiarism scores (50%+!), which may well be due to the use of AI.

For now, we’re not changing anything else, at least not until next year. We did consider interviews for all students to explain their coding, but the logistics make this difficult, so we kept it simple. We may need to do this in the future, though.

[Image: Artificial Intelligence and Machine Learning. Image licence: CC 2.0]

James Brooks – Electrical and Electronic Engineering

Can you tell us a bit about your course unit (year, cohort size, subject) and outline of your assessment?

  • Subject: Year 3 optional unit in Maths and Electrical Engineering.
  • Number of students: 30

The assessment involves:

  • Students independently study a topic of their own choosing and create learning material to teach that topic to their peers (normally a video and handout).


What were your concerns relating to AI use and what changes did you make to the assessment brief this year?

My concerns were not just related to AI. Students can pick their own topic and the unit is 100% coursework, meaning they could copy any material from the web or use AI to do part of their work. To mitigate some of this, students are required to submit a weekly logbook of what they have learnt, listing the resources they have used.

As part of their work I encourage students to use LLMs such as Claude.ai and ChatGPT to gather feedback on their drafts; as part of this we had an in-class discussion about how to use feedback from both peers and AI. In it I heavily emphasised that both can be unreliable, but both are good opportunities to reflect on your work.

What was the effect of your changes? What have you learnt? What further changes will you make?

Students were surprised that I was encouraging the use of LLMs. Not many students have used them (or at least told me they have), but those who did generally said “It’s actually surprisingly useful if you understand the limitations”, which is exactly what I wanted. You could see the frustration of some students trying to get the LLM to ‘understand’ what it was being asked to do.

Anything further to add?

James: I would like to spend more time discussing with students the various benefits and challenges of using AI.

Louise: I don’t think the changes we make can prevent deliberate malpractice on the part of students, and as the tools get better, more imagination will be needed to ensure student understanding. At the moment I don’t think the tools are sufficient to produce coherent code, code explanation, and AI-use explanation across the exercise, but I expect that may change in future.

Clive: Well, our whole way of learning and thinking in the future is going to change radically. Everything will be done by AI, so really we have to ensure that we are all experts on AI, or at least in using it. Assessments using AI are an obvious way forward, but all staff will need to be up to speed with it, which I think will be the major issue.

The Teaching Academy are keen to hear (and share) the experiences of colleagues who are identifying innovative ways to improve their teaching; if you’ve something to share, please get in touch.