GenAI and assessment in ancient history

By Dr Aimee Turner

As 2024 wraps up, many academics (those not enjoying a much-deserved break!) are starting to look towards the new year. One of the major concerns in academia remains the impact of AI, particularly on assessment. From conversations I’ve had and my own marking experience, some students are embracing this new technology, and we are not ready for them. In a field that relies heavily on written assignments, the validity, reliability and fairness of our assessment practices are being undermined.

As a learning designer, I am commonly asked, “How do I GenAI-proof my assessment?”, and sadly, I have to answer that there is no way to truly AI-proof an assessment. In the unit I just finished teaching, the end-of-semester online test asked students to identify and analyse art and architecture based on images, and even that task showed signs of AI usage. There are no really good resources out there at the moment to help with this: AI detection software is spotty at best and inaccurate at worst, and we are still struggling to understand how we can use this technology ourselves.

That said, there are a few things I would recommend to limit the opportunity and temptation to use GenAI in assessment in ancient history.

The first step is thinking about how to make the assessment relevant to the students, something they would have an interest in doing. I haven’t had a chance to coordinate my own units yet, but one of the things I would do is move away from essays in first year, and possibly altogether. For example, instead of having students complete an essay for their final task, I would think about getting them to develop an exhibit, either real or virtual, around a topic such as gender in myth or Athenian responses to crises. Have them develop something that draws on the physical remains as well as the literary sources, supported by modern research. This still demonstrates the skills an essay requires (critical analysis, source selection, communication) but in a way that is more practical and engaging. Embrace group work, problem- and project-based learning, and be creative!

The second step is to consider how you can build students’ AI literacy, so that they know when it is and is not appropriate to use the tool. Engage in activities that demonstrate the dangers of using AI without critical thought: for example, have students look for sources using GenAI, as it will almost always hallucinate at least one, and others will be of non-scholarly quality. Scaffold this into your assessment, so that AI becomes just one more tool to help students with their studies, not do their studies for them. One resource I particularly like, from Oregon State University, is their revised Bloom’s Taxonomy. It gives a really clear picture of where AI currently sits, and in its updated version pairs human capability with AI support.

Finally, consider what skills and knowledge you really want to assess, and ask which of these can easily be done with GenAI. Weight your marking guides or rubrics so that even if students use GenAI for the writing, they will fail or barely pass without doing more. Some students are fine with a bare pass (“Ps make degrees”, as my friends always chanted in my undergrad), but most genuinely want to do well, and this will steer them away from the generic outputs of AI. I actually see this as the opportunity GenAI provides: a chance to push our students to higher levels of critical and analytical thinking.

Now is a good time to start thinking about your assessment tasks for 2025. Choose one to start, and draw on the resources provided by your institution to start making real change in the field.
