Beyond Universal AI Literacy: Fostering Deeper Connections in Higher Education – Innovation Month 2025

6 February 2025

This blog reflects the views of the author. The Council of Deans of Health has a wide membership and set of partnerships with a range of opinions that do not necessarily constitute formal positions of the Council. We value that diversity of thought and experience.

Dr Louise Drumm is an Associate Professor in Digital Education at Edinburgh Napier University. In this blog, she situates recent developments in generative AI in education within the broader higher education context. Drawing on her research, she challenges simplistic responses to this complex issue and advocates for moving beyond calls for universal ‘AI literacy.’ Instead, she proposes a reframing that shifts our focus from AI itself to fostering deeper connections among educators, students, and knowledge. 

In the world of education and generative AI, time moves both slowly and quickly. It is true to say that we are seeing unprecedented developments in the range and functionality of generative AI tools and platforms, with a breathless pace of news items and announcements every day. Yet it is also true that a set of underlying principles remains more or less constant, both for the specific context of generative AI in education and for the wider endeavour of higher education. To start on a mostly positive note, I’ll outline the areas which I see as unchanged and which we should not forget:

  • Higher Education has been in the business of critical thinking for a long time; all of those existing skills can and should be transferred to this new context.
  • Universities are still centres of knowledge, where we create, curate and disseminate knowledge.
  • That knowledge is managed through very human processes: life experience, argument between humans, experimentation, human thought and human writing.
  • There are always technologies which feel like disruptors when they come along, but real change comes from how humans choose to change what they do, not from the technology itself.
  • The external and internal motivations for students to find ways of avoiding academic work have not changed (though the barriers to accessing the means to do so have reduced or disappeared).
  • Everyone still has agency; educators and students alike can choose what technologies to use or not use; just because a technology exists, it does not mean everyone should use it.

All of that said, there is no doubt that certain human-based processes within education will change, or are already changing, through the use of generative AI tools. But there is no one-size-fits-all response, and the data demonstrate this in abundance.

Since early 2023 I have been involved in three research projects exploring student and staff experiences of generative AI, and if anything, these findings demonstrate more diversity in beliefs and practices than perhaps the techbros of Silicon Valley, the UK government and the media at large would like us to think.

At Edinburgh Napier University, we have collected anonymous research data from students about their use of tools like ChatGPT for the past three academic years. While in the first year most participants waxed lyrical about the potential of generative AI, by the second year we saw more evidence of concrete experience, and a greater divide, with some students taking a stand by choosing not to use it. This tallies with the findings of my other project, funded by the Advance HE Collaborative Development Fund and conducted between Edinburgh Napier, Aberdeen, Heriot-Watt and Dundee universities, which gathered data from students, academics and professional services staff in universities across the UK. As detailed in our Advance HE report, bubbles of experiences and attitudes emerged: those who used generative AI saw the value in using it; those who didn’t use it didn’t see value. Interestingly, students did not feel pressured by peers to use it, whereas staff felt more pressure. I suspect that the personal and professional spheres we all move in have a significant impact on this.

From Edinburgh Napier’s ChatGPT&Me research it is clear that all students shared a common understanding that irresponsible use could stand between them and their learning. They also recognised that irresponsible use was not fair on others, and for some – such as the participant in the post below – it was felt keenly and personally.

Figure 1 Post from a participant on the ChatGPT&Me project.

While there are valid arguments in favour of integrating generative AI into multiple areas of education, there must still be space for thoughtful rejection of, and resistance to, its use. What’s most important here is keeping our eyes on the prize: maintaining the relational aspects of education and building connections between educators, learners and ideas, not allowing technology to intercede on our behalf. In order to know what appropriate use or appropriate non-use looks like, there has to be a level of engagement with what generative AI is, how it works and how to use it. This is more than the ‘AI literacy’ that most research calls on universities to provide to staff and students; it is a step beyond, towards collective and individual responsibility and consciousness-raising around generative AI. We as individuals, institutions and a sector should be able to call out the companies behind generative AI tools and their myriad dubious practices (see Helen Beetham’s excellent blog for a list), but from a position of knowledge and awareness (including hands-on experience of the tools), not from a state of ignorance.

So, if you or your colleagues have yet to try out generative AI tools, it is important to engage with them and experiment with a range of topics and types of prompts. Research I conducted on training colleagues in using generative AI indicated that skilled use does help academics develop their thinking about the impact of these tools on how they design assessments. But this does not mean that you must incorporate their use into your everyday practices, nor that you need to absorb the hype that makes you feel pressured to keep up with developments. Instead, seek out a range of colleagues in your institution who are using generative AI regularly and ask them to keep an open channel with you about what they are finding. It is, after all, in those connections between humans that knowledge is developed and built, and ultimately where evidence-informed decisions can be made on changing practice in ways that make a positive impact on our students and their learning.

 
