In this blog, Michael Rowe, Associate Professor in Digital Innovation at the University of Lincoln, delves into the complex and high-stakes world of using AI in fitness to practise (FtP) processes in nursing. He explores how students, academic staff, panel members, and clinical educators might leverage AI tools to enhance communication, decision-making, and fairness, while also unpacking the significant ethical, professional, and procedural challenges that accompany them.
Generative AI tools are already reshaping how nursing students and educators work. From drafting assignments to interpreting complex policies, these technologies have quietly become part of the education and practice landscape. But fitness to practise processes, the procedures that determine whether students can continue their professional journey, raise the stakes considerably.
Unlike routine coursework, FtP processes involve multiple stakeholders with different needs: students facing potentially career-ending decisions, academics responsible for maintaining professional standards, clinical staff documenting concerns, and panels making complex judgements about professional competence. Each group might reasonably use AI tools, yet each use case raises different questions about appropriateness, transparency, and fairness.
Opportunities across the process
For students, AI could address some persistent inequities in FtP processes. Those whose first language isn’t English or who struggle with academic writing might use these tools to better articulate their insights and reflections. AI can help interpret dense professional standards, structure action plans, or prepare for panel interviews through role-playing exercises. This could make processes more genuinely educative rather than simply evaluative.
Academic staff and panel members might use AI to ensure more systematic and consistent approaches to complex cases. The technology could prompt consideration of multiple perspectives (“What might explain this behaviour from the student’s viewpoint?”) or help identify relevant precedents and policy sections. For case preparation, AI could synthesise complex documentation or highlight potential procedural issues that might otherwise be overlooked.
AI could also help clinical educators structure their observations more effectively: distinguishing factual observations from interpretive judgements, translating practical concerns into formal competency language, and ensuring incident reports are comprehensive and appropriately contextualised. This could strengthen the evidential foundation for FtP decisions while reducing the administrative burden on busy clinical staff.
Risks and concerns
These opportunities all come with genuine concerns. Perhaps most fundamentally, when does AI assistance cross the line into replacing professional judgement? If a student’s reflection is substantially shaped by AI, does it still represent their actual insight and capacity for professional growth? If panel decisions are significantly influenced by AI analysis, whose professional judgement are we really evaluating?
There is also the risk of creating adversarial dynamics. If panels begin using AI to detect student AI use, FtP processes could shift from developmental conversations about professional competence toward compliance investigations about process adherence. Such an arms race would also pressure students to use AI defensively, positioning themselves for a favourable outcome. This dynamic would mean that FtP processes miss the point entirely.
Quality assurance presents another challenge. AI can be confidently wrong, and in high-stakes FtP contexts this could have serious consequences. Staff and students using these tools need training not just in how to operate them, but in how to critically evaluate their outputs and recognise their limitations. There are also questions about documentation and transparency: should AI use be disclosed, and if so, when and how? Creating audit trails and maintaining accountability become more complex when multiple parties could be using AI tools at different stages of the same process.
Key tensions without easy resolution
Several tensions emerge, none of which have any clear resolution. The distinction between enhancement and replacement of professional judgement is difficult to define in practice. In a profession where practitioners increasingly work alongside AI tools, should AI literacy itself become part of FtP assessment? What seems like appropriate use in one context might appear problematic in another, making universal guidelines very difficult to establish.
There is also an equity paradox at play: AI could level the playing field for some students while creating new forms of advantage for others. And disclosure expectations remain complex. Focusing too heavily on AI detection might divert attention from the fundamental question of professional competence. As AI tools become more sophisticated and accessible, these equity considerations will likely intensify rather than resolve.
The need for nuanced frameworks
These complexities suggest that blanket rules, whether prohibiting or mandating AI use, are likely to miss the mark. Instead, we need frameworks that are sensitive to context, purpose, and professional judgement. Such frameworks should address questions of transparency, accountability, and the preservation of authentic professional development, while recognising the legitimate benefits these tools might offer.
Importantly, similar complexity exists throughout nursing education and practice: in research supervision, clinical decision-making, and interprofessional collaboration, among other areas. Each involves multiple stakeholders, high stakes, and complex judgements about professional competence. FtP processes therefore represent just one area where the nursing profession needs thoughtful approaches to AI integration.
Rather than avoiding these complexities, nursing should take the lead now in developing nuanced, context-sensitive approaches that acknowledge both the opportunities and risks these technologies present. The question is not whether AI belongs in professional processes, but how we can navigate its use thoughtfully and fairly.
Get in touch
If you want to know more about this, get in touch!