Introduction
Artificial Intelligence (AI) is no longer just a hot topic; it’s a reality rapidly shaping how we design, deliver, and reflect on education. While discussions around AI are increasingly open and active across King’s College London, the pace and tone within King’s Foundations have been more reserved.
As a Learning Technologist working in King’s Foundations, I recently attended the Association for Learning Technology’s (ALT) OER25 Open Education Conference. This experience sharpened my perspective on how AI, particularly generative AI, can be critically and constructively embedded not just across assessment and feedback, but also across curriculum design, staff development, and internal communication. The key? Moving beyond automation and embracing openness, collaboration, and shared ownership.

Artificial Intelligence
At OER25, one of the most prominent themes was that AI reveals, rather than resolves, the tensions already embedded in our educational systems. A speaker from the AI Now Institute urged participants to ask not how much AI we can use, but how little computational power we need to retain meaningful educational practice. This principle, dubbed “frugal computing,” flips the narrative from efficiency to intentionality.
This resonates deeply with our current challenge in King’s Foundations. There have been informal discussions around AI, mostly centred on resource development, assessment redesign, and feedback workflows, but these conversations often occur in silos. During the conference, examples from other institutions showed that siloed experimentation, without shared resources or policy scaffolding, often leads to duplication, inconsistency, and missed opportunities.
Many departments at King’s (outside of Foundations) are engaging in visible pilots such as AI-supported assessment banks, ethical guidelines, and even student training hubs. Internally, however, staff across King’s Foundations may only hear about innovations from other teams through casual conversations or ad hoc meetings. This presents a major gap, not just in practice but in strategic alignment.
Lessons learnt
1. AI Should Be Seen as a Shared Resource, Not a Specialist Tool
From feedback generation to curriculum scaffolding and even budgeting, AI has potential across a broad spectrum of educational activity. The open MOOC (Massive Open Online Course) case study from Politecnico di Milano highlighted how modular, editable resources shared across departments helped create a consistent student experience while reducing duplication of effort. King’s Foundations has the potential to lead by creating a shared AI resource hub, supporting consistency, reducing duplication, and promoting inclusive, informed digital practice across teams.
2. AI Can Reduce Feedback Inequality, but Only with Transparent Practice
The OER25 sessions addressed hidden labour and opacity in AI use. For King’s Foundations, AI-generated feedback could help reduce marking load and give students quicker responses. But unless these systems are transparent, explained, and ethically disclosed, students may distrust them, or staff may avoid using them.
There’s also an opportunity to explore shared, open banks of feedback phrases and models, making formative feedback not just fast but also open and peer-reviewed. This would also facilitate more aligned feedback standards across King’s Foundations courses, helping reduce the “black box” nature of academic comments.
3. Internal Communication is as Important as Innovation
One standout comment at the conference came from a digital education researcher who said:
“Most departments don’t need new AI tools; they need shared language and shared direction.”
King’s Foundations has a strong culture of innovation but often lacks systematic ways of sharing outcomes and lessons learnt. AI pilot projects, such as using Copilot agents for formative feedback, AI summarisation of reading packs, or tools like Explainpaper for decoding complex texts and Perplexity for guided research, should be openly documented, evaluated, and scaled. A regular ‘AI in Practice’ forum could promote transparency and reflect the open sharing ethos reinforced at OER25.
4. Open Educational Practices (OEP) Must Include AI Transparency
The conference raised deep concerns around AI disrupting knowledge equity. From the data scraping of open archives to the invisible labour of data labellers, generative AI often operates in ethically grey spaces. King’s Foundations has a unique opportunity to lead by embedding discussions of AI literacy, openness, and Creative Commons licensing into our professional development and curriculum.
By modelling these practices internally, King’s Foundations can support not only better teaching but also clearer communication, values alignment, and trust-building with students and staff alike.
The way forward
To move forward, AI must be treated not as a technical upgrade, but as a cultural shift that demands openness, reflection, and collective action.
Recommendations
- Create a shared digital repository of AI-assisted tools, templates, feedback prompts, and ethical use cases across King’s Foundations, easily accessible and clearly communicated to all staff. Ensure that resources are well organised, regularly updated, and supported by clear guidance on how to access them, who to contact for further information, and where to go for training or troubleshooting. This will help eliminate confusion, raise awareness, and promote confident, consistent use across teams.
- Facilitate regular cross-departmental check-ins (e.g. AI in Practice sessions) to surface, connect, and align AI-related initiatives.
- Pilot AI feedback enhancements in a transparent, student-informed way, ideally within formative assessment stages first.
- Develop a King’s Foundations-wide AI policy primer, aligned with King’s broader policies, including OER and open pedagogy principles.
There is an opportunity not only to align with the wider college conversation on AI, but to help shape it through critical practice, grounded experimentation, and visible collaboration.
Useful Links
- OER25 Conference – Association for Learning Technology (ALT)
- AI Now Institute – “Artificial Power” Report
- Creative Commons & AI Policy
About the author
Faith Namakula is a Learning Technologist at King’s Foundations with a focus on digital transformation, AI-enhanced education, and strategic innovation in assessment and feedback. Faith specialises in bridging technology and pedagogy, enabling data-informed, compliant, and effective digital practices that align with King’s strategic priorities. With expertise in project delivery, stakeholder engagement, and digital capability development, she is committed to embedding sustainable, high-impact solutions that contribute to institutional excellence and sector-leading practice.