AI in schools and trusts

Key points:

  • There is no risk of AI replacing teachers in the foreseeable future.
  • School leaders and staff should be aware of what we already know about AI, what it can do, what risks it poses, and what the future may look like.
  • AI tools can help save time on a large variety of tasks, but they are not entirely reliable for feedback, analysis, or critical thought, nor can they be considered completely safe.
  • AI tools can and will simplify teachers' work and lessen workload in tasks like grading, writing lesson plans, generating examples, and even planning lessons themselves.
  • AI tools can also be used to boost learning.
  • The best way for teachers and school leaders to find out how AI can benefit their schools remains to experiment and familiarise themselves with the tools available.
  • The majority of research points to positive effects of AI in education, though it remains mindful of risks and shortcomings, as well as rapid developments in the field.
  • Soon, we could see AI providing in-class support roles complementing those of Teaching Assistants.
  • HEP recommends that every education professional familiarises themselves with AI tools as both a service to their students, who will no doubt already be experimenting with them, and as a skilled educator who can learn to use new tools in the constant endeavour to improve the overall quality of education for children and young people.
  • The most apparent risk in education is plagiarism, in the sense of students presenting AI-generated work as their own.
  • Since AI-generated work cannot reliably be detected, it may be time for schools to adapt their definitions of plagiarism, rulebooks, and even homework assignments.
  • Another potentially more malicious risk of AI tools is misinformation or disinformation.
  • HEP recommends that all schools adopt some form of digital literacy training for both staff and pupils as part of the IT curriculum. It will also be essential for people to know how generative AI functions as well as the ethics of using it.
  • LLMs, no matter how sophisticated, are trained and programmed to function as tools, and though their output appears human, it is simply the product of highly advanced predictive algorithms.

It is now clear that Artificial Intelligence (AI) tools are here to stay and are already making an impact in education. While the US and China are leading the way in the research and development of AI, the UK is seeking to position itself as a global leader in the realm of AI standards. One of the ways it is doing so is by establishing AI in Education, an independent body bringing together leaders and experts in both technology and education. James Page, Chief Executive of Haringey Education Partnership, sits on the Strategy Panel of this organisation as it leads the way in publishing “guidance, advice, real-life case studies and commentary to help schools to navigate the complex landscape of AI in education as it develops.”
While the long-term impact AI will have on education remains to be determined due to the meteoric speed at which it continues to develop, the greater picture of AI’s role in society, and more specifically in schools, is becoming clearer. As such, school leaders and staff should be aware of what we already know about AI, what it can do and what it’s not so good at, what risks it poses, and what the future may look like.

What is artificial intelligence?

Let’s begin with a definition of artificial intelligence, as the phrase is ubiquitous in today’s media. According to Oxford Languages and Google, artificial intelligence is “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” Many of the abilities highlighted in this definition became possible with OpenAI’s first public release of the large language model (LLM) known as ChatGPT. LLMs are computer programs trained on billions of existing texts that use predictive algorithms to output text in response to natural language prompts. This means that these models not only respond as a human would write, but can do so with correct grammar, punctuation, and vocabulary, with few comprehension mistakes, and fluently in multiple languages. Because of this impressive range of comprehension and generation capabilities, people commonly refer to LLMs as artificial intelligence, or generative AI.
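
To make the idea of a predictive algorithm concrete, here is a minimal, illustrative Python sketch of next-word prediction using a toy ‘bigram’ model. Real LLMs use neural networks trained on vastly larger corpora, but the underlying principle of predicting the next token from context is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the billions of texts an LLM is trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a 'bigram' model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word given the previous one."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat', the most frequent continuation
```

An LLM does essentially this at enormous scale: given everything written so far, it outputs the most plausible continuation, which is why its text reads as human without involving any understanding.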

These generative AI tools are not limited to language, either. Using machine learning techniques, developers have trained programs to recognise and create artwork, videos, songs, voices, characters, and podcasts. Whereas their creations at the beginning of 2023 were rudimentary and conspicuously AI-created, they have evolved over the course of the year to be almost impossible to distinguish from human work. This is important because it means that a plethora of creative jobs and fields (including education) have been, and will continue to be, affected by AI tools.

The most prominent of these AI tools are known as ‘Frontier Models’, and they include GPT-4, Claude-2, and Google’s Bard / soon-to-be-released Gemini. Microsoft’s Bing uses GPT-4, essentially connecting the LLM to the internet (and providing free public access to GPT-4), arguably making it the most powerful search engine currently available. It is worth mentioning other tools like Midjourney and DALL-E, which were the first large-scale natural language-input image generators, as well as Adobe’s Firefly, which supplements Photoshop with a language-prompted generative fill function. These more artistic models are already being integrated into the ‘Frontier Models’, creating publicly accessible, powerful AI tools with near-limitless applications in the field of education and beyond.

What is AI capable (and less capable) of?

It’s probably easiest to start with what AI cannot do. AI tools cannot and will not displace education professionals from their jobs anytime soon. They are not sentient, and they cannot take the place of a good school leader or a skilled classroom teacher. This is not due to a lack of knowledge, but rather to their nature as generative tools. It is important to remember that these LLMs, no matter how sophisticated, are trained and programmed to function as tools, and though their output appears human, it is simply the product of highly advanced predictive algorithms.

In fact, even the current capabilities of AI are overhyped. Most output from the ‘Frontier Models’ is still text-based, creating obvious problems for use in the classroom (keeping pupils focussed on text-based content, no accompanying pictures, only a single user at a time, etc.). Natural language prompts generally work well, but they are still far from perfected, leading to many limitations for educational use (archived in this series of blogs). Even using AI tools for marking multiple writing assignments has led to a plethora of problems, as discussed by Daisy Christodoulou on the School Leadership Podcast.

When it comes to non-textual output, some AI-generated images still have glaring oddities that set them apart from photographs, making them more of a curiosity to pupils than a teaching aid. Of course, there are exceptions to this, and every day the models are tweaked to be more versatile, accurate, and realistic.

For now, AI tools can help save time on a large variety of tasks, but they are not entirely reliable for feedback, analysis, or critical thought, nor can they be considered completely safe. A recent paper from Microsoft assessed the trustworthiness of GPT models and found that “GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history.” The same study found that GPT-4 is more vulnerable to ‘jailbreaking’ and malicious prompts due to its tendency to follow instructions more precisely.

A further problem is that AI-generated output cannot reliably be detected. In a study testing 14 different detection tools for AI-generated text, the authors found that the tools were neither accurate nor reliable in revealing whether content was AI-generated (Weber-Wulff et al., 2023). This has ramifications that will be discussed further in the ‘risks’ section.

What can AI do for educators?

While educators’ jobs are not directly threatened by AI tools, all education professionals can greatly benefit from becoming familiar with and using them. The uses of AI in education are, as of now, not entirely discovered or catalogued (nor are they likely to ever be, given the tools’ ever-expanding versatility). AI educator Dan Fitzpatrick provides a brief overview of the uses and effects of AI tools in education in his presentation at the Thinking Digital Conference. The talk touches on ways that AI tools can make learning more interactive, serve as digital teaching assistants, and create new and engaging multimedia content. Dan has also usefully compiled a database of tools specifically designed for education.

The potential of generative AI in education is vast. The most immediate impact will be through simplifying teacher work processes and lessening workload, most obviously in tasks like assessment, writing lesson plans, generating examples, and even planning the lessons themselves (see the sketch below).
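
As a concrete illustration, here is a minimal sketch of drafting a lesson plan programmatically with OpenAI's Python library. The model name, prompt wording, and lesson topic are illustrative assumptions, and any output would of course need reviewing and adapting by the teacher.

```python
# A minimal sketch, assuming the `openai` package (v1+) is installed
# and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any capable chat model works
    messages=[
        {"role": "system", "content": "You are an experienced Year 6 teacher."},
        {"role": "user", "content": (
            "Draft a 50-minute lesson plan on equivalent fractions, with a "
            "starter, a main activity with three worked examples, and a plenary."
        )},
    ],
)

print(response.choices[0].message.content)  # a draft for the teacher to review
```

In practice, most teachers will use the same capability through a chat interface rather than code; the point is that a usable first draft can be produced in seconds and then refined with follow-up prompts.
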
AI tools can also be used in myriad ways to boost learning. Ethan and Lilach Mollick published a paper on how LLMs can enhance five evidence-based teaching strategies: providing multiple examples and explanations; uncovering and addressing student misconceptions; frequent low-stakes testing; assessing student learning; and distributed practice. The same authors published another paper on how LLMs can serve in the following educational roles: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks.

HEP has published a series of blogs specifically on how to use ChatGPT in the classroom, focusing on creating personalised learning experiences, training a virtual teaching assistant, augmenting language learning and communication skills, facilitating collaboration and peer-to-peer learning, and improving assessment and feedback efficiency. All these resources highlight only a fraction of the ways in which AI tools can be used in education. The best way for teachers and school leaders to find out how AI can benefit their schools remains to experiment and familiarise themselves with the tools available.

The evidence leans in favour of AI

While early adopters continue to discover new ways to use AI tools, there is a quickly growing body of research that supports their use by educators. When testing an interactive chat-based teacher training tool that allows practice with simulated students, Julia Markel and co-authors found that it gave teachers valuable teaching practice without the pressure of affecting real students, and that participants enjoyed the flexibility of the process (Markel et al., 2023). Another study conducted a large-scale randomised controlled trial to find out whether automated feedback could improve teachers’ uptake of student ideas; the LLM tool used improved instructors’ uptake of student contributions by 13% and may have also improved student satisfaction and assignment completion (Demszky et al., 2023). The same research team also asked whether ChatGPT could serve as a good teacher coach, testing whether AI tools as automated coaches could be cost-effective complements to expert feedback. The team found that ChatGPT’s responses were relevant to improving instruction, but with a caveat: 82% of the time, the feedback was not novel or insightful (Wang & Demszky, 2023). These studies are by no means the full extent of the research conducted on using AI tools in education, and there are no doubt many more currently underway. The vast majority of this research points to positive effects of AI in education, though it remains mindful of risks and shortcomings.

We are already seeing other capabilities, such as file generation, data analysis, code writing, and animation, being honed and becoming publicly available. These new capabilities have the potential to shape the world of education. AI tools are now being given vision, voice, and internet connectivity, greatly extending what they can do.

Soon, we could see AI providing in-class support roles complementing those of Teaching Assistants. With enough digital infrastructure and tech-savvy school IT managers, AI tools can be integrated into classrooms for moderated student usage. This is currently being tested by companies like Khan Academy with Khanmigo. It will take many future studies to determine the effectiveness of these tools. Most likely they will not be as effective as human Teaching Assistants, but they will prove useful in larger classes or in understaffed schools if they can be made cost-effective.

Closing disadvantage gaps in learning using AI will depend in large part upon bridging the digital divide. If pupils don’t have access to these tools at home, it will be difficult for them to practise or build on what they are taught in school, and in this scenario the gap will only widen. However, with access, motivated pupils can use AI tools to fill their knowledge gaps, and perhaps, if certain AI tutoring models are made more accessible and ‘fun’, they will appeal to students for use outside of the classroom.

As AI tools become more commonplace, it will be easier to comprehend and see their limitations, as well as their full capabilities. We are not there yet, but the pace of development is unprecedented, and has the characteristic of exponentially building upon itself. HEP recommends that every education professional familiarises themselves with AI tools as both a service to their students, who will no doubt already be experimenting with them, and as a competent educator who can learn to use new tools in the constant endeavour to improve the overall quality of education for children and young people.

The risks

The risks of AI proliferation and development are many and varied, but in education, there may be fewer risks than elsewhere. As mentioned previously, there is no risk of AI replacing teachers in the foreseeable future. While AI can play a very effective supporting role in the classroom, it is far from being able to successfully and simultaneously manage the intricacies of classroom teaching, behaviour management, holding pupil attention, and implementing a well-structured, knowledge-rich curriculum.

The most apparent risk in education is plagiarism, in the sense of students presenting AI-generated work as their own. New studies have shown that teachers using commonly available plagiarism-detection tools cannot distinguish AI work from student work (Weber-Wulff et al., 2023). In fact, in another small study, many teachers penalised high-achieving students on the presumption that they had used generative AI to write their work (Farazouli et al., 2023). Therefore, not only is AI work largely undetectable, but teachers grading assignments with the knowledge that AI may have been used introduces an entirely new bias; the illustration below shows why even a seemingly decent detector would not resolve this.
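
To see why imperfect detection is so corrosive, consider a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not figures from the studies cited:

```python
# Hypothetical illustration: even a fairly accurate detector produces
# many false accusations. All numbers here are assumptions, not study data.
essays = 30            # essays submitted by a class
ai_written = 5         # essays actually written with AI
sensitivity = 0.90     # chance the detector flags a genuine AI essay
false_positive = 0.10  # chance it wrongly flags an honest essay

true_flags = ai_written * sensitivity                  # 4.5 expected
false_flags = (essays - ai_written) * false_positive   # 2.5 expected

share_wrongly_accused = false_flags / (true_flags + false_flags)
print(f"{share_wrongly_accused:.0%} of flagged essays are honest work")  # ~36%
```

Under these assumptions, roughly a third of flagged essays would be honest work, which echoes the grading bias against diligent pupils described above.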

This is not a matter of waiting for better detection technology; it is an inherent aspect of generative AI tools, and it demands that schools adapt their definitions of plagiarism, rulebooks, and even homework assignments. It may be time to shift perspectives and accept the use of generative AI as a tool for writing rather than thinking of it as an original creator with intentional artistic ideas. That is perhaps a debate for philosophers, but it is a very real challenge facing schools now, and so teachers must be able to navigate the already-difficult task of teaching quality writing while not falling prey to the grading bias of assuming that pupils are ‘cheating’ using AI tools.

The second obvious and potentially more malicious risk is misinformation or disinformation. Microsoft has already shown us that GPT models “can be easily misled to generate toxic and biased outputs and leak private information” (Wang et al., 2023). If harmful output can be so easily generated by AI tools, it is not hard to imagine nefarious actors repurposing them to produce misleading propaganda, overload servers, or dump personal information onto the internet. If this false or personal information were to be picked up by pupils, it could be damaging in any number of ways, from reduced achievement to conspiracy ideation, ‘doxing’, or full-blown radicalisation.

Teachers or students relying solely on generative AI can be left with gaps in knowledge, or worse, end up internalising false knowledge. Therefore, a solid grasp of a knowledge-rich curriculum is still essential for any teacher or pupil. HEP recommends that all schools adopt some form of digital literacy training for both staff and pupils as part of the IT curriculum. Though it won’t be possible to identify exactly which content has been generated with malicious intent, it is possible to recognise patterns and motives behind some messages. It will also become essential for people to know how generative AI functions as well as the ethics of using it, knowledge which may help mitigate the risk of AI-generated misinformation and disinformation.

At this point, we can address most issues that will arise with the use of generative AI, but in the near future, that may not be the case. Schools need to be prepared to be completely unable to distinguish human-made content from AI-generated content; they need to be cyber-secure and carefully avoid data leaks; and they need to be up-to-date about the capabilities of AI tools so they can protect children and young people.

Conclusion

As we stand at the precipice of a new era in education shaped by AI, it is imperative that we, as educators and leaders, navigate this terrain with informed caution and proactive engagement. The journey of integrating AI into our educational fabric is not just about adopting technology; it is a transformative process that will require us to rethink pedagogy, curriculum, and the very nature of teaching and learning.

AI, with its myriad capabilities, offers unprecedented opportunities to enhance educational practices. It promises to streamline administrative tasks, foster innovative teaching methodologies, and personalise learning experiences, thereby significantly reducing the workload of educators and enriching the learning journey of students. However, this technological advancement does not come without its challenges.

As we embrace AI, we must be vigilant about the potential risks, including issues of plagiarism, misinformation, and ethical concerns. It is crucial to continuously educate ourselves and our students about the responsible use of AI, ensuring that it serves as a tool for enhancement rather than a substitute for critical thinking and creativity.

To effectively integrate AI in our schools, HEP recommends the following actions:

  1. Educational Experimentation: Teachers and school leaders should actively engage with AI tools. Experimentation and familiarisation with these technologies are crucial for understanding their potential and limitations in the educational context.
  2. Digital Literacy and Ethics: Schools should integrate digital literacy and ethical AI use into their curricula and teacher CPD. Understanding the mechanics and ethics of AI will empower both educators and students to use these tools responsibly.
  3. Policy Adaptation: As AI evolves, schools need to adapt their policies and practices, particularly regarding plagiarism and content creation. It’s essential to redefine our approach to academic integrity in light of AI’s capabilities.
  4. Professional Development: We encourage ongoing professional development for educators to stay updated on AI advancements and their implications for teaching and learning.
  5. Community Engagement: Engage in dialogues within and beyond the school community about AI in education. This includes sharing experiences, challenges, and successes with AI integration.

Through these actions, HEP aims to guide schools in leveraging AI’s potential while maintaining a commitment to high-quality, ethical education. We believe in a future where AI enhances, rather than replaces, the critical role of educators, fostering a dynamic and inclusive learning environment for all.

Works cited:

About the Author:

Luke Kemper

Luke Kemper is Insight and Intelligence Lead at HEP. He recently graduated from the University of Cambridge with an MPhil in Education, Globalisation and International Development. Before that, he worked for seven years as a university lecturer and high school teacher in China and Poland.

