October 16, 2023 - 2:30pm

Cottesmore School, a boarding prep school in West Sussex, has appointed an AI robot as its “principal headteacher”. The school worked with an artificial intelligence developer to create the robot, called Abigail Bailey (who is, of course, young, attractive and female), to support the school’s headmaster on a range of issues, such as writing school policies, helping pupils with ADHD and answering questions.

This is a classic case of asking not whether we could use technology, but whether we should. Much has already been made of artificial intelligence’s potential to revolutionise the ways in which we learn, teach, and manage our administrative workload, for example by automating repetitive, time-intensive tasks. AI could eventually even offer a solution to the teacher recruitment crisis. In Mississippi, teacher shortages have forced districts to turn to online education programmes, while earlier this year Cottesmore School advertised for a Head of AI to embed technology into the curriculum. In the end, the headmaster appointed another robot to that role too, which feels like a futuristic take on teachers marking their own homework.

Cottesmore head Tom Rogerson believes that children and adults should be taught to make robots their “benevolent servants”, but there are two main problems with this attitude. The first is that AI apps are not benevolent: they are biased, and more people-pleasers than servants. Research by the University of East Anglia shows that ChatGPT, the fastest-growing consumer application in history, has a “significant and systemic left-wing bias”, while research by the University of Munich likewise finds a “pro-environmental, left-libertarian orientation”.

ChatGPT may profess neutrality, but biases are embedded in every aspect of its system: from problematic training data, to skewed learning algorithms that teach it to prioritise some types of information over others, to the prejudices of the humans designing those processes or giving feedback on its answers. This matters less when it is making up poetry and performing other party tricks, but it is significant when AI is put to real-life uses such as facial recognition in criminal investigations, deciding who gets a mortgage, or evaluating job applications.

The second is that it undermines intellectual integrity: if the headmaster can be seen to cut cognitive corners, why shouldn’t the students? AI has already transformed checking for plagiarism from a minor inconvenience into an almost impossible undertaking; as a teacher, I now try to get my GCSE students to write all their essays in class, but this eats into valuable lesson time. I also worry about the impact on students who mentally checked out after the pandemic and already see school as “optional”, a problem exacerbated by soaring absence rates.

We need to acknowledge the limitations of current AI-powered educational tools, such as their lack of creativity and originality and their limited grasp of context; in my own experiments asking ChatGPT to produce GCSE answers, it proved very good at style and much weaker on substance. These systems will improve, but they cannot replicate the relationships that are at the heart of face-to-face teaching. Many virtual charter schools fail, or suffer from high student turnover, because students are motivated by the social interactions that happen in school, not through a screen.

There are also privacy and security implications to consider: I’m not sure how I would feel about putting sensitive safeguarding information through an AI robot that has little to no accountability. Until we can develop a curriculum that adapts to reflect these monumental changes, we should be wary of how much good AI is really doing in schools.


Kristina Murkett is a freelance writer and English teacher.
