Churning about the chatbots
And learning about why there's good reason for churning about them
It’s time for the story of how a group of Year Threes and I ended up having an incredibly interesting (and quite reassuring) discussion about AI and resistance. You are going to have to take me at my word that the conversation is a true story, that Year Threes are really cool, and that I really do have strong feelings about chatbots.
To begin, you should know that Year Threes are seven or eight years old, slightly older than my ‘preferred’ age group to teach. Yes, I find seven- and eight-year-olds slightly intimidating! Unlike three-to-five-year-olds, they can actually read the instructions on the board and tell the time, and because they’ve been in school for longer, the stuff they are learning is a bit harder, so I have to do things like double-check my answers to the maths questions I’m modelling and learn what an expanded noun phrase is.
With all that in mind, I’d had a very successful and enjoyable day with this lot, and had been able to make use of the advantages of slightly older students. As I said, unlike three-to-five-year-olds, they can actually read the instructions on the board so don’t need them repeated 15,000 times, can tell the time so can remind me of their routine themselves, and have had a bit more practice in school so can build on their prior learning a bit more independently!
Even so, by the time it was 2:45 and we had completed all the set teaching for the day, I was a bit stuck for how to vamp until home time (again, I have no such anxieties when working in the early years: I know how to extend the telling of a storybook, I know loads of songs the little ones love, and I know lots of them enjoy the chance to get up and perform for their classmates, so sometimes I don’t even have to do the talking). Luckily, the class led the way for me.
“Ms Hill, is your first name Dolores?” someone asked on behalf of their peers, who had obviously all been discussing this during their afternoon break and were anxious to hear the answer. I confirmed that yes, my first name is Dolores, and was immediately reminded that there is another well-known teacher with the same name: one Professor Dolores Umbridge, one of the many evil, authoritarian, oppressive Defence Against the Dark Arts teachers who put in a shift at Hogwarts. The pupils reminded me how terrible she is: cruel, manipulative, changing the curriculum to minimise the defensive skills the magical students need to protect themselves as their world becomes increasingly dangerous.
After establishing that I am MUCH nicer and a much better teacher than Professor Umbridge, we got into the juicy stuff. I thought out loud about why she would have wanted to deskill her students. What benefits would it serve her, and the regime she supported, to have the next generation of wizards and witches almost entirely defenceless against the forces that sought to oppress them? Why would it be useful to deny them information about the realities of the regime? How could rewriting or even denying history help those in power manipulate people into seeing their way of thinking, and acting on their behalf?
Ding ding ding, the children had all the answers. “It would make it easier for the baddies to control the goodies.” “It could stop people even realising they were at risk until it was too late.” “It would turn family and friends and neighbours against each other.” And what did the students do to resist this, I asked? “They talked to each other!” “They thought about how they were feeling and if it was right – even if a teacher was telling them it was!” “They got the information they needed themselves!” “They figured out the big plan and joined up to fight against it!”
And so, I asked them, is it a good idea for us to just accept whatever we are told, ignore each other’s feelings, and follow instructions whatever they are and regardless of who gives them to us? “No!” And should we be telling people we don’t know or don’t trust all of our information, helping them learn even better how to influence our ideas? “No!” Would we be very good at standing up to Umbridge if we just read what she told us to, instead of making sure we checked where the information came from, how it was deduced, and what it really means for us? “No!” Exactly… do you lot know about AI? “Yes!”
Learning about why there’s good reason for churning about them
What do you know about it? “It talks to you.” “It makes pictures.” “Sometimes inappropriate ones.” “It has loads and loads of information and is so quick.” “But it is sometimes wrong.” “And it’s sexist.” Oh interesting, what do you mean by that? “Well, if you ask it for a picture of a doctor it will always make it a man!” Yeah, you’re right. It is also unfortunately prone to doing things like that with race and age (Stanford University, 2024; UNESCO, 2024; Hong and Choi, 2025). And imagine if from now on people only ever asked AI like ChatGPT what a doctor looked like, and it only ever showed a white man… do you think that would inspire other kinds of people to become doctors? “No!” And yes, it does get things wrong. Unfortunately, the way those AIs work is that they take all the information they can get their hands on (Forbes, 2026) and put it into a big pot of sloppy soup; then you ask a question, and they scoop out whatever they find that’s somewhat related and plop it on your plate. Would you want to eat that? “No!”
When discussing (so-called) Artificial Intelligence, it’s essential to differentiate between the different types. Large language models (LLMs) predict which word they should produce next based on the linguistic statistics of the content they’ve been trained on (IBM, 2026). So far, so not intelligent in any meaningful way, but they appear to be useful for tasks where the only necessary skill is a rudimentary awareness of what word normally comes next in a sentence. Unfortunately, as is now well known (and weirdly well accepted), the most popular LLMs, such as ChatGPT, Claude, and Gemini, are often ineffective at answering even some of the simplest queries their users put to them, making mistakes or entirely making things up, and becoming more and more sycophantic whenever they are corrected (IBM, 2026).
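For the curious, that “predict the next word from statistics” idea can be sketched with a toy bigram model in Python. This is a drastic simplification of a real LLM (which uses neural networks trained on vast corpora, not simple counts), but the core mechanism of predicting the next word from patterns in training text is the same; the tiny corpus here is purely illustrative.

```python
from collections import Counter, defaultdict

# A tiny "training corpus". Real LLMs train on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which word.
follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# "cat" follows "the" twice; "mat" and "fish" only once each.
print(predict_next("the"))  # → cat
```

A model like this has no idea what a cat or a mat *is*; it only knows which word tended to come next, which is why scaling the same basic trick up produces fluent text without guaranteeing truth.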
LLMs such as these are also unethical in a multitude of ways. The sloppy soup of information the models are trained to dip their ladles into is largely stolen content, so not only is the information often poor, but by providing it to us with no sources, the models deprive the rightful owners of that information of the credit – and us of the chance to verify it. Some people have learnt how to better sort the slop to find useful information in it, and feel that it really does save them time and saves organisations money, though my question is always: who is really benefitting from those savings? Because it doesn’t seem to be us; it seems to be the for-profit bosses and the owners of the models themselves, who, incidentally, have created a lovely big bubble of loans between each other and other investors that risks bursting and plunging us into even more economic turmoil (Forbes, 2026).
Moreover, the energy required for these sloppy outputs is contributing to the rising risk of global water bankruptcy (United Nations University, 2026), and their data centres are using land needed for agriculture, homes, and green spaces (Business Energy UK, 2026). Finally, ChatGPT is now happily handing over your questions, responses, musings, dreams, fears, and plans to the monster in the White House, who is using that information to harm people (The Guardian, 2026). Is that worth the five minutes saved on writing an email oneself? Or making a caricature of what one’s work self looks like? Or outsourcing entirely voluntary, mostly enjoyable tasks like planning dinner parties, thinking of gifts for loved ones, or reading a book recommended by a friend?
As I went on to discuss with my Year Threes, there are other AI tools that do deserve the title a little more than LLMs do. And even LLMs could have a place in our lives where they genuinely support us without such major costs. Those churnings will be out next time!
Until then, thank you for reading.
Dolores
I would love to hear your thoughts on this, please comment them!
If you think others would love to hear my thoughts on this, please share them!
And to find out when I share the unofficial part two of this piece, subscribe!
References
Age-ism: Hong and Choi (2025), https://pmc.ncbi.nlm.nih.gov/articles/PMC12762376/
AI water use: Business Energy UK (2026), https://www.businessenergyuk.com/knowledge-hub/chatgpt-energy-consumption-visualized/#:~:text=The%20water%20AI%20companies%20use,and%20to%20put%20out%20fires.
ChatGPT and authoritarianism: The Guardian (2026), https://www.theguardian.com/commentisfree/2026/mar/04/quit-chatgpt-subscription-boycott-silicon-valley
Gender bias: UNESCO (2024), https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes
Racism: Stanford University (2024), https://hai.stanford.edu/news/covert-racism-ai-how-language-models-are-reinforcing-outdated-stereotypes
Stolen content and the LLM credit bubble: Forbes (2026), https://www.forbes.com/sites/eriksherman/2026/01/18/new-research-shows-llms-face-a-big-copyright-risk/
Water bankruptcy: United Nations University (2026), https://unu.edu/inweh/news/world-enters-era-of-global-water-bankruptcy
What are LLMs: IBM (2026), https://www.ibm.com/think/topics/large-language-models