An AI-generated face. Freepik.com
Recently, I was at a conference on human-centered generative AI. One attendee told me, half-groaning, that they can always tell when an email has been written by AI: it's just too perfect. LLMs (large language models such as ChatGPT) have become a ubiquitous part of our lives, even in conference peer reviewing. I've spoken with students who use ChatGPT simply to have conversations.
Arguing against using AI in knowledge work today feels like swimming against a tidal wave. AI is also being used more frequently, and often indiscriminately, for routine tasks, as though by default. It's easy to access, saves time, and spares us the hard work of summarizing, interpreting, writing, and even reading. But what are the long-term consequences of such reliance?
LLMs are far from perfect. Two well-documented error types are known as hallucinations of factuality and hallucinations of faithfulness. The first refers to AI generating false information, as in the widely reported case of Mata v. Avianca, Inc., a personal injury lawsuit in which a lawyer used ChatGPT to draft a legal brief. The AI invented multiple court cases that sounded plausible but didn't exist. The attorney, who failed to verify the citations or disclose the use of AI, was fined for submitting the fabricated brief.
The second, a hallucination of faithfulness, occurs when the answer drifts from the question: ask about the economy of Belgium, and the system returns accurate facts about the Netherlands.
We can feel unease when reading AI output, not just because it gets things wrong, but because it does things too perfectly. In a classroom experiment I conducted (which I described in a previous Substack post), I asked students to summarize and reflect on assigned class readings, once using ChatGPT and once on their own. This continued over a ten-week course, giving me many examples to compare.
While the AI-generated summaries sometimes included hallucinations, they were polished and usually grammatically flawless. The student-written pieces, by contrast, were imperfect: sometimes clunky, sometimes grammatically off, but unmistakably human. They expressed what the students felt. To me, the AI versions felt robotic. Many students noticed this too, commenting that the AI-written work just didn't sound like something a person would write.
This unease reminded me of a phenomenon in robotics called the uncanny valley, a term introduced by the Japanese roboticist Masahiro Mori. As robots or digital avatars become more humanlike, people's comfort with them increases, but only up to a point. When they become almost human, yet not quite, we find them disturbing. A digital face that blinks too rhythmically becomes spooky. Pixar's short film Tin Toy featured an animated baby that many viewers found creepy. Hyperrealistic baby dolls can look preternaturally lifelike. You may not realize you're looking at a computer-generated character until a small, subtle flaw gives it away and triggers the uncanny reaction.
LLM writing can evoke a similar discomfort. The attendee at the conference wasn’t just annoyed by AI-written emails—she was unsettled. The content made sense, the grammar was perfect, but something felt off. It was too polished. A machine-written email may achieve its sender’s goal of saving time and effort, but it risks alienating the recipient.
As we move toward a world where our writing becomes error-free thanks to tools like Microsoft Copilot and other LLM-based assistants, we'll develop new norms. We'll come to expect messages to be flawless, which may make us even more reliant on AI. But isn't it refreshing to see imperfect writing once in a while? It reminds us that the writer is still human.
I found this beautiful passage on a Facebook page called Zambian authors, written by a member called 'you', and it struck me deeply:
There is an acceptable imperfection that comes with human writing...
An imperfection that can only stem from the depth of human emotion—something AI could never truly replicate.
Every misplaced comma, unexpected phrase, or raw, unpolished thought carries the fingerprint of its creator. It tells a story beyond the words themselves: a story of vulnerability, creativity, and the courage to express what’s often hard to articulate.
This stayed with me. AI doesn’t write from the heart—it has no heart. It lacks empathy, reflection, intentionality. It doesn’t understand its audience. When I evaluated my students’ AI-generated essays, I sensed this.
The person receiving an AI-generated message may not know who really wrote it—but if you’re the sender, you will. So, if you’re struggling to write a love letter, a thank-you note, or even a message of friendship—don’t outsource it to AI. AI can construct a perfectly phrased letter, but it’s not from your heart. You’re asking someone to respond to a polished version of yourself created from data scraped off the web. That version might be more fluent, but it isn’t you.
As an educator for many years, I worry about what students are truly learning about writing, and what insight they gain from course material, when they're not engaging with it directly. The struggle to write, to make sense of difficult texts, and to articulate ideas clearly is a critical part of learning. That's where growth happens.
Humans learn by making mistakes and reflecting on them. But as AI becomes a routine tool for writing, we risk losing those opportunities. The more we default to AI to write and interpret for us, the more likely we are to accept error-free outputs as the standard—without realizing what we’ve given up. We need to retain the courage to make mistakes, to be imperfect, and to develop our own thinking through the act of writing.
What happens when the habit of outsourcing our ideas to AI becomes so ingrained that a faultless product becomes the norm? Will we forget the richness and complexity that human expression brings to writing?
In a world increasingly shaped by AI, it’s tempting to value fluency, polish, and speed over the messy, harder, slower process of thinking and writing. But if we surrender too much to automation, we risk losing something essential—our own voices. AI will continue to improve; its hallucinations will be reduced and its tone will become more natural. But as AI blends more seamlessly into our daily practice and communication, will we begin to accept a synthetic version of ourselves?
Writing isn’t just about delivering information—it’s a way of making sense of the world, expressing emotion, and forming genuine connections with others. AI can be incredibly useful for technical tasks like software engineering. But when it comes to communicating with another person, interpreting a passage, or expressing your true self, step away from AI. Try writing it yourself. Leave room for human effort, mistakes, and creativity. Most of all, let’s embrace our imperfections and recognize the unique value of what we as humans can express from the heart.
******************************************
You can read more about our lives in the digital world in my book Attention Span, now in paperback with exercises to improve your focus.