On May 18, the Chicago Sun-Times published a summer reading list: fifteen books to enjoy during the holiday months. Unfortunately, ten of the books did not exist. The author of the feature, Marco Buscaglia, was not a staff journalist but a contributor from a syndicate on which the Sun-Times, having laid off many of its writers in recent years, has come to rely. Pressed for content, Buscaglia turned to an AI chatbot for help. The bot delivered. The list featured fabricated titles such as Tidewater Dreams and The Rainmakers, complete with publisher blurbs. One breathless summary described "a near-future American West where artificially induced rain has become a luxury commodity."
The list was plausible. It sounded like the kind of thing featured on morning TV or tucked into the "Employees' Picks" section of an independent bookstore, except it was mostly made up. The paper removed the article, refunded subscribers, and Buscaglia apologised. On the surface, it's an amusing little episode, an embarrassing footnote in the evolving story of AI. But underneath the comedy lies something quieter and more consequential: a cultural shift in how we determine what is real. Not a collapse of reality, but a fraying of it, thread by thread, list by list.
What is most unsettling about the Sun-Times story isn't the fakery; it's how easy it was to believe. The AI did not churn out gibberish; it mimicked the tone, cadence, and familiar clichés of real book reviews with uncanny precision. It understood the shape of authenticity, but not its substance. It wasn't trying to deceive; it simply didn't care. And that's the heart of the problem. Generative AI does not lie with intent. It operates without intent. Its goal is not accuracy, but fluency. It's not trained to check facts, only to sound plausible. As described in The Age of Bullshit, this is the defining feature of our new communication machine: not that it wants to fool you, but that it wants to impress you, by producing, effortlessly and endlessly, what amounts to 100 percent premium-grade bullshit.
In a media environment where tone often outweighs substance and polish is mistaken for credibility, the ability to effortlessly produce plausible-sounding nonsense is a dangerous skill. AI doesn't replace human writers; it replaces the hard work of writing. It removes the slow, clunky process of thinking, checking, and confirming. It smooths everything over with simulated insight. And the result is content that's effortless, polished, and sometimes fake.
Of course, this trend didn't start with AI. We've been softening our relationship with truth for years. Public trust in media, universities, and even science has been steadily eroding. Influencers talk about "my truth" as if reality were a boutique experience. In the current context, AI doesn't feel like an intrusion. It feels like the next logical step. So, yes, we can laugh at the fake summer book list. But let's not miss what it signals: a growing tolerance for approximation, a shrug toward verification. If it sounds right, looks right, and confirms what we already believe, we're likely to accept it. Plausibility has become a substitute for proof.
And once that shift takes hold, once we stop caring whether something is real as long as it feels right, the consequences stop being funny. University students use AI to ghostwrite essays and glide their way to increasingly worthless degrees. YouTube serves up deepfakes of Donald Trump singing We Are the World with uncanny conviction.
This deluge of fakery is not just a technical challenge. It's a moral and civic one. What kind of citizens will we be in a world where reality is infinitely remixable? How do we build trust among people and institutions if we can't be sure what is real?
It's too soon to despair. There is hope hidden within the absurdity of the summer reading list: readers noticed, and they flagged the fake books. It's easy to see this as a story about technology run amok, but it's also a story about human attention doing its job. The truth still matters, but it will need defenders, because AI will continue to improve. The simulations will get smoother, and reality will become something we must actively choose to seek, preserve, and protect.
The temptation is to demand restrictions: ban the tools, legislate the tech. But a better solution lies in strengthening our habits of discernment. Editors must get tougher. Writers must own their process. Readers must relearn scepticism: not the corrosive kind that distrusts everything, but the patient kind that pauses to ask: Is this real? We will also need institutional changes, including new standards for transparency, media literacy taught in schools, and norms that reward precision over performance. We need to resist the notion that a good-enough version of the truth is really good enough. And above all, we must restore a culture where accuracy is admired, not assumed.
Today, the fakes are about summer books. Tomorrow, they will be about war, elections, pandemics, and even the past itself. Indeed, tomorrow may have already arrived. A recently published White House report on how to "Make America Healthy Again" is filled with counterfeit references.
When everything can be faked - a headline, a government report, a confession, a memory - who gets to say what is real? What if the next deepfake is not funny, but a speech that triggers violence, a doctored vote count, or a perfectly planned crime? What happens when the real footage looks less convincing than the fake, when you can no longer tell whether Donald Trump is singing, or whether Donald Trump himself is real? Not a person, but a persona generated, edited, and endlessly iterated until we're no longer sure who originated what, or whether there ever was an original to begin with.
This is where we're heading: toward a world of unreality, so convenient and so convincing that we may get too lazy to insist on the truth. Seeking the truth is hard work. It's complicated and sometimes dull. But it is also the only thing that can save us. Let us enjoy the weird, funny moments while we can. But we shouldn't mistake them for harmless. They are not jokes. They're previews.