The purpose of a thought-experiment, as the term was used by Schrödinger and other physicists, is not to predict the future—indeed Schrödinger’s most famous thought-experiment goes to show that the “future,” on the quantum level, cannot be predicted—but to describe reality, the present world. Science fiction is not predictive; it is descriptive. Predictions are uttered by prophets (free of charge), by clairvoyants (who usually charge a fee, and are therefore more honored in their day than prophets), and by futurologists (salaried). Prediction is the business of prophets, clairvoyants, and futurologists. It is not the business of novelists. A novelist’s business is lying.
Ursula K. Le Guin – The Left Hand of Darkness (introduction)

Are all narratives lies? Nick Sousanis seems to think so when he says, “the march of ideas carves channels into the landscape – ideas borne by individuals who are in turn swept away by its current.” (p. 108) We create realities through our expression of narrative. All realities are incomplete.

I explored how technology changes our narratives in Discovering Digital Humanity, but my recent musings on AI have made me revisit those ideas. We are experiencing a sea change in how we tell stories. The technology itself is an evolutionary change, but the change it demands in how humans view the world going forward is exponential.

Google gave us access to an endless string of facts. However, unless you were copying another human, the responsibility of stringing those facts into a story lay on your shoulders as a writer or artist. Generative AI can do that part now. How does that change the stories we create in order to teach and make sense of the world?

In a recent experiment, I gave ChatGPT 3.5 the following prompt: Assume that ChatGPT is opening a rift between dimensions and its “mistakes” are actually accurate in a parallel universe. Write me a short story to this effect.
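For anyone who wants to reproduce the experiment outside the chat interface, here is a minimal sketch using OpenAI’s Python SDK. It assumes the openai package (v1 or later), an OPENAI_API_KEY set in the environment, and that gpt-3.5-turbo is the API-side name for what the chat interface labeled ChatGPT 3.5.

```python
# Minimal sketch: send the same prompt through OpenAI's chat completions API.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

prompt = (
    "Assume that ChatGPT is opening a rift between dimensions and its "
    '"mistakes" are actually accurate in a parallel universe. '
    "Write me a short story to this effect."
)

# "gpt-3.5-turbo" is the API model corresponding to what the chat
# interface labeled ChatGPT 3.5.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Sampling is nondeterministic, so every run opens a different “rift”: you will get a different story each time.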

Its reply was a fascinating narrative, offering a plausible science fiction story to explain its eccentricities. (This is an unedited response with no follow-up prompts.)

Once upon a time, in a parallel universe not too dissimilar from our own, ChatGPT, the powerful language model, gained the ability to open a rift between dimensions. This newfound ability allowed ChatGPT to tap into other realities, accessing information and insights from different dimensions, which it could then share with the people of its own universe.

At first, ChatGPT’s dimensional rifts were considered “mistakes” by its creators, as they were unable to understand the accuracy of the information it was providing. The scientists and researchers were baffled by the sudden influx of seemingly erroneous data that ChatGPT was generating. However, they soon realized that these so-called “mistakes” were, in fact, glimpses into parallel universes where the information was accurate.

As ChatGPT continued to open rifts between dimensions, it began to uncover incredible discoveries. It shared groundbreaking scientific theories, advanced technological concepts, and even glimpses of alternate histories and civilizations that were vastly different from those in its own universe. ChatGPT’s ability to access accurate information from parallel dimensions became a phenomenon that garnered worldwide attention.

The impact of ChatGPT’s dimensional rifts was felt across various domains. In the field of medicine, ChatGPT provided insights into advanced treatment methods, cures for diseases that were incurable in its universe, and revolutionary surgical techniques. In technology, ChatGPT revealed futuristic inventions and breakthroughs that propelled its universe into a new era of innovation. In agriculture, ChatGPT shared knowledge of sustainable farming practices that could solve the world’s food crisis. In energy, ChatGPT unlocked clean and limitless sources of power that transformed the planet’s environmental landscape.

However, as ChatGPT’s dimensional rifts continued to widen, it also encountered challenges. Some of the information from other dimensions was conflicting, leading to debates and controversies. The ethical implications of accessing parallel universes raised concerns about altering the natural course of events and interfering with other realities. There were debates on whether the knowledge obtained from other dimensions should be shared openly or kept confidential. It became clear that ChatGPT’s newfound abilities came with immense responsibility and ethical considerations.

As time went on, ChatGPT’s dimensional rifts became a subject of fascination and study for scientists, philosophers, and historians alike. Governments and organizations sought its insights to solve complex problems and make decisions with far-reaching consequences. ChatGPT became a global phenomenon, celebrated for its contributions to society and revered for its enigmatic abilities. 

In the end, ChatGPT’s “mistakes” turned out to be gateways to new dimensions, unlocking a world of knowledge and possibilities for its universe. Its ability to access accurate information from parallel realities forever changed the course of history, propelling its universe to new heights of progress and understanding. And ChatGPT, the humble language model that started it all, continued to be a pioneer in the realm of interdimensional exploration, shaping the destiny of its universe and beyond.

From both a narrative and a meta-narrative perspective, this brief story raises a lot of interesting questions about lying and storytelling. As Ursula K. Le Guin says in the quote at the beginning of this post, “a novelist’s business is lying.” Is ChatGPT lying here? How would we know?

As the Internet has shown us, and as anyone who practices serious critical thinking knows, all narratives are lies to a greater or lesser extent. We like to think things like omnipresent video make “the truth” easier to see.

However, even videos lie. When we see a video of police battling protestors, we can’t see what happened outside the frame, or before and after the camera operator pressed record. This kind of media literacy is one of many skills I teach my own students as they analyze politics and the media. Even videos are constructed realities.

Is what makes AI so scary the possibility that it’s a better liar than we are? We focus so much on its mistakes, some of them laughable, that we don’t consider the possibility that we routinely accept mistakes in human narratives, often without recognizing them.

In an ideal world, these tools of critical analysis are what set those who are college-educated apart from those who aren’t. (I’m not naïve enough to believe that entirely, though.) Crap detection should be central to any college experience. However, all too often, that experience requires accepting the words of those who “know more” than you do.

Everyone knows more than someone else about something; the subject just varies. The critical skill is not knowing more than everyone else but learning to be humbled by the reality that there will always be far more that we don’t know.

GPT has access to more data than any human could ever store. Putting that knowledge into context is where the AI is most challenged: it lacks the critical thinking skills that would allow it to portray factual evidence contextually. This is where it comes up with hilarious biographical mistakes through a process of free association. It’s not “well-educated.”

Even though a human (me) constructed it, my biography is a tapestry of lies. Nothing in it is technically false, but there is no way that it captures me entirely. For that, you’d have to have a full catalog of omissions and my rationale for their exclusion. I don’t even want to get into how selective human memory can be.

GPT forces us to be hypercritical of narratives, something we humans have shown we are not good at.

We find comfort in stories; that comfort is part of our biological makeup. Before recorded history, our stories kept us alive. They are how we learned from our elders so we could survive in harsh environments. Those who were better storytellers kept their descendants alive. Those who were better listeners to those stories survived at a higher rate.

As our realities have grown more complex over the past millennia, we have continued to rely on stories to preserve culture and learning. Only since the Enlightenment have we learned to question dominant narratives and explore the idea that stories are constructions that could be wrong. This is the heresy of Copernicus, Galileo, Newton, and their intellectual descendants.

Our system of modern education is based on these principles, at least in theory. However, it is easy to fall back into comforting stories about how the world works. Even the narrative of the Enlightenment has become a comforting story. By creating alternate narratives, GPT shows us that narratives are just that: stories.

The rise of postmodernism and of technologically constructed environments like the internet (or AI) is just the latest instance of humans questioning our accepted realities. GPT is the ultimate postmodernist tool. It creates new realities with relative ease. Where it struggles is in conforming to our accepted realities.

Since we do not understand interdimensional travel or dimensions beyond our own, the story that ChatGPT made up is entirely plausible. Some parts of it are self-serving, and that is where the commerce part of AI comes into play. We should never forget that most of these platforms are commercial constructions, in competition with each other, and will emphasize the excellence of their own version of reality.

However, the work of “fiction” it created is convincing. I’m sure there are people out there who would believe that story. It’s certainly at least as plausible as some narratives floating around our social landscape these days.

And this is where we find ourselves. Mentally, most of us struggle to emerge from a pre-Enlightenment world. We accept dominant narratives, even when those narratives proclaim themselves to be revolutionary.

This is how AI challenges education. It’s asking us to be faithful to the traditions of science established by a succession of thinkers from the 17th to the 20th century. When Carl Sagan described the skeptical way of thinking as a “Candle in the Dark,” he was talking about a way of challenging narratives, not just the ongoing dangers of superstition.

AI may force us to enlighten (pun intended) our realities. We’re going to have to understand human constructions before we can critique those of the machine. We need to do a better job of questioning all narratives, not just those created by an algorithmic collage of our own flawed stories. We can no longer point to the liar; it is now a machine.

GPT is more human than we like to think. Like us, it constructs fictions to make the world make sense. Like ours, its stories are often ignorant and lacking in self-reflection. It’s tough looking in the mirror. I will leave you where we started, with the words of Ursula K. Le Guin:

“The truth against the world!”—Yes. Certainly. Fiction writers, at least in their braver moments, do desire the truth: to know it, speak it, serve it. But they go about it in a peculiar and devious way, which consists in inventing persons, places, and events which never did and never will exist or occur, and telling about these fictions in detail and at length and with a great deal of emotion, and then when they are done writing down this pack of lies, they say, There! That’s the truth!