During a recent tour for my newly released book on what Jewish tradition teaches us about artificial intelligence, the third-most common question I received—after “What’s your book about?” and “What made you write it?”—was “Did you use AI to write your book?”
Each time, I would answer that third question with an unequivocal “no.” But I would also take care not to denigrate the potential of large language models (LLMs) like ChatGPT to assist writers and other creators in conceiving and expressing their ideas. Like any technological tool, AI presents tremendous opportunities and vertiginous risks.
My focus on this dichotomy intensified when I read about the demise of National Novel Writing Month, or NaNoWriMo, a nonprofit that for more than 25 years sought to foster creativity among young and unknown writers.
Every year since 1999, NaNoWriMo has dedicated the month of November to challenging budding writers to pen the first 50,000 words of a novel—or 1,667 words per day for 30 days. According to the organization, more than 400,000 writers participated in the program in 2022, more than 50,000 of whom met the challenge. The group boasts of spawning more than 400 novels published by traditional houses, such as the bestsellers The Night Circus by Erin Morgenstern and Water for Elephants by Sara Gruen.
But last year, the endeavor ran aground on the shoals of AI. Late last summer, NaNoWriMo updated its policies to include AI, stating: “If using AI will assist your creative process, you are welcome to use it. Using ChatGPT to write your entire novel would defeat the purpose of the challenge, though.” (The page containing these comments has since been taken down.) Controversy soon erupted, and in the face of withering criticism from inside and outside the organization, the group doubled down, issuing a statement (since scrubbed from its website but contemporaneously reported) affirming that “we believe that to categorically condemn AI would be to ignore classist and ableist issues surrounding the use of the technology and that questions around the use of AI tie to questions around privilege.” Around the same time, several bestselling authors resigned from its board in protest.
Since then, the group has announced it’s closing up shop. In a March video message, the organization’s interim executive director, Kilby Blades, acknowledged the group had run out of money but denied the shortfalls had anything to do with AI. “To blame NaNoWriMo’s demise on the events of the last year does a disservice to all struggling nonprofits,” Blades said.
The announcement triggered an avalanche of “I-told-you-so’s” from former program participants and supporters. “So many people worked so hard to make NaNoWriMo what it was,” the children’s and young adult author Maggie Tokuda-Hall wrote on Bluesky, “and it was all squandered to prop up a plagiarism machine.” Said another Bluesky user: “NaNoWriMo belongs to the writers, not some s— traitorous organization.”
But I’m not so sure we should cheer the collapse of a group that sought—however ham-handedly—to integrate AI into the process of human creativity.
Yes, contemporary creators understandably fear the encroachment of LLMs into their work. As ChatGPT, DeepSeek, and other tools continue to improve, their output is becoming increasingly difficult to distinguish from authentic human authorship, and those who write for a living are correct to be worried. The 2023 labor unrest that roiled the Writers Guild of America, for instance, centered in large part on the prospect of chatbots writing soap opera plots and screenplays for the next installment in the Avengers franchise. And, yes, NaNoWriMo’s stilted, sanctimonious references to ableism, classism, and privilege in its 2024 defense of LLM-assisted writing did the group no favors.
But underlying the group’s woke-speak is an important point: We can indeed, with appropriate safeguards and policies, channel ChatGPT and its ilk into enhancing human creativity. As with any technology developed over the course of human history, we possess the power to apply its potential for good or for ill, and we must forcefully choose the former.
Over the last several years alone, high school and university students around the world have rapidly learned how to use chatbots to spark ideas, conduct research, enhance their prose, and correct their grammar and spelling. And yes, many have abused these tools, triggering a cat-and-mouse game of plagiarism and detection with their teachers and professors. But for students or writers or musicians who struggle to get out of the starting gate, or who labor to cast their sentences or choruses or motifs into their optimal forms, LLMs offer a part of the solution.
Imagine an otherwise talented budding author stumped by classic writer’s block. She’s stuck on a particular element of her plotline that she cannot seem to resolve. She’s tried the usual approaches—talking to mentors, walking around the block, writing in to Story Club, the award-winning writer George Saunders’ fiction-writing blog—but nothing’s working. Instead, she asks ChatGPT to generate a few potential outcomes for her protagonist, a sort of 21st-century choose-your-own-adventure. From the five results she receives, she immediately rejects three as implausible or inappropriate, carefully considers but ultimately tosses a fourth, and then seizes on the rudiments of the fifth, editing, refining, polishing, and adjusting it to the tone of her own writing.
Using AI in this sort of creative manner isn’t limited to professional artists. On a recent episode of the Freakonomics podcast, economic journalist Adam Davidson recounted a touching story about an 84-year-old woman in India on life support:
Her husband, who is 92, was distraught. For all the obvious reasons, but for another one, too. He wanted to tell her how much she had meant to him, how wonderful their 60-plus years of life together had been. But he didn’t know how to say that in words. As it happens, his granddaughter, my friend’s daughter, works in A.I. She guided her grandfather through some A.I. prompts. Asked her grandfather some questions and entered them into ChatGPT. It produced a poem. A long poem. He said it perfectly captured his feelings about his wife. And that, on his own, he never would have been able to come up with the right words. He sat next to her, reading the poem, line by line. She died soon after. And he said it allows him to know he told her everything.
Indeed, as Joshua Gans, then the principal investigator of the National Bureau of Economic Research’s Economics of Artificial Intelligence project, told Davidson, “Anybody, even if they can’t string a few words together, can prompt ChatGPT to churn out their thoughts and then read it and sign off on it. There’s this potential for a great explosion in the number of people who can participate in written activity.” Such an explosion would be an unqualified benefit for humanity, and we should do everything we can to facilitate, not stifle, it. Indeed, a recent Organisation for Economic Co-operation and Development (OECD) report found that “literacy and numeracy skills among adults have largely declined or stagnated over the past decade in most OECD countries.” If generative AI can help make up for losses in those skills, we should welcome the innovation—all the while taking care to note, by way of disclaimer, that LLMs were used in the making of a given work.
I consider myself deeply fortunate (privileged?) to be able to express myself fairly well in print and out loud. And yet not everyone who wants to write or speak possesses the confidence or ability to do so. Some people, like the distraught Indian husband mentioned above, can benefit tremendously from innovations that stimulate their creativity on their own terms. Thus, if tools like chatbots can be harnessed to responsibly inspire, improve, and even innovate new forms of human creativity, we should absolutely welcome them to the party. Here’s hoping we’ll have the courage and wisdom to channel this technology for our benefit.