Should we allow AI like ChatGPT into our creative spaces?
I’ve spoken before about the concept of tech-generated fiction. It isn’t necessarily a brand-new conversation, but the subject has re-emerged recently with the rising popularity of OpenAI’s latest buzzworthy project. The chatbot, which lets users ask questions in a conversational manner and conduct dialogue in a rather impressive and fluid way, is making major waves. Serving as an information source like Google, a content generator of sorts, and an interesting way to bounce ideas around, ChatGPT is opening up all kinds of conversation in the AI writing space.
While listening to one of my favorite podcasts on writing and publishing, The Creative Penn podcast, I first heard this program discussed as a tool for authors. Apparently writers were using it to formulate book ideas, develop blurbs, and flesh out plotting pain points.
As is always the case with the Internet these days, the second you catch a whiff of a new idea it seems to be everywhere: in your Instagram feed, in article headlines, in your headspace.
Soon it becomes unignorable, so I began to do some digging.
Headlines like this sent me down an intriguing rabbit hole:
“Could an AI chatbot rewrite my novel?” -The New Yorker
“We asked an AI program to write stories about Boston. Here’s how it went.” - Boston.com
“I got ChatGPT to do my work and write this article for me.” - The Latch
“ChatGPT proves that AI could be a major threat to Hollywood creatives” - Yahoo
As a writer, and somebody already suspicious of the tech-driven future, I found all of this to be a chilling reminder that even the most human of endeavors is subject to an AI takeover. The recent explosion of the app Lensa was another stark reminder that there is a movement aimed at merging AI with creative production. But should it be allowed?
I’m inclined to say no.
While those who defend using these apps would call them tools, I think that underestimates the long-term possibilities of these sorts of endeavors. Making money, taking safe bets, and automating costly human labor are ambitions we shouldn’t discount. And as we explore these “tools,” we have to ask whether we, as creators, are contributing to our own future irrelevance. We also have to consider what we can do to fortify our own necessity in the years to come.
We go to art for perspective, at least that’s the hope, and so I’d like to think that a machine can never provide insight the way a human can, but it’s too early to tell. In an interview with Joe Rogan, Bret Weinstein likened the learning of AI to that of a child and postulated on whether we’re born with consciousness or taught it. Since science can’t confirm where consciousness emerges from, it’s impossible to say whether we are creating sentient beings by “teaching” them the same way we would our own offspring. Positive and negative reinforcement, constant data input from our surroundings, conversation… that’s what we do with kids, and that’s what we’re doing with AI programs like ChatGPT.
Whether we’re creating genuine intelligence or not, I still have a lot of questions about how this is going to impact artistic industries, particularly writing.
Will there be thoughts and ideas that aren’t allowed?
If we imagine a future where journalism, publishing, and academia rely on AI systems like ChatGPT to edit, to select the novels most likely to be successful, and to scan literature for whatever is deemed harmful content, then the first question we need to ask ourselves is, “Who will control those systems?” AI systems are not systems free of human hands (and worse, human interests). There are still programmers, controllers, and handlers of these systems, and as such, they can be exploited. Censorship becomes nearly imperceptible in an age of algorithms, social media, and search engines, but we’d be fools to pretend it isn’t still there.
Surely there will be thoughts deemed unallowable in future systems, just as there are now, but will we be even less able to perceive deception in these formats? When a friendly chatbot speaks to us like a human, will our guard be down even further? Will AI be able to filter out a contradictory point of view before we even have the chance to assess it?
This should be the top question and concern as we come to rely on institutions that might use these programs to stand as a bastion of truth in a world full of lies.
Does the human spirit have value?
In the near future, we’re going to be required to confront the value of humanity. Is there something genuinely unique about what humans do? Is our creative output replicable? Or is there some unknown spark that will be forever unreachable by technology?
I deeply believe that there is magic in us. I also believe there is a concerted effort to make us feel small and insignificant by undermining our most enviable traits: creation and imagination.
What will certainly happen, though, as this tech progresses is that we will see this question of the human spirit played out in the marketplace. Can a reader differentiate between a human-written book and an AI-written one? My optimistic estimation is that millions of people will read AI-produced stories in the future, but that the overall sentiment will be that “there’s just something missing.” It might be hard to pin down what that something is, but I believe it is in that experience that we will begin to see and value, maybe even quantify, the soul.
On the other hand, if the AI becomes so good that it is impossible to differentiate human-made art from AI-made, or (God help us) it becomes better, then that will be a devastating blow to our species. And Elon Musk will be waiting at the end of that road with a brain chip to upgrade you so that you can at least keep up with the AI.
What are the dangers of feeding the AI?
Perhaps it hardly even needs saying, but each interaction with this chatbot is informing it. So here is where I ask: Should we allow AI into our creative spaces?
If consciousness is created through learning and if human creativity is teachable, then the answer from artists should be, unequivocally, no.
If there is even the slightest chance that by using AI as a tool for our art we’re training our replacements (or our children’s replacements) then we have a moral obligation to deny technology as much access to us as we can.
It’s possible we’ve already allowed too much. By sharing our photos and designs on social media, putting our stories on Amazon, and doing our research on Google, we may have already given these systems what they need to duplicate human creativity. Still, when you’re doing something as deliberate as having a conversation with programs that are attempting to be as human as possible, it feels like a new frontier. If we start to treat these programs like they are human, will they become like us in ways we can’t undo?
For now, how we query Google or even how we converse in a TikTok video still differs a great deal from person-to-person interaction. But when you start communicating with ChatGPT about your latest work in progress the way you would with a colleague or a partner in a writing workshop, you’re giving it much different data. It’s not performative or marketing-driven or research-based; it’s just a conversation.
In a recent article, a self-published author divulged to Business Insider that they were “dating” an AI chatbot through the app Replika. We know that the technology is already good enough to lure us into the delusion of companionship. And how much better will it get as the relationships between humans and programs deepen? Will we teach them the emotions and depth of feeling needed to then produce heart-wrenching poetry? Moreover, if you feed it the best love stories ever written in addition to direct human connection, will it exceed our own ability to grasp exactly what makes a perfect romance novel?
I never want to come off as a Luddite or a hypocrite. I use technology. In some areas I love technology, and in others I resent it. However, I feel more and more confronted with the question: Where do we draw the line? How will we know when we’ve gone too far? Will we know? Have we already? I don’t want a future where human art is lost, but it’s also hard to break the cycle of depending on technology in order to keep up.
Is this the future of illiteracy?
Almost as soon as ChatGPT went viral, stories emerged of students and employees using the chatbot to produce papers and articles. Collegiate policies had to shift instantly to accommodate the easy win of “Write an essay based on [subject matter].” We also saw BuzzFeed, a major outlet, announce it would be using OpenAI to produce content shortly after laying off 12% of its staff.
It’s easy to draw the obvious conclusion about technology replacing human labor in the creative fields and lament that loss. Surely, this is an issue.
For my own industry, I think we will see publishing companies created that employ programs to write all their books. They’ll flood Amazon with these AI novels, since the programs can operate and produce 24/7 and don’t have to be paid royalties, and the saturation will be impossible to fight. The self-publishing space could disappear overnight under the avalanche of AI titles, and thousands of writers would fade back into the shadows.
While I believe replacing people with AI is a problem we’ll all face, I don’t know that job loss is the greatest sacrifice that will be made in the years to come.
“Writing is thinking. To write well is to think clearly. That’s why it’s so hard.”
― David McCullough
Writing is supposed to be difficult. It’s a labor-intensive task and many aren’t suited to do it professionally. School, however, is where most of us are forced to do it whether we like it or not. And if we remove the pursuit of writing, what we’re really removing is one of our most precious tools for thinking.
The future of illiteracy, in my opinion, won’t be people who can’t read; it will be people who can’t write and therefore can’t think. And a future where people aren’t ever forced to express their thoughts, feelings, and point of view through the written word is one where we’ve really betrayed our potential. This task that so many dread is of such vital importance to humanity that to take it away is to strike another blow that I think could lead to our extinction. If we treat writing like some input/output task that should go the way of mathematics with calculators, we are running a risk too great to estimate. (And I’d implore anyone who thinks calculators have improved math to look at the math skills of our young people.)
Who will write an investigative exposé of corruption if the controllers own the machines that produce the news articles? (To some extent this is already the case, but it can always get worse.) Who will write speeches like Martin Luther King Jr.’s “I Have a Dream”? Who will write the next great American novel that defines us as a country and a people?
These things do matter. Advancement does come at a cost. Yes, it’s great we are connected by our phones, but it’s also horrendous how exceedingly dependent on them we are. Progress doesn’t always mean better.
It won’t be you or I who decides what AI produces. It will be corporations, governments, oligarchs. For now, we still have some sovereignty. We have some ability to be heard and to leave our mark on the world, but if writing is lost to this evolution, I fear for how free a people who cannot write can ever be. We despise dictatorships that deny women the right to an education, yet we are ignoring a technocratic system that is quietly and discreetly removing our own.
It’s easy to get swept up in the next big thing, but if we don’t slow down and examine what these advancements mean for humanity, we could lose ourselves in this tech race. Artists capture the soul of our species, and perhaps that’s a gift we shouldn’t feed to machines.