Brave? New World?

Photo by cottonbro

Today it came to my attention that one of the major online magazines in the field has temporarily closed to all submissions until they figure out a way to deal with the tons of incoming spam slush that, wait for it, was clearly written by AI, probably ChatGPT.

Yes, this is a problem we all should have seen coming. I’ve written about it before, but so far as I know this is the first time a magazine has actually shut down submissions over it. A certain class of hopefuls and maybes and probably-nevers has always existed, and like Merida, they want to change their fate, would try anything, and see this as their big chance. Or maybe it’s just the clueless wanting to make a quick (hah) buck? How it’s going to shake out is anyone’s guess, but I do take some small satisfaction in knowing that Fritz Leiber was already there in 1961 with his book, The Silver Eggheads. This was a future where all books were written by machine and “authors” were simply the people assigned to tend a particular machine. There was more to it, of course, but a review would say something like “Joe Scribbler writing on a Worderizer 3000 produced…” etc. The end product, if I recall correctly, was referred to as “word wooze.”

Part of the problem we have now is that, with a decent prompt and some example text, ChatGPT can do a decent job of it, likely producing something more literate than the hapless submitters could manage themselves. It’s only a matter of time before a purely AI-written story appears in a major magazine of the field. Maybe it already has. An AI-written self-published story or novel? Probably already out there, or very soon will be.

Yes, I do know there are online “AI detectors” that can take a text and determine with fair accuracy whether or not it was written by a human, but that’s beside the point. So far as most editors are concerned, “Ain’t nobody got time for that.” They get a lot of submissions that have to be dealt with as quickly and efficiently as possible. Slush readers are either volunteers/interns or the lowest editor on the totem pole, if the magazine has more than one, which often is not the case. I don’t pretend to know what the solution might be, but there has to be one. Stopping people from submitting AI-written stories probably isn’t going to happen, because how would you? Especially as the AI gets better; I can see a day when such stories are indistinguishable from human-created ones by any objective measure.

Just as Stable Diffusion and Dall-E are shaking up the art world, now it’s our turn. Fair is fair, I guess. Sort of.

I take a little comfort in knowing that it still takes some skill to get the result out of an AI that you intended. As I noted above, a decent prompt is required. I’ll give a personal example. I asked ChatGPT to write a routine in C++ to print the Fibonacci series. It worked perfectly. Then I asked ChatGPT to write a function where, given an integer, it would produce the previous two numbers in the Fibonacci series.

Total train wreck.

Some of you may have already seen that coming. I asked it to take an integer. I didn’t specify that the integer was actually IN the Fibonacci series.

Whoops. Garbage in, garbage out.
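For the curious, here’s roughly what the fixed version has to do, in C++ since that’s what I asked ChatGPT for. This is my own sketch (function name and all), not ChatGPT’s output: it walks up the series and rejects any input that isn’t actually a Fibonacci number, which is exactly the check my prompt never asked for.

```cpp
#include <optional>
#include <utility>

// Given n, return the two Fibonacci numbers that precede it, but only
// if n itself appears in the series. Returns std::nullopt otherwise.
// (A sketch of my own, not ChatGPT's answer.)
std::optional<std::pair<long long, long long>> previousTwoFib(long long n) {
    // 0 and 1 sit at the very start of the series, where "the previous
    // two numbers" is ambiguous, so treat them as out of scope here.
    if (n < 2) return std::nullopt;
    long long a = 1, b = 1;              // F(1), F(2)
    while (b < n) {                      // walk up until we reach or pass n
        long long next = a + b;
        a = b;
        b = next;
    }
    if (b != n) return std::nullopt;     // garbage in: n is not a Fibonacci number
    // Here b == n == F(k) and a == F(k-1), so F(k-2) == n - a.
    return std::make_pair(n - a, a);     // { F(k-2), F(k-1) }
}
```

Called with 8, it returns the pair {3, 5}; called with 4, it returns nothing at all instead of the garbage I got the first time.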

So there is still some skill involved, and a human, as in The Silver Eggheads, has to provide that skill. Likely that’s where it will all go wrong. For the cheater, that is.

Is it Real, or is it AI?

Photo by cottonbro

In the still of night
The moon casts its silver light
On the silent pond.

A bit misleading, that. Which is rather the subject of this post. There’s a new AI model called ChatGPT by OpenAI. Sure, it answers questions, but it also writes text on demand about almost any subject. It’s good. Maybe not human-level good in every instance, but surprisingly good.

Take the haiku above, for example. That wasn’t me. I asked ChatGPT to write a haiku in the style of Basho, the 17th-century Japanese poet. That’s what it came up with, and it only took a few seconds. Or this below:

Golden slumbers kiss your eyes

Smiles awake you when you rise

Sleep, pretty wantons, do not cry

And I will sing a lullaby

Rock them, rock them, lullaby

Care is heavy, therefore sleep

You are pretty, therefore sleep

For when you wake, you shall have all

The world’s delight, and all the world’s

Fashion, and all the world’s desire

Sleep, pretty wantons, do not cry

And I will sing a lullaby

I told it to write a poem in the style of W.B. Yeats. That’s all I told it. Proper poets may take issue, but I think it did all right…except for the fact that it totally cheated. It simply copied a poem by Thomas Dekker (1570-1632) and changed a few words. That’s something we still have to watch for, though as AI models improve, it may become less of an issue. For all I know, the Basho was also copied; I haven’t been able to find a close analogue, so maybe it wasn’t. If anyone can point me to an original, I’ll take it down. For something like this, it’s harder to tell.

Which is kind of the point, so I had to do a little experiment. As AI models get better at imitating humans, we’re looking at a generation of students and essayists and perhaps even fiction writers never having to write another word if they don’t want to. Is it ethical? Of course not. Just ask the artists now raising the alarm over image-generating AI models like Dall-E 2 and Stable Diffusion. You tell them what you want in the picture, and they draw it. Sometimes really well. Sometimes it’s the stuff of nightmares. But, like GPT, they’re getting better.

So, where AI is the problem (or our lack of ethics, fully debatable), AI might be the solution. Some in the AI field are working on models that can distinguish between human and AI-generated work. I tried one of them out with a simple test: I showed it a paragraph written by ChatGPT from my prompt, and then a snippet from my own work in progress. I’m not reproducing the ChatGPT paragraph here, because I strongly suspect it was at least partially ripped off from an existing series; the name of the kingdom came from a known series. It also wasn’t very good, though I’ve seen worse. The detector thought it was fake. I hope it’s right. Here’s mine below:

Nailed it. According to the detector’s own caption, the model needs a certain number of “tokens” (word breakdowns) to be reliable, 50 or above to have a chance of being right. That is, the longer the piece, the better its chance of getting it right.

Here’s the rub: I also showed it both poems above. It failed on both. Granted, the pseudo-Basho only produced 15 tokens, so that’s no surprise, but the pseudo-Yeats? 93. Of course, most of that one was written by a human, and the detector apparently doesn’t take plagiarism into account, but still…

We may be in trouble, long term. Short term? Try passing this stuff off as your own work and you might be the one in trouble.