Thank You!


A big round of thanks to everyone who helped make the A Warrior of Dreams promotion a success! The book hovered between #7 and #11 for most of the weekend in the “Coming of Age Fantasy” category. I couldn’t have asked for much better.

The whole point of giving a book away is to try to reach new readers. We’ll see if that happens, but right now I like the odds.

Free Stuff

I’m giving everyone a late Christmas present (sounds nicer than “free book promotion”) through a promotion that also has other free fantasy books going, if you want to check them out. The book I’ve chosen is A WARRIOR OF DREAMS, an earlier work of mine inspired by both Andre Norton and Lord Dunsany (there’s a combination), and one I remain proud of to this day. Regardless, in all territories A*zon reaches, the ebook version is free and will remain so through the weekend (12/30/2022 to the end of Sunday, January 1st, 2023 at 12AM PST).

If you don’t want to go through the link above, I’ve included direct links to the US, UK, and Canada A*zon sites, otherwise you can search for the book directly on your preferred site. Hope you enjoy!




As of 8PM EST Dec 30, 2022: #8 in the “Coming of Age” fantasy category (Top 100 Free). Thanks, everyone!

Is it Real, or is it AI?

Photo by cottonbro

In the still of night
The moon casts its silver light
On the silent pond.

A bit misleading, that. Which is rather the subject of this post. There’s a new AI model called ChatGPT by OpenAI. Sure, it answers questions, but it also writes text on demand about almost any subject. It’s good. Maybe not human-level good in every instance, but surprisingly good.

Take the haiku above, for example. That wasn’t me. I asked ChatGPT to write a haiku in the style of Basho, the 17th-century Japanese poet. That’s what it came up with, and it only took a few seconds. Or this below:

Golden slumbers kiss your eyes

Smiles awake you when you rise

Sleep, pretty wantons, do not cry

And I will sing a lullaby

Rock them, rock them, lullaby

Care is heavy, therefore sleep

You are pretty, therefore sleep

For when you wake, you shall have all

The world’s delight, and all the world’s

Fashion, and all the world’s desire

Sleep, pretty wantons, do not cry

And I will sing a lullaby

I told it to write a poem in the style of W.B. Yeats. That’s all I told it. Proper poets may take issue, but I think it did all right…except for the fact that it totally cheated. It simply copied a poem by Thomas Dekker (1570-1632) and changed a few words. That’s something we still have to watch for, though as AI models improve, it may become less of an issue. For all I know, the Basho was also copied, though I haven’t been able to find a close analogue, so maybe it didn’t. If anyone can point me to an original, I’ll take it down. For something like this, it’s harder to tell.

Which is kind of the point, so I had to do a little experiment. As AI models get better at imitating humans, we’re looking at a generation of students, essayists, and perhaps even fiction writers never having to write another word if they don’t want to. Is it ethical? Of course not. Just ask the artists now raising the alarm over image-generating AI models like DALL-E 2 and Stable Diffusion. You tell them what you want in the picture, and they draw it. Sometimes really well. Sometimes it’s the stuff of nightmares. But, like GPT, they’re getting better.

So, where AI is the problem (or our lack of ethics is, fully debatable), AI might also be the solution. Some in the AI field are working on models that can distinguish between human- and AI-generated work. I tried one of them out with a simple test: I showed it a paragraph written by ChatGPT from my prompt, and then a snippet from the work in progress. I’m not showing the ChatGPT paragraph, because I strongly suspect it was at least partially ripped off from an existing series; the name of the kingdom came straight from one. It also wasn’t very good, though I’ve seen worse. The detector thought it was fake. I hope it’s right. Here’s mine below:

Nailed it. The detector needs a certain number of “tokens” (word or word-piece units) to be reliable: 50 or more to have a real chance of being right. That is, the longer the piece, the better its chances of getting it right.

Here’s the rub: I also showed it both poems above, and it failed both. Granted, the pseudo-Basho only produced 15 tokens, so that’s no surprise, but the pseudo-Yeats? 93 tokens. Of course, most of that one was written by a human, and the detector apparently doesn’t take plagiarism into account, but still…
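As a back-of-envelope illustration of that length threshold, here’s a minimal sketch in Python. Everything in it is an assumption for illustration: the real detector uses its own subword tokenizer, so splitting on whitespace only approximates its token count, and the `MIN_TOKENS` value and function names are mine, not the detector’s.

```python
# Rough check of whether a text sample is long enough for an AI detector
# to judge reliably. Whitespace splitting is only an approximation of a
# real detector's subword tokenizer; MIN_TOKENS reflects the ~50-token
# threshold discussed above.

MIN_TOKENS = 50  # assumed threshold, per the discussion above

def rough_token_count(text: str) -> int:
    """Approximate token count by splitting on whitespace."""
    return len(text.split())

def long_enough(text: str, minimum: int = MIN_TOKENS) -> bool:
    """True if the sample likely meets the detector's minimum length."""
    return rough_token_count(text) >= minimum

haiku = ("In the still of night "
         "The moon casts its silver light "
         "On the silent pond.")

print(rough_token_count(haiku))  # 15 words: far too short to judge
print(long_enough(haiku))        # False
```

By this crude measure, the pseudo-Basho falls well under the threshold, which is consistent with the detector stumbling on it.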

We may be in trouble, long term. Short term? Try passing this stuff off as your own work and you might be the one in trouble.

Continuing Education

Photo by Kindel Media

I’ve talked about my serial obsessions a little bit here. This isn’t quite that, but somewhat related. I just started an online course in programming microcontrollers. Not that I plan to put the cat on an automatic feeding schedule or program a sensor so we know when to water a plant. So why? It’s not like I’m planning a career change at this point.

The short answer is “because I can.” The longer answer is that I need to understand microcontrollers before I take the next course I have planned, on robotics. Why? Because the subject interests me, I want to know more about it, and what more reason does anyone need? Same reason I took a previous course in AI. But don’t worry: The Seventh Law of Power is still progressing. In fact, I’m at the point where Marta is beginning to realize that the Seventh Law is very different from the previous six. Just how different is the crux of the entire book. Since I’ve already figured it out, I’m sure she’ll catch on soon. She’s kinda sharp that way.