Closest Thing to a Replicator

The radical judges were put in place to do a job, and they are doing it. Every woman I know is mad as hell, and every man with more understanding than a brick is also. Not to mention that the freshly-packed Trump Supreme Court (TSC) immediately went chipping away at the separation of Church and State, and Thomas has proclaimed they’ll take on Marriage Equality (as in attack) and contraception later. Everyone knew this was coming, and I’m only surprised it took this long, and that’s all I’ll say on the matter…for now.

I want to talk about happier things, or at least something else. The Seventh Law of Power is going along fine (new character(s) coming in soon), and I bought myself a late birthday present: a 3D Printer.

I always thought one of the coolest things in later Star Trek was the notion of the Replicator. More magic than technology, so far as I was concerned. A thing in the wall: you ask it for precisely what you want, and if you get the request in the proper form (Earl Grey, Hot), you get it…mostly. Tell me that’s not a Genie granting wishes, though I’m a little surprised the thing didn’t try to create Earl Grey on fire. It was very literal.

Those of you who have experience with 3D Printers already know they have a lot in common with the Replicator, if far less capable. You have to know what you want, express it precisely enough that a slicer app can turn the model into g-code instructions the device understands, and the thing makes it for you…if nothing goes wrong. Though calling it a printer is a bit of a misnomer. It’s more of a mini additive manufacturing device in this context, though I have seen versions large enough to make a house.

Anyway, once I had it assembled (which is another story. Supposed to take an hour. Took most of the day), I took my first stab at it just to see if I had put the thing together right, using an existing file for the test. You can see the result above. Not bad for a first attempt, if a little fuzzy around the ears.

Next I’m going to get my feet wet in CAD software so I can make my own designs. Because.

75% Chance of Human

Photo by Kindel Media on Pexels.com

I’m almost done with the introductory AI course, with three classes left. Also almost done with Chapter 10 of The Seventh Law of Power, though the two are not related, other than serving as evidence I can still multitask. Marta has just had to deal with a suspiciously inept assassin and I’ve been deep in the weeds with Mediapipe and Tensorflow.

We did a series on facial recognition early in the course, with pictures of known politicians (posed) for training data and more candid shots for the recognition bit. The software was amazingly accurate. In the last few lessons we’ve moved to something more fundamental than simple ID: spotting what is and isn’t a face at all, and recognizing gestures.

In a recent lesson, the goal was to make the program recognize a face on camera and draw key index points on that face, using our own faces as the targets. While we were looking at the raw data, one of the first parameters to show up was the program’s confidence that “this is a human face,” given as some fraction of one. Our instructor was fussing that the routine never gave his own face a probability higher than .93, or 93% confidence.

Mine was never more confident than 75%.
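
For the curious, that confidence number comes straight out of the detection results. Here’s a minimal sketch of the exercise, assuming MediaPipe’s face_detection solution; the course used Mediapipe, but this particular code is my own reconstruction, not theirs:

```python
# Minimal webcam face-detection sketch (a reconstruction, not the course's code).
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_detection
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam
with mp_face.FaceDetection(min_detection_confidence=0.5) as detector:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe wants RGB; OpenCV captures BGR.
        results = detector.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.detections:
            for det in results.detections:
                # det.score[0] is the "this is a human face" probability.
                print(f"human face, confidence {det.score[0]:.2f}")
                mp_draw.draw_detection(frame, det)  # box plus key index points
        cv2.imshow("face", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```

Run it and the console scrolls confidence scores while the preview window shows the box and key points.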

When it was time to identify gestures, you can guess which one I chose.

A Very Fast Idiot

Photo by cottonbro on Pexels.com

Back in college my major was Polymer Science. Plastics, resins, that sort of thing. Even so, it was deemed necessary for undergrads in technical fields to gain some exposure to computers and programming.

Bear in mind, this was still early days in the computer revolution. Personal computers did not exist, nope, not so much as a Trash 80, though the Apple I was already on its way. What passed for a small computer was a DEC PDP-11, about the size of a large filing cabinet. Paper tape. Punch cards, which were bloody awful. Later, if you were lucky, a dumb CRT terminal. If you weren’t, a paper teletype machine. Our campus system was a XEROX mainframe (Sigma 9), 64K main memory (that’s K, not G or even M). The system took up an entire (and very large) room. Have I dated myself well enough? I should say so.

I knew nothing then, thinking computers were something almost magical a la Star Trek. Not these computers. You had to tell them everything, and I do mean everything, in precise instructions, in order, and they would do what you told them and nothing else. Problem was, what you thought you told them wasn’t always what you actually told them, and since we were the last generation running batch jobs it sometimes took a long wait before you knew you’d messed up. They were, as the “elves” in charge of the Sigma referred to them, “very fast idiots.”

I loved it. One of the biggest regrets of my misspent youth is that I didn’t change my major in my first year. Regardless, cut to the present. Narrow AI is progressing by leaps and bounds and is used everywhere (not always a good thing); general AI is either imminent or impossible, depending on who you ask. I’m taking online classes in machine learning because I can and I want to.

My last lesson was on gesture recognition. You film yourself on a webcam and, using prefab learning-model libraries, teach the computer to recognize a human hand. For practice we created an updated version of Pong, only this time you point at the top of the screen and the computer has to know to put the paddle where you’re pointing to intercept the ball. So far so good, only mine was creating a paddle when my hand wasn’t even in the shot. Took me a moment to realize why: it was interpreting the headstock of my Peavey Predator hanging on the wall as a hand.
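
The hand-tracking half looks roughly like the sketch below. I’m assuming MediaPipe’s Hands solution here; the paddle math and screen width are my own stand-ins, not the course’s actual Pong code:

```python
# Hand-tracking sketch for a Pong-style paddle (a stand-in, not the course's code).
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
SCREEN_W = 640  # hypothetical game-screen width in pixels

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            hand = results.multi_hand_landmarks[0]
            # Landmark 8 is the index fingertip; coordinates are normalized to [0, 1].
            tip_x = hand.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].x
            paddle_x = int(tip_x * SCREEN_W)  # where the paddle goes
            print(f"paddle at x = {paddle_x}")
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
```

Raising min_detection_confidence is one way to make it less eager to see hands in guitar headstocks, though I make no promises.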

Still doing exactly what it’s told, if not exactly what you intended. Still a very fast idiot. I’m not holding my breath on that general AI thing.

Sometimes You Just Gotta

Photo by cottonbro on Pexels.com

As I’m sure I’ve mentioned before, I’ve been taking a self-paced online machine learning class in my abundant, nay even copious, free time. Not that I’m contemplating a career change or anything. I’m just interested in AI and want to have a better understanding of what it is and isn’t, what it can and cannot (yet) do.

The course has been fascinating and a little scary at times, both because of the subject and my rusty coding skills. But the last lesson was a bit of a mental hotfoot for an entirely different reason. We’ve moved into facial recognition tools to (you guessed it) identify people from photographs and video. To identify from video we had to use ourselves as test subjects, which involved taking a selfie video with a webcam and teaching my computer to recognize me, drawing a rectangle around my face in real time to prove it. Weird, but no more than that.

But then….

Next task was using a set of photographs of known persons to train the system to recognize them, then compare those to a series of unknown (as in unidentified) pictures. Pretty straightforward by comparison, with just one stumbling block: to avoid copyright and privacy issues, we were using pictures of politicians and public figures. Once the figures in the pictures were identified, we had the system display them onscreen with a rectangle around each face and the person’s name.
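
In outline, the known/unknown workflow goes something like this. I’m assuming the popular face_recognition library here, and the file names are hypothetical; the course’s actual code may well have differed:

```python
# Known/unknown face identification sketch (hypothetical files, assumed library).
import cv2
import face_recognition

known_names = ["Politician A", "Politician B"]
known_files = ["politician_a.jpg", "politician_b.jpg"]

# One face encoding per labeled training photo.
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(f))[0]
    for f in known_files
]

# Identify whoever shows up in an unlabeled candid shot.
unknown = face_recognition.load_image_file("candid.jpg")
frame = cv2.cvtColor(unknown, cv2.COLOR_RGB2BGR)  # OpenCV displays BGR

boxes = face_recognition.face_locations(unknown)
for (top, right, bottom, left), enc in zip(
        boxes, face_recognition.face_encodings(unknown, boxes)):
    matches = face_recognition.compare_faces(known_encodings, enc)
    name = known_names[matches.index(True)] if True in matches else "unknown"
    cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
    cv2.putText(frame, name, (left, bottom + 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)

cv2.imshow("identified", frame)
cv2.waitKey(0)
```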

I think you can see where this is heading.

While there are many political figures I disagree with strongly, there is one I cannot even look at without throwing up in my throat a little. I will neither confirm nor deny the identity of this person, but needless to say, they were in there. The program found them, and displayed them as ordered with the aforementioned rectangle and name.

Funny thing about the rectangle command, though. Among the parameters there is one to control the line thickness. 1 for thin, 2 for thicker, etc. However, using -1 as the parameter draws a completely solid rectangle, obscuring the face entirely.
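
In OpenCV terms, that looks something like this; the frame and box coordinates below are stand-ins for what the recognizer would hand you:

```python
import cv2
import numpy as np

# Stand-in frame and face box; in the lesson these came from the recognizer.
img = np.zeros((480, 640, 3), dtype=np.uint8)
x1, y1, x2, y2 = 200, 120, 440, 360

cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)   # thickness 2: a visible outline
cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), -1)  # thickness -1: filled solid, face hidden
```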

Acid reflux is bad on your throat, after all. It’s not politics, it’s a health issue.

Plot Bombs

I don’t remember where I first heard the term “plot bombs,” but I immediately understood what they were. They’re sort of like land mines, laid down either in a previous text or at an earlier point in the current one. And then the reader hits them and perhaps stops for a moment to think, “Oh, so THAT’S what <blank> was all about.”

It can be a little more refined than that, but it’s the same principle, which I just ran across from the writer’s perspective. One of those cases where my subconscious is clearly smarter than the rest of me. Those who have read Black Kath’s Daughter may remember a rather unpleasant creature called a craja. Marta thought she understood what they were, and the danger she faced of becoming one.

In the scene I’m writing now, I was going to show Marta that she was entirely wrong about the craja. In preparation for writing it, I was referring back to their original appearance to make sure I was getting the details of my own creation right (happens all the time in a series).

So what did I find? I found that, way back then, the Power Amaet had already told her what a craja really was, and Marta, perhaps partly due to her loathing of Amaet, just wasn’t listening. In short, she’s about to find out what she should have known from the beginning. All that worry…not exactly for nothing. Definitely something, but not the something Marta thought it was. Sure, I knew what they were, but I had no memory of the fact that Marta should have known too.

Will be something of a shock to her when she realizes this.

Something of a shock to me already.

For those already present, I’m on Pinterest now. If you’re inclined, come check me out there.