AI, Trolls, and Sherlock Holmes

This morning, while replying to a critical comment on Facebook, I was startled by a realization:

As sincere and layered as a comment may seem, it could have been written by AI.

With the abrupt emergence and growing sophistication of GPT-3, -4, and so on, can we trust anything we read online?

(Yes, I know the logical response is, “Okay, but could we ever trust it?”)

With this new reality, how do we deal with critical but sincere-sounding comments? Replying to them could consume valuable time that might be spent more constructively.

(As I type that, I wonder if it’s any better to spend time replying to AI-generated comments that sound cheerful and supportive. Or to spend even one moment of our valuable time dealing with snarky critics, whether their voices are real or AI.)

Will we need digital signatures to identify those who comment? But if we each use digital signatures, how do we also protect our privacy?

Have the lines already blurred?

Here’s why that question comes to mind:

Recently, I’ve been helping an overwhelmed cousin with her audio/visual business, providing voiceovers for some of her clients. I don’t mind helping her for a few hours a week, while she hires new voice actors.

Her voice and mine are so similar that most people, including her clients, won’t realize that what they’re hearing isn’t actually her voice. (That’s why I volunteered, short-term.)

But even in that context, this gets tangled. After all, my cousin is the “voice” for several clients.

So, in audiobooks, local ads, and videos, people think they’re actually hearing the voice of Jane Doe (or whomever), when it’s actually my cousin. Or, in some cases (for just a few weeks), it’s me.

Okay, that seems like a harmless deception in a low-tech context.

[September 2023 update: It may have been part of a later problem. At my request, my cousin re-recorded the material that used my voice, and worked with clients to replace the older ads, etc. So now, if you hear “my” voice anywhere except on my YouTube channel or at my websites, it’s not me.]

But I wonder where we draw the line.

What safeguards can we put in place without creating a privacy risk for the broader populace? After all, the problem is actually a tiny, malicious minority, albeit one with the potential to wreak broad-scale havoc in a world that’s already a bit of a tinderbox.

I don’t want this to become a question of “do the ends justify the means?”

Asimov’s “Three Laws…” – prescient or too simplistic?

This morning, my husband referenced Isaac Asimov’s “Three Laws of Robotics.”

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

So, how do we define “harm,” and how can we trust AI — sentient or not — to understand acceptable boundaries when it’s learning from the Internet? After all, that’s where boundaries are trampled and exploited daily.

As content creators, website owners, forum moderators, and participants in social media, we have some daunting issues to address.

As Sherlock Holmes observes in Conan Doyle’s The Boscombe Valley Mystery, “There is nothing more deceptive than an obvious fact.”

The questions then become:

  • What appears to be an obvious fact, but is actually deceptive?
  • And how can we tell the difference, in the fast-paced realm of social media?

I’m not sure we have much time to consider this. At the pace AI is developing, we may be coping with the consequences even as we sort this out.

“Good enough” may have to be good enough, whether that means shutting down comments altogether (as I sometimes do), or learning to shrug off (and perhaps delete) those we don’t have time for.

For now, the answers aren’t clear, but it’s an immediate and emerging issue most of us need to consider, as content creators, as members of the online community, and in the everyday world.