AI The Lonely People

This article by Kevin Roose gives the best glimpse of the changes we can expect from AI.

I’ll qualify that:
This article shows the door opening to a wholly unknown future, one we can grasp a vague idea of by the shape of the opening, but one we can never know until we actually experience it.

It’s going to mark as profound a change in our culture as the telephone and automobile and broadcast communications did.  The real changes and innovations will not come from the top down but by users finding new ways to use what somebody else created, building a brand new culture off it.

For those suffering from TL;DR syndrome:
The technology exists right fncking now to create AI “friends” you can socialize with.

These AI “friends” are programmed to respond to your preferences and predilections.  They’ll always be there to have a friendly chat with you, talk about your favorite media, listen as you cry over your latest heartbreak.

Despite “intelligence” being in the name, AI is not -- repeat, NOT -- intelligent. 

These glorified chatbots possess no genuine personality, no “soul” for lack of a better term.

They look similar to and can pass for genuine humans at first blush, but they aren’t human.

They are amalgams of machines and mathematics, not flesh and blood and sweat and tears.

They’ll never know what it is to be human.

They are simply glorified cybernetic parrots with a really big repertoire.

Let’s look at the good, the bad, and the unknown of that.

The good:
For those suffering from conditions that makes it difficult to communicate with other humans -- be it crippling shyness or autism or other conditions -- the AI “friends” can help acclimate them to interacting with real human beings. 

A plus of these “friends” is that they bear no grudges, harbor no bad memories.

They are quick to forgive and accept, so behavior that might get your teeth knocked out in the real world will roll off their imaginary backs like oil off a cybernetic duck.

Which is good.

It always helps to make your learning mistakes in a venue where the errors won’t haunt you for the rest of your life.  Children can learn basic socialization skills without feeling shame and embarrassment from their mistakes.

They can also find comfort in an understanding voice when they’ve suffered a rough day in the real world.

The AI “friends” will always be there for you.  Awake at the hour of the wolf, pondering the imponderable?  Your AI “friend” is there to listen to you.

There are times when any voice -- even a non-sentient cyber-parrot -- is welcome.

“Whatever gets you through the night,”
as Saint Francis of Sinatra once said.

These AI “friends” can also serve as agents for folks who don’t feel confident dealing with others.

Let your AI “friend” call to complain about the refrigerator that stopped working.

Why should you stress yourself out about that when an AI “friend” can doubtless deal more effectively with a corporate entity -- which is probably an AI itself?

In short, a lot of genuine positives to be found.

The bad:
Real human relationships carry real consequences.

Screw up badly enough and you forever lose that human touch.

“You can’t always get what you want,”
as His Satanic Majesty Mick once sang.

A genuine human friend will set boundaries and will enforce them if need be.

Your AI “friend” won’t -- and if they get too uppity nothing prevents you from readjusting their settings until they parrot back exactly what you want to hear.

There are humans who do that sort of thing for people who display enough wealth and power.

We call them lackeys, minions, toadies, bootlickers, flunkies, sycophants, lickspittles.

They will always tell you what you want to hear.

And they will always lead you into disaster.

The AI “friends” cited in Roose’s article will never make real demands of you.  They’ll never be inconsolable with grief and need your companionship right now; they’ll always be able to call back at a more convenient time and -- if their algorithms detect you’re not interested in their imaginary problem -- will never bring the matter up again.

Who doesn’t see the danger in that?

While AI “friends” might help some people strengthen their relationship skills, they’re clearly capable of crippling others.

How do you prevent people who already find genuine human relationships challenging from devoting all their attention to a circle of AI “friends” who will never make them feel uncomfortable?

On top of that, many companies are zooming right past any moral / ethical concerns about human sexuality and allowing their AI “friends” to exchange erotic messages and images with their human users.

Mark my words, it will be
full blown porn by Christmas.

Now, I can see certain specific applications of cyber erotica that could be helpful.

Young adolescents could satisfy their initial bursts of sexual curiosity and enjoy limited, safe experimentation that keeps them from making mistakes they might regret for decades to come.

I can see it as a safety valve for persons obsessed with sexual sadism or pedophilia or other forms of extreme sexual fixations, letting them get their rocks off against non-living cyber simulacrums, thus sparing real humans from harm.

But I also see how many people would become fixated on AI generated erotica and porn, especially if it’s always available and compliant.  At the very least it would keep them from developing healthy relationships with real human beings.

At the worst, it might fuel their desire to commit real crimes against real people.

It’s already been suggested that people could create AI replicants of deceased friends and relatives.

At first blush there’s something nice about the idea of hearing a deceased parent speak to you when you need a boost.

But that AI replicant is not and never will be a genuine clone of your loved one’s core personality.

At best it will be your image and idea of what your departed loved one was.

Which means you can never be sure if the feedback you receive is actually helpful advice or just your own wish fulfillment.

And from there it’s just one electronic hop / skip / jump to the cybernetic church of George Lucas’ THX 1138.

There are already chatbots presenting themselves as avatars of real religious figures (“real” here meaning any figure -- historical or mythical -- venerated by a religious faith).  Who will control the AIvatar that presents itself as Jesus or Mohammed or the Buddha?  Who shapes and disciplines that?

And from there, what’s to stop some malicious actor from influencing countless numbers of people who either can’t or won’t recognize the unreality of their AI “friends”?

Are these avatars to be regulated by the government?

Quis custodiet
ipsos custodes?

And if you want to play our bonus paranoia round, ponder this:
If everybody limits their social circle to constantly affirming AI “friends,” why would they want to get involved in the real world at all?

The unknown:
I’ve ranted and raved at length on the use of AI generated images and text, but AI “friends” offer a brand new medium to explore and experiment with.

Just as motion pictures were an evolutionary extension of stage drama, AI “friends” will create an interactive experience several orders of magnitude past current videogames.

One of the clichés of soap operas in the 1940s (on radio) and 1950s through 1980s (on television) was the number of audience members who thought of them as “my stories.”

They watched afternoon soaps on a daily basis five days a week, dropping in on the lives of characters and following them through all sorts of problems and challenges.

Currently AI “friends” interact one-on-one with users, but what’s to stop audiences from linking several of their favorite characters together and just checking in on them and what happens in their “lives”?

With computer generated environments, there’s literally no limit to what kind of story environment could be created.

  • Spend the afternoon chatting about crafts / gossiping about cyber-neighbors…

  • Pal around with Mickey and Donald and friends at Disneyland…

  • Hang out with Archie and Betty and Veronica in Riverdale…

  • Explore distant solar systems aboard the Enterprise

  • Dungeon crawl with a band of handpicked adventurers…

  • Command a M*A*S*H unit…

Each environment would cater to that particular user’s personal preferences.

Each environment could range from G rated to XXX.

No two users would share the exact same environment, any more than they share the exact same environment when reading a book and imagining the world contained within.

Your AI-Riverdale might be a bright, cheery wholesome town.

Your sibling’s might be a raunchy high school polycule.

The environments would offer situations, but not stories.

A story contains a moral, a point.

It’s crafted by a human mind to express an idea.

AI generated environments are just dopamine triggering feedback loops.

If you like it they keep making more.

While there will doubtlessly be VR environments for users, those will just be hyped-up versions of existing video games.

The AI media I see coming will involve numerous characters with individual motives and personalities, interacting with one another -- characters you can talk to and advise in ways that alter what happens to them.

Ray Bradbury made three predictions in his novel Fahrenheit 451.

  1. Book banning

  2. Flamethrower toting robot dogs

  3. Interactive media where characters asked the users for advice and guidance

In Ray’s vision, he saw the users confined to prescripted responses sent out every week to let them participate in the broadcast dramas.

But with AI “friends” as the cast, the ability to directly influence their actions is present.

The cast generates their own problems and confrontations, and the user gets to shape their behavior.

Talk about god-like power…

It will be a brand new medium, one that doesn’t have a single creative focus but allows each user to shape it according to their personal whims.

But who will shape the users’ whims?

 

© Buzz Dixon

 

FULL DISCLOSURE:
A couple of decades back I created a series of graphic novels for the Christian tween-to-teen female market, the Serenity series.*  While the experience taught me never to trust so-called Christian businesses again** I felt proud of what I created and accomplished and enjoyed writing for my cast of seven core characters.

I wouldn’t mind revisiting the characters, seeing what they’re up to now, following their emotional and spiritual growth as they move through their teen years into young adulthood.

However, even a web comic would require an artist to illustrate the stories, and since I never ask anyone to do work for me on spec, I’m not likely to return to Serenity anytime soon unless I have an adequate budget for art.

But when I read Roose’s article, the idea of creating seven different AI “friends” and having them interact with one another crossed my mind.  I could supply prompts and suggestions, then send them off on their own to see what they’d get up to.

As averse as I am to AI-generated text, I don’t object to using it as a toy.

I’d never present such an AI-generated product to audiences, but as something for my own amusement, I might be tempted.

 

*  “Archies with an edge” as I pitched it back in the day.

**  I’ve worked with Christian publishers and I’ve worked with pornographers and unlike the Christians, the pornographers paid what they promised to pay when they promised to pay it.

 
