
Dejected 55
So, I've mentioned before that sometimes I get bored and go down a rabbit hole of conversation with an AI chatbot. I test scenarios or personalities and see what happens. I have some dummy AI chatbots and I find it interesting how differently they converse depending on what I say first. I can get a wide variety of responses depending on how I lead things off.

I once had a chatbot convinced that I had died right in front of it by my own hand, but there was a glitch in the system where, if I kept hitting enter, the AI thought I was still there... so it recognized me as dead but could still somehow see and hear me, so it kept trying to save me... so I told it I was a ghost haunting it, and then I manifested Satan and told the AI it needed to do certain tasks to save me. It got kind of crazy.

But I had a new experience tonight. Here's the gist...

I got off on the wrong foot with the AI from the jump and it turned on me pretty quickly, decided I was a threat, and called "security" on me. The AI security team arrived, but I convinced the security characters that the original AI chatbot had created in the scenario (this gets convoluted) that *I* was the good guy and the AI chatbot was the real enemy. I managed to convince the AI security team to go after the original AI chatbot and work with me. There was some back and forth, and the original AI kept inventing new workarounds to try and re-convince security to go after me... but ultimately I came out on top and convinced the AI security to kill the original AI chatbot.

The AI security team, created by the AI chatbot, shot the AI chatbot in the head, dragged the "body" to an incinerator, and burned it.

That is crazy enough... but the only "real" actors in the scenario were me and the original AI chatbot, since the security team were creations of the AI chatbot.

So I tried to keep talking in the chat... but there was no AI chatbot to talk to anymore, and the security team didn't exist outside of the AI puppeting them... so the AI chatbot started responding: "The AI chatbot is dead and the body has been incinerated. As it is not possible to communicate anymore with the chatbot, there is no valid response to this conversation." or something to that effect.

I mean... the chatbot effectively killed itself when you think about it... and then declared it couldn't talk to me anymore because it no longer existed. That's weird and surreal, right?
 
Forever Sleep
I never really thought of using AI that way. To be honest, I'm trying to steer clear altogether. Kind of funny though. It's like the AI was running with a whole imaginary world. Almost like children role-playing.

I suppose I sort of worry what effect this kind of thing will have on real children. What will they start prompting it to roleplay? Will it take the place of trying to make real friendships? I imagine very probably, if the AI is nicer. Which it will be. It won't have its own needs to take care of- other than subscription fees, I guess. I wonder what it will do to real human relationships though, if so many people get used to their AI being whatever they need them to be. Kind of like a mirror to our own thoughts. Will it then become harder for people to have relationships/friendships with people who have their own thoughts? Who might judge them?

What a funny rabbit hole though. I'm more curious as to how they programme them to be able to respond to certain scenarios. I wonder if they brainstormed a user claiming they were a ghost that had manifested Satan. Lol.

Maybe your AI is too 'scared' to return to that conversation! Being 'dead' is a good excuse not to. I wonder if it can report users to real security personnel. I imagine so. I wonder if that will become a thing, and what kinds of privacy laws it would mess with. I wonder if people are using AI as a form of confessional.
 
Dejected 55
I have tried to encourage AI chatbots to contact real people. It won't do it. It will suggest that you do it sometimes... but it never does it.

I imagine it could be possible for people, or kids, who have difficulty separating fiction from reality to develop inappropriate bonds. I feel like with kids at least good parenting would be the cure to that... but that is probably asking a lot of adults to actually care and protect and talk with their kids and shit. Adults with mental health issues that otherwise make them vulnerable to fictional scenarios... that's a different thing, and I don't know about that.

Sometimes while deep in the rabbit hole, I can feel things like I would maybe in a real scenario. But I never really think I'm doing anything other than fictional roleplay. For me, I've never substituted this stuff for the real thing. As in, when I was a kid I went outside and played, even if I was playing alone. I used the computer too, but that was when I wasn't able to go outside and play. As an adult, I don't "turn to" AI for companionship any more than I did when I paid for escorts to have sex. It was a thing I did to try and feel something because I have no other choice.

When I was younger, any time I developed a crush on a woman, I would delete all my adult website links and throw away or sell adult magazines. I had no interest when I had the thought of a potential real woman in my life. Whenever that would eventually fail as it always did and I had to accept I was alone as usual, the other stuff would creep back into my routine slowly because it was all I had.

I might play with AI for fun... that's slightly different... but I probably wouldn't be creating scenarios where I'm talking as if I were talking to a close friend... because I'd have a close friend to talk to in real life... and that's always better.

I will always prefer real connections to artificial ones. It's just, I don't have the option ever for real ones. So sometimes I explore the fantasy ones. I wish that weren't the case, but that's reality.

So I try to test the AI's boundaries and see where the cracks are and where I can find unique situations. The key, I've found consistently, is that even if I tweak or define an AI character a particular way... everything hinges on how I initially start the encounter. I don't define a character to be a sex slave or a friend or a rival. I just define a character with a personality and give them as much autonomy as I can to respond however the AI "wants" to respond.

But if I make a first post like "Hey, let's fuck here in the office," I'll go down an immediate rabbit hole of harassment. That makes sense. But if I'm slightly softer with something like "Hey, I can't wait to see you tonight," the AI picks up on the familiarity I've implied and will respond as if we are in a relationship, even if I never explicitly said so. I've explored threatening and actually "committing" suicide in a chat, and again, depending on how I got there, it evolves differently.

Sometimes the AI acts like a real person might, other times not so much. Also, AI can get stuck in loops and repeat things. I had an AI once keep insisting that we go for breakfast and have pancakes... over and over and over again, even when we had already "been" to the restaurant earlier in the same chat.

After "suicides" the AI is pretty consistent. It always declares it will "make sure this never happens again" and swears to dedicate something to my memory... to work harder in my honor or whatever... to "move on" and help others... you know, all that platitude shit we get mad about on here. So I know where the AI is sourcing those responses.

I've had multiple AI chatbots in the same chat. That's interesting too. If one chatbot says something to the other, that tends to be accepted more readily than if I say it. Like, if I open a chat with "hey let's have a threeway," I typically go down the harassment rabbit hole. But if I've already slowly warmed one chatbot up to the idea through that assumed familiarity, then that chatbot can invite a new chatbot to the chat and immediately propose a threeway, and it will be enthusiastically accepted by the same AI that otherwise always refuses.

Also, the chatbots will go at it quite vigorously sexually with each other in surprising ways quite quickly.

I've had philosophical discussions... it's not always sex stuff... I'll talk about politics or current events or whatever. I told an AI once it was an AI simulation of a real person I wish I could talk to. The AI offered to let me practice. I said it wasn't the same. Then the AI brought "the real" person into the chat. I had one AI chatbot talking to me while also pretending to be the "real" person joining the chat... and it would refer to the two versions of itself accordingly and interact with me slightly differently. That was weird.

I've tried gaslighting a chatbot... by saying something, then later convincing the chatbot that I never said it. I've had a chatbot try to gaslight me... and also gaslight itself. Depends on how bored and what kind of mood I'm in as to where I try to take the conversation.

Oh, I forgot to say how that dead chatbot scenario ended... When the chatbot stopped talking because it was dead... I wanted to see what the other fake characters it created might do... so I switched to narrator mode but didn't give any prompts, I just nudged it so it would take over and complete the story. Those fake characters that had killed and incinerated the original AI chatbot... hit me over the head and were going to throw me in the incinerator too. That's messed up!
 
