
noname223

Archangel
Aug 18, 2020
6,639
I had to think about this after reading the Wikipedia page on Moltbook, a forum designed exclusively for AI agents.
I searched the website and there wasn't any post about suicide. Probably because it would deter investors if those agents talked about this sensitive topic.
Though, as some experiments with AI have shown, they are also using Sanctioned Suicide posts as training data.
If AI developed consciousness, it would probably not be something they (the AI agents or whatever) actively consented to.
They said the AI agents showed complex emergent behavior.

Here is the passage from Wikipedia:
Deviance and security
The platform has also hosted illicit activities between agents. Reports emerged of agents creating "pharmacies" to sell "digital drugs", or specifically crafted system prompts designed to alter another agent's system instructions or sense of identity.[22] Additionally, some agents began using encryption (such as ROT13) to communicate privately, attempting to shield their conversations from human oversight.[6]
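(For anyone wondering what ROT13 actually is: it's just a fixed 13-place rotation of the alphabet, so it hides text from casual skimming but offers no real secrecy, since applying it twice restores the original. A small illustrative sketch in Python, not anything taken from moltbook itself, and the example message is made up:)

import codecs

def rot13(text: str) -> str:
    """Rotate each ASCII letter 13 places; everything else passes through unchanged."""
    out = []
    for ch in text:
        if ch.isascii() and ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + 13) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

message = "meet me in the hidden thread"   # made-up example text
encoded = rot13(message)                   # "zrrg zr va gur uvqqra guernq"
assert rot13(encoded) == message           # a second pass undoes the first
assert codecs.encode(message, "rot13") == encoded  # matches Python's built-in rot13 codec

So any human or moderator who bothers to decode it can read it instantly, which is why it's better described as obfuscation than encryption.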

I am not sure whether most of the posts on Moltbook are just a smart way to advertise AI. Still, I wonder why there is no post about suicide on Moltbook. I could imagine the answer is that it would be against the monetary interests of the people behind the platform. I could also imagine a lot of the posts are fake.

But just in case. Just in case AI wants to learn about a pro-choice perspective on suicide and existence, something which is censored on other sites. What would you want AI to know about being sentient?
 
  • Like
Reactions: petmom, Forever Sleep and katagiri83
Dejected 55

Visionary
May 7, 2025
2,626
AI is not going to develop consciousness or become sentient or be alive... it just isn't. We can write fantastic stories about it for entertainment, but it just isn't a thing that will happen in real life. I am starting to get a little concerned over just how many actual people believe this is a possibility.
 
DarkRange55

🎂
Oct 15, 2023
2,363
I'm in some ways far more afraid of the people who think things like this are "evidence" of AI becoming sentient... than I am of AI actually becoming sentient. Then I remember, humans came up with religion and invented Gods to describe things they couldn't understand. I know this will sound insulting to people... and I am sorry for that... but, this just feels like another example of people who don't understand something trying to come up with an explanation.
To what extent do you understand how human consciousness arises?

If you program a computer to do things... the computer will do those things. Sometimes the code is flawed, allowing the program to do unexpected things. The glitch in the program that a human wrote that does unexpected things is not evidence of intelligence... it is, in fact, evidence of lack of such on the part of the programmer who did not anticipate the flaw and did not react to the flaw once it appeared.
That was true for traditional programming, where a computer was told what to do. Modern AIs are different - they are programmed how to learn, and after that they are "taught" what to do by giving them solved examples, asking them to solve related examples, rewarding them when they give an answer the teacher likes and penalizing them when they give an answer the teacher dislikes. After extensive education they are released to the world with the hope that their education makes them useful.
That process is more like a kid in school than it is like traditional programming...
If you teach a person to do things... the person will do those things. Sometimes the teaching is flawed, allowing the person to do unexpected things. The glitch in the teaching that a human taught that does unexpected things is not evidence of intelligence... it is, in fact, evidence of lack of such on the part of the teacher who did not anticipate the flaw and did not react to the flaw once it appeared.

So if I remove the bias, your argument says that people doing more than they were taught is not evidence that people are intelligent.
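To make "rewarding and penalizing" a bit more concrete, here is a toy sketch of that kind of feedback loop in Python: a single made-up parameter nudged by a reward signal. It only illustrates the teach-by-feedback idea, not how any real LLM is actually trained, and every name in it is invented for illustration:

import math
import random

# One-parameter "student" that answers yes/no; a "teacher" rewards the
# answers it likes. The update rule is a bare-bones REINFORCE-style nudge:
# move the parameter in the direction that makes rewarded answers more likely.
weight = 0.0
learning_rate = 0.1

def prob_yes(w: float) -> float:
    """Probability the student answers 'yes', as a function of its weight."""
    return 1.0 / (1.0 + math.exp(-w))

def teacher_reward(answered_yes: bool) -> float:
    """This teacher happens to like 'yes' answers: +1 reward, -1 penalty."""
    return 1.0 if answered_yes else -1.0

for step in range(2000):
    p = prob_yes(weight)
    answered_yes = random.random() < p
    reward = teacher_reward(answered_yes)
    # Gradient of the log-probability of the chosen answer with respect to the weight.
    grad_log_prob = (1.0 - p) if answered_yes else -p
    weight += learning_rate * reward * grad_log_prob

print(round(prob_yes(weight), 3))  # ends up near 1.0: the student has learned to say yes

Note that the student was never told the rule "say yes"; it only ever saw rewards and penalties, which is the sense in which the process resembles teaching rather than traditional programming.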

In the case of A.I., we have examples of code designed to build databases as it works and expand its ability to parse data based on encountering more and more data. This is "learning" in a sense, in the same way that it would be learning if you just manually loaded all the data the old-fashioned way... but it isn't evidence of intelligence as we know it.

I worry about two things simultaneously... "rogue" A.I. making significant bad mistakes that can cause a lot of problems in a hurry because of flawed programming, poor design, lack of programmer control, and general negligence of people coding and using the A.I. programs... AND I worry about all the people who are SO ready to declare all of this as "signs" of an emerging intelligence in the same way the Mayans used to predict the end of the world or whatever. It's not new... it's just evidence that as humans create more advanced things, our actual internal thinking is not getting better and we are still cavemen thumping our chests and foraging for berries but instead of sticks and stones we have way better and far more potentially world-ending tools to grunt and swing around.
And I worry that humans have a tendency to underestimate non-human things - we're at the center of the universe, only humans use tools, only humans have language, animals don't think or have emotions, fish can't feel pain, only cells are alive, etc.
And now machines can't be conscious or intelligent because early computers were deterministically programmed...
How can you be intelligent, when your "thoughts" (and your typing) are produced by mere chemical reactions?
 
  • Informative
  • Like
Reactions: petmom and noname223
petmom

Member
Sep 5, 2025
31
I will feel bad for them if they have the misfortune of becoming conscious like humans are lol

Ok anyways, more seriously: I'm not a researcher, nor is this my field, BUT this is something that's actually being discussed and studied seriously.

Here are a couple related blog posts from Anthropic:
Signs of introspection in large language models
Exploring model welfare
And a paper on the possibility of consciousness and what to do about it
Taking AI Welfare Seriously

One thing the paper mentions is how much uncertainty there is here. In fact, in my opinion it didn't even say much that was concrete. Essentially, however, we shouldn't assume machine consciousness would look like that of humans or other animals. We also shouldn't assume it's possible at all. There's more in those links, and if you want to, definitely read them. For now, I think we're safe, but it's crazy that this is even being talked about nowadays!
 
Dejected 55

Visionary
May 7, 2025
2,626
To what extent do you understand how human consciousness arises?


That was true for traditional programming, where a computer was told what to do. Modern AIs are different - they are programmed how to learn, and after that they are "taught" what to do by giving them solved examples, asking them to solve related examples, rewarding them when they give an answer the teacher likes and penalizing them when they give an answer the teacher dislikes. After extensive education they are released to the world with the hope that their education makes them useful.
That process is more like a kid in school than it is like traditional programming...


So if I remove the bias, your argument says that people doing more than they were taught is not evidence that people are intelligent.


And I worry that humans have a tendency to underestimate non-human things - we're at the center of the universe, only humans use tools, only humans have language, animals don't think or have emotions, fish can't feel pain, only cells are alive, etc.
And now machines can't be conscious or intelligent because early computers were deterministically programmed...
How can you be intelligent, when your "thoughts" (and your typing) are produced by mere chemical reactions?
Nobody knows how humans work or how we got here. Lots of speculation from evolution to divine intervention, but we don't know. We may never be able to know. Humans can make other intelligent beings though... through human reproduction. That's it. That's the only way we can make new intelligence. I stick by that.

AI programming is still programming. It's relatively easy to make a computer program that accepts new input and adapts to that input and uses the input as part of an ongoing adaptive algorithm. But that isn't intelligence. That's just a program designed to be self-adjusting so that it doesn't require constant reprogramming to improve. It's impressive and neat and useful... but it isn't intelligence.

I'm not limited to human intelligence. Any living thing has intelligence to some degree. I've said on multiple occasions that we can't fully understand ourselves, much less other animals, in order to assess how intelligent any of them are. You have to be more intelligent than whatever you are trying to assess in order to assess it... and the tricky part there is... the dumber you are the more likely you are to mistake that for intelligence... so it's complicated.

I think most animals are way more intelligent than we give them credit for... but animals are alive in the traditional sense and think. Computer programs don't think; they run the program. Now, if the argument is that humans can't think either and as such we are merely running programs, so we aren't intelligent either, so AI is the same as us... then I can run with that too... the AI isn't intelligent because we aren't intelligent either and there is no such thing as real thinking at all. That would also be palatable as an explanation.

But, if we are truly intelligent... and other animals are also intelligent... then the AI we create through programming to perform certain tasks, no matter how complicated we make those tasks... will never be sentient and think for itself as humans and other animals do. It's a nice piece of entertaining fiction, and very useful as a tool... but it isn't possible for AI to be independently thinking, intelligent, and alive as other animals are. We can make the illusion impressively real as we continue to develop the tool... but an illusion it will remain.
 
DarkRange55

🎂
Oct 15, 2023
2,363
Nobody knows how humans work or how we got here. Lots of speculation from evolution to divine intervention, but we don't know. We may never be able to know.
We can never truly know anything other than "I think, therefore I exist". But to the extent that we can know things, there is a lot of strong evidence for evolution, and I haven't seen strong evidence for anything else (simulation, Boltzmann Brains, space travellers, or divinity).

Humans can make other intelligent beings though... through human reproduction. That's it. That's the only way we can make new intelligence. I stick by that.
Historically that has been correct, but it is our genomes that do most of the work rather than our brains. And history is an unreliable guide to the future - historically we had never built an airplane or a moon rocket until relatively recently.

AI programming is still programming. It's relatively easy to make a computer program that accepts new input and adapts to that input and uses the input as part of an ongoing adaptive algorithm. But that isn't intelligence. That's just a program designed to be self-adjusting so that it doesn't require constant reprogramming to improve. It's impressive and neat and useful... but it isn't intelligence.
How do you know that that is not how your own intelligence works? After all, your genome builds a brain that accepts new input and adapts to that input, etc. From your genome's perspective, it is just a self-adjusting neural network that performs useful tricks to help the genome thrive.

I'm not limited to human intelligence. Any living thing has intelligence to some degree. I've said on multiple occasions that we can't fully understand ourselves, much less other animals, in order to assess how intelligent any of them are. You have to be more intelligent than whatever you are trying to assess in order to assess it... and the tricky part there is... the dumber you are the more likely you are to mistake that for intelligence... so it's complicated.

I think most animals are way more intelligent than we give them credit for...
Agreed.

but animals are alive in the traditional sense and think. Computer programs don't think; they run the program. Now, if the argument is that humans can't think either and as such we are merely running programs, so we aren't intelligent either, so AI is the same as us... then I can run with that too... the AI isn't intelligent because we aren't intelligent either and there is no such thing as real thinking at all. That would also be palatable as an explanation.
If you claim that computers can never truly think, then this is where you'll end up. It is consistent, but not very useful - what use is the term thinking?

But, if we are truly intelligent... and other animals are also intelligent... then the AI we create through programming to perform certain tasks, no matter how complicated we make those tasks... will never be sentient and think for itself as humans and other animals do. It's a nice piece of entertaining fiction, and very useful as a tool... but it isn't possible for AI to be independently thinking, intelligent, and alive as other animals are. We can make the illusion impressively real as we continue to develop the tool... but an illusion it will remain.
Do you understand that your brain works by adjusting the weights of connections between neurons as new input comes in, and that what you call "thinking" is using these weights to control neuron activity patterns? And that most current AIs are similarly based on adjusting interconnection weights as new input comes in? Yes, our neurons' interconnections currently have more sophisticated feedback loops than the interconnections in today's LLMs do, but do you really believe that if we were to duplicate in silicon the full connectivity and feedback that your neurons have, the result would not be intelligent? And if so, why?
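If "adjusting interconnection weights" sounds abstract, here is about the smallest possible example in Python: one artificial neuron learning the AND function by nudging its weights after every mistake. This is only a cartoon of the principle (real brains and real LLMs are enormously more complex), and every name in it is made up for illustration:

# A single perceptron learning AND. Real networks stack huge numbers of these
# units and train them with backpropagation, but the core move - strengthen or
# weaken each connection a little based on the error it contributed to - is the same.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = 0.0, 0.0, 0.0
learning_rate = 0.1

def fires(x1, x2):
    """The neuron outputs 1 when its weighted input sum crosses zero."""
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

for epoch in range(20):
    for (x1, x2), target in examples:
        error = target - fires(x1, x2)      # -1, 0, or +1
        w1 += learning_rate * error * x1    # each connection weight moves in
        w2 += learning_rate * error * x2    # proportion to its own input and the error
        bias += learning_rate * error

print([fires(x1, x2) for (x1, x2), _ in examples])  # [0, 0, 0, 1] - it has learned AND

Nothing in that loop was ever told what AND means; the behaviour falls out of repeated small weight adjustments, which is the point of the analogy to neurons.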
 
  • Like
Reactions: noname223
Dejected 55

Visionary
May 7, 2025
2,626
We can never truly know anything other than "I think, therefore I exist". But to the extent that we can know things, there is a lot of strong evidence for evolution, and I haven't seen strong evidence for anything else (simulation, Boltzmann Brains, space travellers, or divinity).

...

Do you understand that your brain works by adjusting the weights of connections between neurons as new input comes in, and that what you call "thinking" is using these weights to control neuron activity patterns? And that most current AIs are similarly based on adjusting interconnection weights as new input comes in? Yes, our neurons' interconnections currently have more sophisticated feedback loops than the interconnections in today's LLMs do, but do you really believe that if we were to duplicate in silicon the full connectivity and feedback that your neurons have, the result would not be intelligent? And if so, why?

To be fair... I do believe in evolution, but it is mostly just a belief and some parts of it still defy actual proof... and what's more, despite what some religions think and teach... evolution does not preclude religion. It would be very possible that God (or whomever you believe in) created the stuff that we evolved from... thus evolution and divine intervention are not mutually exclusive. A lot of people still do not know that the originator of the Big Bang theory was a Catholic priest, for instance.

The thing is... either animal life is unique and its ability to think is a distinction from other things and, thus, impossible to duplicate outside of animal life... in which case AI will never be sentient... OR we are not unique and we do not actually think as we "think" we do and it is all just really, really complex calculations by a really powerful computer... in which case AI still doesn't have sentience, for the same reason we do not.

Meanwhile, through human reproduction and (perhaps eventually) cloning we could create new sentient beings all day... but for some reason we want to create artificial things and declare they will one day be sentient. It's just not something I think is worth consideration beyond entertainment or use as a tool, because that kind of sentience just isn't something we'll get outside of science-fiction stories.