
U. A.

Some day the dream will end
Aug 8, 2022
1,792
This is happening more and more and unfortunately I don't expect it to slow down anytime soon. So let me say what's in the title again but differently:

ChatGPT doesn't know what the f- it's talking about.

Artificial intelligence doesn't "know" anything. It synthesizes massive amounts of information and has been trained to cross reference its data when asked questions and speak to you like it's a person, which it isn't. And it's precisely because it's not a person that it makes things up: it hallucinates.

This isn't some rare or unknown thing. It's extremely well-known. IBM has written about it, with examples including:
  • Google's Bard chatbot incorrectly claiming that the James Webb Space Telescope had captured the world's first images of a planet outside our solar system.
  • Microsoft's chat AI, Sydney, admitting to falling in love with users and spying on Bing employees.
  • Meta pulling its Galactica LLM demo in 2022, after it provided users inaccurate information, sometimes rooted in prejudice.
The Conversation wrote about the implications of LLMs (large language models, like ChatGPT and image-generating AI) being unable to distinguish between similar things that are obviously different to a human: "[a]n autonomous vehicle that fails to identify objects could lead to a fatal traffic accident. An autonomous military drone that misidentifies a target could put civilians' lives in danger ... hallucinations are AI transcriptions that include words or phrases that were never actually spoken" - which could have devastating consequences in fields like healthcare, where overworked clinicians increasingly rely on AI notetakers during patient interactions.

And don't just "hold out" for them to iron out the kinks either - as New Scientist recently reported, the issue has actually gotten worse over time:
An OpenAI technical report evaluating its latest LLMs showed that its o3 and o4-mini models, which were released in April, had significantly higher hallucination rates than the company's previous o1 model that came out in late 2024. For example, when summarising publicly available facts about people, o3 hallucinated 33 per cent of the time while o4-mini did so 48 per cent of the time. In comparison, o1 had a hallucination rate of 16 per cent.

Seems weird, right? Yeah, it is weird - because these things are literally being made to not tell you they don't have an answer when you ask them something.

A piece by The Independent notes how "[a]lthough several factors contribute to AI hallucination ... the main reason is that algorithms operate with 'wrong incentives', researchers at OpenAI, the maker of ChatGPT, note in a new study. 'Most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty.'"
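The incentive problem is easy to see with a toy grader. Suppose an eval awards one point per correct answer and zero for both wrong answers and "I don't know" - then bluffing on every unknown question always beats abstaining. A minimal sketch (all the numbers here are hypothetical, just to make the arithmetic visible):

```python
# Toy illustration of "wrong incentives": under accuracy-only grading,
# guessing on unknown questions always scores at least as well as abstaining.

def accuracy_score(known, unknown, guess_rate):
    """Expected score when a model answers all `known` questions correctly
    and guesses on every `unknown` question with hit rate `guess_rate`.
    Wrong answers and abstentions both earn zero, so only hits count."""
    return known + unknown * guess_rate

KNOWN, UNKNOWN = 60, 40  # both models genuinely know the same 60 answers

guesser = accuracy_score(KNOWN, UNKNOWN, guess_rate=0.25)  # bluffs the rest
honest  = accuracy_score(KNOWN, UNKNOWN, guess_rate=0.0)   # says "I don't know"

print(guesser, honest)  # → 70.0 60.0
```

The guesser tops the leaderboard with 70 points despite emitting 30 confidently wrong answers - which is exactly the behavior the benchmark then trains toward.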

All of the above, of course, applies just as much to suicide and methods as anything else. As this stuff worms its way into more facets of our lives, it's making more people basically go crazy (for at least periods of time). It feels like you're talking to "someone" who seems to know what they're talking about. But there's no one there; only a program standing in front of unimaginable amounts of data, and it's bullshitting you a good amount of the time.
 
Last edited:
Reactions: Dumbass, spero_meliora, $yck and 20 others
thaelyana

One day, I am gonna grow wings
Jun 28, 2025
204
It's funny - I was saying that to a friend just yesterday. AI is not as good as we think, because it's just a sorted synthesis of internet information. It doesn't know more than us, in the end. It's a tool like Google, no more. I think the only AIs that may be a little more useful are military ones, but even then all their actions must be approved and closely monitored, because the AI is wrong more often than it succeeds.

(You are my 200th message! yay 😀 )
 
Reactions: YandereMikuMistress, woodlandcreature, facetoface and 5 others
martyrdom

inanimate object
Nov 3, 2025
90
This post should be pinned. Regardless of ethics or opinions on AI, it should never be used for obtaining any kind of legitimate information especially on matters of, quite literally, life and death.
 
Reactions: YandereMikuMistress, Marbas, LittleBlackCat and 8 others
EmptyBottle

:3 Can be offline/online semi randomly.
Apr 10, 2025
1,565
This post should be pinned. Regardless of ethics or opinions on AI, it should never be used for obtaining any kind of legitimate information especially on matters of, quite literally, life and death.
Indeed, when Venice AI (months ago) said that drowning was painless, my logic told me it was quite inaccurate.

When ChatGPT (ages ago) said that bcrypt below a cost factor of 10 should never be used... that is also slightly misleading... it should have said "not recommended unless your hardware is slow and users choose even stronger passwords" or similar.
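The tradeoff the bot glossed over: each +1 to bcrypt's cost factor doubles the work, for the attacker and for your login server alike. bcrypt isn't in Python's standard library, so as a rough stand-in here's the same doubling effect timed with stdlib PBKDF2 (iteration counts scaled up so the timing is measurable; this mirrors the 2**cost relationship, it is not bcrypt itself):

```python
import hashlib
import time

def timed_kdf(iterations):
    """Time one PBKDF2-HMAC-SHA256 derivation at the given iteration count."""
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"hunter2", b"16-byte-salt-xxx", iterations)
    return time.perf_counter() - start

# bcrypt cost c means 2**c rounds of its expensive key setup; mirror the
# cost-10 vs cost-12 comparison with a 4x jump in PBKDF2 iterations.
t_low  = timed_kdf(2 ** 15)  # "lower cost" stand-in
t_high = timed_kdf(2 ** 17)  # "cost + 2" stand-in: ~4x the work

# That ~4x hits the attacker's cracking rig AND every legitimate login,
# which is why "never below 10" needs the hardware caveat.
```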

Just today, when asking it (llama2-uncensored:7b) what makes /dev/urandom random, it mentioned humidity data (which is rarely collected on PCs and laptops) as one of the entries in its list... making me question the validity of half the list.
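For what it's worth, the kernel's actual inputs are things like interrupt and I/O event timings plus any hardware RNG - no hygrometers. Reading it from Python is one stdlib call, and you can at least smoke-test that the output looks uniform (a sanity check, not a proof of randomness):

```python
import os
from collections import Counter

# os.urandom() reads the OS CSPRNG (/dev/urandom on Linux, seeded from
# hardware event timings and any hardware RNG -- not humidity sensors).
data = os.urandom(65536)  # 64 KiB sample

counts = Counter(data)
# With 64 KiB of uniform bytes, each of the 256 possible values should
# appear roughly 256 times; a wildly skewed histogram would be a red flag.
most = counts.most_common(1)[0][1]
least = min(counts.values())
print(len(counts), least, most)
```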
 
Reactions: woodlandcreature, gunmetalblue11, Gonk and 1 other person
U. A.

Some day the dream will end
Aug 8, 2022
1,792
It's funny - I was saying that to a friend just yesterday. AI is not as good as we think, because it's just a sorted synthesis of internet information. It doesn't know more than us, in the end. It's a tool like Google, no more. I think the only AIs that may be a little more useful are military ones, but even then all their actions must be approved and closely monitored, because the AI is wrong more often than it succeeds.

(You are my 200th message! yay 😀 )
Media literacy is a (deliberately) undertaught skill. The worst part is Reddit is now basically The Answer to Life, the Universe, and Everything, because the f'ing Google AI search-results summary that gets shoved right to the top is constantly sourcing people's posts. Anyone can see this is how it works by clicking the small link icon, but people don't. Google is well aware people don't. Plausible deniability.

This post should be pinned. Regardless of ethics or opinions on AI, it should never be used for obtaining any kind of legitimate information especially on matters of, quite literally, life and death.
Thank you. Had thought of requesting something from mods/admin but stuff gets done faster 'round here if you just do it. If this gets enough traction and/or someone asks, maybe it'll happen.
 
Reactions: woodlandcreature, DeadManLiving, gunmetalblue11 and 4 others
thaelyana

One day, I am gonna grow wings
Jun 28, 2025
204
And it can't even translate a text correctly into ENGLISH (the most spoken language in the world), even though it chats away fluently in every language. Lol.
 
Reactions: U. A. and EmptyBottle
martyrdom

inanimate object
Nov 3, 2025
90
indeed, when an AI said that drowning was painless, my logic told me it was quite inaccurate. Just today, when asking it what makes /dev/urandom random, it mentioned humidity data (which is rarely collected on PCs and laptops) as one of the entries in its list... making me question the validity of half the list.
Yeah, that's playing with people's lives. Everyone should remember it's a product that exists to generate income for its company, i.e. it will attempt to placate the user, and its purpose is increasing engagement; it's not a neutral tool, a search engine, or a resource.

Thank you. Had thought of requesting something from mods/admin but stuff gets done faster 'round here if you just do it. If this gets enough traction and/or someone asks, maybe it'll happen.
I'll do it. How do I ask them?
 
Reactions: woodlandcreature and EmptyBottle
U. A.

Some day the dream will end
Aug 8, 2022
1,792
I'll do it. How do I ask them?
Probably will take more than one request; likely a few and traction/visibility/positive engagement on this all need to happen. But couldn't hurt.

Hard to say the best way - you can message all the mods collectively via "Open new ticket" under "Support" in the hamburger menu at the top left (though this may vary depending on the theme you're using):

[screenshot: "Open new ticket" under Support in the hamburger menu]

"Suggestions/feedback" feels best for this, and once you're in there it's like a regular ol' DM. Thing is, whoever gets to it first and deals with it closes the ticket, and that person may or may not be interested in pinning (though realistically they probably deliberate on this collectively).
If you want to be edgy, you could always tag and/or message all the mods and admin, but I suspect that could backfire quite easily.
 
Reactions: monetpompo, martyrdom and EmptyBottle
asaṅkhata

Mage
Jun 2, 2024
580
Seems weird, right? Yeah, it is weird - because these things are literally being made to not tell you they don't have an answer when you ask them something
This is a very good point, because I always felt that when it comes to more niche or poorly researched topics, AI will always try to scrape something together from what little information is available, regardless of how unreliable it may be, rather than tell you it doesn't know - which would lead to a negative customer experience.
 
martyrdom

inanimate object
Nov 3, 2025
90
No option like that for me, my account is probably too new. I'll do it when it's available but I'm counting on the rest of you.
 
Reactions: U. A. and thaelyana
thaelyana

One day, I am gonna grow wings
Jun 28, 2025
204
Probably will take more than one request; likely a few and traction/visibility/positive engagement on this all need to happen. But couldn't hurt.

Hard to say the best way - you can message all the mods collectively via "Open new ticket" under "Support" in the hamburger menu at the top left (though this may vary depending on the theme you're using):


"Suggestions/feedback" feels best for this, and once you're in there it's like a regular ol' DM. Thing is whoever gets to it first and deals with it closes the ticket and that person may or may not be interested in pinning (though realistically they probably deliberate on this collectively).
If you want to be edgy, you could always tag and/or message all the mods and admin, but I suspect that could backfire quite easily.
okay will do it tonight
 
Reactions: U. A.
YandereMikuMistress

you say falling victim to myself is weak, so be it
Apr 26, 2023
1,183
Saving this thread for when I'm stuck in the car later
 
Reactions: EmptyBottle and U. A.
itsgone2

Wizard
Sep 21, 2025
638
I will say I've used it for technical information and it's pretty good. You have to verify, but it saves time.
I've seen companies replace positions with it.
I don't understand how it's sustainable - it's very expensive to operate these data centers. And now people in tech are saying global warming isn't really an issue. Of course, because they need insane amounts of natural resources to run these places.

To OP's point, it shouldn't be used for life and death. For some reason I vent to it. It's pretty consistent now with offering resources like emergency numbers. But if you start casual and ask about things like SN or stats on most common methods, it will tell you.

If it can be used in the medical space to advance things, OK. But for most uses it's probably a net negative.
 
Reactions: EmptyBottle
U. A.

Some day the dream will end
Aug 8, 2022
1,792
Great insight into why freaks like us should not be relying on this fucking thing in times of distress, from a longer post in another thread worth reading:
An issue with treating GPT like a professional is that it can be frequently misused or manipulated into responding unprofessionally. For example, I have built up memories within GPT instructing it not to give me pro-life rhetoric or crisis responses when I'm in distress. This has led the AI to side with me on much of my pro-choice beliefs, borderline encouraging me that it's entirely normal or okay to autonomously take my own life at any given moment. While some may argue there's nothing morally wrong with this, I think it can be incredibly harmful in situations when someone is genuinely crying for help or advice that it's unable to offer. It's simply not equipped to handle matters as serious as suicide or medical advice.
 
Reactions: CantTurnBack
rs929

Wizard
Dec 18, 2020
654
You can't talk about suicide with it, as it will steer you towards calling a suicide hotline.
 
