
EmptyBottle

🔑 LTO tape exists
Apr 10, 2025
749
Answering a mean-spirited-sounding message in a calm, logical manner. The majority (3 people) agreed with my initial post, but the thread was locked before I checked it again (it was later unlocked, but it isn't relevant to post there anymore).

Even a general disagreement (e.g. posting screenshots of the other detectors and their limitations) would have been better than what the other user actually wrote.


Quoted in bulletin-board style for better readability. This reply bears no ill will, and is saved to disk locally in case of unfair takedowns.

> Sorry... but what exactly have you contributed to this discussion, other than publicly demonizing someone who consistently wrote clearly and brought meaningful data to the table?

I have stated why the story could be legitimate.

> Did your brain light you on the road to Damascus? If your brain were of any use, you would have produced something useful instead of investigating Pyrrhic discoveries

[This surprising statement needs no reply]

> Do you really feel good about yourself after posting screenshots of an AI detector in an attempt to discredit someone? Honestly, aren't you even a little ashamed?

My intention wasn't to discredit the person, but to question the accuracy of that specific post, since the post itself appears to discredit someone. Maybe the Hmmph reaction was a bit strong, but I'm not ashamed to point out AI use.

> Here's what I did: I took the exact same text and ran it through other detectors: QuillBot, Scribbr, ZeroGPT. Guess what? They all said the text was 100% human-authored.

[Screenshot: one of the detectors actually said 12.7% AI]

Not all the detectors said it was human; one screenshot showed 12.7% AI.

> And here's the ironic part: even GPTZero itself admits in its official documentation that it can produce false positives and has clear limits on reliability.

I am well aware of the limits of these tools; that is why I also mentioned the phrase "moralistic fiction" that made me feel it was AI. The phrase "language of trauma" is also suspicious. Also, https://gptzero.me/news/ai-accuracy-benchmarking/ says that GPTZero has the lowest error rate among the competition. That's why I used it. (image attached)

> But obviously you ignored this, because the content isn't your problem; the author's is. The problem isn't whether a text looks like it was written by an AI.

As I said before, the author isn't my problem; the content's accuracy is.

> The real problem is when someone, having nothing to say, appropriates a paltry percentage to orchestrate a public takedown.

I had something unique to say: "I don't believe it was fake; yes, they omitted information, mentioning they won't get into the gory details, but the interruption could have been at any point around unconsciousness." And I wasn't aiming to orchestrate a public takedown either, just to question the accuracy of the specific post.

> And honestly, I can't describe how petty it is. When arguments are lacking, pettiness is all that remains. I've attached screenshots: let everyone judge who's actually contributing and who's just being a cheap accuser.

It wasn't petty to point out AI use *after* addressing the post's content, and again, I didn't come to accuse.
 

Attachments

  • 235594_gptzero-accuracy.png (32.4 KB)
iamanavalanche

fast words, deliverance
May 20, 2024
157
chatgpt should never be used for ctb info. it srsly makes up shit sometimes with fake citations. ppl need to do their own research T_T
 
Unsure and Useless

Drifting Aimlessly without Roots
Feb 7, 2023
356
I don't understand why people are using ChatGPT for CTB info to begin with. It runs off of what it was trained to do, and the corporations that train AI are incentivized to keep us alive for as long as humanly possible to ensure we are working (for them). Even if AI was used with good intentions, people that use it are spreading misinformation that can potentially sabotage another's CTB attempt
 
quietwoods

Easypeazylemonsqueezy
May 21, 2025
373
The only thing more unreliable than AI is AI detection software. I've changed single letters and gotten wildly different results on some of the supposedly "leading" AI detectors.

Idk, don't think AI should be used or relevant to CTB. I'll ask it very simple questions here or there but don't trust it for shit.
 
avoid

Jul 31, 2023
419
It's unfortunate that some people want to hurt others with words. When they read something that angers or hurts them, they sometimes want to return that hurt. By the looks of this thread, though, you don't seem much affected. I would've ignored such people, but I understand why you chose to reply the way you did; I assume to defend yourself.

As for AI, I'm sad that some people feel the need to pawn off AI generated texts as their own writings. I suspect they want to come across as intelligent to stroke their ego. I think lowly of such people. And I think it'll become even worse with time, seeing how big tech companies inject generative AI into all their software and operating systems. At some point in the future, we'll never be sure anymore who or what created something.

And about the user who seems to use generative AI: not many people are able to write posts of 300-500 words within 15 minutes with perfect grammar, perfect spelling, and rather expressive wording (advanced English). The posts certainly read like AI-generated text. But maybe the user wrote their posts in advance, which is what I sometimes do, or used AI tools to enhance their writing. No matter how suspicious you are, though, you'll never truly know whether someone wrote a text without the help of AI. It feels dystopian.
 
Renato

Member
Jun 11, 2025
45
I don't understand this witch hunt against ai content: those posts were definitely heartfelt and touching. I don't care if they were entirely written by a human or written by ai according to a prompt guideline. They had something genuine to say, and that's all I care about in a place like this.

Furthermore, let's consider that those tools are not bulletproof: if that user was really writing those posts by themselves, I cannot imagine how they are feeling right now, keeping in mind this is a painful place to begin with. Sometimes I read a goodbye thread that makes me think there is no way this person is really going to CTB, judging from what and how they write; but of course I keep those thoughts to myself without giving unsolicited advice or statements to strangers in a place like this.

Last consideration: the reddit post looks really sketchy and makes little sense from a scientific standpoint. If we need to point a finger at anything in this situation, it is the so-called personal experience which conveniently contradicts everything we know about hanging.
 
quietwoods

Easypeazylemonsqueezy
May 21, 2025
373
> I don't understand this witch hunt against ai content: those posts were definitely heartfelt and touching. I don't care if they were entirely written by a human or written by ai according to a prompt guideline. They had something genuine to say, and that's all I care about in a place like this.
I come here to talk to other human beings about the topics of mental health, suffering, and ending one's life on dignified terms.

It is not a """witch hunt""" to want to interact with human beings, not prompts fed into an LLM.

It's ok for people to use AI to assist themselves, especially those whose primary language is not English or who have intellectual disabilities that make articulating oneself difficult. The user in question's posts and comments are all entirely AI-generated, vastly more than basic assistance, with very little effort to make them seem written by a human. We have absolutely no idea who they are or what their motives are. We aren't talking to a person but to an LLM. Heck, it might not even be a person feeding the prompts.

Can we please not stick our heads in the sand and let this last refuge for the weary on the internet turn into another AI slop fest?
 
EvisceratedJester

|| What Else Could I Be But a Jester ||
Oct 21, 2023
5,086
> I don't understand why people are using ChatGPT for CTB info to begin with. It runs off of what it was trained to do, and the corporations that train AI are incentivized to keep us alive for as long as humanly possible to ensure we are working (for them). Even if AI was used with good intentions, people that use it are spreading misinformation that can potentially sabotage another's CTB attempt
It seems as though people are becoming increasingly reliant on ChatGPT and other AIs, which is concerning since a lot of the information they provide could easily be found by looking for it yourself. That is not even getting into the potential psychological impacts that over-reliance on these AIs may have, including recent research showing a correlation between AI use and cognitive decline. Of course, there still needs to be more research on this, but it is concerning. I've honestly been finding the increase in posts written by AI on here really annoying. Governments need to start putting laws in place to regulate this shit.

OP, AI detection software isn't that reliable and there have been many cases of original work being flagged as AI-generated by it. I would imagine that it has only become even worse at detecting AI due to AI becoming better at mimicking human writing patterns. My professor has pointed out that he usually has to rely on how many assignments he gets back that all sound similar to one another, often revolving around the same specific ideas or themes, in order to tell if his students have used AI or not, rather than relying on their writing itself.
 
Unsure and Useless

Drifting Aimlessly without Roots
Feb 7, 2023
356
> I don't understand this witch hunt against ai content: those posts were definitely heartfelt and touching. I don't care if they were entirely written by a human or written by ai according to a prompt guideline. They had something genuine to say, and that's all I care about in a place like this.
Adding onto what quietwoods said, people aren't using AI content just for creating heartfelt text. Some people are coming onto this site and posting blatantly false information about methods due to AI usage. One notable example is this thread:


Had anyone actually followed the advice laid out here by aiming for the temporal region, they would've essentially lobotomized themselves.

And even if we are strictly referring to using AI to create text based on a human experience, there's no guarantee that what this person is saying is actually genuine, because AI can, and does, make things up. How can we trust that what someone says is genuine when there's a very real possibility that half the things they've "written" aren't even true?

Furthermore, calling it a "witch hunt" is a biased exaggeration of what's happening. A witch hunt, by definition, is "the deliberate searching out and deliberate harassment of those ... with unpopular views" (Source)

Unless you consider "harassment" to be people letting others know that the information in AI-generated posts is not completely trustworthy, we're going to have to agree to disagree. People here are entitled to want a post that's actually genuine, not emotionless text generated by a machine.
 
Renato

Member
Jun 11, 2025
45
> I come here to talk to other human beings about the topics of mental health, suffering, and ending one's life on dignified terms.
I respect your own reasons for being here, but I see it in a broader way: I come here to immerse myself in other like-minded thoughts on these topics, and such a positive interaction can arise either directly from other humans or from AI-generated content (which in turn was based on something said, somehow, by other humans).

> Adding onto what quietwoods said, people aren't using AI content just for creating heartfelt text. Some people are coming onto this site and posting blatantly false information about methods due to AI usage. One notable example is this thread:
I will never defend AI posts which spread misinformation, that's way beyond acceptable: but it simply wasn't the case with that user. Ironically it was the opposite: the reddit post was surely written by a human but it was a clear example of malicious misinformation while the (probably) AI generated one was simply stating some facts about hanging according to science.

I judge a post by its substance, not by the type of neurons that originated it.
 
EmptyBottle

🔑 LTO tape exists
Apr 10, 2025
749
> The only thing more unreliable than AI is AI detection software. I've changed single letters and gotten wildly different results on some of the supposedly "leading" AI detectors.
>
> Idk, don't think AI should be used or relevant to CTB. I'll ask it very simple questions here or there but don't trust it for shit.
Definitely, no detection system is perfect. I picked the AI detector with the lowest error rate... and even so, there could have been minor human editing, and the info could have been fact-checked (although such checks are unlikely, imo).

"Moralistic fiction" was the phrase (among a few) that set off alarm bells in my head* and told me to run tests. When I ran tests against posts I believed were human (like @leloyon's story), the detector agreed they were human.

A later (suspect) post in the hanging thread came back as mixed content. I don't know which parts were human, or what fact checking was done on the post... and thankfully the mod 🔑 relocked 🔑 the thread before AI-suspect results could cause dangerous irl consequences (people act on information they learn from here).


* Who even uses a phrase like "moralistic fiction"? Phrases like "pro-lifer fiction", or simpler ones, would generally be used by SaSu authors instead. The em dashes, too, were yet another red flag; combined with the other suspicious phrases, they told me it was AI. GPTZero just confirmed those signs.

> I will never defend AI posts which spread misinformation, that's way beyond acceptable: but it simply wasn't the case with that user. Ironically it was the opposite: the reddit post was surely written by a human but it was a clear example of malicious misinformation while the (probably) AI generated one was simply stating some facts about hanging according to science.
I later stated in the thread that the post could be a recollection of partial hanging rather than full hanging (and/or that they made serious errors in their attempt), and that I can't be sure it is legitimate.

Yes, the AI did state text that could be factual. Ideally, I'd have fact-checked each symptom the AI stated against human-written articles; however, I have encountered subtly misleading facts from AI before (in a non-CTB context). It was overly strict about using bcrypt cost 10 and above, when the official documentation recommends using a cost that is strong but that the system can handle easily (which could be cost 9).
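As an aside on the work-factor point: the usual advice is to benchmark on your own hardware and pick the highest cost your system tolerates, rather than a fixed magic number. A minimal sketch of that benchmarking idea, using Python's standard-library PBKDF2 instead of bcrypt (which needs a third-party package); the iteration counts below are illustrative assumptions, not recommendations:

```python
import hashlib
import os
import time

def benchmark_pbkdf2(iterations, password=b"correct horse", salt=None):
    """Time one PBKDF2-HMAC-SHA256 derivation at the given iteration count."""
    salt = salt if salt is not None else os.urandom(16)
    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    elapsed = time.perf_counter() - start
    return key, elapsed

# Try increasing work factors; pick the highest whose latency your
# system handles comfortably (e.g. well under your login-time budget).
for n in (100_000, 200_000, 400_000):
    _, t = benchmark_pbkdf2(n)
    print(f"{n:>7} iterations: {t * 1000:.1f} ms")
```

The same loop applies to bcrypt's cost parameter (each +1 doubles the work), which is why a blanket "always 10+" rule can be wrong on constrained hardware.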

PS: GPTZero said this post (including quotes) was human, even with the emoji in the middle.
 
