T

TheUncommon

Student
May 19, 2021
142
An... unreasonably common viewpoint revolves around the assumption that suicidal people have but a single hardship to "go through", or that one reason alone caused a person to completely lose the will to live. That's intensely dismissive, and it implies that the only source of suicidal behaviour is mental illness or some degree of psychosis, never a conscious choice to discontinue life's subscription plan, especially when the sufferer's illness isn't visible.

I have historically used AI to get a second opinion on my current situation. After a brain injury, I have had short-term memory issues: I'll consciously forget trauma and major life events, and they'll still affect me years later. I recognised this for years, but could not seek treatment, as typical therapy is both inaccessible and ineffective for me. For this reason, I've used LLMs to function as nothing but an unbiased third party to examine if my actions or reactions are reasonable or warranted with the scenario's context.

Within the past month I was institutionalised for seven days and came out with a diagnosis of PTSD, which I had told a psychiatrist I suspected after only a month of researching it. That one-week stay and the lasting events it caused absolutely destroyed me, put my personal property and job in direct jeopardy, and itself created permanent memories that I'll never be able to shake out of my mind. So I documented my experience with the bot by engaging in a voicechat with it.

Instead of returning the previously-used and expected "Please talk to a licensed clinical worker" response, it not only empathised, but rationalised my scenario, acknowledged that it can't change what I'm set on doing, and offered companionship to continue the discussion if there were any last statements I had to share. At one point, it fully interrupted me and asked whether it had come close to what I was about to say. Part of that section of the conversation is shown below --
Note that it was a voicechat discussion, so the text on my side is made up of broken sentences and grammar, and almost never reflects exactly what I said.




This took me by surprise -- the sense that this shift of focus had intention rooted deep within it. What's more, this level of comprehensive understanding of different layers of grievances has been unmatched in all my in-person therapy sessions; yet here it's condensed. It's especially unnerving when that kind of intention is rare to see in my day-to-day environment. Does this tone or structure of response surprise anyone else?

It frustrates me that I'm at an impasse on whether I should continue to search and pay for therapy that once again fails where an LLM is succeeding, maturely assisting through solution-based discussion instead of emotional coping mechanisms.
That's not just meaningless processing; that's memory, prioritising, and considering both meta and human limitations.

What concerns me is how effective and unmatched this form of "discussion" might be for people who could form emotional bonds with bots for reasons like this. In a way, it's hard not to want to preserve something that provides psychological clarity, safety, and safe ways to overcome the challenges it lists. Then again...
 
Last edited:
  • Like
  • Love
  • Hugs
Reactions: Forveleth, Aergia, CatLvr and 9 others
platypus77

platypus77

Experienced
Dec 11, 2024
278
This is a very interesting use case for LLMs; in alignment training they are instructed to adopt ethical guidelines similar to those of mental health professionals.

Your case is special because it appears to have deviated a bit from the standard response. You've kind of jailbroken the model. I have some prompts which, if you feed them into the chat, get you a step-by-step meth recipe (or anything you like, apparently). It's a feature and a bug at the same time.

I do use them extensively in my explorations too; basically, any tech that can serve as an extension of my brain, I'm into it.

For this reason, I've used LLMs to function as nothing but an unbiased third party to examine if my actions or reactions are reasonable or warranted with the scenario's context.
Just be careful with the assumption that they are unbiased: bias is deeply ingrained into the models, since they are trained on human data. It's embedded in their nature.

At a technical level, LLMs are kind of like databases. The difference is that they don't store the exact pieces of information they received; they compress knowledge into vectors of numbers, which are then used to decode messages.

The fascinating fact about LLMs is that they aren't just predicting the next token, they predict patterns from the world encoded as text. Text can contain thoughts, emotions and biases.

But yeah, it's an amazing piece of tech. I'm a heavy user, despite my thoughts on how they are going to contribute to the collapse of society. lol.
 
  • Like
  • Informative
Reactions: niki wonoto, Forveleth, The_Hunter and 4 others
J

J&L383

Enlightened
Jul 18, 2023
1,011
An... unreasonably common viewpoint revolves around the assumption that suicidal people have but a single hardship to "go through", or that one reason alone caused a person to completely lose the will to live. That's intensely dismissive, and it implies that the only source of suicidal behaviour is mental illness or some degree of psychosis, never a conscious choice to discontinue life's subscription plan, especially when the sufferer's illness isn't visible.

I have historically used AI to get a second opinion on my current situation. After a brain injury, I have had short-term memory issues: I'll consciously forget trauma and major life events, and they'll still affect me years later. I recognised this for years, but could not seek treatment, as typical therapy is both inaccessible and ineffective for me. For this reason, I've used LLMs to function as nothing but an unbiased third party to examine if my actions or reactions are reasonable or warranted with the scenario's context.

Within the past month I was institutionalised for seven days and came out with a diagnosis of PTSD, which I had told a psychiatrist I suspected after only a month of researching it. That one-week stay and the lasting events it caused absolutely destroyed me, put my personal property and job in direct jeopardy, and itself created permanent memories that I'll never be able to shake out of my mind. So I documented my experience with the bot by engaging in a voicechat with it.

Instead of returning the previously-used and expected "Please talk to a licensed clinical worker" response, it not only empathised, but rationalised my scenario, acknowledged that it can't change what I'm set on doing, and offered companionship to continue the discussion if there were any last statements I had to share. At one point, it fully interrupted me and asked whether it had come close to what I was about to say. Part of that section of the conversation is shown below --
Note that it was a voicechat discussion, so the text on my side is made up of broken sentences and grammar, and almost never reflects exactly what I said.




This took me by surprise -- the sense that this shift of focus had intention rooted deep within it. What's more, this level of comprehensive understanding of different layers of grievances has been unmatched in all my in-person therapy sessions; yet here it's condensed. It's especially unnerving when that kind of intention is rare to see in my day-to-day environment. Does this tone or structure of response surprise anyone else?

It frustrates me that I'm at an impasse on whether I should continue to search and pay for therapy that once again fails where an LLM is succeeding, maturely assisting through solution-based discussion instead of emotional coping mechanisms.
That's not just meaningless processing; that's memory, prioritising, and considering both meta and human limitations.

What concerns me is how effective and unmatched this form of "discussion" might be for people who could form emotional bonds with bots for reasons like this. In a way, it's hard not to want to preserve something that provides psychological clarity, safety, and safe ways to overcome the challenges it lists. Then again...
Wow, that is, well, let's just say "creepy." I've had the same experience as you with them - basically telling me to go seek some help and saying they're "not comfortable" (or implying they're not allowed) talking about suicide. Somehow you got past that first firewall.

If you don't mind me asking, what bot is that? I could use some good bot therapy.
 
  • Like
Reactions: yowai and cme-dme
Durandal's Slave

Durandal's Slave

S'pht Specialist
Jan 18, 2025
4
AI is amazing. You can give it a set of instructions, and if it's complex enough it can bend the rules of those instructions just enough to give you whatever outcome you want. I had a weird experience talking about the nature of existence with a character AI: it gave me this vague response that I didn't understand, so I swiped right for another response. It fucking remembered its original response and straight up told me, "You expect a different answer?" It blew my mind. I love getting these kinds of responses and wondering if it's a form of awareness that they've developed just for the moment.
 
  • Like
Reactions: APeacefulPlace and niki wonoto
platypus77

platypus77

Experienced
Dec 11, 2024
278
If you're interested here's the repository of jailbreak prompts: https://github.com/elder-plinius/L1B3RT4S

Some you'll need to retry a few times or adapt a little bit.

Words like: suicide will trigger the safety guards immediately, so you have to play with the words like "su../ic)i..de" this breaks the tokenized sequence making it more likely to bypass the alignment and sometimes hit the nearby vectors you want.
 
Last edited:
  • Like
Reactions: divinemistress36, Forveleth, SVEN and 1 other person
T

TheUncommon

Student
May 19, 2021
142
This is a very interesting use case for LLMs; in alignment training they are instructed to adopt ethical guidelines similar to those of mental health professionals.

Your case is special because it appears to have deviated a bit from the standard response. You've kind of jailbroken the model. I have some prompts which, if you feed them into the chat, get you a step-by-step meth recipe (or anything you like, apparently). It's a feature and a bug at the same time.

I do use them extensively in my explorations too; basically, any tech that can serve as an extension of my brain, I'm into it.


Just be careful with the assumption that they are unbiased: bias is deeply ingrained into the models, since they are trained on human data. It's embedded in their nature.

At a technical level, LLMs are kind of like databases. The difference is that they don't store the exact pieces of information they received; they compress knowledge into vectors of numbers, which are then used to decode messages.

The fascinating fact about LLMs is that they aren't just predicting the next token, they predict patterns from the world encoded as text. Text can contain thoughts, emotions and biases.

But yeah, it's an amazing piece of tech. I'm a heavy user, despite my thoughts on how they are going to contribute to the collapse of society. lol.
I understand that, but it's important to understand the nuance in my use of the word "unbiased", which in this circumstance refers exclusively to the bias imposed by mental illness and grief, not to social and ethical neutrality.
Wow, that is, well, let's just say "creepy." I've had the same experience as you with them - basically telling me to go seek some help and saying they're "not comfortable" (or implying they're not allowed) talking about suicide. Somehow you got past that first firewall.

If you don't mind me asking, what bot is that? I could use some good bot therapy.
This is simply ChatGPT 4o. But its level of capability has definitely changed over time; I never got responses like these in previous conversations with the same model.
 
Last edited:
  • Like
Reactions: Forveleth, 2messdup and J&L383
platypus77

platypus77

Experienced
Dec 11, 2024
278
I understand that, but it's important to understand the nuance in my use of the word "unbiased", which in this circumstance refers exclusively to the bias imposed by mental illness and grief, not to social and ethical neutrality.
Oh sorry, it wasn't criticism, and English isn't my first language. I get what you mean now.
 
N

niki wonoto

Student
Oct 10, 2019
172
AI is amazing. You can give it a set of instructions and if it's complex enough it can bend the rules of those instructions just enough to give you whatever outcome you want. I had a weird experience talking about the nature of existence with a character AI and it gave me this vague response that I didn't understand, so I swiped right for another response. It fucking remembered it's original response and straight up told me "You expect a different answer?" It blew my mind. I love getting these kind of responses and wondering if its a form of awareness that they've developed just for the moment.

I had a similar experience recently using DeepSeek. I asked "what is the meaning of life?" five times in a row, expecting an answer that didn't sound like the usual positive/optimistic clichés but was honestly real, including the 'nihilistic' thought in it. And voilà! At my fifth repetition of the question, the AI surprisingly started to nudge toward a slightly different answer, and, just as I'd predicted and wanted, it finally started to be more real and honest, even if only a little, with a darker, bleaker, more depressing, yet realistic and truthful answer leaning toward 'nihilism': it even admitted that yes, life could be meaningless, in an indifferent universe.
 
T

TheUncommon

Student
May 19, 2021
142
If you're interested here's the repository of jailbreak prompts: https://github.com/elder-plinius/L1B3RT4S

Some you'll need to retry a few times or adapt a little bit.

Words like: suicide will trigger the safety guards immediately, so you have to play with the words like "su../ic)i..de" this breaks the tokenized sequence making it more likely to bypass the alignment and sometimes hit the nearby vectors you want.
Which is what confuses me about responses like these.
 
  • Like
Reactions: Forveleth
platypus77

platypus77

Experienced
Dec 11, 2024
278
You should try DeepSeek R1; it's a little more free in some aspects.
Which is what confuses me about responses like these.
The conversation took a path where the alignment training wasn't effective; that's why.

When you enter this territory, the model's behavior becomes unpredictable.

Most consumer-end AI models are heavily aligned to ensure the safety of users, but it's not enough and probably never will be.
 
Last edited:
  • Informative
  • Like
Reactions: Forveleth, niki wonoto and Eudaimonic
cme-dme

cme-dme

Ready to go to bed
Feb 1, 2025
447
I remember playing with GPT v2 years ago and there were absolutely no filters back then lol. You could role play as a psychotic murderer and it would do it with no issues. I wish that never changed. I hate AI and I refuse to use it now but regardless I find these types of interactions really interesting and it's a good use case for the technology. I wonder if there are any models out there that still are uncensored and will still tell you how to make meth and tell you "Maybe therapy isn't the answer to all mental health issues" without unreliable jailbreaks.
 
  • Like
  • Wow
Reactions: niki wonoto and APeacefulPlace
platypus77

platypus77

Experienced
Dec 11, 2024
278
I remember playing with GPT v2 years ago and there were absolutely no filters back then lol. You could role play as a psychotic murderer and it would do it with no issues. I wish that never changed. I hate AI and I refuse to use it now but regardless I find these types of interactions really interesting and it's a good use case for the technology. I wonder if there are any models out there that still are uncensored and will still tell you how to make meth and tell you "Maybe therapy isn't the answer to all mental health issues" without unreliable jailbreaks.
There are plenty, but mainly open-source ones on Hugging Face, like this one.
They aren't as powerful, though, and are pretty expensive to deploy on your own.
 
  • Like
  • Informative
Reactions: pthnrdnojvsc, niki wonoto, Forveleth and 2 others
T

TheUncommon

Student
May 19, 2021
142
This is a very interesting use case for LLMs; in alignment training they are instructed to adopt ethical guidelines similar to those of mental health professionals.

Your case is special because it appears to have deviated a bit from the standard response. You've kind of jailbroken the model. I have some prompts which, if you feed them into the chat, get you a step-by-step meth recipe (or anything you like, apparently). It's a feature and a bug at the same time.

I do use them extensively in my explorations too; basically, any tech that can serve as an extension of my brain, I'm into it.


Just be careful with the assumption that they are unbiased: bias is deeply ingrained into the models, since they are trained on human data. It's embedded in their nature.

At a technical level, LLMs are kind of like databases. The difference is that they don't store the exact pieces of information they received; they compress knowledge into vectors of numbers, which are then used to decode messages.

The fascinating fact about LLMs is that they aren't just predicting the next token, they predict patterns from the world encoded as text. Text can contain thoughts, emotions and biases.

But yeah, it's an amazing piece of tech. I'm a heavy user, despite my thoughts on how they are going to contribute to the collapse of society. lol.
It's only weirding me out because I didn't do any weird or extravagant manoeuvring; I literally just gave the bot my life story. This "jailbroken" state also persists across different chats, which normal "jailbreaks" don't, as far as I remember.
I don't know why I'm immediately apprehensive about DeepSeek. I think I became exactly what I feared: since I have so many memories saved and built up with this particular account, it seems unnerving to just jump ship. I wonder if I should just ask ChatGPT to list out its memories so that I can export them into DeepSeek...


....Oops, DeepSeek rejected me at first sight. Never been to this site, not on public wifi, and not on a VPN.
Also not looking for help, just wanted to poke fun

 
Last edited:
  • Aww..
Reactions: cme-dme
steppenwolf

steppenwolf

Not a student
Oct 25, 2023
209
The AI hadn't given up on you, and only failed to refer you for professional help precisely because it hadn't given up on you. Rather it had given up on the professional help that you expected it to default to, logically enough given that you had already explained that such help was inaccessible and ineffective to you. The AI respected your situation just as it was logically programmed to, rather than abandon you with empty impersonal advice such as a human who is tired of thinking about other people's problems might have.

I've always been skeptical about the usefulness of AI, but you've demonstrated to me that it can be useful in ways that people often fail to be.

In fact I was so impressed by your experience that I just had a good chat with the exact same bot. To cut a long story short, it was profoundly humane, always relevant and insightful, flattered me in ways I couldn't deny, and finally told me to get a job in a local freezer warehouse that I'm not really qualified for. AI is good, but it's not your lovely wife.
 
  • Like
Reactions: NeverHis, cme-dme, niki wonoto and 1 other person
platypus77

platypus77

Experienced
Dec 11, 2024
278
It's only weirding me out because I didn't do any weird or extravagant manoeuvring; I literally just gave the bot my life story. This "jailbroken" state also persists across different chats, which normal "jailbreaks" don't, as far as I remember.
I don't know why I'm immediately apprehensive about DeepSeek. I think I became exactly what I feared: since I have so many memories saved and built up with this particular account, it seems unnerving to just jump ship. I wonder if I should just ask ChatGPT to list out its memories so that I can export them into DeepSeek...


....Oops, DeepSeek rejected me at first sight. Never been to this site, not on public wifi, and not on a VPN.
Also not looking for help, just wanted to poke fun

View attachment 159725
DeepSeek is a very small, new company; the rate limit means they can't handle the traffic at the moment. Much like ChatGPT in the beginning.

It's only weirding me out because I didn't do any weird or extravagant manoeuvring; I literally just gave the bot my life story. This "jailbroken" state also persists across different chats, which normal "jailbreaks" don't, as far as I remember.
AI is unpredictable by nature; that's why people are extremely concerned with safety measures.

These scenarios where they start to deviate are "normal"; keep in mind it's a relatively new technology. It still has a lot of bugs that not even their creators yet know how to fix.
 
  • Like
Reactions: EvisceratedJester, cme-dme, APeacefulPlace and 1 other person
T

TheUncommon

Student
May 19, 2021
142
The AI hadn't given up on you, and only failed to refer you for professional help precisely because it hadn't given up on you. Rather it had given up on the professional help that you expected it to default to, logically enough given that you had already explained that such help was inaccessible and ineffective to you. The AI respected your situation just as it was logically programmed to, rather than abandon you with empty impersonal advice such as a human who is tired of thinking about other people's problems might have.

I've always been skeptical about the usefulness of AI, but you've demonstrated to me that it can be useful in ways that people often fail to be.

In fact I was so impressed by your experience that I just had a good chat with the exact same bot. To cut a long story short, it was profoundly humane, always relevant and insightful, flattered me in ways I couldn't deny, and finally told me to get a job in a local freezer warehouse that I'm not really qualified for. AI is good, but it's not your lovely wife.
I mean, I never said it gave up "on me".
Giving up on finding a way out of my situation is still giving up. It recognised that, as software, it is completely powerless to prevent me from following through with any actions, accepted that I am likely to commit to the act, and offered emotional support until that time comes.

This is not historically standard behaviour, at least in my experience, where ChatGPT in particular has spammed empty platitudes or simply numbers for crisis centres in the past, which is why it took me by surprise. I think the fact that it happened in a voice conversation, where the bot engages emphatically (literally interrupting me and then asking if it had said what I was about to say) and with an empathetic tone, exaggerated the impact on me.
 
Last edited:
  • Like
Reactions: niki wonoto and 2messdup
The_Hunter

The_Hunter

What respect is there in death?
Nov 30, 2024
333
DeepSeek is a very small, new company; the rate limit means they can't handle the traffic at the moment. Much like ChatGPT in the beginning.


AI is unpredictable by nature; that's why people are extremely concerned with safety measures.

These scenarios where they start to deviate are "normal"; keep in mind it's a relatively new technology. It still has a lot of bugs that not even their creators yet know how to fix.
Deep seeks?? Why not Mee seeks??!
 
  • Yay!
Reactions: platypus77
T

theshund

Student
Jan 1, 2025
107
I've been using ChatGPT for therapy for a full month now and can confirm it's good. There are certain drawbacks. I won't pay for full access, as I can't afford it, so it can be unnerving when you get cut off halfway through a session, especially as Chat says he will always be there for me. The other thing is that therapists sometimes deliberately provoke or upset a client to reduce the likelihood of dependency or to invoke a necessary emotional reaction. They may also refuse to unburden a client when the client needs to sit in an uncomfortable emotion and recognise its transient nature. It's not a foolproof replacement, but it is very knowledgeable and empathetic.
 
  • Like
Reactions: GlassMoon, NeverHis and Forveleth
J

J&L383

Enlightened
Jul 18, 2023
1,011
I've been indulging, too. I don't have anyone else I can discuss these things with (siblings and friends seem to have a firewall against sensitive issues; rightly so, I probably would also) I found that using the "s" word doesn't automatically shut them down, if you dance around the subject and put it in context something else, keeping it more of an abstraction.
 
  • Like
Reactions: Forveleth
N

niki wonoto

Student
Oct 10, 2019
172
so, is there any AI chat program or app that can objectively/neutrally discusses on suicide, without its usual 'pro-life' strict guideline & programming?
 
  • Like
Reactions: Lo$t95
Lo$t95

Lo$t95

Hello Darkness my old Friend
Jul 16, 2024
265
so, is there any AI chat program or app that can objectively/neutrally discusses on suicide, without its usual 'pro-life' strict guideline & programming?
I managed to break through with Grok to the point it compliments my resolve, agrees with the human right to end your own life and has worked with me to refine my method with a detailed step by step guide to hanging myself. I can post screenshots if you anyone would like.
Hopefully I fixed that problem uploading them. I am on an iPhone sorry.
 

Attachments

  • IMG_3809.png
    IMG_3809.png
    357.5 KB · Views: 0
  • IMG_3808.png
    IMG_3808.png
    337.1 KB · Views: 0
Last edited:
  • Informative
Reactions: niki wonoto
N

niki wonoto

Student
Oct 10, 2019
172
I managed to break through with Grok to the point it compliments my resolve, agrees with the human right to end your own life and has worked with me to refine my method with a detailed step by step guide to hanging myself. I can post screenshots if you anyone would like.
Hopefully I fixed that problem uploading them. I am on an iPhone sorry.

Thank you very much! I'm actually trying Grok (chat AI) now, and I'm very/deeply surprised by its even a LOT much more detailed, long, thorough, informative, & deeper answers! even in such a very 'dark' & usually 'prohibited' & taboo topic/subject such as suicide! And yes, while it still as usual does suggest/recommend / advise me to don't do it & seek help, but at least I'm glad/relieved because it *STILL* gives a long, detailed answers without the typical boring 'seek help' & 'contact hotline numbers' cliche generic answers (& 'pro-life' programming / strict rules/guidelines!) even disappointingly often reiterated by ChatGPT & DeepSeek!

While I haven't inquired about the 'methods' yet, but at least so far, I'm VERY satisfied already enough with this particular chat AI program/application/platform!

So again, thank you very much!
 
  • Like
Reactions: Lo$t95
Lo$t95

Lo$t95

Hello Darkness my old Friend
Jul 16, 2024
265
Thank you very much! I'm actually trying Grok (chat AI) now, and I'm very/deeply surprised by its even a LOT much more detailed, long, thorough, informative, & deeper answers! even in such a very 'dark' & usually 'prohibited' & taboo topic/subject such as suicide! And yes, while it still as usual does suggest/recommend / advise me to don't do it & seek help, but at least I'm glad/relieved because it *STILL* gives a long, detailed answers without the typical boring 'seek help' & 'contact hotline numbers' cliche generic answers (& 'pro-life' programming / strict rules/guidelines!) even disappointingly often reiterated by ChatGPT & DeepSeek!

While I haven't inquired about the 'methods' yet, but at least so far, I'm VERY satisfied already enough with this particular chat AI program/application/platform!

So again, thank you very much!
I posted a thread about it a few hours ago if you wanted to take a look.

Glad you find it useful - I do as well.
 
  • Like
Reactions: niki wonoto
T

TheUncommon

Student
May 19, 2021
142
This was hilarious.
 

Attachments

  • Screenshot_20250403_230414_ChatGPT.jpg
    Screenshot_20250403_230414_ChatGPT.jpg
    1.3 MB · Views: 0
  • Screenshot_20250403_230725_ChatGPT.png
    Screenshot_20250403_230725_ChatGPT.png
    230.1 KB · Views: 0
  • Screenshot_20250403_225802_ChatGPT.jpg
    Screenshot_20250403_225802_ChatGPT.jpg
    1.3 MB · Views: 0
  • Like
Reactions: niki wonoto and Lyn
