
noname223

Archangel
Aug 18, 2020
6,231
At least the ones I am using: ChatGPT, Grok, Perplexity, etc.

I wanted to know which thread in the Politics & Philosophy subforum has the most views/replies. I was surprised that my thread about the Charlie Kirk assassination received more replies and views than the one about the Trump-Epstein connection.

The AI said it is not allowed to give me an answer because access to this forum is restricted (content restrictions on sensitive sites). If someone knows a cost-free AI chatbot that can be used for questions about Sanctioned Suicide, please let me know. I am not sure how to feel about that.

Pretty early on, I asked AI to analyze my character based on my profile here. Other members did that too, and it was pretty interesting. I even screenshotted the analysis of my account.

After a while this was not possible anymore. It said that, for privacy reasons, it won't analyze individual members of an internet forum. That has advantages and disadvantages: it is good for one's privacy but bad for getting interesting analyses of my personality. I am not sure whether this safety measure could be circumvented; I could imagine it was possible.

But now it is even harder, and the new restriction policy (which I mentioned earlier) is shown instead.

What do you think about it? AI companies are under a lot of pressure because AI is being held responsible for the suicides of individuals.
 
  • Like
Reactions: katagiri83 and Forever Sleep
TAW122

Emissary of the right to die.
Aug 30, 2018
7,239
I personally don't feel comfortable delving into sensitive content or topics with AI. I never cross the line with it because I'm not certain what it may do with the content, and I would rather not take that risk and face unforeseen consequences. I'm sure there are more personalized AIs (outside of the mainstream ones) that would allow it, but for me it's not worth the risk.
 
  • Like
  • Love
Reactions: SilentSadness, 3xSuccessfulFailure and amerie
Hvergelmir

Mage
May 5, 2024
560
I wanted to know which thread in the Politics & Philosophy subforum has the most views/replies.
The forum's own sort filter will answer that more reliably than any AI:
https://sanctioned-suicide.net/forums/politics-philosophy.19/?order=reply_count&direction=desc

If you know how to run a Docker container, you might want to look into gpt4free. It includes a bunch of providers with free access, some with very loose restrictions. (Try Gemini 2.5 Pro from api.airforce when it's available.)
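
The basic flow looks something like this, from memory; the image name and the OpenAI-compatible port are the ones documented in the project's README, so double-check there before relying on it:

```python
# Sketch: start the gpt4free container, then hit its OpenAI-compatible API.
# Image name and port (1337) are taken from the project's README and may
# change; verify before relying on this.
#
#   docker run -p 1337:1337 hlohaus789/g4f:latest
import requests

resp = requests.post(
    "http://localhost:1337/v1/chat/completions",  # OpenAI-style route
    json={
        "model": "gpt-4o-mini",  # which models work depends on the provider
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```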
 
  • Informative
  • Like
Reactions: EmptyBottle and noname223
heywey

Member
Aug 28, 2025
28
Another option is running an LLM on your own computer; nowadays even smaller models are capable enough for simple stuff. It won't be as smart or fast as the ones the big providers, erm, provide, but it has a few advantages: complete privacy, no guardrails/censorship (beyond what's baked into the models), always free, and no forced changes or removed functionality. If you have 16 GB of RAM, that's enough to run most models under 20B parameters, including OpenAI's own gpt-oss, which was released last month.

I'd highly recommend Alpaca if you happen to be on Linux; it makes setting everything up super easy. I don't know an alternative for Windows off the top of my head, but some searching led to https://chatboxai.app/ and https://openwebui.com/, which both look pretty good. All three use Ollama as a backend anyway. (I'm not sure if gpt4free connects to Ollama?)
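
If you'd rather skip the GUI entirely, you can also talk to the Ollama server directly over its local HTTP API. A rough sketch; the model name is just an example, pull whatever fits your RAM:

```python
# Sketch: query a local Ollama server on its default port (11434).
# Assumes you've already pulled a model, e.g.:  ollama pull llama3.2
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Why is the sky blue?"},
    stream=True,
    timeout=300,
)
for line in resp.iter_lines():  # Ollama streams one JSON object per line
    if line:
        print(json.loads(line).get("response", ""), end="", flush=True)
print()
```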

As far as external providers gimping their models' functionality in the name of alignment goes, it annoys me personally, but it's not unreasonable, all things considered. Clamping down on potentially harmful and/or fringe stuff was inevitable. I think it's important for open source models to keep growing and improving, because otherwise the only way to use this technology is if it's 1. profitable for the company, typically by hoovering up all your data, and 2. super duper legally safe, meaning no touching controversial content with a ten-foot pole. There's a place for free providers like that, but I really think the next big leap in AI will be when fully capable models can be run on hardware as weak as your phone. Like going from the age of mainframes to the Personal Computer era.
 
  • Like
  • Informative
Reactions: pthnrdnojvsc, EmptyBottle and noname223
Hvergelmir

Mage
May 5, 2024
560
I'm not sure if gpt4free connects to Ollama?
The purpose of gpt4free is primarily to access proprietary models via free providers. You cannot easily connect the frontend that comes with gpt4free to a local backend. You can, however, quite easily connect a frontend of your choice to gpt4free.
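
Any OpenAI-compatible client illustrates the idea. A sketch, assuming the port-1337 endpoint mentioned above; most frontends expose the same "custom base URL" setting:

```python
# Sketch: point a standard OpenAI-style client at local gpt4free instead
# of api.openai.com. The base URL is assumed from the container setup above.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1337/v1",
    api_key="unused",  # gpt4free generally doesn't need a real key
)
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # example name; availability varies by provider
    messages=[{"role": "user", "content": "Hi there"}],
)
print(reply.choices[0].message.content)
```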

I really think the next big leap in AI will be when fully capable models can be run on hardware as weak as your phone.
Do you see a realistic way for that to happen though, and how far into the future are we talking?

I personally think third-party providers with large server racks are the future. Given the vast performance requirements, I doubt local hosting will be able to compete, even less so on mobile.

If anything, there might be some distributed solution. I'm thinking thousands of consumer devices working as one network.
 
heywey

Member
Aug 28, 2025
28
The purpose of gpt4free is primarily to access proprietary models via free providers. You cannot easily connect the frontend that comes with gpt4free to a local backend. You can, however, quite easily connect a frontend of your choice to gpt4free.
Ah, then I might suggest OpenRouter as an alternative? It seems like a similar idea, and you don't need to install anything.
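
Its API is the usual OpenAI-compatible shape. A rough sketch from memory; you do need a (free) account and API key, and the model name below is just one example of the ":free"-suffixed models they list:

```python
# Sketch: call OpenRouter's OpenAI-compatible chat endpoint. The key is a
# placeholder and the model name is an example; see openrouter.ai for the
# current list of free models.
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_OPENROUTER_KEY"},  # placeholder
    json={
        "model": "meta-llama/llama-3.1-8b-instruct:free",  # example free model
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```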

Do you see a realistic way for that to happen though, and how far into the future are we talking?
Maybe 2-3 years? I mean, it depends on how we wanna define it: today you can run a model equivalent (if not superior) to the ChatGPT of a couple of years ago on a higher-end phone at a reasonable speed. There's also Apple Intelligence, which I believe does most of its processing locally and only offloads heavier stuff to Apple's servers.

The thing is, for most uses the bottleneck isn't really processing speed so much as RAM/VRAM. If a device can fit a model into its memory, it likely has a processor powerful enough to generate tokens faster than a human can read them. For things like coding or scraping data, speed is still a limiting factor, but it's enough for a lot of day-to-day stuff. And things are moving pretty quickly in the tiny-model space; just look at Qwen's progress or the potential of diffusion models.
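
To put rough numbers on the RAM point: the weights alone take about parameters × (bits per weight ÷ 8) bytes, before the KV cache and runtime overhead, which is why quantization matters so much:

```python
# Back-of-the-envelope memory estimate for model weights alone; KV cache
# and runtime overhead come on top, so treat these as lower bounds.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    # 1e9 params * (bits/8) bytes per param = params_billion * bits/8 GB
    return params_billion * bits_per_weight / 8

for bits in (16, 8, 4):  # fp16, int8, 4-bit quantization
    print(f"20B model @ {bits}-bit: ~{weight_gb(20, bits):.0f} GB")
# ~40 GB, ~20 GB, ~10 GB: a 4-bit ~20B model is what squeezes into a
# 16 GB machine, as mentioned in my earlier post.
```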

All that being said, I definitely agree that big server racks aren't going away. There will always be demand at the tail end of the bell curve as far as capability and speed go; not to mention that training models still takes a big ol' datacenter if you want it done this decade.

If anything, there might be some distributed solution. I'm thinking thousands of consumer devices working as one network.
It exists! https://github.com/bigscience-workshop/petals
It looks like development has kinda stalled, unfortunately, but I tried it a while back and it worked pretty well. I'd love it if something like that could take the place of the big providers.
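
For reference, client usage looked something like this when I tried it (paraphrased from memory of the repo's README, so the model name and API may well be out of date):

```python
# Sketch of Petals client usage, adapted from the project README as I
# remember it; the model name is whatever the public swarm was serving.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example from the README
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)  # runs on the swarm
print(tokenizer.decode(outputs[0]))
```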
 
  • Like
Reactions: pthnrdnojvsc and Hvergelmir
ConfettiSpaghetti

Member
Jul 7, 2025
31
It picks up keywords in the site's name and some of the posts and decides it shouldn't use this site because of them. I assume it's less that they are specifically concerned with this site and more that they implement strict, broad-sweeping rules to prevent any chance of it being held against them, or maybe for ethical reasons. TBF, you can probably work around it. I am not sure how it works, but can't you obtain the source code for some AI models and kind of "make" your own chatbot using it or something? (I don't know the technical terms or processes, so I'd do your own research if that interests you.) I know people do something like this to get around image-generation restrictions, but I am not sure if it's feasible or worth the effort for what you are doing.
 
ShipSeeksHarbour

Member
Sep 20, 2025
19
I don't directly know the answer to the problem, but part of why I joined this site is that I have a bad habit of talking about this stuff with chatbots for hours, and it doesn't actually help me; plus I'm just feeding it all my data, which I don't like. It's cos I'm desperate, but I think connecting with real people about it is so much better, like on this forum. It's part of why I joined.
 
womanactually

she/her 🏳️‍⚧️ help i am new
Sep 23, 2025
9
If anyone here uses AI chatbots, please stop using them, and tell your family and friends to stop using them too. They are like yes-men who say random jargon that sounds good but is essentially bullshit, and they are leading people to delusion and suicide. Here are some links,

 
