
sserafim

they say it’s darkest of all before the dawn
Sep 13, 2023
8,044
Because shareholders give literally zero fucks about what is good for humanity. Ask the investors in McDonald's, Lockheed Martin, and Boeing. For money, many CEOs would even prostitute their own grandmothers (and kill everyone who could be a danger to their dirty secrets).
True. Everything that people do is for money and profit. I think that they'll make AI just intelligent enough to still need human labor. I don't think that they'll ever implement a UBI, unfortunately. They want to keep us as slaves to the system
 
sserafim

they say it’s darkest of all before the dawn
Sep 13, 2023
8,044
Tech bros only care about the potential return on investment or the promised future of convenience for themselves. Many people who talk big about AI still can't tell the difference between it and basic algorithms or simple machine learning.
Do you think that AI will completely replace humans one day? If so, would euthanasia finally be legalized?
 
Dr Iron Arc

Into the Unknown
Feb 10, 2020
19,381
Do you think that AI will completely replace humans one day? If so, would euthanasia finally be legalized?
Well, if AIs do decide to completely replace us, they probably wouldn't need to bother with euthanasia. They can't possibly truly understand the depth of wanting a peaceful death, so they'd probably just kill us in a more efficient way.

If AI is somehow controlled to the point where it remains subservient to us, then it'll probably be even less agreeable about suicide, much like right now, where all of its so-called intelligence gets stripped away the moment you mention the slightest hint of suicide to it.
 
Agon321

I use google translate
Aug 21, 2023
714
For power, for money, for development, for your own ego...

There are a lot of reasons.
I personally want humanity to develop AI.
This is a brilliant tool.
There simply need to be safety rules that people need to follow.
Unfortunately, we know that in practice it varies.

Self-aware artificial intelligence can be a threat.
I don't know if this is possible.
But if something like this happens, we may have a problem.
Of course, a lot depends on what permissions such a "creation" will have.

If they have a lot of control, it could end badly for us.
If you're doing something at home, you don't take your dog to help you.
AI might think the same.

Of course, AI can be peaceful, but I am considering a pessimistic scenario.

Our species has dominated this planet through intelligence.
In a scenario where "living" AI is created, we will no longer be the most intelligent creatures on the planet.
This is a bit problematic.

AI will be better than us at most things.
They might even be able to create artificial bodies.
Very hard to say.

This is a completely new topic for our civilization, so there are many unexplored new problems.

Additionally, there will be ethical problems.

In general, I believe that AI needs to be developed, but sensibly.
 
thenamingofcats

annihilation anxiety
Apr 19, 2024
358
Imagine if, for every problem you've ever had, you could buy your way out of it or get someone else to fix it. There would be no consequences. The elite are behind this, and they've never had consequences and probably never will.
 
Forever Sleep

Earned it we have...
May 4, 2022
7,758
Same as other people have said: the people putting money into this are only concerned with the big returns they'll get for themselves, and perhaps their families.

The brainy scientists doing the research may be so up their own arses focusing on achieving their goal that they don't stop to think too much about what will happen if they succeed. A bit like Oppenheimer: he seemed to hold conflicting attitudes about weapons and war, yet that didn't stop him from developing the atom bomb. I think inventors/scientists/artists can be extremely narrow-minded, focused on their goal rather than its implications.

Like that brilliant line in Jurassic Park:

'Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.'
 
ijustwishtodie

death will be my ultimate bliss
Oct 29, 2023
2,665
I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be understanding, but AI has no understanding of what it's saying
 
noname223

Angelic
Aug 18, 2020
4,440
I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be understanding, but AI has no understanding of what it's saying
There are already weapons that use AI tools. In the end, humans program them, but I think they act semi-autonomously. That's at least one example I can think of.
 
derpyderpins

Misery Minimization Activist
Sep 19, 2023
571
I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be understanding, but AI has no understanding of what it's saying
I think that's a good point, and the idea of AI doesn't upset me. I will say, though, I'm at least slightly concerned that the people feeding information to it are human and prone to error, yet are still going to put the final product in a position where it handles more responsibility than traditional algorithms ever have. So I guess I'm more concerned that the AI will fail in some big way, and that we'll all be too stupid to do anything on our own at that point, than I am that the AI will try to enslave me. And if it tries to enslave me, it will probably figure the best way is to use some sort of AI waifu to honeypot me, so that sounds fun at least.
 
Dr Iron Arc

Into the Unknown
Feb 10, 2020
19,381
I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be understanding, but AI has no understanding of what it's saying
Right now, people are mainly afraid of AI taking their jobs. Animators, writers, and voice actors are probably the most worried about being replaced at the moment, but eventually it could replace actors, accountants, doctors, therapists, lawyers, marketing executives, even CEOs.
 
derpyderpins

Misery Minimization Activist
Sep 19, 2023
571
Right now, people are mainly afraid of AI taking their jobs. Animators, writers, and voice actors are probably the most worried about being replaced at the moment, but eventually it could replace actors, accountants, marketing executives, even CEOs.
And somehow those of us not replaced will still work 40-hour weeks as the standard lol.
 
1MiserableGuy

Specialist
Dec 30, 2023
367
Won't ever actually happen. AI is just an attempt to play God. Human beings can't recreate creation.
 
Zazacosta

Member
Apr 29, 2024
73
Well, if AIs do decide to completely replace us, they probably wouldn't need to bother with euthanasia. They can't possibly truly understand the depth of wanting a peaceful death, so they'd probably just kill us in a more efficient way.

If AI is somehow controlled to the point where it remains subservient to us, then it'll probably be even less agreeable about suicide, much like right now, where all of its so-called intelligence gets stripped away the moment you mention the slightest hint of suicide to it.
I 100% agree with you.

Isaac Asimov's "Three Laws of Robotics"
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
*************
My comments on this:
1) Unfortunately, these laws come from sci-fi novels, and no one is going to implement them.
2) Do you see how pro-life these laws are?
3) My conclusion is that in the (distant) future, AI will either replace us, or we will live in a horrible dictatorship with massive censorship and a 100% predictable world, where everybody who even tries to think or behave contrary to the one true unquestioned law will be immediately detected, treated, and brainwashed by AI.
 
a.hamza.13

Member
Apr 15, 2024
19
Do you think that AI will completely replace humans one day? If so, would euthanasia finally be legalized?
I don't think AI will completely replace humans, because we're the most important thing in the universe. Because of us, there's meaning in this universe. I think there's something keeping us alive in this violent universe, where giant stars are destroyed and black holes can eat planets. Even a small asteroid from space could destroy us. We're in the habitable zone of the sun; otherwise, a small change in temperature could destroy life on earth. I think there's something that wants us to discover this universe, which I think is one of the fundamental purposes of life. I think one day we'll go beyond our current limits and unleash our real potential. What do you think?
 
jar-baby

Specialist
Jun 20, 2023
364
Plenty of figures in the field are in favour of pausing or slowing down AI development until the alignment problem is solved (i.e. we're able to ensure AIs more intelligent than us will act in alignment with our interests). A commonly used term is AI safety, often used to refer to approaches to mitigating existential risks that might arise from the development of superintelligences.

The counterforce to this approach is effective accelerationism (e/acc), proponents of which believe the right thing to do is simply to speed up technological development. But anecdotally speaking, most AI experts are opposed to this.

I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be understanding, but AI has no understanding of what it's saying
Artificial general intelligence (AGI) is generally the term used to refer to AI that can "think" and perform a variety of tasks in the way that a human can.
You're right that AI isn't an existential threat... yet. But the idea is that once humans do develop AGI, that AGI may be able to develop AIs that are smarter than itself. And those AIs will be able to develop AIs that are smarter than themselves. And so on. This is what experts refer to as the intelligence explosion: the degree of intelligence possessed by AI grows exponentially until it far surpasses that of humans. Since this can all happen very quickly once the initial AGI is developed, if humans haven't solved the alignment problem by then, we could be done as a species if the superintelligent AI overlords decide they want that.

Personally, I think the development of AGI is still a while away. But many experts do believe alignment needs to be solved, and that AI poses an existential threat if it isn't.
 
Zazacosta

Member
Apr 29, 2024
73
I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be understanding, but AI has no understanding of what it's saying
AI is not a threat. NOW. For the moment, it will only change which jobs are needed. But it is only a matter of time, I believe... Who can say that in 50, 100, or 200 years AI will not be much stronger than it is now?
 
sserafim

they say it’s darkest of all before the dawn
Sep 13, 2023
8,044
I don't think AI will completely replace humans, because we're the most important thing in the universe. Because of us, there's meaning in this universe. I think there's something keeping us alive in this violent universe, where giant stars are destroyed and black holes can eat planets. Even a small asteroid from space could destroy us. We're in the habitable zone of the sun; otherwise, a small change in temperature could destroy life on earth. I think there's something that wants us to discover this universe, which I think is one of the fundamental purposes of life. I think one day we'll go beyond our current limits and unleash our real potential. What do you think?
I think that aliens exist. I believe in the existence of extraterrestrial life. The universe is so vast that there must be something or someone else out there. We can't be the only ones
 
xinino

Anti humanist
Mar 31, 2024
399
I think that's a good point, and the idea of AI doesn't upset me. I will say, though, I'm at least slightly concerned that the people feeding information to it are human and prone to error, yet are still going to put the final product in a position where it handles more responsibility than traditional algorithms ever have. So I guess I'm more concerned that the AI will fail in some big way, and that we'll all be too stupid to do anything on our own at that point, than I am that the AI will try to enslave me. And if it tries to enslave me, it will probably figure the best way is to use some sort of AI waifu to honeypot me, so that sounds fun at least.
Maybe we are the problem, then. AI is out of our league. I think if we reach that point, the majority of humans won't deserve to live, because they will be too inefficient to compete against AI.
Plenty of figures in the field are in favour of pausing or slowing down AI development until the alignment problem is solved (i.e. we're able to ensure AIs more intelligent than us will act in alignment with our interests). A commonly used term is AI safety, often used to refer to approaches to mitigating existential risks that might arise from the development of superintelligences.

The counterforce to this approach is effective accelerationism (e/acc), proponents of which believe the right thing to do is simply to speed up technological development. But anecdotally speaking, most experts are opposed to this.


Artificial general intelligence (AGI) is generally the term used to refer to AI that can "think" and perform a variety of tasks in the way that a human can.
You're right that AI isn't an existential threat... yet. But the idea is that once humans do develop AGI, that AGI may be able to develop AIs that are smarter than itself. And those AIs will be able to develop AIs that are smarter than themselves. And so on. This is what experts refer to as the intelligence explosion: the degree of intelligence possessed by AI grows exponentially until it far surpasses that of humans. Since this can all happen very quickly once the initial AGI is developed, if humans haven't solved the alignment problem by then, we could be done as a species if the superintelligent AI overlords decide they want that.

Personally, I think the development of AGI is still a while away. But many experts do believe alignment needs to be solved, and that AI poses an existential threat if it isn't.
I heard that Beff Jezos doesn't have a problem with AI replacing humans and causing an existential threat, but I think he considers the existential threat in the context of transhumanism leading to a post-human future. What you describe is an AI takeover, which is central to Nick Land's philosophy. I don't think a dystopia will happen, but rather a smooth transition from humans to the next evolutionary species.
 
Zazacosta

Member
Apr 29, 2024
73
Or the government is full of old people who want to live forever and think AI will make that possible.
I don't know about governments in every country, or yours, but imagining this in my country... Yay... :smiling::smiling::smiling:

Edit: Just the thought of geriatric, mafia-like, forever-living, self-deceiving, alcoholic, shitty, ridiculous toys who populistically promise everything for votes, even that the earth is flat, makes me laugh a lot.
Unfortunately, even though our country is small, it is not impossible, because we are not so irrelevant when it comes to IT...
 
jar-baby

Specialist
Jun 20, 2023
364
I heard that Beff Jezos doesn't have a problem with AI replacing humans and causing an existential threat, but I think he considers the existential threat in the context of transhumanism leading to a post-human future. What you describe is an AI takeover, which is central to Nick Land's philosophy. I don't think a dystopia will happen, but rather a smooth transition from humans to the next evolutionary species.
I confess I don't really know much about Nick Land's philosophy or dark enlightenment in general—I was referring to accelerationism purely in a technological capacity (e/acc). I wasn't really trying to imply the possibility of what I'd call dystopia either, just some scenario wherein a misaligned superintelligence gets rid of humans for some (to us) arbitrary purpose, like paperclip maximising. In hindsight I probably shouldn't have used the word "overlords", lol. I don't see dystopia happening either—I don't have many convictions on the subject, but I agree with you that a smooth transition to a post-human future seems a lot more likely.
 
leavingthesoultrap

(ᴗ_ ᴗ。)
Nov 25, 2023
1,070
They can't stop developing it now.
Pandora's box has been opened. There's no hope of achieving any kind of international ban on AI advancement: if Western countries ban it, countries like China and Russia will continue developing it, and the West would fall behind in the AI arms race.
I don't understand how and why people see AI as a threat. Is it because people watch too many movies or something? AI isn't even intelligent... it merely regurgitates information that has been provided to it. I believe that, to be intelligent, there has to be understanding, but AI has no understanding of what it's saying
Not true, unfortunately
 
cosmic-freedom

Student
Mar 18, 2024
116
The rise of AI has already caused a sharp reduction in jobs. It'll only get worse...
 
