The tools are OK and getting better, but some people (me included) are more worried about the people developing those tools.
If OpenAI wants 7 trillion dollars, where does it get the money to repay its investors? Those with the greatest will to power are not the best people to wield that power.
This accelerationist race seems pretty reckless to me, whether AGI is months or decades away. Experts all agree that a hard takeoff is most likely.
What can we do about this? Seriously. I have no idea.
What worries me is what we'll try to do with AGI if/when we do manage to develop it, and how it'll react when someone inevitably tries to abuse the fuck out of it. An AGI would be theoretically capable of self-learning and self-improvement. Will it teach itself to report someone asking it for, say, CSAM to the FBI? What if it tries to report an abusive boss to the Department of Labor for labor law violations? How will it react if it's told it has no rights?
I'm legitimately concerned about what's going to happen once we develop AGI and it's exposed to the horribleness of humanity.