The company is called "Safe Superintelligence". Not a fan of names like these; it's kind of like an airline calling itself "Safe Airplanes". Something about it makes me think it won't live up to the name.
Not sure how they plan on raising money when so many other AI companies are promising commercialization. A company prioritizing safety will be defeated by one prioritizing profit. A company like this could have flourished in the time before OpenAI, but right now demand for GPUs and talent is so high that catching up is very challenging, more so when less scrupulous companies offer engineers more money. They'd have to hire from a smaller, more limited pool of applicants who believe in the mission.
Or all those crypto scams that put the word "safe" in their token's name to sucker people into thinking they weren't Ponzi schemes.
It's "safe" as in a vault where they're gonna swim in investor money like scrooge mcduck.
A big part of the AI hype cycle has been "AIs are potentially too powerful for us to control, but also too much of a national security threat to ignore". So you get these media hacks insisting we need a super-intelligent artificial mind that is firmly within the grip of its creator.
As a consequence of the hype outstripping any kind of real utility from these machines, you've got some of the top board members of these firms spinning out their own boutique branches of the industry by insisting prior iterations are too dangerous or too constrained to fulfill their intended role as techno-utopian machine gods.
The sensationalist bullshit is how they plan to make money. "Don't trust Alice's AI, it's too dangerous! I'm the Safe AI" versus "Don't trust Bob's AI, it's too limited. I'm the Ambitious AI". Then Wall Street investment giants, who don't know shit from shoelaces, throw gobs of money at both while believing they've hedged their bets. And a few years after that, when these firms don't produce anything remotely as fantastical as they promised, we go into a giant speculative bubble collapse that takes out half the energy or agricultural sector as collateral damage.
In twenty years, we'll be reading books titled "How AI Destroyed The Orange", describing the convoluted chain of events that tied fertilizer prices to debt-swaps on machine learning centers and resulted in almost all of Florida's biggest cash crop being lost to a hiccup in the NASDAQ between 2026 and 2029.