
Their "manifesto":

Superintelligence is within reach.

Building safe superintelligence (SSI) is the most important technical problem of our time.

We have started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence.

It's called Safe Superintelligence Inc.

SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

This way, we can scale in peace.

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.

We are assembling a lean, cracked team of the world's best engineers and researchers dedicated to focusing on SSI and nothing else.

If that's you, we offer an opportunity to do your life's work and help solve the most important technical challenge of our age.

Now is the time. Join us.

Ilya Sutskever, Daniel Gross, Daniel Levy

[–] webghost0101@sopuli.xyz 2 points 5 months ago

No need to clarify what you meant about the oligarchs; there's barely any exaggeration there. "Ghouls" is quite accurate.

Considering the context of the worst-case scenario (a hostile takeover by an artificial superintelligence), which honestly is indistinguishable from generic end-of-the-world doomerism but is very much alive in Sutskever's circles, I believe "safe AI" consists of the very low bar of "humanity survives while AGI improves standards of living worldwide." Of course, I'm reading between the lines here, based on previously acquired information.

One could argue that if ASI is created, the possibilities become very black and white:

  • ASI is indifferent to human beings and pursues its own goals, regardless of the consequences for the human race. It could even find a way off the planet and simply abandon us.

  • ASI is misaligned with humanity and we become merely a resource, treated no differently than we have historically treated animals and plants.

  • ASI is aligned with humanity and it has the best intentions for our future.

In any of these scenarios, it would be impossible to calculate its intentions, because by definition it is more intelligent than all of us. It's possible that some things we understand as moral may be immoral from a better-informed perspective, and vice versa.

The scary thing is that we won't be able to tell whether it's malicious and pretending to be good, or benevolent and trying to fix us. Would it respect consent if, say, a racist refuses therapy?

Of course, we could just as likely hit a roadblock next week, and the whole hype could die out for another ten years.