this post was submitted on 04 May 2026
67 points (92.4% liked)

Technology

top 10 comments
[–] Iconoclast@feddit.uk 2 points 31 minutes ago* (last edited 29 minutes ago)

I worry about AI itself, not the companies developing it. Back when I started worrying about it 12 years ago, influenced by Stuart Russell and Nick Bostrom, I was expecting it to take at least 50 years before we had AI resembling what we have now, so suffice it to say that the fact that we're already here doesn't exactly ease my worry.

I've yet to hear a single convincing argument against the idea that even attempting to create something more intelligent than us is a really bad idea - very likely to be our last bad idea ever. Whether Mythos is actually as capable as Anthropic claims is beside the point for me. Even if it's not, it's only a matter of time until someone creates one that is.

[–] andallthat@lemmy.world 2 points 1 hour ago* (last edited 47 minutes ago)

they want to create urgency and FOMO. That way:

  1. investors throw all their money at the shiny new, incredibly fast-growing tech before they can stop and think about trivial things like how much it costs or whether it's actually doing anything useful

  2. AI companies can continuously flood the zone with announcements of incredible new feats of intelligence by their LLMs. By the time studies come out showing that those feats were not so impressive after all, they have already released two newer, more powerful models capable of even more impressive (real or invented) feats.

  3. AI companies can try positioning themselves as the "good, ethical guys" that you have to root for (and give all your money to), because the alternative is for the bad, unethical guys to create this AGI with no guardrails that will destroy the world. It's "we can't stop, because if we stop someone else will do it."

  4. this kind of pressure works for governments too. We can't let China/the US/Iran/Russia (pick your specific adversary) control this potentially destructive technology first!

  5. things that scare us regular humans make the rich and powerful salivate. We are scared of losing our jobs; they are happy to cut personnel costs (see... well, just about everyone in Tech). We are scared AI can create a surveillance state; they want to sell surveillance tech to companies and governments (see Palantir). "This tech makes regular people afraid" is music to the ears of the 0.1%.

[–] melsaskca@lemmy.ca 8 points 4 hours ago

Because they are fascist too?

[–] inari@piefed.zip 45 points 6 hours ago (3 children)

Here's one theory. According to critics, it benefits AI companies to keep you fixated on apocalypse because it distracts from the very real damage they're already doing to the world.

I don't think that's really it.

I think they make these grandiose claims just to hype their product up for investors, so people won't focus on how unreliable and inaccurate these LLMs are.

[–] DeckPacker@piefed.social 4 points 1 hour ago

I think both statements are true at the same time

[–] kinsnik@lemmy.world 12 points 4 hours ago (1 children)

Yeah, it's so people will think the AI companies are seeing the next, not-yet-public versions and are scared. They must be so powerful, right?

Altman has been claiming ChatGPT made him feel dumb since 4.5

[–] fullsquare@awful.systems 7 points 3 hours ago

Altman has been claiming ChatGPT made him feel dumb since 4.5

perfectly believable tbh

[–] XLE@piefed.social 7 points 4 hours ago

Why not both?

There's an entire cottage industry around "AI Safety", and it's entirely accurate to say it focuses only on the apocalyptic, to the detriment of the real harms.

They've even been caught on camera distracting politicians...

https://en.wikipedia.org/wiki/AI_Safety_Summit_2023

[–] Grimy@lemmy.world 4 points 3 hours ago* (last edited 3 hours ago)

It's regulatory capture. They scream about how it's super dangerous for three years, the politicians get lobbied so the public is "protected", and then open source models (especially the evil Chinese ones) get banned while high-end models are only available through subscription services.

[–] ChicoSuave@lemmy.world 2 points 3 hours ago

It's part of the sales pitch to turn compute into a utility and rate limit people from technology unless they are a subscription paying member of the herd.