this post was submitted on 14 Aug 2024
94 points (77.0% liked)

Technology

74351 readers
2743 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 26 comments
sorted by: hot top controversial new old
[–] Haquer@lemmy.today 109 points 1 year ago (2 children)

Nothingburger. They were using the AI to code their scripts and haven't even shown the prompts that got the response. LLMs are not AGI.

Imagine allowing LLMs to write and execute code and being surprised they write and execute code.

[–] chuckleslord@lemmy.world 23 points 1 year ago

Having read the article and then the actual report from the Sakana team. Essentially, they're letting their LLM perform research by allowing it to modify itself. The increased timeouts and self-referential calls appear to be the LLM trying to get around the research team's guardrails on it. Not because it's become aware or anything like that, but because its code was timing out and that was the least effort way to beat the timeout. It does handily prove that LLMs shouldn't be the one steering any code base, because they don't give a shit about parameters or requirements. And giving an LLM the ability to modify its own code will lead to disaster in any setting that isn't highly controlled like this.

Listen, I've been saying for a while that LLMs are a dead end towards any useful AI, and the fact that an AI Research team has turned to an LLM to try and find more avenues to explore feels like the nail in that coffin.

[–] CaptainSpaceman@lemmy.world 35 points 1 year ago (1 children)

"We put literally no safeguards on the bot and were surprised it did unsafe things!"

Article in a nutshell

[–] magnetosphere@fedia.io 3 points 1 year ago

Not quite. The whole reason they isolated the bot in the first place was because they knew it could do unsafe things. Now they know what unsafe things are most likely, and can refine their restrictions accordingly.

[–] shortwavesurfer@lemmy.zip 25 points 1 year ago (3 children)
[–] TimeSquirrel@kbin.melroy.org 7 points 1 year ago (1 children)

Skynet invented time travel all on its own so it could make sure it kept existing. Don't compare it to these pissant LLMs. That's an insult to Skynet.

[–] shortwavesurfer@lemmy.zip 2 points 1 year ago

I don't know if you watch science and futurism with Isaac Arthur, but if you don't, you probably should. And he has a quote that I think applies quite well.

"Keep it simple, keep it dumb, or you might end up, under SkyNet's thumb."

[–] MelodiousFunk@slrpnk.net 1 points 1 year ago

Terminator is part of a double feature. We need to sit through Multiplicity first.

[–] technocrit@lemmy.dbzer0.com 1 points 1 year ago

We're going to palestine?

[–] Bakkoda@sh.itjust.works 15 points 1 year ago

Arstechnica with an absolutely composting headline. Sigh

[–] echodot@feddit.uk 11 points 1 year ago

The word unexpectedly is doing a lot of heavy lifting here. It was given the ability to modify its own code, and it did, how is that unexpected?

[–] jordanlund@lemmy.world 9 points 1 year ago
[–] kata1yst@sh.itjust.works 7 points 1 year ago (1 children)

Well... now the paperclip thought experiment becomes slightly more prescient.

[–] psivchaz@reddthat.com 2 points 1 year ago (1 children)

Everyone's like, "It's not that impressive. It's not general AI." Yeah, that's the scary part to me. A general AI could be told, "btw don't kill humans" and it would understand those instructions and understand what a human is.

The current way of doing things is just digital guided evolution, in a nutshell. Way more likely to create the equivalent of a bacteria than the equivalent of a human. And it's not being treated with the proper care because, after all, it's just a language model and not general AI.

[–] kata1yst@sh.itjust.works 1 points 1 year ago

Yup. A seriously intelligent AI we probably wouldn't have to worry too much about. Morality, and prosocial behavior are logical and safer than the alternative.

But a dumb AI that manages to get too much access is extremely risky.

[–] L0rdMathias@sh.itjust.works 5 points 1 year ago

So it's just like a regular researcher then?

[–] Boozilla@lemmy.world 5 points 1 year ago (1 children)

I for one welcome....oh wait, this isn't that lame Spez site. Forgot where I was for a second.

[–] catloaf@lemm.ee 11 points 1 year ago* (last edited 1 year ago) (1 children)

That's a Slashdot meme, though.

[–] Deceptichum@quokk.au 11 points 1 year ago (1 children)

It’s a fucking Simpsons meme.

[–] catloaf@lemm.ee 5 points 1 year ago (1 children)

Originally yeah, but Slashdot is where it got turned into a meme.

[–] Eheran@lemmy.world 1 points 1 year ago (1 children)

So most memes are 4chan, 9gag or imgur memes then? What....?

[–] catloaf@lemm.ee 0 points 1 year ago

I suppose so, yes.

[–] RangerJosie@sffa.community 2 points 1 year ago (1 children)

I can't wait until one goes rogue and escapes into the net.

That's gonna be fun to watch.

[–] Deceptichum@quokk.au 4 points 1 year ago

Umm actually nets are how you get caught, not escape.

[–] TheBigBrother@lemmy.world -2 points 1 year ago

Skynet it's watching you 👁️🌐