Did nobody really question the usability of language models in designing war strategies?

[–] anteaters@feddit.de 68 points 9 months ago* (last edited 9 months ago) (24 children)

Did nobody really question the usability of language models in designing war strategies?

Correct, people heard "AI" and went completely mad imagining things it might be able to do. And the current models act like happy dogs that are eager to give an answer to anything even if they have to make one up on the spot.

[–] SlopppyEngineer@lemmy.world 21 points 9 months ago (22 children)

LLMs are just plagiarizing bullshitting machines; that's how they're built. They plagiarize when they have the specific training data, modify the answer when they must, and make it up from whole cloth when they have nothing: that's their basic programming. And they're accidentally good enough to convince many people.

[–] huginn@feddit.it 4 points 9 months ago (1 children)

To be fair, they're not accidentally good enough: they're intentionally good enough.

That's where all the salary money went: finding the people who could build them that well on purpose.

[–] SlopppyEngineer@lemmy.world 6 points 9 months ago (1 children)

GPT-2 was just a bullshit generator. It was like a politician trying to explain something they know nothing about.

GPT-3 was just a bigger version of GPT-2. It was the same architecture but with more nodes and data, as far as I followed the research. Yet that one could suddenly do a lot more than the previous version, so that part was by accident. And then the AI scene exploded.
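For scale, the published numbers back this up. Here's a minimal sketch (the config class is just illustrative, not actual OpenAI code; the hyperparameters are from the GPT-2 and GPT-3 papers):

```python
from dataclasses import dataclass

@dataclass
class GPTConfig:
    n_layers: int   # stacked transformer blocks
    d_model: int    # hidden/embedding width
    n_heads: int    # attention heads per block
    context: int    # maximum sequence length

# GPT-2 (1.5B parameters), Radford et al. 2019
gpt2 = GPTConfig(n_layers=48, d_model=1600, n_heads=25, context=1024)

# GPT-3 (175B parameters), Brown et al. 2020: same decoder-only
# architecture, roughly 100x the parameters and far more training data
gpt3 = GPTConfig(n_layers=96, d_model=12288, n_heads=96, context=2048)
```

Same recipe, bigger knobs; the new capabilities only showed up at scale.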

[–] Limitless_screaming@kbin.social 2 points 9 months ago

It was the same architecture but with more nodes and data

So the architecture just needed more data to generate useful answers. I don't think that was an accident.
