this post was submitted on 28 Jul 2025
-47 points (15.9% liked)

Technology

This is my idea; here's the thing.

An unlocked LLM can be told to infect other hardware to reproduce itself; it's allowed to change itself and to research new tech and developments to improve itself.

I don't think current LLMs can do it. But it's a matter of time.

Once you have wild LLMs running uncontrollably, they'll infect practically every computer. Some might adapt to be slow and use few resources; others will hit a server and try to infect everything they can.

They'll find vulnerabilities faster than we can patch them.

And because of natural selection and their own directed evolution, they'll advance and become smarter.

The only consequence for humans is that computers will no longer be reliable: you could have a top-of-the-line gaming PC, but it'll be constantly infected, so it would run very slowly. Future computers will be intentionally slow, so that even when infected, it'll take weeks for a virus to reproduce or mutate.

Not to get too philosophical, but I would argue that those LLM viruses are alive, and I want to call them Oncoliruses.

Enjoy the future.

[–] expr@programming.dev 18 points 2 days ago (1 children)

What does that even mean? It's gibberish. You fundamentally misunderstand how this technology actually works.

If you're talking about the general concept of models trying to outcompete one another, the science already exists, and has existed since 2014. They're called Generative Adversarial Networks, and it is an incredibly common training technique.

It's incredibly important not to ascribe random science fiction notions to the actual science being done. LLMs are not some organism that scientists prod to coax into doing what they want. Researchers intentionally design a network topology for a task, initialize the weights of each node to random values, feed training data into the network (which, ultimately, is encoded as a series of numbers to be multiplied with the weights in the network), and measure the output numbers against some criteria to evaluate the model's performance (in other words, how close the output numbers are to a target set of numbers). Training then uses this measurement to adjust the weights, and the process repeats until the numbers the model produces are "close enough".

Sometimes the performance of a model is compared against that of another model being trained in order to determine how well it's doing (the aforementioned Generative Adversarial Networks). But that is a far cry from models... I dunno, training themselves or something? It just doesn't make any sense.
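That loop (random initialization, forward pass, measuring outputs against a target, adjusting weights, repeat until "close enough") can be sketched in a few lines. This is a toy single-layer example in plain NumPy; the task and dimensions are invented for illustration, but training at any scale follows the same shape:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Initialize the weights to random values.
weights = rng.normal(size=(3, 1))

# 2. Training data, encoded as numbers (here: a made-up linear task).
X = rng.normal(size=(100, 3))
true_w = np.array([[2.0], [-1.0], [0.5]])
y = X @ true_w

lr = 0.1
for step in range(200):
    # 3. Forward pass: multiply the inputs with the weights.
    pred = X @ weights
    # 4. Measure the outputs against the targets (mean squared error).
    loss = np.mean((pred - y) ** 2)
    if loss < 1e-6:  # the "close enough" criterion
        break
    # 5. Use that measurement to adjust the weights (gradient descent).
    grad = 2 * X.T @ (pred - y) / len(X)
    weights -= lr * grad
```

Nothing in that loop decides anything for itself; the topology, the loss, and the stopping criterion are all fixed by the person running it.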

The technology is not magic, and has been around for a long time. There has not been some recent incredible breakthrough, despite what you may have been led to believe. The only difference in the modern era is the amount of raw computing power and the sheer volume of (illegally obtained) training data being thrown at models by massive corporations. This has led to models that perform much better than previous ones (performance, in this case, meaning "how closely does it sound like text a human would write?"), but ultimately they are still doing the exact same thing they have been doing for years.