this post was submitted on 09 Aug 2025
943 points (99.1% liked)

Technology
[–] j4k3@lemmy.world 26 points 2 months ago* (last edited 2 months ago) (16 children)

It is not the tool; it is the lazy, stupid person who created the implementation. The same stupidity holds for people who run word filtering in conventional code. AI is just an extra set of eyes; it is not absolute. Giving it any kind of unchecked authority is insane. The administrators who implemented this should be the target of everyone's anger.

The insane rhetoric around AI is a political and commercial campaign by Altman and proprietary AI companies looking to become a monopoly. It is a Kremlin-scale misinformation campaign that has been extremely successful at roping in the dopes. Don't be a dope.

This situation with AI tools is exactly the same as with every past scapegoated tool. I can create undetectable deepfakes in GIMP or Photoshop. If I do so with the intent to harm, or out of grossly irresponsible stupidity, that is my fault and not the tool's. Accessibility of the tool is irrelevant. Those dumb enough to blame the tool are the convenient idiot pawns of the worst humans alive right now. Blame the idiots in leadership positions who use the tools without morals or ethics, while refusing to listen to these same people's spurious dichotomy designed to create a monopoly. They prey on conservative ignorance rooted in tribalism and dogma, which naturally rejects everything unfamiliar and new in life. That is evolutionary behavior and a required mechanism for survival in the natural world. Some will always scatter across the spectrum of possibilities, but the center majority is stupid and easily influenced in ways that enable tyrannical hegemony.

AI is not some panacea. It is a new, useful tool. Absent-minded stupidity is leading to the same kind of dystopian indifference that led to the "free internet," which has destroyed democracy and is the direct cause of most present-day political and social issues, by normalizing digital slavery: ownership over a part of your person, for sale, exploitation, and manipulation without your knowledge or consent.

I only say this because I care about you, digital neighbor. I know it is useless to argue against dogma, but this is the fulcrum of a dark dystopian future that populist dogma is welcoming with open arms of ignorance, just like those who said the digital world was a meaningless novelty 30 years ago.

[–] verdigris@lemmy.ml 4 points 2 months ago* (last edited 2 months ago) (11 children)

You seem to be handwaving all concerns about the actual tech, but I think the fact that "training" is literally just plagiarism, together with the absolutely bonkers energy costs of doing it, squarely positions LLMs as doing more harm than good in most cases.

The innocent tech here is the concept of the neural net itself, but unless nets are trained on a constrained corpus of data and then used to analyze that or analogous data in a responsible and limited fashion, I think the practice sits somewhere on a spectrum between "irresponsible" and "actually evil".

[–] SugarCatDestroyer@lemmy.world 4 points 2 months ago (5 children)

If the world is ruled by psychopaths who seek absolute power for the sake of even more power, then the very existence of such technologies will lead to very sad consequences, perhaps even to slavery. Have you heard of technofeudalism?

[–] verdigris@lemmy.ml 2 points 2 months ago* (last edited 2 months ago) (2 children)

Okay, sure, but in many cases the tech in question is actually useful for lots of things besides repression. I don't think that's the case with LLMs. They have a tiny bit of actual usefulness that is completely overshadowed by the insane skyscrapers of hype and lies built up around their "capabilities".

With "AI" I don't see any reason to go through such gymnastics separating bad actors from neutral tech. The value in the tech is nonexistent for anyone who isn't either a researcher dealing with impractically large and unwieldy datasets, or, of course, a grifter looking to profit off of bigger idiots than themselves. It has never been and will never be a useful tool for the average person, so why defend it?

[–] SugarCatDestroyer@lemmy.world 1 points 2 months ago

There's nothing to defend. Tell me, would you defend something that threatens you and deprives you of the ability to create, making art unnecessary? No, you would go and kill it before the bastard has grown up. What's the point of defending a bullet that will kill you? Are you crazy?

[–] a_wild_mimic_appears@lemmy.dbzer0.com -1 points 2 months ago (1 children)

I am an average person, and my GPU is running a chatbot that is currently giving me a course in regular expressions. My GPU also generates images for me from time to time when I need an image, because I am crappy at drawing. There are a lot of uses for this technology.

[–] verdigris@lemmy.ml 1 points 2 months ago (1 children)

Okay so you could have just looked up one of dozens of resources on regex. The images you "need" are likely bad copies of images that already exist, or they're weird collages of copied subject matter.

My point isn't that there's nothing they can do at all; it's that nothing they can do is worth the energy cost. You're spending tons of energy to chew up information already on the web and have it vomited back to you in a slightly different form, when you could have just looked up the information directly. It doesn't save time, because you have to double-check everything. The images are also plagiarized, and you could be paying an artist if they're something important, or improving your artistic abilities if they aren't. I struggle to think of many cases where one of those options is unfeasible; it's just the "easy" way out (because the energy costs are obfuscated) to have a machine crunch up some existing art to get an approximation of what you want.

Regarding energy use, see my other reply. It's like scolding people for running their microwave 10 seconds too long; watching 2 hours of Netflix is a lot worse. Go read up here
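The microwave comparison above is easy to check with back-of-envelope arithmetic. All figures below are assumed ballpark numbers I picked for illustration (per-query cost, appliance wattage, streaming footprint), not measurements from this thread:

```python
# Rough energy comparison in watt-hours (Wh).
# Every constant here is an assumption for illustration only.
WH_PER_LLM_QUERY = 3.0      # assumed ~3 Wh per chatbot query
MICROWAVE_WATTS = 1000      # assumed 1 kW microwave
NETFLIX_WH_PER_HOUR = 80    # assumed device + network + datacenter

# 10 extra seconds of microwave: watts * seconds / 3600 -> Wh
microwave_10s_wh = MICROWAVE_WATTS * 10 / 3600  # about 2.8 Wh

# Two hours of streaming
netflix_2h_wh = NETFLIX_WH_PER_HOUR * 2  # 160 Wh

print(f"One LLM query:      {WH_PER_LLM_QUERY:.1f} Wh")
print(f"Microwave for 10 s: {microwave_10s_wh:.1f} Wh")
print(f"2 h of streaming:   {netflix_2h_wh:.1f} Wh")
```

Under these assumptions, a single query is on the order of 10 extra seconds of microwave use, and two hours of streaming dwarfs both.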
