It probably doesn't matter from a popular-perception standpoint. The talking point that AI burns massive amounts of coal for each deepfake generated is now deeply ingrained; it'll be brought up regularly for years after it's no longer true.
FaceDeer
If someone wants to pay me to upvote them I'm open to negotiation.
A lot of people are keen to hear that AI is bad, though, so the clicks go through on articles like this anyway.
img2img is not "training" the model. It's a completely different process.
You realize that those "billions of dollars" have actually resulted in a solution to this? "Model collapse" has been known about for a long time and further research figured out how to avoid it. Modern LLMs actually turn out better when they're trained on well-crafted and well-curated synthetic data.
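To make the "well-crafted and well-curated" part concrete, here's a minimal sketch of the curation idea: machine-generated training examples are only kept if they pass an automatic check. The generator and verifier below are toy stand-ins I've made up for illustration, not any lab's actual pipeline.

```python
# Toy sketch of curating synthetic data before training on it:
# generate candidates, verify them automatically, keep only the good ones.

def generate_candidates() -> list[dict]:
    """Stand-in for a model generating synthetic Q/A pairs."""
    return [
        {"question": "2 + 2", "answer": "4"},
        {"question": "3 * 5", "answer": "16"},   # wrong; curation should drop it
        {"question": "10 - 7", "answer": "3"},
    ]

def verify(example: dict) -> bool:
    """Curation step: keep only pairs whose answer checks out.
    Arithmetic can be verified exactly; real pipelines use graders,
    unit tests, or human review instead."""
    return str(eval(example["question"])) == example["answer"]

curated = [ex for ex in generate_candidates() if verify(ex)]
# 'curated' holds only the verified pairs, ready to go into a training set.
```

The point is that the feedback loop people worry about ("model trains on its own garbage") is broken as soon as a filter sits between generation and training.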
Honestly, everyone seems to assume that machine learning researchers are simpletons who've never used a photocopier before.
Seems like lemmy.ml is really collapsing in on itself. Overall not good for the general health of the fediverse.
I'd argue that a biased overly-centralized instance like that collapsing in on itself is good for the general health of the Fediverse.
There needs to be some kind of accountability/redress if open and free communities are going to be a long-term project.
The redress is having lots of servers to switch to, much like how on Reddit the redress was "start your own subreddit if the one you're on is moderated poorly." I can't imagine any system that would let you "take control" of some other instance without that being ridiculously abusable.
Workarounds for those sorts of limitations have been developed, though. Chain-of-thought prompting has been around for a while now, and I recall recently seeing an article about a model that had that built right into it; it had been trained to use tags to enclose invisible chunks of its output that would be hidden from the end user but would be used by the AI to work its way through a problem. So if you asked it whether cats had feathers it might respond "Feathers only grow on birds and dinosaurs. Cats are mammals. No, cats don't have feathers." And you'd only see the latter bit. It was a pretty neat approach to improving LLM reasoning.
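The hiding mechanism described above can be sketched in a few lines: split the model's raw output into a hidden reasoning part and a user-visible answer. The `<think>` tag name here is my assumption for illustration; different models use different markers.

```python
import re

def split_reasoning(raw_output: str, tag: str = "think") -> tuple[str, str]:
    """Separate hidden chain-of-thought from the user-visible answer.

    Assumes the model wraps its scratch work in <think>...</think> tags;
    the tag name and format are illustrative, not any specific model's API.
    """
    pattern = re.compile(rf"<{tag}>(.*?)</{tag}>", re.DOTALL)
    reasoning = "\n".join(m.strip() for m in pattern.findall(raw_output))
    visible = pattern.sub("", raw_output).strip()
    return reasoning, visible

raw = ("<think>Feathers only grow on birds and dinosaurs. "
       "Cats are mammals.</think>No, cats don't have feathers.")
hidden, answer = split_reasoning(raw)
# The end user is shown only 'answer'; 'hidden' stays internal.
```

Same idea as the cats-and-feathers example: the model gets to "think out loud" without cluttering the reply.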
And they're overlooking that radionuclide contamination of steel actually isn't much of a problem any more, since the surge in background radionuclides caused by nuclear testing peaked in 1963 and has since gone down almost back to the original background level again.
I guess it's still a good analogy, though. People bring up Low Background Steel because they think radionuclide contamination is an unsolved problem (despite it having been basically solved), and they bring up "model collapse" because they think it's an unsolved problem (despite it having been basically solved). It's like newspaper stories: everyone sees the big scary front-page headline, but nobody pays attention to the little block of text retracting it on page 8.
Which is actually a pretty good thing.
I wouldn't call it a "dud" on that basis. Lots of models come out with lagging support on the various inference engines; it's a fast-moving field.
I would expect that's part of the point: if a C program can't be converted to a language that doesn't allow memory violations, that probably indicates there are execution pathways that result in memory violations.