FaceDeer

joined 8 months ago
[–] FaceDeer@fedia.io 9 points 3 months ago

I would expect that's part of the point: if a C program can't be converted to a language that doesn't allow memory violations, that probably indicates there are execution pathways that result in memory violations.

[–] FaceDeer@fedia.io 19 points 3 months ago (2 children)

It probably doesn't matter from a popular-perception standpoint. The talking point that AI burns massive amounts of coal for each deepfake generated is now deeply ingrained; it'll be brought up regularly for years after it's no longer true.

[–] FaceDeer@fedia.io 4 points 4 months ago

If someone wants to pay me to upvote them I'm open to negotiation.

[–] FaceDeer@fedia.io 4 points 4 months ago

A lot of people are keen to hear that AI is bad, though, so the clicks go through on articles like this anyway.

[–] FaceDeer@fedia.io 3 points 4 months ago (1 child)

img2img is not "training" the model. Completely different process.
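To make the distinction concrete, here's roughly what an img2img call looks like with Hugging Face's diffusers library (the checkpoint name is just an example). It's pure inference: the pretrained weights are loaded and used, never updated, whereas training means running an optimizer that modifies those weights.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a pretrained model; its weights stay frozen throughout.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("photo.png").convert("RGB").resize((512, 512))

# Pure inference: the input image is noised and then denoised toward the
# prompt. There is no loss, no optimizer, and no weight update anywhere here.
result = pipe(prompt="a watercolor painting", image=init_image, strength=0.6)
result.images[0].save("out.png")
```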

[–] FaceDeer@fedia.io 4 points 4 months ago

You realize that those "billions of dollars" have actually resulted in a solution to this? "Model collapse" has been known about for a long time, and further research figured out how to avoid it. Modern LLMs actually turn out better when they're trained on well-crafted and well-curated synthetic data.
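As a rough sketch of what "well-curated" means in practice (the scoring function here is a made-up stand-in, not any lab's actual pipeline): synthetic samples get scored and filtered before being mixed into the training set, instead of being fed back in raw.

```python
def quality_score(sample: str) -> float:
    # Hypothetical stand-in for a real quality signal (a reward model,
    # heuristic filters, dedup checks, human review, etc.).
    words = sample.split()
    return len(set(words)) / max(len(words), 1)  # crude diversity measure

def curate(synthetic_batch: list[str], threshold: float = 0.7) -> list[str]:
    # Keep only synthetic samples that clear the quality bar.
    return [s for s in synthetic_batch if quality_score(s) >= threshold]

real_data = ["the quick brown fox jumps over the lazy dog"]
synthetic_data = [
    "dog dog dog dog dog",                    # degenerate, gets filtered out
    "a carefully written synthetic example",  # passes the bar
]

# Train on real data plus *curated* synthetic data, not a raw feedback loop.
training_mix = real_data + curate(synthetic_data)
print(training_mix)
```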

Honestly, everyone seems to assume that machine learning researchers are simpletons who've never used a photocopier before.

[–] FaceDeer@fedia.io 1 point 4 months ago

Seems like lemmy.ml is really collapsing in on itself. Overall not good for the general health of the fediverse.

I'd argue that a biased, overly-centralized instance like that collapsing in on itself is good for the general health of the Fediverse.

there needs to be some kind of accountability/redress if open & free communities are going to be a long term project.

The redress is having lots of servers to switch to, much like how on Reddit the redress was "start your own subreddit if the one you're on is moderated poorly." I can't imagine any system that would let you "take control" of some other instance without that being ridiculously abusable.

[–] FaceDeer@fedia.io 1 point 4 months ago

Workarounds for those sorts of limitations have been developed, though. Chain-of-thought prompting has been around for a while now, and I recall recently seeing an article about a model that had it built right in: it had been trained to use tags to enclose chunks of its output that would be hidden from the end user but that the AI could use to work its way through a problem. So if you asked it whether cats had feathers, it might respond "Feathers only grow on birds and dinosaurs. Cats are mammals. No, cats don't have feathers." and you'd only see that last sentence. It was a pretty neat approach to improving LLM reasoning.
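Mechanically, the user-facing half of that is simple. A minimal sketch, assuming a hypothetical <think>...</think> tag format (real models use various tag names): the client strips the tagged reasoning span before displaying the reply.

```python
import re

# Example raw model output, with the reasoning enclosed in hypothetical tags.
RAW_OUTPUT = (
    "<think>Feathers only grow on birds and dinosaurs. "
    "Cats are mammals.</think> No, cats don't have feathers."
)

def strip_reasoning(text: str) -> str:
    """Remove the hidden reasoning spans so the user sees only the answer."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

print(strip_reasoning(RAW_OUTPUT))  # -> No, cats don't have feathers.
```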

[–] FaceDeer@fedia.io 1 points 4 months ago* (last edited 4 months ago)

And they're overlooking that radionuclide contamination of steel actually isn't much of a problem any more: the surge in background radionuclides caused by nuclear testing peaked in 1963 and has since decayed almost all the way back to the original background level.

I guess it's still a good analogy, though. People bring up Low Background Steel because they think radionuclide contamination is an unsolved problem (despite it having been basically solved), and they bring up "model collapse" because they think it's an unsolved problem (despite it having been basically solved). It's like newspaper stories: everyone sees the big scary front-page headline, but nobody pays attention to the little block of text retracting it on page 8.

[–] FaceDeer@fedia.io 3 points 4 months ago (1 child)

Which is actually a pretty good thing.

[–] FaceDeer@fedia.io 1 point 4 months ago (1 child)

I wouldn't call it a "dud" on that basis. Lots of models come out with lagging support on the various inference engines; it's a fast-moving field.
