I can't recall the last time I pirated anything executable (games and other software). There are legitimate free options for everything I've wanted, and executable code is just too risky.
Joke's on you, I skimmed by it without engaging for more than three sec... shit.
There's an assumption in the comments that this is Lemmy-specific, so I figured I should also mention a tool I used recently when copying subscriptions from a kbin instance to an mbin instance.
It's exactly that "new loud wave of complainers" I'm talking about.
I've been in computing, and specifically game programming, for a long time now, almost two decades, and I can't recall ever having someone barge in on a discussion of game AI with "that's not actually AI because it's not as smart as a human!" If someone privately thought that, they at least had the sense not to disrupt a conversation with an irrelevant semantic nitpick that wasn't going to contribute anything.
None of this is AI-specific. YouTube wants you to label your videos if you use "altered or synthetic content" that could mislead people about real people or events. 99% of what Corridor Crew puts out would probably need to be labeled, for example, and they mostly use traditional digital effects.
It's some weird semantic nitpickery that suddenly became popular for reasons that baffle me. "AI" has been used in video games for decades and nobody came out of the woodwork to "um, actually" it until now. I get that people are frightened of AI and would like to minimize it, but this is a strange way to do it.
At least "stochastic parrot" sounded kind of amusing.
The term "artificial intelligence" was established in 1956 and applies to a broad range of algorithms. You may be thinking of Artificial General Intelligence, AGI, which is the more specific "thinks like we do" sort that you see in science fiction a lot. Nobody is marketing LLMs as AGI.
I was kind of hoping the hysteria would be over by now. Walled gardens are a bad thing; I'm pleased when holes are poked in them.
The saying "when a measure becomes a target, it ceases to be a good measure" (Goodhart's Law) has been making the rounds online recently; this is a good example of it.
Ironically, this is a common problem faced when training AIs too.
This really just shines a light on a more significant underlying problem with scientific publication in general: there's simply way too much of it. "Publish or perish" creates enormous pressure to churn out papers whether they're good or not.
Yeah, these AIs are literally trying to give us what they "think" we expect them to say.
Which does make me a little worried, given how frequently our fictional AIs end up in "kill all humans!" mode. :)
Do we count those as negative numbers when the implants help them recover?