Where is the MIT study in question? The link in the article, apparently to a PDF, redirects elsewhere
snf
aka enshittification
You know, I think I'm overdue for a donation to Wikipedia. They honestly might end up being the last bastion of sanity
It pains me to argue this point, but are you sure there isn't a legitimate use case just this once? The text says that this was aimed at making Wikipedia more accessible to less advanced readers, like (I assume) people whose first language is not English. Judging by the screenshot, they're also being fully transparent about it. I don't know if this is actually a good idea, but it seems like the least objectionable use of generative AI I've seen so far.
It's actually kind of worrisome that they have to guess it was his code based on the function/method name. Do these people not use version control? I guess not; they sure as hell don't do code reviews either, if this guy managed to get this code into production
Yeah I see what you mean. There's a decent argument to be made that something like reasoning appears as an emergent property in this kind of system, I'll admit. Still, the fact that fundamentally the code works as a prediction engine rules out any sort of real cognition, even if it makes an impressive simulacrum. There's just no ability to invent, no true novelty, which -- to my mind at least -- is the hallmark of actual reasoning.
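To make the "prediction engine" point concrete, here's a toy sketch (pure Python, a hard-coded bigram table, nothing to do with any real model's internals): the generation loop is literally "given the text so far, pick a likely next token", just with a giant neural net standing in for the lookup.

```python
# Toy illustration only: a "language model" reduced to a bigram lookup table.
# A real LLM replaces the dict with a neural network over a huge vocabulary,
# but the generation loop has the same shape: predict next token, append, repeat.

bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
    "dog": {"ran": 1.0},
    "ran": {"away": 1.0},
}

def predict_next(token):
    """Greedy decoding: return the most probable next token, or None at a dead end."""
    candidates = bigram_probs.get(token)
    return max(candidates, key=candidates.get) if candidates else None

def generate(start, max_tokens=5):
    tokens = [start]
    for _ in range(max_tokens):
        nxt = predict_next(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # -> "the cat sat down"
```

Whether scaling that loop up to billions of parameters produces something you'd call reasoning is exactly the disagreement here; the sketch only shows what "prediction engine" means mechanically.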
an open source reasoning AI
It's still an LLM, right? I'm going to have to take issue with your use of the word 'reasoning' here
Beware of he who would deny you access to information, for in his heart he dreams himself your master.
I certainly see your point. That said, devil's advocate, the comment doesn't refer merely to political opponents, but rather:
the most evil people I can think of
Can you honestly say that you wouldn't painlessly snap, let's say, a proven serial child rapist out of existence? I think I'd do it, if for some reason the justice system were unable to stop them.
Well fuck that