nednobbins

joined 2 years ago
[–] nednobbins@lemm.ee 2 points 3 weeks ago

I wouldn't either but that's exactly what lmsys.org found.

That blog post had ratings between 858 and 1169. Those are slightly higher than the average rating of human users on popular chess sites. Their latest leaderboard shows them doing even better.

https://lmarena.ai/leaderboard has one of the Gemini models with a rating of 1470. That's pretty good.
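To put those numbers in perspective, the standard Elo expected-score formula says what fraction of points the higher-rated player should win. A minimal sketch, plugging in the 1470 figure against a rating near the top of the 2023 range (~1169):

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# A 1470-rated model vs. a ~1169-rated one: it should score ~85% of the points
e = elo_expected_score(1470, 1169)
```

A 300-point gap works out to roughly an 85% expected score, so that jump between leaderboards is substantial.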

[–] nednobbins@lemm.ee 2 points 3 weeks ago (2 children)

I imagine the "author" did something like, "Search http://google.scholar.com/, find a publication where AI failed at something, and write a paragraph about it."

It's not even as bad as the article claims.

Atari isn't great at chess. https://chess.stackexchange.com/questions/24952/how-strong-is-each-level-of-atari-2600s-video-chess
Random LLMs were nearly as good 2 years ago. https://lmsys.org/blog/2023-05-03-arena/
LLMs that are actually trained for chess have done much better. https://arxiv.org/abs/2501.17186

[–] nednobbins@lemm.ee 1 points 3 weeks ago

Like humans are way better at answering stuff when it’s a collaboration of more than one person. I suspect the same is true of LLMs.

It is.

It's really common for non-language implementations of neural networks. If you have an NN that's right some percentage of the time, you can often run the same input through several independent copies of the NN and average (or vote on) their outputs; the combined answer is correct a higher percentage of the time than any single copy.
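You can see the effect with a toy model. This is just an illustrative sketch (the "model" is a coin-flip classifier, not a real NN): each copy is right 70% of the time, and a majority vote over 15 copies is right far more often.

```python
import random

random.seed(0)

def noisy_classifier(x: int, accuracy: float = 0.7) -> int:
    """Toy model: returns the true label with probability `accuracy`."""
    return x if random.random() < accuracy else 1 - x

def ensemble_vote(x: int, n_models: int = 15) -> int:
    """Majority vote over n independent copies of the noisy model."""
    votes = sum(noisy_classifier(x) for _ in range(n_models))
    return 1 if votes > n_models / 2 else 0

# Compare one model against the ensemble over many trials
trials = 10_000
single = sum(noisy_classifier(1) == 1 for _ in range(trials)) / trials
ensemble = sum(ensemble_vote(1) == 1 for _ in range(trials)) / trials
```

With independent 70%-accurate voters, the 15-copy majority vote lands around 95% accuracy; the catch in practice is that real model copies aren't fully independent, so the gain is smaller.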

Aider is an open source AI coding assistant that lets you use one model to plan the coding and a second one to do the actual coding. It works better than doing it in a single pass, even if you assign the same model to planning and coding.

[–] nednobbins@lemm.ee 50 points 3 weeks ago (12 children)

Sometimes it seems like most of these AI articles are written by AIs with bad prompts.

Human journalists would hopefully do a little research. A quick search would reveal that researchers have been publishing about this for over a year, so there's no need to sensationalize it. Perhaps the human journalist could have spent a little time talking about why LLMs are bad at chess and how researchers are approaching the problem.

LLMs on the other hand, are very good at producing clickbait articles with low information content.

[–] nednobbins@lemm.ee 17 points 3 weeks ago (1 children)

Have you looked up the history of the word "moron"?

[–] nednobbins@lemm.ee 2 points 1 month ago (1 children)

It's a 27T Pro. I like it better than the iPhone it replaced.

The only downsides I've seen so far are that it requires a separate app for wifi calling and it has fewer zoom options for the camera. I'd like to figure out how to get the IR blaster to read signals (so I can easily clone my remotes).

[–] nednobbins@lemm.ee 7 points 1 month ago (3 children)

Yeah. I'm typing this on a $300 Chinese phone with a 10,600 mAh battery, reverse wireless charging, a thermal imaging camera, and it's waterproof and shock resistant.

[–] nednobbins@lemm.ee 5 points 4 months ago (1 children)

There is already a foolproof method that is immune to any abuse of trust by admins: create an alt account.

[–] nednobbins@lemm.ee 3 points 4 months ago (1 children)

It's more like if you played a song on Guitar Hero enough to be able to pick up a guitar and convince a guitarist that you know the song.

Code from ChatGPT (and other LLMs) doesn't usually work on the first try. You need to go fix and add code just to get it to compile. If you actually want it to do whatever your professor is asking you for, you need to understand the code well enough to edit it.

It's easy to try for yourself. You can go find some simple programming challenges online and see if you can get ChatGPT to solve a bunch of them for you without having to dive in and learn the code.

[–] nednobbins@lemm.ee 83 points 4 months ago (10 children)

The bullshit is that anon wouldn't be fsked at all.

If anon actually used ChatGPT to generate some code, memorize it, understand it well enough to explain it to a professor, and get a 90%, congratulations, that's called "studying".

[–] nednobbins@lemm.ee 5 points 5 months ago (1 children)

Nobody builds cars under slave-like conditions. It's just not possible. Modern car factories are highly automated plants that require skilled operators. In the case of the VW Xinjiang plant, that was QC inspectors. There's no way a hole-in-the-wall car factory using outdated labor practices can come close to competing against modern production.

[–] nednobbins@lemm.ee 1 points 7 months ago (1 children)

Couldn't you say the same for the Republicans, or any party for that matter? I.e., "Join them and if enough people like you join them they'll change."

Realistically, some new political operator isn't going to get any relevant positions. And nobody with the relevant positions will listen to a new political operator.

That may work in theory, but it's basically saying to create a new Democratic party from within.
