this post was submitted on 11 Nov 2024
52 points (93.3% liked)
Yeah, well Alibaba nearly matches (and sometimes beats) GPT-4 with comparatively microscopic models you can run on a desktop. And they released a whole series of them. For free! With a tiny fraction of the GPUs any of the American trainers have.
Bigger is not better, but OpenAI has also just lost their creative edge, and all Altman's talk about scaling up training with trillions of dollars is a massive con.
o1 is kind of a joke. CoT and reflection strategies have been known for a while; you can do it yourself for free, to an extent, and some models have tried to finetune this in: https://github.com/codelion/optillm
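For context, the "do it yourself" version of CoT plus reflection is just a prompting loop: draft an answer, ask the model to critique it, then revise. A minimal sketch, where `llm` is a hypothetical stand-in for any chat-completion call (local or API), not any specific library's interface:

```python
# Minimal sketch of a CoT + reflection loop, the kind of strategy
# tools like optillm implement. `llm` is a hypothetical stub so the
# loop is runnable; swap in a real model call.

def llm(prompt: str) -> str:
    # Stub: just echoes a truncated prompt back.
    return f"(model output for: {prompt[:40]}...)"

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    """Draft an answer, then repeatedly critique and revise it."""
    draft = llm(f"Think step by step, then answer:\n{question}")
    for _ in range(rounds):
        critique = llm(
            f"Find flaws in this answer to '{question}':\n{draft}"
        )
        draft = llm(
            f"Revise the answer to '{question}' using this critique:\n"
            f"{critique}\nOriginal answer:\n{draft}"
        )
    return draft
```

With a real model behind `llm`, each extra round trades more inference compute for (hopefully) a better final answer, which is the same trade o1 makes.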
But one sad thing OpenAI has seemingly accomplished is "salting" the open LLM space. There's way less hacky experimentation going on than there used to be, which makes me sad, as many of the community's "old" innovations still run circles around OpenAI.
"Alibaba (LLM)" ... is it this?
Qwen2.5: A Party of Foundation Models!
https://qwenlm.github.io/blog/qwen2.5/
BTW, as I wrote that post, Qwen 32B coder came out.
Now a single 3090 can beat GPT-4o, and do it way faster! In coding, specifically.
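The reason a 32B model fits on one 24 GB RTX 3090 at all is quantization. A back-of-envelope estimate (rough assumed numbers, not measurements):

```python
# Why a 32B-parameter model can fit on a single RTX 3090 (24 GB)
# once quantized. Figures are rough assumptions for illustration.

def weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate VRAM needed just for the weights, in GB."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

fp16 = weight_vram_gb(32, 16)   # ~64 GB: far too big for a consumer GPU
q4 = weight_vram_gb(32, 4.5)    # ~18 GB: fits in 24 GB, leaving room
                                # for KV cache and activations

print(f"FP16: {fp16:.0f} GB, ~4-bit: {q4:.0f} GB")
```

So at 16 bits per weight the model needs roughly three 3090s; at ~4.5 bits (a typical 4-bit quant with overhead) the weights drop to around 18 GB and one card suffices.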
Great news 😁🥂, someone should make a new post on this!