this post was submitted on 09 Mar 2025
226 points (97.9% liked)

Technology


This is another big win for the red team, at least for me. They developed a "fully open" family of 3B-parameter models trained from scratch on AMD Instinct™ MI300X GPUs.

AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png), this model outperforms other current "fully open" models and comes close to open-weight-only models.
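For a sense of what a 3B-parameter model means in practice, here is a rough back-of-the-envelope sketch of the memory needed just to hold the weights (an estimate only; the exact figure depends on the architecture, quantization, and runtime overhead such as activations and the KV cache):

```python
def weights_gib(num_params: int, bytes_per_param: int = 2) -> float:
    """Approximate size of a model's weights in GiB.

    Assumes a uniform dtype; the default of 2 bytes per parameter
    corresponds to fp16/bf16. Runtime overhead is not included.
    """
    return num_params * bytes_per_param / 1024**3

# A 3B-parameter model in bf16:
print(f"{weights_gib(3_000_000_000):.1f} GiB")  # prints "5.6 GiB"
```

At roughly 5.6 GiB in bf16 (about 11 GiB in fp32), a model this size fits on a single consumer GPU, which is part of what makes fully open 3B-class models practical to run locally.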

A step forward; thank you, AMD.

PS: I'm not doing AMD propaganda, but I do thank them for helping and contributing to the open-source world.

[–] domi@lemmy.secnd.me 12 points 3 hours ago (1 children)

I disagree, LLMs have been very helpful for me and I do not see how an open source AI model trained with open source datasets is detrimental to society.

[–] Ulrich@feddit.org -4 points 2 hours ago (1 children)

I don't know what to say other than pull your head outta the sand.

[–] sugar_in_your_tea@sh.itjust.works 8 points 1 hour ago (1 children)

No you.

Explain your exact reasons for thinking it's malicious. There's a lot of FUD surrounding "AI," much of which comes from unrealistic marketing BS and poor choices by C-suite types that have nothing to do with the technology itself. If you can describe your concerns, maybe I or others can help clarify things.

[–] frezik@midwest.social 3 points 1 hour ago (1 children)

These models are trained on human creations with the express intent to drive out those same human creators. There is no social safety net available so those creators can maintain a reasonable living standard without selling their art. It won't even work--the models aren't good enough to replace these jobs, but they're good enough to fool the C-suite into thinking they can--and they'll do lots of damage in the attempt.

The issues are primarily social, not technical. In a society that judges itself on how well it takes care of the needs of everyone, I would have far less of an issue with it.

[–] sugar_in_your_tea@sh.itjust.works 1 points 36 minutes ago

The issues are primarily social, not technical.

Right, and having a FOSS alternative is certainly a good thing.

I think it's important to separate opposition to AI policy from opposition to a specific implementation. If your concerns are about the social impact of a given technology, that is where the opposition should go, not toward the technology itself.

That said, this is largely similar to opposition to other types of technological change. Every time a significant change in technology comes about, there is a significant impact on jobs. The printing press destroyed the livelihood of scribes, but it made books dramatically cheaper, which created new jobs for typesetters, booksellers, etc. The automobile dramatically cut back jobs like farriers, stable hands, etc., but created new jobs for drivers, mechanics, etc. I'm sure each of those large shifts in technology also prompted an overreaction by business owners as they adjusted to the new normal. It certainly sucks for those impacted, but it tends to benefit those who can quickly adapt and make use of the new technology.

So I totally understand the hesitation around AI, especially given the overreaction by C-suites in gutting their workforces based on the promises made by AI marketing teams. However, that has nothing to do with the technology itself; it stems from the social issues around the technology. Instead of hating AI in general, redirect that anger onto the actual problems:

  • poor social safety net
  • expensive education
  • lack of consequences for false marketing
  • lack of consequences for C-suite mistakes

Hating on a FOSS model just because it's related to an industry that is seeing abuse is the wrong approach.