this post was submitted on 19 Apr 2025
421 points (92.0% liked)

[–] gravitas_deficiency@sh.itjust.works 5 points 3 days ago (2 children)

You can get a Coral TPU for 40 bucks or so.
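
For context, a minimal sketch of what inference on a Coral Edge TPU looks like with Google's pycoral library; the model and image paths here are placeholders, and the .tflite file must be compiled for the Edge TPU:

```python
# Sketch: image classification on a Coral Edge TPU via pycoral.
# "model_edgetpu.tflite" and "parrot.jpg" are placeholder file names.
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter("model_edgetpu.tflite")
interpreter.allocate_tensors()

# Resize the input image to the model's expected input size.
image = Image.open("parrot.jpg").resize(common.input_size(interpreter))
common.set_input(interpreter, image)
interpreter.invoke()

# Print the top 3 predicted class IDs and their scores.
for c in classify.get_classes(interpreter, top_k=3):
    print(c.id, c.score)
```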

You can get an AMD APU with an NN-inference-optimized tile for under $200.

Training can be done with any relatively modern GPU, with varying efficiency and capacity depending on how much you want to spend.
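
Any of those will do, since frameworks like PyTorch pick up whatever accelerator is present and fall back to CPU otherwise. A toy training step, with the model and batch as stand-ins:

```python
# Toy device-agnostic training step in PyTorch; model and data are stand-ins.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 128, device=device)         # fake input batch
y = torch.randint(0, 10, (64,), device=device)  # fake labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f} on {device}")
```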

What price point are you trying to hit?

[–] boonhet@lemm.ee 8 points 3 days ago (2 children)

What price point are you trying to hit?

With regard to AI? None, tbh.

With this super-fast storage I have other cool ideas, but I don't think I can get enough bandwidth to saturate it.
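
(If you want numbers on that, fio is the proper benchmarking tool, but a crude sequential-read sketch in Python gives a ballpark figure; the file path is a placeholder for any large file on the drive:)

```python
# Crude sequential-read throughput estimate; use fio for real measurements.
# "/mnt/fast/testfile" is a placeholder path to a large existing file.
# Note: the OS page cache can inflate results on repeat runs.
import time

CHUNK = 1 << 20  # read in 1 MiB chunks
path = "/mnt/fast/testfile"

total = 0
start = time.perf_counter()
with open(path, "rb", buffering=0) as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start
print(f"{total / elapsed / 1e6:.0f} MB/s over {total / 1e6:.0f} MB")
```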

[–] barsoap@lemm.ee 1 points 2 days ago

With regard to AI? None, tbh.

TBH, that might be enough. Stuff like SDXL runs on 4 GB cards (the trick is using ComfyUI; figure 5-10 s/it), and reportedly smaller LLMs do too (haven't tried, not interested). And the reason I'm eyeing a 9070 XT isn't AI, it's finally upgrading my GPU; still, it would be a massive fucking boost for AI workloads.
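
(ComfyUI is a node-graph GUI, so its VRAM tricks aren't a one-liner, but the same low-VRAM idea is exposed in the diffusers library as CPU offload. A sketch, assuming the public SDXL base weights and the accelerate package installed:)

```python
# Sketch: SDXL on a low-VRAM card via diffusers' sequential CPU offload.
# Slow (seconds per iteration), but it keeps peak VRAM use small.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.enable_sequential_cpu_offload()  # streams weights to the GPU as needed

image = pipe("a lighthouse at dawn", num_inference_steps=20).images[0]
image.save("out.png")
```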

[–] gravitas_deficiency@sh.itjust.works -2 points 3 days ago (1 children)

You’re willing to pay $none to have hardware ML support for local training and inference?

Well, I’ll just say that you’re gonna get what you pay for.

[–] bassomitron@lemmy.world 9 points 2 days ago (1 children)

No, I think they're saying they're not interested in ML/AI. They want this super fast memory available for regular servers for other use cases.

[–] boonhet@lemm.ee 3 points 2 days ago (1 children)
[–] caseyweederman@lemmy.ca 1 points 2 days ago (1 children)

I have a hard time believing anybody wants AI. I mean, AI as it is being sold to them right now.

[–] boonhet@lemm.ee 3 points 2 days ago

I mean, the image generators can be cool, and LLMs are great for bouncing ideas off at 4 AM when everyone else is asleep. But I can't imagine paying for AI, I don't want it integrated into most products, and I won't put a lot of effort into hosting a low-parameter model that performs way worse than what ChatGPT offers even without a paid plan. So you're exactly right: it's not being sold to me in a way that would make me want to pay for it or invest in hardware to host better models.
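
(For scale, "hosting a low-parameter model" can be as little as the sketch below, using llama-cpp-python as one example runner; the GGUF file name is a placeholder, and output quality is exactly the complaint above:)

```python
# Sketch: minimal local LLM hosting with llama-cpp-python.
# "model.gguf" is a placeholder for any downloaded GGUF checkpoint.
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=2048)
out = llm("Q: Why is the sky blue? A:", max_tokens=64)
print(out["choices"][0]["text"])
```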

[–] WorldsDumbestMan@lemmy.today 1 points 2 days ago

I just use pre-made AIs, write some detailed instructions for them, and then watch them churn out basic documents over hours... I need a better laptop.