this post was submitted on 02 Dec 2024
381 points (99.0% liked)

Technology

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 40 points 2 days ago (1 children)

Man, I hope Battlemage is an actually profitable launch, or at least not a massive loss. Otherwise who knows if the next CEO will axe their GPU line. People liked to fearmonger about Intel killing Arc before, but with a change in management I can actually see that happening.

[–] brucethemoose@lemmy.world 19 points 2 days ago (2 children)

If it's a lower-midrange-only launch like it appears to be, it will be extremely unprofitable. AMD may even eat large chunks of this market with the Strix Halo APU, which could perform similarly to the B570 with no need for a discrete GPU.

There's actually a big and growing demand for ANY high-VRAM GPU from the LLM crowd (which AMD is ignoring for inexplicable reasons, Strix Halo aside), but it appears Intel can't even compete there. No 256-bit APU, and their GPU's bus is 192-bit, so it's capped at around 24GB...
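The ~24GB cap falls out of simple arithmetic: each GDDR6 chip sits on a 32-bit slice of the bus, current chips top out at 2GB, and clamshell mode doubles the chip count. A rough sketch (the per-chip density and clamshell assumptions are illustrative, not Intel's actual board design):

```python
# Rough sketch: how memory bus width caps GDDR6 VRAM capacity.
# Assumptions: one chip per 32-bit slice of the bus, 2 GB per chip
# (the largest common GDDR6 density), clamshell mode doubling chips.

def max_vram_gb(bus_width_bits, gb_per_chip=2, clamshell=False):
    slices = bus_width_bits // 32            # one chip per 32-bit slice
    chips = slices * (2 if clamshell else 1)
    return chips * gb_per_chip

print(max_vram_gb(192))                  # 12 GB on a 192-bit bus
print(max_vram_gb(192, clamshell=True))  # 24 GB, the cap mentioned above
print(max_vram_gb(256, clamshell=True))  # 32 GB on a 256-bit bus
```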

[–] PalmTreeIsBestTree@lemmy.world 5 points 2 days ago (1 children)

This is why I got a 4070 Ti Super, because it has a 256-bit bus.

[–] brucethemoose@lemmy.world 3 points 2 days ago (1 children)

Eh, actually the 4060 Ti is way better for LLMs :P With Nvidia it's all about VRAM capacity.

I only game, and a larger bus is better for 4K.

[–] rumba@lemmy.zip 1 points 1 day ago (1 children)

Intel is totally missing the boat, honestly. Their mobile i9's built-in GPU can share system DDR5 as video memory.

You can put 96 gigs of RAM in a small form factor and load in a monster model. It's not super fast, but it works, and it's a lot faster than leaving all the layers on the CPU.

They should be selling NUC-sized PCs with built-in graphics and 128 gigs of the fastest RAM they can put on the boards.

[–] brucethemoose@lemmy.world 1 points 1 day ago (1 children)

IMO it's not really "enough" until the bus is 256-bit. That's when 32B-72B class models start to look even theoretically runnable at decent speeds.
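The 256-bit threshold is really about bandwidth: token generation on a dense model is memory-bound, so a back-of-envelope estimate is tokens/sec ≈ memory bandwidth / model size, since roughly the whole model is read per token. The bandwidth and model-size numbers below are assumed round figures, not measured specs:

```python
# Back-of-envelope sketch: decode speed is roughly memory-bound, so
# tokens/sec ~= memory bandwidth / bytes read per token (about the
# whole model, for a dense model). All numbers are illustrative.

def peak_tokens_per_sec(bandwidth_gbs, model_size_gb):
    return bandwidth_gbs / model_size_gb

ddr5_128bit = 90      # e.g. dual-channel DDR5-5600, GB/s (assumed)
lpddr5x_256bit = 256  # e.g. 256-bit LPDDR5X-8000, GB/s (assumed)
q4_32b = 18           # ~GB for a 32B model at 4-bit quantization (rough)

print(round(peak_tokens_per_sec(ddr5_128bit, q4_32b), 1))     # 5.0
print(round(peak_tokens_per_sec(lpddr5x_256bit, q4_32b), 1))  # 14.2
```

Real-world numbers land below these ceilings, but the ratio between the two bus widths is the point.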

[–] rumba@lemmy.zip 2 points 1 day ago (1 children)

He was getting 1.4 tokens/sec on a 70B model. Not setting the world on fire, but enough to load and script against a 70B.

https://www.youtube.com/watch?v=xyKEQjUzfAk
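That figure passes a rough sanity check, assuming shared dual-channel DDR5-5600 and a ~40GB 4-bit 70B quant (both assumptions, neither taken from the video):

```python
# Rough sanity check on the 1.4 tok/s figure for a 70B model on an
# iGPU sharing system memory. Both inputs are assumptions.

ddr5_bandwidth_gbs = 89.6   # dual-channel DDR5-5600: 2 * 44.8 GB/s
model_gb = 40.0             # ~70B parameters at 4-bit quantization

ceiling = ddr5_bandwidth_gbs / model_gb  # theoretical upper bound, tok/s
print(round(ceiling, 2))  # 2.24 -- so 1.4 tok/s measured is plausible
```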

[–] brucethemoose@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

Also that is a very low context test. A longer context will bog it down, even setting aside the prompt processing time.

...On the other hand, you could probably squeeze a bit more running openvino instead of llama.cpp, so that is still respectable.

[–] rumba@lemmy.zip 2 points 1 day ago

> a very low context test. A longer context will bog it down

Yeah, it's definitely not good enough for user-facing work. But when I'm developing something like translations, being able to see the 70B output and compare it to other models is super useful before I send it off to something that costs more money to run.

9/10 times, the bigger model isn't significantly better for what I'm trying to do, but it's really nice to confirm that.