[–] Voyajer@lemmy.world 8 points 4 months ago* (last edited 4 months ago) (5 children)

I've run quantized 70B models on CPU with 32 gigs, but it is very slow.
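
For illustration, here's a minimal sketch of CPU-only inference with llama-cpp-python and a 4-bit GGUF quant. The model filename, context size, and thread count are placeholders, not a specific recommendation:

```python
# Minimal CPU-only inference sketch with llama-cpp-python (assumes a
# 4-bit GGUF quant of a 70B model has already been downloaded).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b.Q4_K_M.gguf",  # placeholder filename
    n_ctx=2048,       # context window
    n_threads=8,      # roughly match your physical core count
    n_gpu_layers=0,   # 0 = pure CPU; all weights stay in system RAM
)

out = llm("Explain quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```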

[–] bizarroland@fedia.io 3 points 4 months ago (3 children)

I have a home server with 140 gigs of RAM; it was surprisingly cheap. It's an HP Z6 with the Xeon Gold 6146 processor.

I found a seller who was selling it with a low-spec Xeon Silver and 16 gigs of RAM for like 250 bucks.

Found the processor upgrade for about $120 and spent another $150 on 128 GB of second-hand ECC DDR4.

I think the total cost was something like $700 after throwing in a couple of 8 TB hard drives.
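
Rough tally of those numbers (the drive figure is back-solved to reach the ~$700 total, not a quoted price):

```python
# Back-of-the-envelope build cost, using the figures mentioned above.
base_system = 250   # Z6 with low-spec Xeon Silver + 16 GB RAM
cpu_upgrade = 120   # Xeon Gold 6146
ram         = 150   # 128 GB second-hand ECC DDR4
drives      = 180   # ~2x 8 TB drives, inferred to reach the ~$700 total
print(base_system + cpu_upgrade + ram + drives)  # 700
```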

I've also put an Nvidia 4070 in it, which I got through some horse trading.

How close am I on the specs to being able to run the 70B version?

[–] BaroqueInMind@lemmy.one 2 points 4 months ago* (last edited 4 months ago) (2 children)

What's the bus speed of the RAM? You might run it just fine but still be bottlenecked there.
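
To put a rough number on that bottleneck: on CPU, generation speed is roughly capped by memory bandwidth divided by model size, since every generated token has to stream all the weights. A quick sketch, assuming 6-channel DDR4-2666 on that Xeon (a theoretical ballpark, not a measurement):

```python
# Rule of thumb for CPU inference: each token streams every weight once,
# so tokens/s is bounded by memory bandwidth / model size in RAM.
bandwidth_gbs = 6 * 21.3              # 6-channel DDR4-2666, theoretical peak (~128 GB/s)
model_size_gb = 40                    # ~4-bit quant of a 70B model (assumed)
print(bandwidth_gbs / model_size_gb)  # ~3 tokens/s, best case
```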

[–] bizarroland@fedia.io 1 points 4 months ago (1 children)
[–] BaroqueInMind@lemmy.one 2 points 4 months ago* (last edited 4 months ago)

With 144 GB of total RAM, you should be able to run any CPU-intensive software.
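
As a quick sanity check on fitting the weights in system RAM (assuming a ~4-bit quant; the exact size depends on the quant format and context length):

```python
# Rough footprint of a 70B model at ~4.5 bits per weight (Q4_K_M-ish),
# plus a ballpark allowance for KV cache and buffers.
weights_gb = 70e9 * 4.5 / 8 / 1e9   # ~39 GB of weights
overhead_gb = 5                      # KV cache + buffers (assumed)
print(weights_gb + overhead_gb)      # ~44 GB, well under 144 GB of RAM
```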

The LLMs use GPU VRAM, though, so it doesn't matter how much system RAM you have: VRAM is what xformers and the tensor libraries prioritize and have ultimately been optimized for, rather than the CPU and system RAM.
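
A minimal sketch of how that prioritization can look with llama-cpp-python: layers are offloaded to VRAM first, and whatever doesn't fit stays in system RAM on the CPU. The layer count below is a placeholder, not tuned for a 4070:

```python
# Hybrid offload sketch: VRAM gets priority, the remaining layers run
# from system RAM on the CPU (requires a CUDA-enabled llama-cpp-python build).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b.Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=20,   # as many layers as fit in the 4070's VRAM (guess)
    n_threads=8,       # CPU threads for the layers left in system RAM
)
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```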
