this post was submitted on 01 Dec 2024
100 points (82.1% liked)
Technology
Does anyone have an idea how much RAM this would need?
Looks like it has 32B in the name, so you need enough RAM to hold 32 billion weights plus activations (the current values for the layer being run right now, which I think should be less than a gigabyte). It is probably made of 16-bit floats to start with, so something like 64 gigabytes, but if you start quantizing it to cram more weights into fewer bits, you can go down to around 4 bits per weight, or more like 16 gigabytes of memory to run (a slightly worse version of) the model.
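To put rough numbers on that reasoning, here's a minimal back-of-the-envelope sketch. The only input is the 32-billion-parameter count from the model name; it counts weights only and ignores activations, KV cache, and runtime overhead, which add a few more gigabytes on top.

```python
# Back-of-the-envelope memory estimate for a 32B-parameter model (weights only).
PARAMS = 32e9  # 32 billion weights, per the "32B" in the model name

def weight_memory_gb(bits_per_weight: float) -> float:
    """GB needed just to store the weights at the given precision."""
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: ~{weight_memory_gb(bits):.0f} GB")
# 16-bit: ~64 GB, 8-bit: ~32 GB, 4-bit: ~16 GB
```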
So you're telling me there's a chance.
I think there are consumer-grade GPUs that can run this on a single card with enough quantization. Or if you want to run it on CPU, you can buy and plug in enough DIMMs if you have only a somewhat large amount of money.
Pulled whatever is available on Ollama under this name and it seems to just fit on a 3090. Takes 23GB of VRAM.
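For anyone curious, prompting it through the official `ollama` Python client looks roughly like this. The thread never names the exact Ollama tag, so the `MODEL` value below is a hypothetical placeholder; substitute whatever you actually pulled.

```python
# Minimal sketch using the official `ollama` Python package.
import ollama

MODEL = "qwq"  # hypothetical tag -- substitute the model you actually pulled

response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "How much RAM do you need to run?"}],
)
print(response["message"]["content"])
```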
I asked it and it gave me this answer:
Priceless.
It’s so innocent. So cute. Like your child telling you that they don’t need to eat.
It depends on how low you're willing to go on the quant and what you consider acceptable token speeds. Qwen 32B Q3_K_S can be partially offloaded on my 8GB-VRAM 1070 Ti and runs at about 2 tokens/s, which is just barely what I consider usable for real-time conversation.
For a 16k context window using Q4_K_S quants with llama.cpp it requires around 32GB. You can get away with less using smaller context windows and lower-accuracy quants, but quality will degrade, and each chain of thought requires a few thousand tokens, so you will lose previous messages quickly.
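If you want to try the partial-offload setup described in the two comments above, here's a minimal sketch using llama-cpp-python (the Python bindings for llama.cpp). The GGUF filename, layer count, and prompt are placeholders I made up, since the thread doesn't specify them; raise or lower `n_gpu_layers` until the model fits your VRAM.

```python
# Minimal sketch of partial GPU offload with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen-32b-q4_k_s.gguf",  # hypothetical filename -- use your actual GGUF
    n_gpu_layers=20,   # layers offloaded to the GPU; tune to fit your VRAM
    n_ctx=16384,       # 16k context window, as in the ~32GB estimate above
)

out = llm("Explain what quantization does to a language model.", max_tokens=256)
print(out["choices"][0]["text"])
```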