I'm afraid to even ask for the minimum specs on this thing, open source models have gotten so big lately
Every billion parameters needs about 2 GB of VRAM if using the bfloat16 representation: 16 bits per parameter, 8 bits per byte -> 2 bytes per parameter.
1 billion parameters ~ 2 billion bytes ~ 2 GB.
From the name, this model has 72 billion parameters, so ~144 GB of VRAM.
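Quick back-of-the-envelope in Python (weights only; KV cache and activations add on top of this, so real usage runs higher):

```python
# Rough VRAM estimate from parameter count and bits per parameter.
# Ignores KV cache / activation overhead, so treat these as lower bounds.
def vram_gb(params_billion: float, bits_per_param: float) -> float:
    return params_billion * bits_per_param / 8

print(vram_gb(72, 16))   # 144.0 GB in bf16
print(vram_gb(72, 4))    # 36.0 GB at 4-bit (before overhead, hence the ~48 GB figures quoted below)
print(vram_gb(72, 2.5))  # 22.5 GB at 2.5-bit
```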
Ok but will this run on my TI-83? It's a + model.
Only if it’s silver.
Dang. So close.
My 83 was ganked by some kid I knew so my folks bought me a silver. He denied it. I learned that day to write my name in secret spots.
That kid you knew was a dick. At least he taught you a valuable lesson, I guess.
He absolutely was a dick. I stopped being mates with him after that. My school was like “yeah the cameras didn’t work that day actually”
no. but put this clustering software i wrote in ti-basic on 40 million of them? still no
It's been discovered that you can reduce the bits per parameter down to 4 or 5 and still get good results. Just saw a paper this morning describing a technique to get down to 2.5 bits per parameter, even, and apparently it's fine. We'll see if that works out in practice, I guess.
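Roughly what that means, as a toy numpy sketch (the real schemes in llama.cpp/GPTQ are fancier, this is just the gist of grouped 4-bit quantization):

```python
import numpy as np

# Toy 4-bit symmetric quantization, one scale per group of 64 weights.
def quantize_4bit(weights: np.ndarray, group_size: int = 64):
    w = weights.reshape(-1, group_size)
    scales = np.abs(w).max(axis=1, keepdims=True) / 7   # map each group onto ~[-7, 7]
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(4096).astype(np.float32)
q, s = quantize_4bit(w)
print("max abs error:", np.abs(dequantize(q, s) - w).max())
```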
I'm more experienced with graphics than ML, but wouldn't that cause a significant increase in computation time, since those aren't native types for arithmetic? Maybe that's not a big problem?
If you have a link for the paper I'd like to check it out.
My understanding is that the bottleneck for the GPU is moving data into and out of it, not the processing of the data once it's in there. So if you can get the whole model crammed into VRAM it's still faster even if you have to do some extra work unpacking and repacking it during processing time.
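Rough numbers to illustrate why memory, not arithmetic, tends to be the limit: generating one token means reading roughly every weight once, so memory bandwidth divided by model size gives an upper bound on speed (the ~2 TB/s and ~50 GB/s figures are ballpark assumptions for an A100-class card and dual-channel desktop RAM):

```python
# Upper bound on generation speed for a memory-bound model:
# each generated token reads ~all weights once.
def max_tokens_per_s(model_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / model_gb

print(max_tokens_per_s(36, 2000))  # 36 GB int4 model in HBM      -> ~55 tok/s ceiling
print(max_tokens_per_s(36, 50))    # same model from system RAM   -> ~1.4 tok/s ceiling
```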
The paper was posted on /r/localLLaMA.
You can take a look at exllama and llama.cpp source code on github if you want to see how it is implemented.
CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.). To run Qwen-72B-Chat in bf16/fp16, at least 144GB GPU memory is required (e.g., 2xA100-80G or 5xV100-32G). To run it in int4, at least 48GB GPU memory is required (e.g., 1xA100-80G or 2xV100-32G).
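For reference, loading the int4 variant with Hugging Face transformers looks roughly like this (the repo id "Qwen/Qwen-72B-Chat-Int4" and the extra GPTQ dependencies are assumptions on my part, check the model card for the exact requirements):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed int4 (GPTQ) checkpoint; you'll likely also need auto-gptq/optimum installed.
model_id = "Qwen/Qwen-72B-Chat-Int4"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # spread layers across whatever GPUs are available
    trust_remote_code=True,
).eval()

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```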
It's derived from Qwen-72B, so same specs. Q2 clocks it in at only ~30GB.
Just a data center or two. Easy peasy dirt cheapy.
I think I read somewhere that you'll basically need 130 GB of RAM to load this model. You could probably get some used server hardware for less than $600 to run this.
Oh if only it were so simple lmao, you need ~130GB of VRAM, aka the graphics card RAM. So you would need about 9 consumer-grade 16GB graphics cards, and you'll probably need Nvidia because of fucking CUDA, so we're talking thousands of dollars. Probably approaching 10k.
Ofc you can get cards with more VRAM per card, but not in the consumer segment so even more $$$$$$
Afaik you can substitute VRAM with RAM at the cost of speed. Not exactly sure how that speed loss correlates to the sheer size of these models, though. I have to imagine it would run insanely slow on a CPU.
I tested it with a 16GB model and barely got 1 token per second. I don't want to imagine what it would take if I used 16GB of swap instead, let alone 130GB.
Unless you’re getting used datacenter grade hardware for next to free, I doubt this. You need 130 GB of VRAM on your GPUs.
So can I run it on my Radeon RX 5700? I overclocked it some and am running it as a 5700 XT, if that helps.
Around 48 GB of VRAM if you want to run it in 4-bit.
Oh yay another model I can't run on my computer :'(
i thought this was about the MUD server software, and got excited. alas.
That's nice and all, but what are some FOSS models I can run on GPU with only 4GB?
I've tried Deepseek Coder, and it's pretty nice for what I use it for. Then there's TinyLlama, which... well it's fast, but I need to be veeeery exact in how I prompt it.
Unfortunately LLMs need a lot of VRAM. You could try using koboldcpp, it runs on the CPU but lets you offload layers onto the GPU. That way you might be able to stay within those 4 GB even with larger models.
Edit: I forgot to mention there's a fork of koboldcpp with ROCm for AMD cards, which is about twice as fast if I remember correctly. Only relevant if you have an AMD card tho.
Edit 2: This is the model I use btw
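koboldcpp sits on top of llama.cpp, so the same offloading idea looks roughly like this with llama-cpp-python (the file name and layer count are just placeholders, tune n_gpu_layers until you stop running out of VRAM):

```python
from llama_cpp import Llama

# Offload part of the model to the GPU, keep the rest in system RAM.
# n_gpu_layers is the knob: 0 = pure CPU, higher = more layers in VRAM.
llm = Llama(
    model_path="deepseek-coder-6.7b-instruct.Q4_K_M.gguf",  # placeholder file name
    n_gpu_layers=20,   # raise/lower until it fits in your 4 GB
    n_ctx=2048,
)

out = llm("Write a haiku about VRAM.", max_tokens=64)
print(out["choices"][0]["text"])
```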
I'm currently playing around with the Jan client, which uses the nitro engine. I think I need to read up on it more, because when I set the ngl value to 15 in order to offload 50% to GPU like the Jan guide says, nothing happens. Though that could be an issue specific to Jan.
Maybe 50% GPU is already using too much VRAM and it crashes. You could try to set it to 0% GPU and see if that works.
4 GB is practically nothing in this space. Ideally you want at least 10 GB of dedicated VRAM, if not more. Keep in mind you're also probably sharing that VRAM with your operating system, so it's more like ~3 GB before you've even started.
Koboldcpp is capable of using both your GPU and CPU together (using a feature called layers), you might wanna consider that. There's a trade-off between the memory available, the quality of the output, and the speed of the calculation.
The model mentioned in this post can be run on the CPU with enough system RAM or swap.
If you wanna keep it all on the GPU, check out 4-bit models. Also there's been a lot of work on trying to do this with the Raspberry Pi. I suspect that their work could help you out here as well.
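To put toy numbers on that trade-off (assuming a 7B model quantized to 4-bit, 32 layers like a typical Llama-style 7B, and ~3 GB of VRAM actually free):

```python
# How many layers of a quantized model fit in the VRAM you have left?
# Toy estimate that assumes weights are spread evenly across layers.
model_gb = 7e9 * 4 / 8 / 1e9   # 7B params at 4 bits ≈ 3.5 GB of weights
n_layers = 32                  # typical for a 7B Llama-style model
per_layer_gb = model_gb / n_layers

vram_free_gb = 3.0             # ~4 GB card minus what the OS/desktop is using
print(int(vram_free_gb // per_layer_gb), "of", n_layers, "layers fit on the GPU")
```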
Depends on your needs. Best look around in !localllama@sh.itjust.works or similar. (I don't wanna say reddit but r/localLlama is much larger.)
If you're more into creative writing, maybe look for places that discuss SillyTavern (r/SillyTavernAI is an option). It's software for role-play chats, which may not be what you want. But the community is (relatively) large and likely to have good tips for non-coding/less technical applications.
If only I had the $ to get a rig that could run this locally
Since I had an okay experience with EasyDiffusion I tried running text gen locally through oobabooga, but no matter which model I load, it just crashes whenever it tries to generate anything, regardless of whether it runs through the UI's chat or SillyTavern. No error in the terminal either, it just stops and throws me back into the command line.
OOTL: What is an LLM and what does it do?