Oobabooga is a pretty beginner-friendly solution for running LLMs locally. Models are freely available on Hugging Face; look for GGUF quantizations, which are typically offered in a wide range of sizes so you can pick one that fits in your VRAM. If the model exceeds your VRAM and layers get offloaded to system memory, generation will be far slower.
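If you're curious what's happening under the hood, here's a minimal sketch using llama-cpp-python (one of the loaders Oobabooga can use; the model filename is just a placeholder for whatever quant you downloaded) showing how GPU offload is controlled:

```python
# pip install llama-cpp-python (built with GPU support)
from llama_cpp import Llama

llm = Llama(
    model_path="noromaid-20b.Q4_K_M.gguf",  # example filename; use your own quant
    n_gpu_layers=-1,  # -1 = offload every layer to the GPU
    n_ctx=4096,       # context window; larger contexts cost more VRAM
)

out = llm("Write a short scene set in a rainy city.", max_tokens=200)
print(out["choices"][0]["text"])
```

If the model doesn't quite fit, you can set `n_gpu_layers` to a smaller number to keep only part of it on the GPU, at the cost of speed.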
I've had the best results with Noromaid20B and Rose20B quants running on a 16GB 4080. Don't expect them to be as smart as GPT-4, but those models do a pretty good job of following instructions and writing decent prose.
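As a rough back-of-the-envelope check on why ~4-bit 20B quants squeeze into 16GB (the bits-per-weight figures below are approximations, not exact numbers for any specific GGUF):

```python
# Rough VRAM estimate: quantized weights plus some overhead for the
# KV cache and runtime buffers. Bits-per-weight values are approximate.
params = 20e9  # 20B parameters

for name, bits in [("Q8_0", 8.5), ("Q5_K_M", 5.7), ("Q4_K_M", 4.8), ("Q3_K_M", 3.9)]:
    weights_gb = params * bits / 8 / 1e9
    total_gb = weights_gb + 2.0  # assume ~2 GB overhead for context etc.
    print(f"{name}: ~{weights_gb:.1f} GB weights, ~{total_gb:.1f} GB total")
```

A Q4-ish quant of a 20B model lands around 12–14 GB all in, which is why it works on a 4080, while a Q8 of the same model wouldn't fit.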
Once you mess around with Oobabooga a bit, I'd highly recommend picking up the SillyTavern front-end. Oobabooga runs the actual model, while SillyTavern manages characters and world lore and offers a wide range of other features, including a "visual novel" mode where character sprites emote based on the content of the messages. It takes a while to get the hang of, but it's pretty cool.
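The backend/frontend split works because Oobabooga exposes an API that SillyTavern (or anything else) can talk to. As a sketch, assuming you started text-generation-webui with `--api` and it's serving its OpenAI-compatible endpoint on the default port 5000 (check your console output, since defaults can change between versions), a raw request looks something like this:

```python
# Poke the same local backend that SillyTavern talks to.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/completions",  # default endpoint; verify in your console
    json={
        "prompt": "Describe the tavern as the adventurers walk in.",
        "max_tokens": 150,
        "temperature": 0.8,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```

SillyTavern does essentially this for you on every message, just with your character cards and world lore baked into the prompt.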