this post was submitted on 14 Feb 2024
481 points (97.4% liked)

Technology

you are viewing a single comment's thread
[–] Gork@lemm.ee 71 points 11 months ago (19 children)

Are there any Open Source girlfriends that we can download and compile?

[–] pennomi@lemmy.world 11 points 11 months ago (3 children)

Pretty easy to roll your own with Kobold.cpp and various open model weights found on HuggingFace.

[–] DarkThoughts@kbin.social 4 points 11 months ago (1 children)

I tried oobabooga and it basically always crashes when I try to generate anything, no matter which model I use. But honestly, as far as I can tell, all the good models require absurd amounts of VRAM, much more than consumer cards have, so you'd need at least a small GPU server farm to host them locally with any reliability. Unless, of course, you can live with practically nonexistent context sizes.
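For scale, here's a rough back-of-envelope estimate of the VRAM picture. All shapes and bit-widths are illustrative assumptions (a Llama-2-7B-like model: 7e9 parameters, 32 layers, hidden size 4096), not measurements:

```python
# Rough VRAM estimate; the model shape and bit-widths below are
# illustrative assumptions (Llama-2-7B-like), not measurements.

GIB = 1024 ** 3

def weight_bytes(n_params: float, bits_per_weight: float) -> float:
    """Memory needed for the model weights at a given precision."""
    return n_params * bits_per_weight / 8

def kv_cache_bytes(n_layers: int, hidden_size: int, n_ctx: int) -> float:
    """fp16 key+value cache for a dense transformer at full context."""
    return 2 * n_layers * n_ctx * hidden_size * 2  # K and V, 2 bytes each

fp16_gib = weight_bytes(7e9, 16) / GIB         # full precision: ~13 GiB
q4_gib = weight_bytes(7e9, 4.5) / GIB          # ~4-bit quant: ~3.7 GiB
kv_gib = kv_cache_bytes(32, 4096, 4096) / GIB  # ~2 GiB at 4096 tokens

print(f"fp16: {fp16_gib:.1f} GiB, 4-bit: {q4_gib:.1f} GiB, KV: {kv_gib:.1f} GiB")
```

By this estimate a 4-bit 7B model plus a 4096-token cache fits in an 8 GB consumer card; it's mainly the 30B+ models and very long contexts that push you toward multi-GPU setups.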

[–] exu@feditown.com 4 points 11 months ago

You'll want to use a quantised model on your GPU. You could also run on the CPU and offload some of the layers to the GPU with llama.cpp (available as a loader option in oobabooga). Llama.cpp models use the GGUF format.
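A minimal sketch of that setup using the llama-cpp-python bindings (requires `pip install llama-cpp-python` plus a GGUF file downloaded from HuggingFace; the model path and layer count below are assumptions, not a recommendation — `n_gpu_layers` controls how many transformer layers are offloaded to VRAM, and the rest stay in system RAM on the CPU):

```python
# Sketch only: assumes llama-cpp-python is installed and a quantised
# GGUF model file exists at the path below (both are illustrative).
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # any GGUF quant
    n_gpu_layers=20,  # layers offloaded to the GPU; -1 offloads everything
    n_ctx=4096,       # context window in tokens
)

out = llm("Q: What is a GGUF file? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

If generation is slow, raising `n_gpu_layers` until VRAM is nearly full is the usual first tweak.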
