this post was submitted on 14 Feb 2024
481 points (97.4% liked)

Technology

[–] Gork@lemm.ee 71 points 9 months ago (6 children)

Are there any Open Source girlfriends that we can download and compile?

[–] herrcaptain@lemmy.ca 57 points 9 months ago (1 children)

Hey now, I don't want anyone looking at my girlfriend's source code. That's personal!

[–] demonsword@lemmy.world 30 points 9 months ago

I don’t want anyone looking at my girlfriend’s source code

it's okay, dude, we all already did...

[–] DarkThoughts@kbin.social 16 points 9 months ago* (last edited 9 months ago) (2 children)

The bots (what the actual girlfriends or whatever other characters are) aren't the problem. You can find them on chub.ai, for example, or write them yourself fairly easily. The issue is the software, and even more so the hardware. You need something like the mentioned Kobold.cpp or oobabooga, and then you'd also need a trained LLM model that you can get on huggingface.co, which is already where it gets complicated (the models are loaded within Kobold or oobabooga). You also need to understand how they work in regards to context sizes, because those need a lot, and I mean A LOT, of VRAM to work properly. Basically, the more VRAM you have, the better the bot's contextual understanding, i.e. its memory. Otherwise you'd have a bot that can maybe only contextualize the last couple of messages.

For paid services like novelai.net, your bots basically run on big server farms with lots of GPUs that pool their VRAM and processing power, giving you "decent" context sizes (imo the greatest weak point of LLMs, and one deeply rooted in how they work) at decent speed. NovelAI also supports front-ends like SillyTavern, which is great for local bot management and settings regardless of whether you self-host or use a paid service (NOT EVERY PAID SERVICE HAS AN API FOR THIS! OpenAI's ChatGPT technically does too, but they do not allow NSFW content and can ban you for it if caught).

There's a bunch of "free" online services too, like janitorai.com, but most of them are slow, and the chat degrades significantly after just a few messages because they use small context sizes. The better / paid models suffer from this degradation too, but it sets in later and is less noticeable, at least at first. You can use those free services to get an idea of how LLMs work, though.

Edit: Should technically be self-explanatory / common sense, but I would advise against sharing ANY personal information through online chat services that could identify you as a person!
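The context-size limitation described above can be sketched in a few lines of Python. This is a toy helper (hypothetical names, not code from Kobold.cpp or oobabooga) that trims a chat history to whatever token budget the hardware allows, which is exactly why a small context means the bot only "remembers" the last couple of messages:

```python
# Toy illustration of an LLM context window (hypothetical helper, not
# part of any tool mentioned above). Real software counts tokens with
# the model's tokenizer; here we approximate 1 token ~= 1 word.

def count_tokens(text: str) -> int:
    """Rough token estimate: one token per whitespace-separated word."""
    return len(text.split())

def build_context(history: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent messages that fit in the token budget.

    Everything older falls out of the window, which is why a bot with a
    small context 'forgets' the start of the conversation.
    """
    kept: list[str] = []
    used = 0
    for msg in reversed(history):   # walk from the newest message back
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break                   # older messages no longer fit
        kept.append(msg)
        used += cost
    return list(reversed(kept))     # restore chronological order

history = [
    "hi, my name is Alex and I live in Berlin",
    "nice! what do you do for fun?",
    "mostly hiking and retro games",
    "oh cool, which games?",
]

# Big budget: everything fits. Tiny budget: the name is already forgotten.
print(build_context(history, max_tokens=100))
print(build_context(history, max_tokens=10))
```

With `max_tokens=10` only the last two messages survive, so the bot no longer "knows" the user's name; more VRAM buys a bigger `max_tokens`, nothing more magical than that.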

[–] Gork@lemm.ee 19 points 9 months ago (3 children)

Does it make it faster if the GPU has waifu stickers on it?

[–] DarkThoughts@kbin.social 13 points 9 months ago

I don't know, I'm not a weeb.

[–] Turun@feddit.de 3 points 9 months ago

Define "it"

Because waifu stickers may indeed speed up "it" for some definition of "it"

[–] HelloHotel@lemmy.world 1 points 9 months ago* (last edited 8 months ago)

It'll do the opposite, I'm afraid. OW! Hot... umm, what's that awful smell of burning plastic?

[–] SwampYankee@mander.xyz -1 points 9 months ago (2 children)

Basically, the more vram you have, the better the contextual understanding, their memory is. Otherwise you’d have a bot that maybe knows to only contextualize the last couple messages.

Hmm, if only there was some hardware analogue for long-term memory.

[–] OKRainbowKid@feddit.de 2 points 9 months ago (1 children)

What are you trying to say? Do you understand what the problem is?

[–] SwampYankee@mander.xyz 1 points 9 months ago

I guess I'm wondering if there's some way to bake the contextual understanding into the model instead of keeping it all in vram. Like if you're talking to a person and you refer to something that happened a year ago, you might have to provide a little context and it might take them a minute, but eventually, they'll usually remember. Same with AI, you could say, "hey remember when we talked about [x]?" and then it would recontextualize by bringing that conversation back into vram.

Seems like more or less what people do with Stable Diffusion by training custom models, LoRAs, or embeddings. It would just be interesting if it were a more automatic process as part of interacting with the AI - the model always being updated with information about your preferences instead of having to be told explicitly.

But mostly it was just a joke.

[–] DarkThoughts@kbin.social 1 points 9 months ago

Yes, databases (saved on a hard drive). SillyTavern has Smart Context, but that doesn't seem all that easy to install, so I have no idea how well it actually works in practice yet.
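The "database on a hard drive" idea boils down to retrieval: store old messages outside the context window, then pull the relevant ones back in when the user references them. Here's a minimal sketch using keyword-overlap scoring (all names invented for illustration; real systems like SillyTavern's Smart Context rank by embedding similarity instead):

```python
# Toy long-term memory via retrieval (hypothetical sketch; production
# tools use vector embeddings rather than raw word overlap).

def score(query: str, memory: str) -> int:
    """Number of words the query shares with a stored message."""
    return len(set(query.lower().split()) & set(memory.lower().split()))

def recall(query: str, memories: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k stored messages most relevant to the query."""
    ranked = sorted(memories, key=lambda m: score(query, m), reverse=True)
    return [m for m in ranked[:top_k] if score(query, m) > 0]

memories = [
    "we talked about your trip to Lisbon last summer",
    "you said your favorite game is Chrono Trigger",
    "you mentioned you are allergic to peanuts",
]

# "hey, remember when we talked about that trip?" pulls the matching
# memory back so it can be prepended to the prompt, inside the window.
print(recall("remember when we talked about that trip", memories))
```

The recalled text still has to fit in the context window alongside the recent chat, so retrieval works around the VRAM limit rather than removing it.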

[–] pennomi@lemmy.world 11 points 9 months ago (2 children)

Pretty easy to roll your own with Kobold.cpp and various open model weights found on HuggingFace.

[–] TipRing@lemmy.world 8 points 9 months ago

Also, for an interface, I'd recommend KoboldLite for writing/assistant use and SillyTavern for chat/RP.

[–] DarkThoughts@kbin.social 4 points 9 months ago (1 children)

I tried oobabooga and it basically always crashes when I try to generate anything, no matter which model I use. But honestly, as far as I can tell, all the good models require absurd amounts of VRAM, much more than consumer cards have, so you'd need at least a small GPU server farm to reliably host them yourself. Unless, of course, you accept practically nonexistent context sizes.

[–] exu@feditown.com 4 points 9 months ago

You'll want to use a quantised model on your GPU. You could also run on the CPU and offload some layers to the GPU with llama.cpp (an option in oobabooga). Llama.cpp models use the GGUF format.
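Rough arithmetic on why quantisation makes this feasible, with illustrative numbers (weights only; real usage adds KV-cache, activations, and file-format overhead):

```python
# Back-of-envelope VRAM estimate for model weights alone (illustrative;
# a 7B model at 16-bit needs roughly 4x the memory of the same model
# quantised to 4-bit).

def weight_gib(params_billion: float, bits_per_weight: int) -> float:
    """Approximate size of the weights in GiB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{weight_gib(7, bits):.1f} GiB")
```

At 16-bit a 7B model's weights alone (~13 GiB) already exceed most consumer cards, while a 4-bit quant (~3.3 GiB) fits comfortably, leaving room for context.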

[–] itsAsin@lemmy.world 2 points 9 months ago (1 children)

I second this request. Please.

[–] DarkThoughts@kbin.social 1 points 9 months ago

See my other reply for some basic info & pointers.