
I'm interested in hosting something like this, and I'd like to hear about people's experiences with it.

The main reasons to host this are privacy and being able to integrate my own PKM data (mainly markdown files).

Feel free to recommend videos, articles, other Lemmy communities, etc.

exu@feditown.com · 3 points · 10 months ago

If you're using llama.cpp, have a look at the GGUF models from TheBloke on Hugging Face. He lists the approximate RAM required in each model's README, broken down by quantisation level.
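
If you'd rather drive this from Python than the raw llama.cpp binary, here's a rough sketch using the llama-cpp-python bindings plus huggingface_hub to pull one of TheBloke's quants. The repo_id and filename below are just example picks (one of his Mistral 7B uploads), not something the comment above specifically recommends:

```python
# Rough sketch: download a GGUF quant from TheBloke and run it on CPU.
# Assumes `pip install llama-cpp-python huggingface_hub`; the repo and
# filename are example choices, swap in whatever model you actually want.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single quantised file (Q4_K_M is a common size/quality tradeoff).
model_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
)

# n_ctx is the context window; larger values use more RAM for the KV cache.
llm = Llama(model_path=model_path, n_ctx=2048, n_threads=8)

out = llm("Q: Why self-host an LLM? A:", max_tokens=128)
print(out["choices"][0]["text"])
```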

From personal experience, I'd estimate around 12 GB for 7B models, judging by how full my 16 GB of RAM was. For Mixtral, at least 32 GB.
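
As a back-of-the-envelope check on those numbers (my own rule of thumb, not something from TheBloke's readmes): the weights alone need roughly parameters × bits-per-weight / 8 bytes, and the KV cache, runtime buffers, and the rest of your system sit on top of that, which is why observed usage like the 12 GB above runs well past the model file size:

```python
# Back-of-the-envelope RAM estimate for a quantised GGUF model.
# The overhead term is a guess covering KV cache, buffers, and the OS;
# real usage depends on context length and llama.cpp settings.
def approx_ram_gb(params_billion: float, bits_per_weight: float,
                  overhead_gb: float = 4.0) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits = 1 GB
    return weights_gb + overhead_gb

# 7B at Q8_0 (~8 bits/weight): ~11 GB, in line with the ~12 GB observation
print(f"7B @ Q8_0: ~{approx_ram_gb(7, 8):.0f} GB")
# Mixtral 8x7B (~47B params total) at ~4.5 bits/weight: ~30 GB, hence >= 32 GB
print(f"Mixtral @ Q4_K_M: ~{approx_ram_gb(47, 4.5):.0f} GB")
```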

Thanks, appreciate it (I'm new to running text models locally on the CPU; I know it was a stupid question).