this post was submitted on 12 Oct 2024
183 points (95.5% liked)


Instructions here: https://github.com/ghobs91/Self-GPT

If you’ve ever wanted a ChatGPT-style assistant but fully self-hosted and open source, Self-GPT is a handy script that bundles Open WebUI (chat interface front end) with Ollama (LLM backend).

  • Privacy & Control: Unlike ChatGPT, everything runs locally, so your data stays with you—great for those concerned about data privacy.
  • Cost: Once set up, self-hosting avoids monthly subscription fees. You’ll need decent hardware (ideally a GPU), but there’s a range of model sizes to fit different setups.
  • Flexibility: Open WebUI and Ollama support multiple models and let you switch between them easily, so you’re not locked into one provider.
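
For a concrete sense of what "runs locally" means here: once Ollama is up, chat is just HTTP against a port on your own machine. Below is a minimal sketch (not part of the Self-GPT script) assuming a default Ollama install on port 11434 with a model like llama3 already pulled via `ollama pull llama3`:

```python
# Minimal sketch: query a locally running Ollama instance directly.
# Assumes the default port (11434) and that "llama3" has already been
# pulled (ollama pull llama3); swap in whatever model you actually use.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:11434/api/generate",  # loopback only; nothing leaves the machine
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Explain self-hosting in one sentence."))
```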
[–] jonno@discuss.tchncs.de -2 points 1 month ago (2 children)

Are you running these LLMs in containers completely cut off from the internet? My understanding was that the "local first" LLMs aren't truly offline and only try to answer basic queries offline before contacting their provider for support, which would invalidate the privacy argument.

[–] TheHobbyist@lemmy.zip 19 points 1 month ago (1 child)

From my understanding, the interface, Open WebUI, can run in a container, but Ollama itself runs as a service on your system.

The models are local and, by default, answer queries entirely on your system without any additional tools. If you want to give them internet access, you can, but it's an option you have to set up yourself; Open WebUI makes that possible, though I haven't tried it, I've only seen the option.

I have never heard of any LLM "answering basic queries offline before contacting its provider for support". It's almost impossible for an LLM to do that by itself unless you explicitly set things up that way.
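
If you want to check for yourself, the Ollama service listens on your loopback interface by default and the model weights are just files on your disk. A small sketch (assuming a stock install on port 11434) that lists what's available locally:

```python
# Sketch: ask the local Ollama service which models it has on disk.
# Assumes a default install on 127.0.0.1:11434; nothing here contacts
# any outside provider.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:11434/api/tags") as resp:
    data = json.loads(resp.read())

for m in data.get("models", []):
    print(f"{m['name']}  (~{m.get('size', 0) / 1e9:.1f} GB on local disk)")
```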

[–] Hule@lemmy.world 1 points 1 month ago

I've seen this behavior mentioned for phones (Google, Samsung): they have a chip for basic tasks, but for heavier stuff (e.g. images) they call home.

[–] voracitude@lemmy.world 6 points 1 month ago

Where would an open-source LLM that you run locally phone home to, exactly? It requires a lot of GPU compute; do you think someone's just going to give that away for free, without even requiring an account they can turn into saleable data?

But wait, there's an even better way to be sure: download OpenHardwareMonitor so you can watch your GPU go to 100%, along with this (or GPT4All, or something similar). Then air-gap your computer and try it yourself.
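
If you'd rather script that check than watch a monitor window, here's a rough sketch of the same idea on an NVIDIA box, polling nvidia-smi (a stand-in for OpenHardwareMonitor) while a prompt runs against a local Ollama instance on the default port:

```python
# Rough sketch: watch local GPU load while a local model answers a prompt.
# nvidia-smi is used here as a stand-in for OpenHardwareMonitor; assumes an
# NVIDIA GPU plus Ollama on 127.0.0.1:11434 with "llama3" pulled.
import json
import subprocess
import threading
import time
import urllib.request

def gpu_utilization() -> str:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return out.stdout.strip()

def run_prompt() -> None:
    payload = json.dumps({
        "model": "llama3",
        "prompt": "Write a short paragraph about air-gapped machines.",
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).read()

worker = threading.Thread(target=run_prompt)
worker.start()
while worker.is_alive():  # the load should show up on *your* GPU, not someone else's server
    print("GPU utilization:", gpu_utilization())
    time.sleep(1)
worker.join()
```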