this post was submitted on 23 Sep 2024
14 points (88.9% liked)

Hi all!

I'm trying to use a local LLM to help me write in an Obsidian.md vault. The local LLM runs through a Flatpak called GPT4All, which states that it can expose the model through an OpenAI-compatible API server. The Obsidian plugin can't reach the LLM on the specified port, so I'm wondering whether the Flatpak's settings need to be changed to allow this.

(This is all on Bazzite, so Obsidian is a Flatpak too.)

Anyone have an idea where to start?
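
For anyone who wants to poke at this, a minimal reachability check would look something like the sketch below. It assumes GPT4All's API server is enabled and on its default port 4891, and that it exposes the usual /v1/models listing route; swap in whatever port the plugin is configured for.

```python
# Probe the local OpenAI-compatible endpoint from outside the Flatpak sandbox.
# PORT is assumed to be GPT4All's default API server port; adjust if your
# settings differ.
import json
import urllib.request

HOST = "localhost"
PORT = 4891  # assumed default; check GPT4All's API server settings

url = f"http://{HOST}:{PORT}/v1/models"
try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        print("API server reachable, models:", json.loads(resp.read()))
except OSError as exc:
    # "Connection refused" here means nothing is answering on that port from
    # this side of the sandbox, which points at server or permission config
    # rather than at the Obsidian plugin itself.
    print("Could not reach the API server:", exc)
```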

top 6 comments
[–] that_leaflet@lemmy.world 8 points 2 months ago (1 children)

Flatpak doesn’t care about your ports; apps can access them as long as they have network permission.

[–] UNY0N@lemmy.world 2 points 2 months ago

Thank you for the info!

[–] asap@lemmy.world 3 points 2 months ago (1 children)

It'll be easier to run the LLM in Podman on Bazzite.

[–] UNY0N@lemmy.world 2 points 2 months ago (1 children)
[–] Toribor@corndog.social 1 points 1 month ago (1 children)

I've not tried GPT4All, but Ollama combined with Open WebUI is really great for self-hosted LLMs and can run with Podman. I'm running Bazzite too, and this is what I do.
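
Once the container is up, any Obsidian plugin that speaks the OpenAI API can just point at Ollama's endpoint. A rough sketch of the request it would send, assuming Ollama's default port 11434 is published from the container and a model (here "llama3", as a placeholder) has been pulled:

```python
# Send a chat request to a containerized Ollama instance via its
# OpenAI-compatible route. Port and model name are assumptions; adjust
# to whatever your container publishes and whatever model you've pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

payload = {
    "model": "llama3",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Summarize my last note in one line."}
    ],
}
req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=60) as resp:
    reply = json.loads(resp.read())
    print(reply["choices"][0]["message"]["content"])
```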

[–] UNY0N@lemmy.world 1 points 1 month ago

I was trying so hard to get GPT4All to work inside of Obsidian because it has this LocalDocs feature, where you can feed it documents and it indexes them for the model to draw on. I want the model to be aware of the whole vault, and also to generate notes based on the vault contents, constantly updating.

It looks like Podman Desktop + the ChatGPT Obsidian plugin is the way to go though; I'm playing around with it and it looks promising.
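
For the vault-awareness idea above, a very rough sketch of the approach (not LocalDocs itself, just stuffing a handful of notes into the prompt of whatever local OpenAI-compatible server ends up running; the vault path, port, and model name here are all placeholders):

```python
# Illustrative only: approximate "the model sees the vault" by concatenating
# a few markdown notes into the system prompt of a local OpenAI-compatible
# server. A real setup would chunk/embed the notes instead.
import json
import pathlib
import urllib.request

VAULT = pathlib.Path.home() / "Documents" / "vault"  # placeholder vault path
API_URL = "http://localhost:11434/v1/chat/completions"  # e.g. Ollama in Podman

# Read (at most) the first 20 notes; keep this small to fit the context window.
notes = "\n\n".join(p.read_text() for p in sorted(VAULT.glob("**/*.md"))[:20])

payload = {
    "model": "llama3",  # placeholder model name
    "messages": [
        {"role": "system", "content": "Answer using the user's notes:\n" + notes},
        {"role": "user", "content": "Draft a new note summarizing open questions."},
    ],
}
req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=120) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```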