this post was submitted on 23 May 2024
45 points (94.1% liked)

Selfhosted


I am a teacher and I have a LOT of different literature material that I wish to study, and play around with.

I wish to have a self-hosted and reasonably smart LLM into which I can feed all the textual material I have generated over the years. I would be interested to see if this model can answer some of the subjective course questions that I have set in my exams, or write short paragraphs about the topics I teach.

In terms of hardware, I have an old Lenovo laptop with an NVIDIA graphics card.

P.S.: I am not very experienced technically. I run Linux and can do very basic stuff. I've never self-hosted anything other than LibreTranslate and a Pi-hole!

all 25 comments
[–] Skrufimonki@lemmynsfw.com 8 points 6 months ago (1 children)

While you can run an LLM on an "old" laptop with an NVIDIA graphics card, it will likely be really slow. Like several minutes to much, much longer slow. Huggingface.co is a good place to start and has a ton of different LLMs to choose from, ranging from ones small enough to run on your hardware to ones that won't fit at all.
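
If you want a quick feel for that route, here is a minimal sketch using the `transformers` library; the model name is only an example of a relatively small model from huggingface.co, so swap in whatever fits your hardware:

```python
# Minimal sketch: run a small Hugging Face model locally with the transformers library.
# "microsoft/phi-2" is just an example of a smaller model; pick your own from the hub.
from transformers import pipeline

generator = pipeline("text-generation", model="microsoft/phi-2")

result = generator(
    "Summarise the main themes of Romeo and Juliet in two sentences.",
    max_new_tokens=120,
)
print(result[0]["generated_text"])
```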

As you are a teacher, you know that research is going to be vital to understanding and implementing this project. There is a plethora of information out there. No single person's answer will work perfectly for your wants and your hardware.

When you have figured out your plan and then run into issues, that's a good point to ask questions with more information about your situation.

I say this cause I just went through this. Not to be an ass.

[–] lemmyvore@feddit.nl 2 points 6 months ago (1 children)

Can they not get a TPU on USB, like the Coral Accelerator or something?

[–] theterrasque@infosec.pub 1 points 6 months ago

It's less about the calculations and more about memory bandwidth. To generate a token you need to go through all the model data, and that's usually many, many gigabytes. So the time it takes to read through memory is usually longer than the compute time. GPUs have GBs of RAM that are many times faster than the CPU's RAM, which is the main reason they're faster for LLMs.

Most TPUs don't have much RAM, especially the cheap ones.
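
As a rough back-of-the-envelope illustration (the bandwidth and model-size numbers below are ballpark assumptions, not measurements):

```python
# Token generation speed is roughly bounded by how fast the weights can be
# streamed from memory. All numbers here are ballpark assumptions.
model_size_gb = 4.0        # e.g. a 7B model quantized to ~4 bits per weight
cpu_bandwidth_gbs = 50.0   # typical dual-channel DDR4 system RAM
gpu_bandwidth_gbs = 450.0  # typical mid-range discrete GPU VRAM

for name, bandwidth in [("CPU RAM", cpu_bandwidth_gbs), ("GPU VRAM", gpu_bandwidth_gbs)]:
    tokens_per_second = bandwidth / model_size_gb  # one full pass over the weights per token
    print(f"{name}: ~{tokens_per_second:.0f} tokens/s upper bound")
```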

[–] Fisch@discuss.tchncs.de 6 points 6 months ago

What I'm using is Text Generation WebUI with an 11B GGUF model from Huggingface. I offloaded all layers to the GPU, which uses about 9GB of VRAM. With GGUF models, you can choose how many layers to offload to the GPU, so it uses less VRAM. Layers that aren't offloaded use system RAM and the CPU, which will be slower.
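
For illustration, the same offloading idea as a minimal sketch with llama-cpp-python (one of the backends Text Generation WebUI can use); the model filename is a placeholder:

```python
# Minimal sketch: load a GGUF model and offload layers to the GPU.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-11b-model.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # -1 offloads all layers to VRAM; lower it if you run out
    n_ctx=4096,       # context window in tokens
)

out = llm("Explain what an unreliable narrator is in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```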

[–] OpticalMoose@discuss.tchncs.de 5 points 6 months ago

Probably better to ask on !localllama@sh.itjust.works. Ollama should be able to give you a decent LLM, and RAG (Retrieval Augmented Generation) will let it reference your dataset.

The only issue is that you asked for a smart model, which usually means a larger one, plus the RAG portion consumes even more memory, which may be more than a typical laptop can handle. Smaller models have a higher tendency to hallucinate - produce incorrect answers.

Short answer - yes, you can do it. It's just a matter of how much RAM you have available and how long you're willing to wait for an answer.
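
For a sense of the effort involved, talking to a local Ollama instance is only a few lines; this sketch assumes Ollama is running on its default port and that you've already pulled a model (e.g. `ollama pull llama3`):

```python
# Minimal sketch: ask a locally running Ollama instance a question over its HTTP API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={
        "model": "llama3",  # any model you have pulled locally
        "prompt": "Write a short paragraph on the themes of Macbeth.",
        "stream": False,    # return a single JSON object instead of a stream
    },
    timeout=600,
)
print(resp.json()["response"])
```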

[–] stanleytweedle@lemmy.world 4 points 6 months ago

I'm in the early stages of this myself and haven't actually run an LLM locally, but the term that steered me in the right direction for what I was trying to do was RAG (Retrieval-Augmented Generation).

ragflow.io (terrible name but good product) seems to be a good starting point, but it is mainly set up for APIs at the moment. I found this link for local LLM integration and I'm going to play with it later today: https://github.com/infiniflow/ragflow/blob/main/docs/guides/deploy_local_llm.md
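
To make the idea concrete, here is a very small sketch of the retrieval half of RAG, assuming the sentence-transformers package; the chunks and the embedding model name are just placeholders:

```python
# Minimal sketch of RAG retrieval: embed note chunks, find the ones most similar
# to a question, and paste them into the prompt you send to your local LLM.
import numpy as np
from sentence_transformers import SentenceTransformer

chunks = [
    "Lecture 3: the unreliable narrator in modernist fiction...",
    "Exam 2019, Q4: discuss symbolism in The Great Gatsby...",
    "Handout: close-reading checklist for poetry analysis...",
]  # in practice, split your own documents into paragraph-sized chunks

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # a common small embedding model
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

question = "How should students approach symbolism questions?"
q_vec = embedder.encode([question], normalize_embeddings=True)[0]

scores = chunk_vecs @ q_vec                       # cosine similarity (vectors are normalized)
best = [chunks[i] for i in np.argsort(scores)[::-1][:2]]

prompt = "Answer using these notes:\n" + "\n".join(best) + "\n\nQuestion: " + question
print(prompt)  # this is what you would send to the LLM
```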

[–] umami_wasbi@lemmy.ml 3 points 6 months ago
[–] h3ndrik@feddit.de 3 points 6 months ago* (last edited 6 months ago)

It depends on the exact specs of your old laptop. Especially the amount of RAM and VRAM on the graphics card. It's probably not enough to run any reasonably smart LLM aside from maybe Microsoft's small "phi" model.

So unless it's a gaming machine with 6GB+ of VRAM, the graphics card will probably not help at all. Without it, things are going to be slow. For that kind of computer, I recommend projects that are based on llama.cpp or use it as a backend. It's the best/fastest way to do inference on slow computers and CPUs.

Furthermore, you could use online services or rent a cloud computer with a beefy graphics card by the hour (or minute).

[–] dlundh@lemmy.world 2 points 6 months ago (2 children)

I watched NetworkChuck's tutorial and just did what he did, but on my MacBook. Any recent MacBook (M-series) will suffice. https://youtu.be/Wjrdr0NU4Sk?si=myYdtKnt_ks_Vdwo

[–] PipedLinkBot@feddit.rocks 1 points 6 months ago

Here is an alternative Piped link(s):

https://piped.video/Wjrdr0NU4Sk?si=myYdtKnt_ks_Vdwo

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] slurpinderpin@lemmy.world 0 points 6 months ago

NetworkChuck is the man

[–] pushECX@lemmy.world 2 points 6 months ago* (last edited 6 months ago)

I'd recommend trying LM Studio (https://lmstudio.ai/). You can use it to run language models locally. It has a pretty nice UI and it's fairly easy to use.

I will say, though, that it sounds like you want to feed perhaps a large number of tokens into the model, which will require a model made for a large context length and may require a pretty beefy machine.
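
One way to gauge that before picking a model is to count the tokens in your material. This sketch uses tiktoken, which is OpenAI's tokenizer, so counts for local models will differ a bit, but it gives a usable ballpark (the filename is a placeholder):

```python
# Rough sketch: estimate whether your notes fit in a model's context window.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # OpenAI tokenizer, used here as a rough proxy
with open("my_course_notes.txt", encoding="utf-8") as f:  # placeholder filename
    text = f.read()

n_tokens = len(enc.encode(text))
print(f"~{n_tokens} tokens; an 8k-context model holds roughly 8000 of these at a time")
```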

[–] Sims@lemmy.ml 2 points 6 months ago

You need more than an LLM to do that. You need a Cognitive Architecture around the model that includes RAG to store/retrieve the data. I would start with an agent network (CA) that already includes the workflow you're asking for. Unfortunately I don't have a name ready for you, but take a look here: https://github.com/slavakurilyak/awesome-ai-agents

[–] umami_wasbi@lemmy.ml 2 points 6 months ago
[–] s38b35M5@lemmy.world 1 points 6 months ago (1 children)

https://matilabs.ai/2024/02/07/run-llms-locally/

Haven't done this yet, but this is a source I saved in response to a similar question a while back.

[–] Sekki@lemmy.ml 2 points 6 months ago (1 children)

While this will get you a self-hosted LLM, it is not possible to feed data to it like this. As far as I know there are two possibilities:

  1. Take an existing model and fine-tune it with the literature data. The success of this will depend on how much "a lot" means when it comes to the literature.

  2. Create a model yourself using only your literature data

Both approaches will require some programming knowledge and an understanding of how an LLM works. Additionally, they will require preparing the unstructured literature data into a kind of structured data that can be used to train or fine-tune the model.
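
For the data-preparation step, a tiny sketch of what "structured data" could look like in practice; the field names and file format are illustrative, not any particular tool's required schema:

```python
# Sketch: turn question/answer pairs from your material into JSONL,
# a format many fine-tuning tools accept in some variation.
import json

pairs = [
    {
        "prompt": "Discuss the role of fate in Romeo and Juliet.",
        "response": "A model answer you wrote or approved goes here...",
    },
    # ... one entry per question/answer pair from your literature material
]

with open("finetune_data.jsonl", "w", encoding="utf-8") as f:
    for pair in pairs:
        f.write(json.dumps(pair, ensure_ascii=False) + "\n")
```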

I'm just a CS student, so not an expert in this regard ;)

[–] s38b35M5@lemmy.world 1 points 6 months ago

Thx for this comment.

My main drive for self hosting is to escape data harvesting and arbitrary query limits, and to say, "I did this." I fully expect it to be painful and not very fulfilling...

[–] Evotech@lemmy.world 1 points 6 months ago* (last edited 6 months ago)

There's a few.

Very easy if you set it up with Docker.

The best option is probably just Ollama with Danswer as a frontend. Danswer will do all the RAG stuff for you, like managing/uploading documents and so on.

Ollama is becoming the standard for self-hosted LLMs. And you can add any models you want / can fit.

https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image

https://docs.danswer.dev/quickstart

[–] RichardoC@lemmy.world 1 points 6 months ago

Jan.ai might be a good starting point, or Ollama? There's https://tales.fromprod.com/2024/111/using-your-own-hardware-for-llms.html which has some guidance for using Jan.ai for both the server and the client.

[–] d416@lemmy.world 1 points 6 months ago

The easiest way to run local LLMs on older hardware is Llamafile https://github.com/Mozilla-Ocho/llamafile

For non-NVIDIA GPUs, WebGPU is the way to go: https://github.com/abi/secret-llama

[–] theterrasque@infosec.pub 1 points 6 months ago* (last edited 6 months ago)

Reasonably smart... that would preferably be a 70B model, but maybe phi3-14b or llama3 8b could work. They're rather impressive for their size.

For just the model, if one of the small ones works, you probably need 6+ GB of VRAM. For a 70B you need roughly 40 GB.

And then there's the context. Most models are optimized for around 4k to 8k tokens. One word is roughly one to two tokens. The VRAM needed for the context varies a bit, but it is not trivial. For 4k I'd say roughly half a gig to a gig of VRAM.

As you go to higher context sizes, the VRAM requirement for the context starts to eclipse the model's VRAM cost, and you will need specialized models to handle that big a context without going off the rails.

So no, you're not loading all the notes directly, and you won't have a smart model.

For your hardware and use case... try phi3-mini with a RAG system as a start.
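
As a rough sanity check on those numbers, a back-of-the-envelope estimator; the constants are ballpark assumptions, not exact figures:

```python
# Rough VRAM estimate: quantized weights plus a small allowance for the context.
def rough_vram_gb(params_billion, bits_per_weight=4, context_tokens=4096,
                  kv_gb_per_1k_tokens=0.15):
    weights_gb = params_billion * bits_per_weight / 8          # e.g. 8B at 4-bit -> ~4 GB
    context_gb = context_tokens / 1000 * kv_gb_per_1k_tokens   # crude KV-cache allowance
    return weights_gb + context_gb

for size in (3.8, 8, 70):  # roughly phi3-mini, llama3 8b, a 70b model
    print(f"{size}B model, 4-bit, 4k context: ~{rough_vram_gb(size):.1f} GB VRAM")
```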

[–] applepie@kbin.social -5 points 6 months ago (1 children)

You would need a 24GB VRAM card to even start this thing up. Prolly would yield shitty results.

[–] Bipta@kbin.social 5 points 6 months ago (1 children)

They didn't even mention a specific model. Why would you say they need 24gb to run any model? That's just not true.

[–] applepie@kbin.social -1 points 6 months ago

I didn't say any model. Based on what he is asking, he can't just run this on an old laptop.