this post was submitted on 25 Sep 2024
151 points (89.9% liked)
Technology
you are viewing a single comment's thread
HOLY HELL THAT'S COOL. It can do so much too!!!
I installed some small LLM locally more than a year ago. It took up something like 25 GB, along with all the CUDA libraries and such. It was alright, but I figured cloud-based solutions were the best fit for my use case, since they performed better and were free.
I had no idea that open-source AI had progressed so much in the last year. Amazing stuff!
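For scale, here's a rough back-of-envelope sketch of why local weights eat disk like that. The 13B parameter count is just an illustrative assumption, not the actual model from the comment, and this only counts raw weight storage, not the CUDA libraries:

```python
# Rough weight-storage estimate for an LLM at different precisions.
# Purely illustrative: 13B params is an assumed size, not a specific model.
def weights_gb(n_params: float, bytes_per_param: float) -> float:
    """Raw weight storage in GB (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

params = 13e9  # assumed 13B-parameter model
print(f"fp32: {weights_gb(params, 4):.1f} GB")    # full precision
print(f"fp16: {weights_gb(params, 2):.1f} GB")    # half precision
print(f"int4: {weights_gb(params, 0.5):.1f} GB")  # 4-bit quantized
```

So a mid-size model at full or half precision plausibly lands in the tens of gigabytes, while 4-bit quantization cuts that by roughly 8x versus fp32.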
It depends on how you run it, etc. You may not have been using a quantized model.
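For anyone wondering what quantization actually does here, a minimal sketch of absmax int8 quantization, which is the basic idea; real schemes used for local LLMs (e.g. GGUF's k-quants) are more sophisticated:

```python
# Minimal absmax int8 quantization sketch: store each weight as a signed
# byte plus one shared scale factor, for ~4x less memory than fp32.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]  # ints in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.03, 0.88, -0.51]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each weight is off by at most ~half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, max_err)
```

The memory saving is why a quantized model can run on hardware that the full-precision version won't fit on, at the cost of a small per-weight rounding error.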
I was using the quantized version :(
But again, remember that this was when the first open-source AI models had just begun to come out. Stuff from Open Assistant, for example. I don't even remember the name of the model I was running (it was just too weird and funny lol). I just remember it being HUGE, quite dumb, and making my device sweat lol.