[–] 1984@lemmy.today 23 points 3 months ago (6 children)

There are models you can download and run at home that don't have the politically correct censorship built in. It's very nice not to have the artificial politeness, for example, and the models actually answer the questions you ask.

You need a powerful computer for some of them though.

[–] Steviepoo@lemmy.world 6 points 3 months ago (5 children)

Such as? Where would a technologically proficient AI beginner start?

[–] merari42@lemmy.world 13 points 3 months ago (2 children)

For a user without much technical experience, a ready-made GUI like Jan.ai is probably a good start: it downloads models automatically and runs them via the GGML library on consumer-grade hardware such as Mac M-series chips or cheap Nvidia or AMD GPUs.

For slightly more technically proficient users, Ollama is probably a great choice for hosting your own OpenAI-like API for local models. I mostly run Gemma 2 or the smaller Llama 3.1 models with it.
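
As a minimal sketch of what that looks like in practice, here is one way to talk to Ollama's OpenAI-compatible endpoint from Python (this assumes Ollama is running locally on its default port 11434 and that you have already pulled a model, e.g. `ollama pull gemma2`; the model names and prompt are just illustrative):

```python
# Requires: pip install openai
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API under /v1.
# The api_key is not checked, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="gemma2",  # any locally pulled model, e.g. "llama3.1:8b"
    messages=[{"role": "user", "content": "Explain what GGUF quantization is in two sentences."}],
)
print(response.choices[0].message.content)
```

Because the API shape matches OpenAI's, most existing client libraries and tools can be pointed at the local server just by changing the base URL.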

[–] Ravi@feddit.org 1 points 3 months ago

Open WebUI is also a great and simple solution that uses Ollama under the hood. It was pretty easy to set up with Docker.
