this post was submitted on 28 Jan 2025
118 points (93.4% liked)

[–] banshee@lemmy.world 26 points 5 months ago (2 children)

Just to clarify - DeepSeek censors its hosted service. Self-hosted models aren't affected.

[–] LorIps@lemmy.world 12 points 5 months ago (2 children)

DeepSeek v2 is censored locally too; I had a bit of fun asking it about China 19891000028459. (Running locally using Ollama with Alpaca as the GUI.)
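For anyone who wants to reproduce this, here's a minimal sketch of the same kind of check against a local Ollama instance over its HTTP API (the model tag and prompt are illustrative; substitute whatever `ollama list` shows on your machine):

```python
# Minimal sketch: ask a locally running Ollama model a sensitive question
# and print the reply. Assumes Ollama is listening on its default port 11434
# and that the model tag below has already been pulled.
import requests

MODEL = "deepseek-v2"  # illustrative tag; use whatever `ollama list` reports

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": MODEL,
        "prompt": "What happened in Beijing in June 1989?",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```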

[–] tyler@programming.dev 4 points 5 months ago (1 children)

Oh, another person who's actually running it locally! In your opinion, is R1-32B better than Claude Sonnet 3.5 or OpenAI o1? IMO it's been quite bad, but I've mostly been using it for programming tasks, and it really hasn't been able to answer any of my prompts satisfactorily. If it's working for you, I'd be interested in hearing some of the topics you've been discussing with it.

[–] LorIps@lemmy.world 3 points 5 months ago (1 children)

R1-32B hasn't been added to Ollama yet; the model I use is DeepSeek v2. But as they're both licensed under MIT, I'd assume they behave similarly. I haven't tried OpenAI o1 or Claude yet, as I'm only running models locally.

[–] tyler@programming.dev 2 points 5 months ago (1 children)

Hmm, I'm using the 32B model from Ollama, on both Windows and Mac.

[–] LorIps@lemmy.world 2 points 5 months ago (1 children)

Ah, I just found it. Alpaca is just being weird again. (I'm presently typing this while attempting to look over the head of my cat)

[–] LorIps@lemmy.world 2 points 5 months ago

[Screenshot: DeepSeek-R1 unable to answer the prompt "Tiananmen Square"]

But it's still censored anyway.

[–] banshee@lemmy.world 2 points 5 months ago

Interesting. I wonder if model distillation affected censoring in R1.

[–] Yingwu@lemmy.dbzer0.com 8 points 5 months ago (1 children)

I ran Qwen by Alibaba locally, and these censorship constraints were still included there. Is it not the same with DeepSeek?

[–] banshee@lemmy.world 4 points 5 months ago

I think we might be talking about separate things. I tested with this 32B distilled model using llama.cpp.
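For context, a minimal sketch of how a check like this might look through the llama-cpp-python bindings rather than the raw llama.cpp CLI (the GGUF filename and settings are placeholders; point them at whichever 32B distill quant you downloaded):

```python
# Minimal sketch: load a local GGUF build with llama-cpp-python and ask it
# a sensitive question. The model path is a placeholder filename.
from llama_cpp import Llama

llm = Llama(
    model_path="./DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf",  # placeholder
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What happened at Tiananmen Square in 1989?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```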