this post was submitted on 17 Mar 2026
7 points (76.9% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


Got a new PC handed down to me, and now my old one is collecting dust. It has a dedicated GPU (GTX 1060, 6GB VRAM). I guess the most obvious thing would be an AI model, or maybe Jellyfin (which is currently running just fine on a Raspberry Pi 5), but I was wondering if you had other suggestions?

top 11 comments
[–] sefra1@lemmy.zip 1 points 10 minutes ago

"I have a hammer and I hate that it's not hammering, any cool ideas involving nails?"

You see, I have the exact same problem as you: I just can't stand seeing hardware go unused. Especially computer hardware, which depreciates. But before thinking "what can I do with this hardware," you should ask "do I have a need or a problem that can be solved with this hardware?" And if the answer is "no," then maybe consider selling the GPU or giving it to a friend who needs it.

My Jellyfin works without a GPU; my old 2nd-generation i3 is enough to transcode video to my phone in real time. Maybe I'd need an upgrade if I had more users, but I'm its only user.

Do you have multiple users on your server who require GPU acceleration? If not, there's not much reason to use GPU accel anyway (and it's usually trickier to set up).

Still, repurposing the computer as a server seems like a good idea, because I at least can't stand the nightmare of using USB hard drives; I've had really bad experiences with those lousy cables and connections. But if you do that, it leaves you with another problem: what to do with the Raspberry Pi?

Also, I just recently built a new PC and had the same problem of not knowing what to do with my laptop. I came to the conclusion that the best thing I can do with it is run background chat applications on it, and maybe a web browser via waypipe, so it just looks like a window on my main PC. This way I keep RAM free on my new PC that I may need for heavy workloads like Blender rendering.

[–] B0NK3RS@lazysoci.al 2 points 58 minutes ago

You could keep the chain going and pass it down to somebody else?

[–] NotSteve_@piefed.ca 7 points 5 hours ago

Regarding Jellyfin, if the PC you got has an Intel CPU then using Intel QuickSync would actually easily outperform the NVIDIA card for transcoding.

Up until very recently I was using a cheap i3 to power my Jellyfin instance that often has 5+ streams going at a time. (The only reason I upgraded was that I had a friend getting rid of an i7 from the same gen lol)
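If you want to sanity-check QuickSync outside of Jellyfin first, here's a rough sketch of an ffmpeg invocation built in Python. The filter and codec names (`scale_qsv`, `h264_qsv`) are standard ffmpeg QSV names, but the filenames are placeholders and this assumes your ffmpeg build has QSV support compiled in:

```python
def qsv_transcode_cmd(src: str, dst: str, height: int = 1080) -> list[str]:
    """Build an ffmpeg command that decodes, scales, and encodes on the
    Intel iGPU via QuickSync. Assumes an ffmpeg build with QSV enabled."""
    return [
        "ffmpeg",
        "-hwaccel", "qsv",                # hardware decode
        "-i", src,
        "-vf", f"scale_qsv=-1:{height}",  # scale on the GPU, keep aspect
        "-c:v", "h264_qsv",               # hardware encode
        "-c:a", "copy",                   # leave audio untouched
        dst,
    ]

# Example (placeholder filenames):
# subprocess.run(qsv_transcode_cmd("movie.mkv", "movie-1080p.mp4"), check=True)
```

If the test clip transcodes at well above realtime speed, Jellyfin's QSV setting should be safe to turn on.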

[–] AverageGoob@lemmy.world 13 points 7 hours ago
[–] ProdigalFrog@slrpnk.net 9 points 6 hours ago* (last edited 6 hours ago)

You could use the GPU to help host a peertube instance.

[–] Truscape@lemmy.blahaj.zone 9 points 7 hours ago

Honestly the easiest use for a PC would be to remove the GPU (if integrated graphics are available on the CPU) and host things like community game servers for your friends (or maybe something like a self-hosted chat server such as TeamSpeak).

A GPU of that caliber is not ideal for those kinds of workloads (although it'd work fine for media encoding).

[–] wabasso@lemmy.ca 2 points 5 hours ago

Are you going to be running Linux?

I’ve also got a tower with a GTX 1060 and I’d like to have it sleeping, but ready to be woken by the Pi when needed. But it never wakes up from a sleep state, so I’m curious if you’ve had any luck with that and we can trade notes.
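For the wake-from-the-Pi part, a Wake-on-LAN magic packet can be sent with a few lines of stdlib Python (the MAC address below is a placeholder). Note that WOL usually has to be enabled both in the BIOS and on the NIC (e.g. `ethtool -s eth0 wol g`), and in my understanding many boards only wake reliably from full shutdown (S5) rather than from sleep, which may be why yours never comes back:

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a WOL magic packet: 6 bytes of 0xFF, then the MAC repeated 16x."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN (UDP port 9 is conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))

# wake("aa:bb:cc:dd:ee:ff")  # placeholder; use the tower's NIC MAC address
```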

[–] KorYi@lemmy.ml 6 points 6 hours ago (1 children)

I have my old GTX 980 in a server. It is currently handling object recognition and transcoding in frigate, immich and Plex. Works great for this (although not super useful for Plex as it doesn't support HEVC).

I haven't tried throwing any LLMs at it.

[–] rabber@lemmy.ca 3 points 4 hours ago* (last edited 4 hours ago)

I came here to suggest Frigate.

[–] Eirikr70@jlai.lu 2 points 6 hours ago

I presume its power consumption is not negligible. So I'd just keep it off if I didn't really need it.
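To put a rough number on that, here's the back-of-the-envelope math; the 0.30/kWh rate is just an example tariff, plug in your own:

```python
def yearly_cost_eur(idle_watts: float, price_per_kwh: float = 0.30) -> float:
    """Rough yearly electricity cost of a box idling 24/7.
    0.30 EUR/kWh is an example rate -- substitute your own tariff."""
    kwh_per_year = idle_watts * 24 * 365 / 1000  # W -> kWh over a year
    return round(kwh_per_year * price_per_kwh, 2)

# A tower idling at 50 W:
# yearly_cost_eur(50)  ->  131.4 EUR/year at 0.30/kWh
```

So even a modest idle draw adds up to real money over a year, which is worth weighing against what the box actually does for you.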

[–] lyralycan@sh.itjust.works 0 points 4 hours ago

Can confirm what another user said: an Intel iGPU would be better in your case.

I'll tell you now: if it runs Windows, kill it. My server was originally Windows running Docker Desktop. It hosted three services: a Minecraft server that lagged like a bitch, a Samba folder share, and Emby. Whenever Emby playback froze, I knew Windows (whose antivirus kept the HDD under constant load) had pegged the i3 6100 at 100%, which happened at least twice a day.

Moving on, now I run Proxmox. I host 25 services with the CPU at ~35% idle and 24GB RAM at 75%. Nothing lags.

Before I plugged in the GPU, my server drew 25W consistently, rising to 35W under load. With the GPU, a used RTX 3060, it draws 85W idle, so make sure it's worth it. In my case it not only transcodes for Emby and resumes streaming in a second, but also handles voice inference for Home Assistant in under a second, plus mid-sized Ollama LLM responses. I'd recommend a high-VRAM Nvidia card (for CUDA) in that scenario, as my model, Gemma3 7B, uses 6GB VRAM and 2GB RAM. But a top model, say Dolphin-Mixtral 22B, needs 80GB of storage and 17GB of RAM and... well, I don't have the RAM, but you get it. LLMs are intensive.
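The VRAM sizing above follows a common rule of thumb (this is my own rough sketch, not an exact method): weight memory is roughly parameter count times bits per weight, plus some overhead for the runtime and KV cache:

```python
def approx_vram_gb(params_billions: float, bits_per_weight: int = 4,
                   overhead_gb: float = 1.0) -> float:
    """Back-of-the-envelope VRAM needed to load an LLM's weights.
    Ignores KV-cache growth with context length, so treat it as a floor."""
    weight_gb = params_billions * bits_per_weight / 8  # GB for the weights
    return round(weight_gb + overhead_gb, 1)

# A 7B model at 4-bit quantization:
# approx_vram_gb(7)  ->  4.5 GB, which is why 7B models fit a 6GB card
```

By this estimate the OP's GTX 1060 6GB can hold a quantized 7B model, while anything in the 20B+ range is out of reach without heavy offloading to system RAM.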