Selfhosted

57456 readers
759 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

  7. No low-effort posts. This is subjective and will largely be determined by the community member reports.

Resources:

Any issues in the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
1

Due to the large number of reports we've received about recent posts, we've added Rule 7 stating "No low-effort posts. This is subjective and will largely be determined by the community member reports."

In general, we allow a post's fate to be determined by the amount of downvotes it receives. Sometimes, a post is so offensive to the community that removal seems appropriate. This new rule now allows such action to be taken.

We expect to fine-tune this approach as time goes on. Your patience is appreciated.

2

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3

4
submitted 6 hours ago* (last edited 6 hours ago) by idunnololz@lemmy.world to c/selfhosted@lemmy.world

I've accumulated enough self hosted stuff that I feel like I want a dashboard now so I don't have to remember which IP & port I need for which service (not all my services are exposed to the WWW).

I looked at some dashboard solutions already but there is a huge amount of them. I also use Home Assistant as the dashboard for my home.

So I'm looking to bounce some ideas off this community. Should I add one more service to my servers in the form of a dashboard, or should I maybe create a dashboard in Home Assistant?

If going with a standalone dashboard service, which one?
If going with Home Assistant, are there any good add-ons or anything else I can use to make managing my services easier?

Let me know what you guys think and thank you!

5

Joplin doesn't seem fully FOSS.
Logseq seems nice but I won't be able to hit it at notes.mydomain.works

What are good options? Ideally for keeping recipes and things

6

Just a PSA.

See this thread

Sorry to link to Reddit, but not only is the dev sloppily using Claude to do something like 20k-line PRs, but they are completely crashing out, banning people from the Discord (actually I think they wiped everything from Discord now), and accusing people forking their code of theft.

It’s a bummer because the app was pretty good… thankfully Calibre-web and Kavita still exist.

7

8
Your logging is probably down (media.piefed.social)
submitted 17 hours ago* (last edited 17 hours ago) by Ek-Hou-Van-Braai@piefed.social to c/selfhosted@lemmy.world

9

Heya.

I'm still pretty new to the homelab scene, so the more detail you can add the better. I'd like to add some sort of log aggregation tool, something like Elastic, where I can go to look at logs from any of my systems that aren't working, or just make sure I don't miss any errors.

Pretty much everything I run is set up as a Proxmox LXC from the Proxmox helper scripts, which most of the time means it's running as a systemd service. Sometimes they run on Alpine instead, and a few of my apps also run in Docker.

What's a good app to aggregate logs from those sources? I've heard of Prometheus, Grafana and Loki, but I'm not sure if they do what I'm after; they seem pretty overwhelming and more focused on metrics, whereas I want to be able to search for and view logs. I'd appreciate it if you also mention the basic steps to send the logs from each container to said app.
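
In case it helps frame answers: with the Loki route, each LXC typically runs a shipping agent (Promtail or Grafana Alloy) that tails journald or log files and pushes to Loki's HTTP API, and Grafana is only the viewer. The push format itself is plain JSON, so a container can even ship a line with nothing but stdlib Python. This is a minimal sketch, assuming a Loki instance at localhost:3100; the hostname and labels are made up:

```python
import json
import time
import urllib.request

# Loki's push endpoint accepts JSON of the shape:
# {"streams": [{"stream": {<labels>}, "values": [["<ns timestamp>", "<line>"]]}]}
LOKI_URL = "http://localhost:3100/loki/api/v1/push"  # assumption: Loki here

def build_payload(labels: dict, line: str) -> dict:
    """Wrap a single log line in the Loki push format."""
    ts_ns = str(time.time_ns())  # Loki wants a nanosecond timestamp as a string
    return {"streams": [{"stream": labels, "values": [[ts_ns, line]]}]}

def push_line(labels: dict, line: str) -> int:
    """POST one log line to Loki; returns the HTTP status (204 on success)."""
    body = json.dumps(build_payload(labels, line)).encode()
    req = urllib.request.Request(
        LOKI_URL, data=body,
        headers={"Content-Type": "application/json"}, method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    # e.g. push_line({"host": "lxc-jellyfin", "unit": "jellyfin.service"}, "startup ok")
    print(build_payload({"host": "lxc-jellyfin"}, "startup ok"))
```

In practice you'd let Promtail/Alloy do this for you, but it shows there's no heavyweight protocol involved; labels (host, unit) are what you later filter on in Grafana.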

10
submitted 1 day ago* (last edited 2 hours ago) by Zonefive@sh.itjust.works to c/selfhosted@lemmy.world

I have three networked Win10/11 PCs at our small family business that occasionally need to be accessed and maintained from my Fedora PC at home. I’ve used Google Remote Desktop for a while but it’s unreliable and also F Google.

Was looking at the Gl-Inet Comet products which look promising as they say they work without cloud access, but they’re a tad spendy. If it’s the best option I’m willing to drop the coin.

Are there better options?

Thanks!

Edit: Wow, thank you all for so much useful feedback and excellent recommendations! I have lots to dig into.

11

Hey Everyone, I'm currently wanting to switch from Headscale to Netbird. It looks like Netbird is much easier to self host now except I can't get it working with my current Traefik v3.6 config. Here is my config.yaml file for the routers and headers. Any ideas?

Everything loads up fine (from the logs), however I can't go to the domain address. I have a CNAME record in Cloudflare, not proxied. The documentation says to set an A record for "netbird.mydomain.com", but wouldn't that defeat the purpose of the reverse proxy? I already have an A record pointing to my public IP, and everything gets sent to my Traefik reverse proxy.

http:
  routers:
    netbird-dashboard:
      rule: Host(`netbird.mydomain.com`)
      entryPoints:
        - https
      tls: {}
      service: dashboard
      priority: 1

    netbird-grpc:
      rule: >
        Host(`netbird.mydomain.com`)
        && (PathPrefix(`/signalexchange.SignalExchange/`)
        || PathPrefix(`/management.ManagementService/`))
      entryPoints:
        - https
      tls: {}
      service: netbird-server-h2c
      priority: 100

    netbird-backend:
      rule: >
        Host(`netbird.mydomain.com`)
        && (PathPrefix(`/relay`)
        || PathPrefix(`/ws-proxy/`)
        || PathPrefix(`/api`)
        || PathPrefix(`/oauth2`))
      entryPoints:
        - https
      tls: {}
      service: netbird-server
      priority: 100

  services:
    dashboard:
      loadBalancer:
        servers:
          - url: "http://netbird/"

    netbird-server:
      loadBalancer:
        servers:
          - url: "http://netbird/"

    netbird-server-h2c:
      loadBalancer:
        servers:
          - url: "h2c://netbird:80"

12

I've been using a two-bay Synology nas for the last couple of years with 2x 8TB drives and a stupidly large media collection. I recently acquired a four-bay ugreen nas but don't have any drives for it yet because fuck you AI.

Since I need all drives to be the same size for RAID, I was thinking of getting larger disks and taking out a second mortgage. Like starting with 2x 12TB and some day adding two more. If I do that, I'll have 2x 8TB that wouldn't be useful without keeping the Synology running, and I don't really need both.

The other idea was buying two more 8TB for the new nas, copying the media over, and then moving the two over to make four and decommission the Synology.

I am not well versed in raid, so there easily could be something I'm not considering or a way I should do this to make my life easier.

Also, any advice on how one normally keeps backups of such a large amount of data? I know RAID ≠ backup, but right now I'm just praying to the tech gods that the disks keep spinning. I know others out there have large media collections… what do you do?

13
submitted 2 days ago* (last edited 1 day ago) by rook@lemmy.zip to c/selfhosted@lemmy.world

New update: my current setup is a Dell PowerEdge T310 with 6x 4TB SAS drives, a Xeon CPU, and 12GB ECC RAM, all parts stock. No hardware RAID. 2.5GbE network card. Should I just replace the 6 drives with larger capacities? That will probably be more than $10/TB... I didn't buy the 16 drives yet; they are used 4TB SAS drives that turn out to be about $40 each.

Current storage: 8TB used out of 14... and lots of cold drives waiting to get copied... 10TB+ probably. Is it worth copying all the cold storage drives to the redundant NAS?

Update: budget $200-600. The reason for the build is that I found cheap 4TB drives for almost $10/terabyte, so I want to use as many of them as I can.

I am trying to build my final NAS build as a beginner.

I have a 6x 4TB Dell server, but it's not enough.

I am currently trying to build the final boss of my NASes: 4x 16TB with TrueNAS and RAID.

I am unsure of what parts to buy as I am a complete beginner.

I found a case that can hold all 14 drives.

I need a motherboard, CPU, ram, PSU

I am on a budget, kind of.

What motherboard do you recommend? One pulled from a workstation with its CPU and RAM? A server board? A normal consumer board with a normal consumer CPU? The motherboard should have some PCIe slots for 2 SATA cards and one 2.5GbE card.

What CPU to run all these drives?

What RAM and how much? 16? 32? ECC or non-ECC? DDR4 or DDR3?

Power supply: 850w or more?

All parts should be able to support the 16 drives with headroom...

I would appreciate any help on this build, I want to build this as soon as possible.

Thanks

14

Update

Forgejo seemed to be the winning answer so I tried setting it up. Total setup time was less than 10 minutes. I pushed 10 repositories to test it out and so far it seems pretty good. Thank you everyone for the answers!


As the title states, I am looking to host maybe ~100 git repositories locally on my home network.

I'm not planning on doing anything too crazy with my repositories. The solution doesn't need to support thousands of contributors; however, it should support the most basic features, such as being able to see individual commits, branches, diffs, maybe some PR-related mechanism, a web GUI, etc.

I don't like to tinker too much. The solution should work and be stable. Stability is a hard requirement. I want to write code and not have to worry about losing it. Yes I will make backups.

Please let me know what some of the best options are at the moment. Thank you!

15

Hello! I've never used the *arr stack and was interested in it, but one thing is stopping me. I see a lot of articles about how it is a Netflix (or any other ONLINE theater) replacement, but as I see it, it is not online. I see two big factors that stop me from trying seerr + Jellyfin (and the other stuff in between):

  1. You have to switch between those apps to search and then watch.
  2. You can't watch media before it's completely downloaded.

I imagine sitting on the couch, searching for a show. Then you want to watch something, and then you have to wait half an hour for the full episode (or even the season?) to download. And then you might realize that you're not into it and have to repeat all the steps above. Is my expectation correct? Please don't consider this a negative opinion; I just want to know what to expect. I remember an app called "Popcorn Time" that didn't have those flaws.

UPD: Thanks for the replies, guys! I read them all. I will deploy the stack some day, but right now I will keep my current setup (which is qbittorrent-nox, a public Jackett instance local to my country, and just a simple SMB shared folder). I also have a selfhosted Debrid alternative, TorrServer, for the times I don't have enough space to download a full show.

16

Hey everybody. I found this interesting. It's likely not a game changer for anyone, but "hardware" watchdogs in Proxmox was a new one for me, and was a cheap and easy, hacky fix to deal with a low value VM that was periodically hanging. This is a nice tool to add to the belt, hope you all enjoy!

17
submitted 3 days ago* (last edited 3 days ago) by Used_Gate@piefed.social to c/selfhosted@lemmy.world

Onionphone is a native Android application for anonymous, end-to-end encrypted push-to-talk voice and text communication over the Tor network. No servers, no accounts, no phone numbers — your .onion address is your identity.

Cross-platform compatible with Terminalphone — call between Android and Linux/Termux using the same protocol.

Optionally use your connection as a relay for ephemeral group channels.

Find the release page for version 1.0.2 which supports custom bridges for accessing censored networks.

18

I've been looking for a webproxy that would work with big websites like YouTube. So far I've found only very outdated and abandoned ones. Is there any up-to-date and actually functional webproxy I could host?

Edit: The reason I need this is that there are some locked up Windows computers I can't install any traditional VPNs on and that are used by non-tech-savvy people.

I can't pin the comment with the solution so here's the link

19

Using #Madblog as the easiest way to spin up an Indieweb/ActivityPub-compatible blog.

Zero db, zero JS, entirely hosted on text files.

20

I'm building a new activitypub/threadiverse software focused on the needs of self hosters who want a single user instance.

I've been posting with it semi-regularly for the last month, and I think it's ready for an open demo.

One of my objectives is to have the lightest resource usage for memory and CPU constrained hardware, as well as the fastest loading web interface for older phones and limited data plans. I ran out of data on my phone last week and having a 41kb front page came in very handy.

You can try the web UI at https://scrapetacular.ydns.eu/latest. You can also POST AS A GUEST TO THE FEDIVERSE without signing up. I'm not sure you can do this anywhere else. I'm manually approving posts on the backend because... well, you know. If it asks for a user and pass, use guest and guest; your post will appear with a username like guest4269.

Ideally, open this post https://scrapetacular.ydns.eu/post/10127 and reply to it.

My other plan for mobile is to target the Sync for Lemmy app, as it's dead, meaning it's no longer a moving target.

I've made a few technical choices aimed at keeping things fast

These include:

No ORMs

  • They are convenient but make performance tuning difficult when things get complex as you don't write the queries directly

No Javascript

  • I may have to go back on this if I keep the guest posting function, it might need a captcha or anubis.

No nested comments in the web UI

  • Nested comments are super slow: you query the database for the OP, then for its N immediate children, then run N queries for all of their children, and keep going recursively until you hit your depth limit or all comments are found. You then need to render this structure with HTML/CSS.

No front page images

  • This is more of a personal preference that happened to make things load faster. The front page displays the text of the OP and the last few comments IN FULL, giving a good preview of the conversation and allocating more space to people who write rather than post memes. Inline images in posts are also replaced with links.

No upvotes/downvotes

  • DID YOU KNOW that most threadiverse traffic is upvotes, downvotes and emojis? You get an instant speedup by simply not processing them. Also, since this is a single-user instance, all my comments are by definition awesome.
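
For what it's worth, the per-level querying described under the nested-comments point can usually be collapsed into a single round trip with a recursive CTE, which SQLite supports; rendering the tree stays the expensive part. A minimal sketch with a hypothetical comments schema (not this project's actual code):

```python
import sqlite3

# Toy schema: each comment points at its parent; the OP has parent_id NULL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE comments (id INTEGER PRIMARY KEY, parent_id INTEGER, body TEXT);
    INSERT INTO comments VALUES
        (1, NULL, 'OP'),
        (2, 1, 'first reply'),
        (3, 1, 'second reply'),
        (4, 2, 'nested reply');
""")

def comment_tree(conn, root_id, max_depth=10):
    """Return (id, parent_id, body, depth) rows for a subtree in ONE query,
    instead of one query per nesting level."""
    return conn.execute("""
        WITH RECURSIVE tree(id, parent_id, body, depth) AS (
            SELECT id, parent_id, body, 0 FROM comments WHERE id = ?
            UNION ALL
            SELECT c.id, c.parent_id, c.body, t.depth + 1
            FROM comments c JOIN tree t ON c.parent_id = t.id
            WHERE t.depth < ?
        )
        SELECT * FROM tree ORDER BY depth, id
    """, (root_id, max_depth)).fetchall()

for row in comment_tree(conn, 1):
    print(row)  # (1, None, 'OP', 0), (2, 1, 'first reply', 1), ...
```

The depth limit maps onto the UI's depth cap, and the flat depth-ordered rows are also a decent fit for a non-nested comment view.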

ROADMAP

  • Massive refactor
  • Make the project public
  • unit tests (this is basically my only requirement for v1)
  • sync for lemmy API
  • admin UI
  • "AI" to "My Butt" wordfilter (mandatory and hardcoded)
  • default subscription to /c/fuck_AI
  • Solve channel discoverability once and for all
  • SUPERBLOCK (i.e. block everyone who liked this comment)
  • dockerfile? I don't use docker tbh
  • Read Mastodon posts? Do they even have good content?

Tech Stack

  • Go
  • SQLite

I'm using the pure Go sqlite library, Bluemonday for html sanitisation, Blackfriday for Markdown and Migrate for auto db migrations.

End

Thanks to Snoopy and the Cool Froges at jlai.lu for allowing me to test on their channel.

Is this project of interest to you? Have I missed anything obvious? Is there anything else you would like to know?

22

Readme updated today:

This repository is no longer actively maintained.  

The TrueNAS build system previously hosted here has been moved to an internal infrastructure. This transition was necessary to meet new security requirements, including support for Secure Boot and related platform integrity features that require tighter control over the build and signing pipeline.  

No further updates, pull requests, or issues will be accepted. Existing content is preserved here for historical reference only.  

https://github.com/truenas/scale-build

Wondering if this is just the first step towards pulling a MinIO in the future.

23
submitted 4 days ago* (last edited 4 days ago) by fccview@lemmy.world to c/selfhosted@lemmy.world

Hey,

Some of you may know me for Jotty and Cr*nmaster, been quiet with my head down lately improving my apps and trying to build a searxng alternative for myself.

Whilst I have used searxng for about a year now, I have had quite a few personal gripes with it (mostly stuff I personally would prefer worked differently), so in the past few weeks I decided to make my own take on it, and I've been running it happily locally. Since publishing the beta to my Discord server, I've ended up building a fairly extensive tool.

Degoog is actually pretty minimal; there's not much to it aside from a very comprehensive plugin/extension system. The idea is that users can create their own engines, themes and plugins that hook into the core application and do... pretty much anything, from adding stuff to the results page (e.g. speed tests, TMDB information, IP retrieval, RSS feeds embedded on the home page) to full-on OIDC systems.

This is still very much in beta, and I figured the best way to get it out of beta would be to publish it to a wider audience (currently some users in our Discord server have been testing it fairly successfully and I've been on top of bug fixing).

Repo: https://github.com/fccview/degoog

Official extensions: https://github.com/fccview/fccview-degoog-extensions

Docs: https://fccview.github.io/degoog

You can install custom plugins/extensions. You can make your own repo and add it to the store page in the settings, or you can just have your own plugins locally for yourself.

Let me know what you think, and feel free to ask any questions and feel free to join our discord (link in releases page on any of my apps) for a more direct chat about things <3

24

Hey!

I basically want to replace the Google Authenticator app in style and functionality:

  1. List all TOTP tokens and their validity time (with a name and order I decide).
  2. Allow me to back up the whole thing to some off-site storage, periodically or on change, keeping the last N backups.
  3. Have a native app for Android or an actually good PWA.
  4. Don’t do magic bullshit like fetching icons, hide tokens, etc.
  5. Be actually secure (i.e. don’t roll your own auth)
  6. Just be a TOTP manager, and nothing more! No, I’m not interested in a password manager, thank you. I also don’t want any other OTP methods I don’t use.
  7. Don't be a one-man project whose availability a year from now is unclear.

Any experience is welcomed. Thank you!

Edit: Thanks for all the great ideas. I just set up 2FAuth, which seems to be the most minimalist, single-purpose thing to self-host. I'll evaluate how it performs but keep a backup in Google Authenticator. It doesn't match #7, but it seems to be actively used by the author and gets constant updates and fixes, so it's most likely fine, I guess.

There is a 3rd-party app for it, but that app seems to be pretty much dead (last release in July 2025 and not in any app store) – or at least no longer getting releases, though still worked on in the repository.
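
Tangentially, for anyone weighing candidates against requirement #6: the TOTP algorithm itself (RFC 6238) is just an HMAC-SHA1 over the current 30-second counter, small enough to sketch with the Python standard library, so "just a TOTP manager" really shouldn't need much machinery beyond storage and backup:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP (SHA-1): HOTP of the current 30-second counter."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59 yields
# 94287082 (8 digits), of which the last 6 are 287082.
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_secret, at=59))  # -> 287082
```

Checking a candidate app's output against the RFC test vectors like this is also a quick sanity test before trusting it with real tokens.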

25

Frigate is NVR software with motion detection, object detection, recording, etc.. It has matured a lot over the past couple of years and I'm really happy with it.

I've been running Frigate for a while, but with version 0.17.0 it sounded like things have changed enough for me to update how I do things. I'm writing all of the following in case anyone else is in the same boat. There's a lot to read, but hopefully it helps make sense of the options.

Keeping my camera feeds the same, I was interested in switching my object detector from a Google Coral to the embedded graphics in my 13th gen Intel CPU. The main reason for this was because the Google Coral was flaky and I was having to reboot all the time. Maybe because I run Frigate in a virtual machine in Proxmox, so the Coral has to be passed through to the VM? Not sure.

I also wanted to figure out how to get the camera streams to work better in Home Assistant.

Switching from Google Coral to OpenVINO

This was relatively straightforward. I mostly followed these directions and ended up with:

detectors:  
  ov:  
    type: openvino  
    device: GPU  

Switching from the default to YOLOv9

Frigate comes with some default ability to detect objects such as person and car. I kept hearing that YOLOv9 was more accurate, and they even got YOLOv9 working with Google Coral devices, just with a limited set of objects. So, I wanted to switch.

This took me a minute to wrap my head around since it's not enabled out of the box.

I added the following to my config based on these directions:

model:  
  model_type: yolo-generic  
  width: 320 # <--- should match the imgsize set during model export  
  height: 320 # <--- should match the imgsize set during model export  
  input_tensor: nchw  
  input_dtype: float  
  path: /config/model_cache/yolo.onnx  
  labelmap_path: /labelmap/coco-80.txt  

... except for me the yolo file is called yolov9-t-320.onnx instead of yolo.onnx... but I could have just as easily renamed the file.

That brings us to the next part -- how to get the yolo.onnx file. It's a bit buried in the documentation, but I ran the commands provided here. I just copied the whole block of provided commands and ran them all at once. The result is an .onnx file in whatever folder you're currently in.

The .onnx file needs to be copied to /config/model_cache/, wherever that might be based on your Docker Compose.

That made me wonder about the other file, coco-80.txt. Well, it turns out coco-80.txt is already included inside the container, so nothing to do there. That file is handy though, because it lists 80 possible things that you can track. Here's the list on github.

I won't go over the rest of the camera/motion configuration, because if you're doing this then you definitely need to dive into the documentation for a bunch of other stuff.

Making the streams work in Home Assistant

I've had the Frigate integration running in Home Assistant for a long time, but clicking on the cameras only showed a still frame, and no video would play.

Home Assistant is not on the same host as Frigate, by the way. Otherwise I'd have an easier time with this. But that's not how mine is set up.

It turns out my problem was caused by me using go2rtc in my Frigate setup. go2rtc is great and acts as a re-streamer. This might reduce bandwidth which is important especially for wifi cameras. But, it's optional, and I learned that I don't want it.

go2rtc should work with Home Assistant if they're both running on the same host (same IP address), or if you run the Docker stack with network_mode: host so it has full access to everything. I tried doing that, but for some reason Frigate got into a boot loop, so I changed it back to the bridge network that I had previously.

The reason for this, apparently, is that go2rtc requires more than whatever published ports they say to open in Docker. Maybe it uses random ports or some other network magic. I'm not sure.

The downside of not having go2rtc is that the camera feeds in the Frigate UI are limited to 720p. I can live with that. The feeds in Home Assistant are still full quality, and recordings are still full quality.

By removing go2rtc from my config, Home Assistant now streams directly from the cameras themselves instead of looking for the go2rtc restream. You may have to click "Reconfigure" in the Home Assistant integration for the API to catch up.

Hope this helps. If not, sorry you had to read all of this.
