pe1uca

joined 2 years ago
 

All docker compose files look something like this:

services:
  service_name:
    image: author/project:latest
    container_name: service_name
    volumes:
      - service_data:/app/data/

volumes:
  service_data:

Yes, this makes the data persist, but it creates a directory with a random name inside /var/lib/docker/volumes/.
This makes it really hard to actually have ownership of the service's data (for example to create backups, or to migrate to another host).

Why is it standard practice to use this instead of mounting a directory that sits at the same level as your docker-compose.yml?
Like this: - ./service_data:/app/data
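
(For comparison, pulling the data back out of a named volume takes something like this; a rough sketch, the volume name comes from the example above and compose usually prefixes it with the project name:)

# find where docker actually stores the named volume on the host
docker volume inspect service_data --format '{{ .Mountpoint }}'

# or dump its contents into a tarball next to the compose file
docker run --rm -v service_data:/data:ro -v "$(pwd)":/backup alpine tar czf /backup/service_data.tar.gz -C /data .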

[–] pe1uca@lemmy.pe1uca.dev 2 points 4 months ago

iDrive e2 with Duplicati, and manually to an external SSD with rsync every so often.

I was planning on asking a friend to set up a server at their home, but I feel somewhat comfortable with the current solution.
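
(The manual part boils down to an rsync roughly like this; the paths are placeholders for wherever the data and the SSD are mounted:)

# one-way copy of the data folder onto the external SSD
rsync -aP /srv/data/ /mnt/external-ssd/backup/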

[–] pe1uca@lemmy.pe1uca.dev 1 points 6 months ago

Yeah, it was $2.5/TB/month, now it's $4.1/TB/month.
Still cheaper than Backblaze's $6, which seems to be the only other option everyone suggests, so it'll have to do for the moment.

[–] pe1uca@lemmy.pe1uca.dev 1 points 6 months ago (1 children)

I'm assuming you mean updating every service, right?
If you don't need anything new from a service, you can stay on the version you're using for as long as you like, as long as your services are not public.
You could just install Tailscale and connect everything inside the tailnet.
From there you'll only need to update Tailscale and probably your firewall, Docker, and OS, or whenever any of the services you use receives a security update.

I've lagged several versions behind on Immich because I don't have time to monitor the updates and handle the breaking changes, so I just stay on a version until I have free time.
Then it's just an afternoon of reading through the breaking changes, updating the compose file and config, and running docker compose pull && docker compose up -d.
In theory there could be issues here; that's where your backups come into play, but I've never had any.
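
The whole routine is roughly this (run from the directory with the compose file; the image prune is optional):

# grab the new images and recreate only the containers that changed
docker compose pull
docker compose up -d

# clean up the old images afterwards
docker image prune -f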

The rest of the 20+ services I have are just running there because I don't need anything new from them, or I can just mindlessly run the same compose commands to update them.

There were only one or two times I had to actually go into some kind of emergency mode because a service suddenly broke, and I had to spend a day or two figuring out what happened.

[–] pe1uca@lemmy.pe1uca.dev 14 points 6 months ago (3 children)

I'd say Syncthing is not really a backup solution.
If for some reason something happens to a file on one side, it'll also happen to the file on the other side, so you'll lose your "backup".
Plus, what guarantees your friend won't go around snooping or making their own copies of your data?
Use proper backup software to send your data offsite (restic, borg, Duplicati, etc.), which will send it encrypted (use a password manager to set a strong and unique password for each backup).

And follow the 3-2-1 rule MangoPenguin mentioned.
Remember, this rule is just for data you can't find anywhere else: your photos, your own generated files, databases of the services you self-host, stuff like that. If you really want, you could back up hard-to-find media, but if you already have the torrent file, don't bother backing that media up.
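
With restic, for example, it's roughly this (a sketch; the repository path and folders are placeholders):

# create the encrypted repository once (it asks for the password)
restic -r sftp:user@offsite-host:/srv/restic-repo init

# back up only the irreplaceable stuff
restic -r sftp:user@offsite-host:/srv/restic-repo backup ~/photos /srv/service-databases

# keep a reasonable history and drop old snapshots
restic -r sftp:user@offsite-host:/srv/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune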

[–] pe1uca@lemmy.pe1uca.dev 10 points 6 months ago (7 children)

What do you mean Jellyfin uses the *arr suite?
I have Jellyfin reading media from different directories; I just try to match the format the docs mention.
So, as long as I can get the media somehow, I can just put it in any directory and it'll be added to the library.

Is it similar with Odin? Or does it directly fetch the media from where you want to download it?

[–] pe1uca@lemmy.pe1uca.dev 15 points 6 months ago (3 children)

FreshRSS has been amazing. As you said, other readers have other goals in mind and it seems RSS is just an add-on.

On Android there also aren't any good clients; I've been using the PWA, which is good enough.
There are several extensions for mobile menu improvements; I have Smart Mobile Menu, Mobile Scroll Menu and Touch Control (it works great on Firefox, but not on Brave, where it's too sensitive, so YMMV).

There's also ReadingTime, but there are feeds which don't send the whole body of the post, so you might only see a 1-minute read because of that.


The AutoTTL extension processes the feeds and makes them update only when they're more likely to have new items, instead of every X minutes as configured by FreshRSS.
Still, there's a problem when MaxTTL is reached: all feeds are allowed to update at once and you might hit some rate limits, so I developed a rate limiter. There's also an issue with AutoTTL because of how extensions are loaded and the HTTP code reported by FreshRSS.


I found this project which receives newsletter emails and turns them into an RSS feed. I've only used it for one feed and I've only received one entry; not sure if the newsletter is that bad or if the site struggles to receive/show them. I haven't tried anything else like it.
https://github.com/leafac/kill-the-newsletter

There's also this repo linking a lot of sites with feeds; some sites which don't offer feeds directly are provided via FeedBurner (which seems to be a Google service, and Wikipedia says it's "primarily for monetizing RSS feeds, primarily by inserting targeted advertisements into them", so use those at your own discretion): https://github.com/plenaryapp/awesome-rss-feeds

[–] pe1uca@lemmy.pe1uca.dev 2 points 7 months ago

Maybe you could submit an issue to the repo to include a way to change the format of the saved folders.
(I'm thinking of something similar to how Immich allows changing some formats.)

In my instance the names look like some sort of timestamp. I'm not sure if the code uses them in a meaningful way, so the solution would probably be to create symlinks with the name of the site or some other format, while keeping the timestamp so the rest of the code can still rely on it.

[–] pe1uca@lemmy.pe1uca.dev 5 points 7 months ago (1 children)

I bought this one and it's been wonderful for running 20+ services. A few of those are Forgejo (GitHub replacement), Jellyfin (Plex but actually self-hosted), Immich (Google Photos replacement), and Frigate (to process one security camera).
(Only Immich does transcoding; Jellyfin already has all my media pre-transcoded with my laptop's GPU.)

I bought it barebones since I already had the RAM and an SSD, plus I wasn't going to use Windows. During this year I've bought another SSD and an HDD.

https://aoostar.com/products/aoostar-r7-2-bay-40t-nas-storage-amd-ryzen-7-5825u-mini-pc8c-16t-up-to-4-5ghz-with-w11-pro-ddr4-ram-2-m-2-nvme-%E5%A4%8D%E5%88%B6

I bought it on Amazon, but you could buy it directly from the seller, although I'd recommend Amazon so you don't have to deal with the import and you get an easy return policy.

[–] pe1uca@lemmy.pe1uca.dev 3 points 7 months ago

Found the issue '^-^
UFW also blocks traffic between Docker and the host.
I had to add these rules:

ufw allow proto tcp from 172.16.0.0/12 to 172.16.0.0/12 port 80
ufw allow proto tcp from 172.16.0.0/12 to 172.16.0.0/12 port 443

[–] pe1uca@lemmy.pe1uca.dev 1 points 7 months ago

Same problem.
I tried a few values and it's the same: ping works but curl doesn't.

 

I'm having an issue making a container that runs in gluetun's network access the host.

In theory there's a variable for this: FIREWALL_OUTBOUND_SUBNETS
https://github.com/qdm12/gluetun-wiki/blob/main/setup/connect-a-lan-device-to-gluetun.md#access-your-lan-through-gluetun
When I include 172.16.0.0/12 I can ping the IP assigned with host-gateway, but I can't curl anything.
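
For reference, this is roughly how it's set up (a sketch; provider credentials and most other settings omitted):

# gluetun container with the outbound subnet allowed and the host reachable via host-gateway
docker run -d --name gluetun --cap-add=NET_ADMIN --device /dev/net/tun \
  -e FIREWALL_OUTBOUND_SUBNETS=172.16.0.0/12 \
  --add-host=host.docker.internal:host-gateway \
  qmcgaw/gluetun

# the other container just joins gluetun's network namespace
docker run -d --name app --network=container:gluetun some/image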

The curl command just stays like this until it times out:

# curl -vvv 172.17.0.1
*   Trying 172.17.0.1:80...

I also tried adding 100.64.0.0/10 to connect to Tailscale, but it's the same result: ping works and curl times out.

Any other request works properly through the VPN configured in gluetun.

Do you guys have any idea what I might be missing?

[–] pe1uca@lemmy.pe1uca.dev 22 points 7 months ago (2 children)

Why not report it in the repo?

[–] pe1uca@lemmy.pe1uca.dev 2 points 7 months ago

Maybe FreshRSS with some extensions?
I saw a recent commit that fires an event when saving a favorite, so you could probably make an extension that sends the links of the pages you favorite to something like ArchiveBox.

I've only fiddled with an already-created extension, but they seem fairly simple, so you could create your own easily.
Of course you can inject JS so you could make it more complex if you want.

 

So, I'm self-hosting Immich. The issue is we tend to take a lot of pictures of the same scene/thing to later pick the best one, and we can end up with 5~10 photos which are basically duplicates, but not quite.
Some duplicate-finding programs put those images at 95% or more similarity.

I'm wondering if there's any way, probably at the file system level, for these images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?
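
(To be clear on the filesystem deduplication idea: block-level tools like duperemove on btrfs/XFS only merge identical blocks, so I'm not sure they'd help with these near-duplicates. A sketch of what I mean; the path is a placeholder:)

# block-level dedup pass over the photo library (btrfs or XFS only)
duperemove -dhr --hashfile=/tmp/photos.hash /path/to/immich/library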

 

I'm using https://github.com/rhasspy/piper mostly to create some audiobooks and read some posts/news, but the voices available are not always comfortable to listen to.

Do you guys have any recommendation for a voice changer to process these audio files?
Preferably it'd have a CLI so I can include it in the pipeline I use to process RSS feeds, but I don't mind having to work through a UI.
Bonus points if it can process the audio streams.
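
For context, the current pipeline just pipes text into piper, something like this (the model name is just an example):

# turn the extracted article text into a wav with piper
echo 'text extracted from the RSS item' | piper --model en_US-lessac-medium.onnx --output_file article.wav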

 

I've only ever used ufw, and just now I had to run this command to fix an issue with Docker:
sudo iptables -I INPUT -i docker0 -j ACCEPT
I don't know why I had to run this to make curl work.

So, what did I exactly just do?
This is behind my home router, which already rejects input from the WAN, so I'm guessing it's fine, right?

I'm asking since the image I'm running at home was previously running on a VPS which has a public IP, and this makes me wonder if I have something open there without knowing :/

ufw is configured to deny all incoming, but I learnt Docker bypasses this if you publish the ports like 8080:8080 instead of 127.0.0.1:8080:8080. And I confirmed it by accessing the IP and port.
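
In other words, the difference is just in how the port is published (sketch; the image name is a placeholder):

# publishes on every interface, and docker's own iptables rules let it through past ufw
docker run -d -p 8080:8080 some/image

# binds only to loopback, so it's only reachable from the host itself
docker run -d -p 127.0.0.1:8080:8080 some/image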

 

I started tinkering with Frigate and saw the option to use a Coral AI device to process the video feeds for object recognition.

So I started checking a bit more what else could be done with the device, and everything listed on the site is related to human recognition (poses, faces, parts) or voice recognition.

Somewhere I read that Stable Diffusion or LLMs are not an option, since they require a lot of RAM, which these kinds of devices lack.

What other good/interesting uses can these devices have? What are some of your deployed services using these devices for?

 

I have a few servers running some services using a custom domain I bought some time ago.
Each server has its own instance of caddy to handle a reverse proxy.
Only one of those servers can actually do the DNS challenge to generate the certificates, so I was manually copying the certificates to the other caddy instances that needed them and using the tls directive for that domain to read the files.
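
(The manual step was basically copying the two files over, something like this; the paths are placeholders for wherever caddy stores and reads them:)

scp main-host:/path/to/certs/wildcard_.my.domain.crt /etc/caddy/certs/
scp main-host:/path/to/certs/wildcard_.my.domain.key /etc/caddy/certs/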

I just found there are two ways to automate this: shared storage, and on-demand certificates.
So here's what I did to make it work with each one; hope someone finds it useful.

Shared storage

This one is in theory straightforward: you just mount a folder which all caddy instances will use.
I went the sshfs route, so I created a user and added ACLs to allow the local caddy user and the new remote user to write to the storage.

setfacl -Rdm u:caddy:rwx,d:u:caddy:rwX,o:--- ./
setfacl -Rdm u:remote_user:rwx,d:u:remote_user:rwX,o:--- ./
setfacl -Rm u:remote_user:rwx,d:u:remote_user:rwX,o:--- ./

Then on the server which will use the data I just mounted it (the /etc/fstab entry):

remote_user@<main_caddy_host>:/path/to/caddy/storage /path/to/local/storage fuse.sshfs noauto,x-systemd.automount,_netdev,reconnect,identityfile=/home/remote_user/.ssh/id_ed25519,allow_other,default_permissions,uid=caddy,gid=caddy 0 0

And included the mount as the caddy storage

{
	storage file_system /path/to/local/storage
}

On demand

This one requires a separate service, since caddy can't properly serve the file needed by the get_certificate directive.

We could run a service which reads the key and crt files and combines them directly on the main caddy instance, but I went with serving the files and combining them on the server which needs them.

So, in my main caddy instance I have this.
I restrict access to the requester's tailscale IP, and include the /ask endpoint required by the on-demand configuration:

@certificate host cert.localhost
handle @certificate {
	@blocked not remote_ip <requester_ip>
	respond @blocked "Denied" 403

	@ask {
		path /ask*
		query domain=my.domain domain=jellyfin.my.domain
	}
	respond @ask "" 200

	@askDenied `path('/ask*')`
	respond @askDenied "" 404

	root * /path/to/certs
	@crt {
		path /cert.crt
	}
	handle @crt {
		rewrite * /wildcard_.my.domain.crt
		file_server
	}

	@key {
		path /cert.key
	}
	handle @key {
		rewrite * /wildcard_.my.domain.key
		file_server
	}
}

Then on the server which will use the certs, I run a small service for caddy to make the HTTP requests to.
This also includes another way to handle the /ask endpoint, since wildcard certificates are not matched with *: caddy actually asks for each subdomain individually, and the example above can't handle a wildcard like domain=*.my.domain.

package main

import (
	"io"
	"net/http"
	"strings"

	"github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()

	e.GET("/ask", func(c echo.Context) error {
		if domain := c.QueryParam("domain"); strings.HasSuffix(domain, "my.domain") {
			return c.String(http.StatusOK, domain)
		}
		return c.String(http.StatusNotFound, "")
	})

	e.GET("/cert.pem", func(c echo.Context) error {
		// Fetch the certificate from the main caddy instance
		crtResponse, err := http.Get("https://cert.localhost/cert.crt")
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}
		defer crtResponse.Body.Close()
		crtBody, err := io.ReadAll(crtResponse.Body)
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}

		// Fetch the key and append it, so caddy receives a combined PEM
		keyResponse, err := http.Get("https://cert.localhost/cert.key")
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}
		defer keyResponse.Body.Close()
		keyBody, err := io.ReadAll(keyResponse.Body)
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}

		return c.String(http.StatusOK, string(crtBody)+string(keyBody))
	})

	e.Logger.Fatal(e.Start(":1323"))
}

And in the Caddyfile, request the certificate from this service:

{
	on_demand_tls {
		ask http://localhost:1323/ask
	}
}

*.my.domain {
	tls {
		get_certificate http http://localhost:1323/cert.pem
	}
}
 

It seems the SSD sometimes heats up and the content disappears from the device, mostly on my router, sometimes on my laptop.
Do you know what I should configure to put the drive to sleep, or something similar, to reduce the heat?

I'm starting up my datahoarder journey now that I replaced my internal nvme SSD.

It's just a 500GB drive which I attached to my D-Link router running OpenWrt. I configured it with Samba and everything worked fine when I finished the setup. I just have some media files in there, which I read from Jellyfin.

After a few days the content disappears. It's not a connection problem with the shared drive, since I can SSH into the router and the files aren't shown there either.
I need to physically remove the drive and connect it again.
When I do this I notice it's somewhat hot. Not scalding, just hot.

I also tried connecting it directly to my laptop running Ubuntu. There the drive sometimes remains cool and the data shows up without issue after days.
But sometimes it also heats up and the data disappears (even when the data was not being used, i.e. I hadn't configured Jellyfin to read from the drive).

I'm not sure how to make the SSD sleep for periods of time, or throttle it, so it can cool off.
Any suggestion?
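
(In case it helps to diagnose: this is the kind of check I'd run to see the temperature; the device name is a placeholder, and USB adapters may need -d sat:)

# check the drive's reported temperature over SMART
smartctl -a /dev/sda | grep -i temp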

 

I want to have something similar to a Google Nest Hub to display different types of information, like weather, bus times, my own services' information, a photo gallery, etc.

It's not a problem if I have to manually write plugins for custom integrations.
It'd be better if it's meant to be shown in a web browser.

I remember there were some projects related to a digital mirror screen, or a kiosk screen, but I can't find a good one to self-host and extend to my needs.

The ones I've found are focused on showing stats of deployed services and quick links to them.

 

All guides to deploying with Docker mention typing your keys/credentials/secrets into the docker compose file, or using a .env or similar file. I'm wondering how secure this is and if there's a better option.

Also, this has the issue of having to get into the server to manage them and remembering which file has each credential.

Is there a self-hostable secrets manager? I've only found proprietary/paid ones aimed at large infrastructures, and I just need it for a couple of my servers/projects.
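
(For reference, the .env approach the guides describe boils down to something like this; the file and variable names are placeholders:)

# keep the secrets out of the compose file itself and restrict who can read the file
printf 'DB_PASSWORD=change-me\n' > .env
chmod 600 .env

# compose interpolates ${DB_PASSWORD} from .env; this prints the resolved config to verify
docker compose config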
