pe1uca

joined 1 year ago
[–] pe1uca@lemmy.pe1uca.dev 12 points 7 hours ago (2 children)

Why not report it in the repo?

[–] pe1uca@lemmy.pe1uca.dev 2 points 4 days ago

Maybe FreshRSS with some extensions?
I saw a recent commit to fire an event when saving a favorite, so you could probably write an extension that sends the link to something like ArchiveBox for the pages you favorite.

I've only fiddled with an already created extension, but they seem fairly simple to write yourself.
Of course you can inject JS, so you could make it as complex as you want.
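
As a rough sketch of what such an extension could look like (the hook name 'entry_favorite' and the ArchiveBox endpoint are placeholders I made up; check the actual event name in that commit):

class ArchiveFavoritesExtension extends Minz_Extension {
	public function init() {
		// Placeholder hook name: use whatever event that commit actually fires.
		$this->registerHook('entry_favorite', array($this, 'archiveEntry'));
	}

	public function archiveEntry($entry) {
		// POST the favorited entry's URL to a hypothetical ArchiveBox endpoint.
		$ch = curl_init('http://archivebox.local/add/');
		curl_setopt_array($ch, array(
			CURLOPT_POST => true,
			CURLOPT_POSTFIELDS => http_build_query(array('url' => $entry->link())),
			CURLOPT_RETURNTRANSFER => true,
		));
		curl_exec($ch);
		curl_close($ch);
		return $entry;
	}
}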

[–] pe1uca@lemmy.pe1uca.dev 3 points 5 days ago

With Invidious in FreshRSS I use the YouTube extension for the embedded video player, you just need to update this part of the code: https://github.com/FreshRSS/Extensions/blob/master/xExtension-YouTube/extension.php#L153-L163
It's easy, just replace it with this:

    public function getHtmlContentForLink(FreshRSS_Entry $entry, string $link): string
    {
        $domain = 'www.youtube.com';
        if ($this->useNoCookie) {
            $domain = 'www.youtube-nocookie.com';
        }
        // Override the domain with your Invidious instance.
        $domain = 'invidious.personal.com';
        // Ask for the DASH stream in the embedded player.
        $params = 'quality=dash';
        $url = str_replace('//www.youtube.com/watch?v=', '//'.$domain.'/embed/', $link);
        $url = str_replace('http://', 'https://', $url);
        $url = $url . '?' . $params;

        return $this->getHtml($entry, $url);
    }

The only changes are overriding $domain with your Invidious instance ('invidious.personal.com' above) and adding the quality=dash parameter.

Seems there's also this one: https://github.com/tunbridgep/freshrss-invidious, but I haven't tried it.

[–] pe1uca@lemmy.pe1uca.dev 7 points 1 week ago (7 children)

That's a weird read considering I had to move to Wayland because X11 had severe screen tearing. I would have guessed Wayland had better support.

[–] pe1uca@lemmy.pe1uca.dev 5 points 1 week ago (5 children)

I don't think there are services like that, since this usually means deploying and destroying an instance, which takes a few minutes (if you just turn the instance off you still get billed).
Probably the best option would be to keep a snapshot, which costs way less than the actual instance, and create an instance from it each day or so to run on the images added since it was last destroyed.
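
As a rough sketch of that idea with the AWS CLI (the image ID and instance type below are placeholders):

# create an instance from the prepared image, run the job, destroy it
INSTANCE_ID=$(aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type g4dn.xlarge --query 'Instances[0].InstanceId' --output text)
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
# ...ssh in and run the processing, then...
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"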

This is kind of what I do with my media collection, I process it on my main machine with a GPU, and then just serve it from a low-power one with Jellyfin.

[–] pe1uca@lemmy.pe1uca.dev 5 points 1 week ago

IIRC this was already addressed and should be automatic.
There was an issue specifically mentioning GDPR and the devs implemented a way to automatically delete the data of an account within the given time.

It's not a GDPR request in itself, but AFAIK a normal delete account request should be compliant... IANAL.

[–] pe1uca@lemmy.pe1uca.dev 2 points 3 weeks ago (1 children)

Start by learning docker. You don't have to selfhost anything yet, just learn to run a container, especially for automated stuff. Then learn to build your own images and run docker compose.
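
For example, something as simple as this covers the basics (nginx is just an arbitrary image to practice with):

# run a ready-made image, bound to loopback only
docker run -d --name web -p 127.0.0.1:8080:80 nginx

# build your own image from a Dockerfile in the current directory
docker build -t my-image .

# bring up whatever a docker-compose.yml describes
docker compose up -d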

Also, you could start looking into some form of infrastructure as code; ansible and NixOS are the ones I usually hear about.
This gives you a way to easily redeploy your services on any hardware.

[–] pe1uca@lemmy.pe1uca.dev 6 points 3 weeks ago (1 children)

Does it apply it to all feeds? Or can it detect which feeds are actually YouTube ones?

[–] pe1uca@lemmy.pe1uca.dev 7 points 1 month ago (1 children)

Weird, it didn't ask me using Firefox and uBlock Origin.
I don't have all lists active, though.

[–] pe1uca@lemmy.pe1uca.dev 10 points 1 month ago (1 children)

Why do you need the files locally?
Is your network that slow?

I've heard of multiple content creators who keep their video files on a NAS shared between their editors, and they work directly from the NAS.
Could you do the same? You'll be working with music, so the network traffic will be lower than with video.

If you do this you just need a way to mount the remote directory, either with rclone or with sshfs.
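
For example (hostnames and paths here are made up):

# mount the NAS music share over ssh
sshfs editor@nas:/music /mnt/music

# or the same through a configured rclone remote
rclone mount nas:/music /mnt/music --daemon --vfs-cache-mode writes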


> The disks on my NAS go to sleep after 10 minutes idle time and if possible I would prefer not waking them up all the time

I think this is a good strategy to avoid putting additional stress on your drives (I'm not a NAS expert), but I've read that most of the actual wear and tear happens while spinning up and down. That's why NAS drives should be kept spinning all the time, and drives built specifically for NAS setups are designed with this in mind.

[–] pe1uca@lemmy.pe1uca.dev 4 points 2 months ago* (last edited 2 months ago)

I use rclone and duplicati depending on the needs of the backup.

For long term I use duplicati: it has a GUI and you can upload to several places (mine are spread between e2 and drive).
You configure the backend, the password for encryption, the schedule, and version retention.

With rclone and its crypt remote you mount your backups as an external drive, so you need to manually handle the actual copying of data into it, plus versioning and retention.
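
Roughly, the crypt workflow looks like this (remote names are just examples):

# one-time: create a "crypt" remote wrapping an existing one (interactive)
rclone config

# then copy data in encrypted form, or mount it like a drive
rclone copy /home/me/documents secret:documents
rclone mount secret: /mnt/backups --daemon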

[–] pe1uca@lemmy.pe1uca.dev 2 points 2 months ago (1 children)

I can't give you the technical explanation, but it works.
My Caddyfile only has something like this:

@forgejo host forgejo.pe1uca
handle @forgejo {
	reverse_proxy :8000
}

and everything else has worked properly, cloning via SSH with git@forgejo.pe1uca:pe1uca/my_repo.git

My guess is git only needs the host to resolve the IP and then connects to the port directly.

 

So, I'm selfhosting Immich. The issue is we tend to take a lot of pictures of the same scene/thing to later pick the best, so we can end up with 5~10 photos which are basically duplicates, but not quite.
Some duplicate-finding programs put those images at 95% or more similarity.

I'm wondering if there's any way, probably at the filesystem level, for these images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?

 

I'm using https://github.com/rhasspy/piper mostly to create some audiobooks and read some posts/news, but the voices available are not always comfortable to listen to.

Do you guys have any recommendation for a voice changer to process these audio files?
Preferably it'll have a CLI so I can include it in my pipeline to process RSS feeds, but I don't mind having to work through a UI.
Bonus points if it can process the audio streams.

 

I've only used ufw, and just now I had to run this command to fix an issue with docker:
sudo iptables -I INPUT -i docker0 -j ACCEPT
I don't know why I had to run it to make curl work.

So, what exactly did I just do?
This is behind my home router, which already rejects input from WAN, so I'm guessing it's fine, right?

I'm asking because the image I'm running at home was previously running on a VPS with a public IP, and this makes me wonder if I have something open there without knowing :/

ufw is configured to deny all incoming, but I learnt docker bypasses this if you publish ports like 8080:8080 instead of 127.0.0.1:8080:8080. And I confirmed it by accessing the IP and port.
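
For reference, the difference is just the bind address (some/image is a placeholder):

# published on all interfaces: docker inserts its own iptables rules, bypassing ufw
docker run -d -p 8080:8080 some/image

# bound to loopback: unreachable from other machines regardless of ufw
docker run -d -p 127.0.0.1:8080:8080 some/image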

 

I started tinkering with frigate and saw the option to use a Coral AI device to process the video feeds for object recognition.

So, I started checking a bit more into what else could be done with the device, and everything listed on the site is related to human recognition (poses, faces, body parts) or voice recognition.

Somewhere I read that stable diffusion or LLMs are not an option, since they require a lot of RAM, which these kinds of devices lack.

What other good/interesting uses can these devices have? What are some of your deployed services using these devices for?

 

I have a few servers running some services using a custom domain I bought some time ago.
Each server has its own instance of caddy to handle a reverse proxy.
Only one of those servers can actually do the DNS challenge to generate the certificates, so I was manually copying the certificates to each of the other caddy instances that needed them and using the tls directive for that domain to read the files.

I just found there are two ways to automate this: shared storage and on-demand certificates.
So here's what I did to make it work with each one; hope someone finds it useful.

Shared storage

This one is straightforward in theory: you just mount a folder which all caddy instances will use.
I went the sshfs route, so I created a user and added ACLs to allow both the local caddy user and the new remote user to write to the storage.

# default ACLs so files created later stay writable by both users
setfacl -Rdm u:caddy:rwx,d:u:caddy:rwX,o:--- ./
setfacl -Rdm u:remote_user:rwx,d:u:remote_user:rwX,o:--- ./
# and the access ACL for the files that already exist
setfacl -Rm u:remote_user:rwx,d:u:remote_user:rwX,o:--- ./

Then on the server which will use the data I mounted it via /etc/fstab:

remote_user@<main_caddy_host>:/path/to/caddy/storage /path/to/local/storage fuse.sshfs noauto,x-systemd.automount,_netdev,reconnect,identityfile=/home/remote_user/.ssh/id_ed25519,allow_other,default_permissions,uid=caddy,gid=caddy 0 0

And configured the mount as caddy's storage:

{
	storage file_system /path/to/local/storage
}

On demand

This one requires a separate service, since caddy can't properly serve the file needed by the get_certificate directive.

We could run a service which reads the key and crt files and combines them directly on the main caddy instance, but I chose to serve the files from there and combine them on the server which needs them.

So, in my main caddy instance I have this, which restricts access to my tailscale IP and includes the /ask endpoint required by the on-demand configuration:

@certificate host cert.localhost
handle @certificate {
	# only the requesting server's IP may reach this site
	@blocked not remote_ip <requester_ip>
	respond @blocked "Denied" 403

	# /ask endpoint for on_demand_tls: approve only the listed domains
	@ask {
		path /ask*
		query domain=my.domain domain=jellyfin.my.domain
	}
	respond @ask "" 200

	@askDenied `path('/ask*')`
	respond @askDenied "" 404

	# serve the wildcard certificate and key under stable names
	root * /path/to/certs
	@crt {
		path /cert.crt
	}
	handle @crt {
		rewrite * /wildcard_.my.domain.crt
		file_server
	}

	@key {
		path /cert.key
	}
	handle @key {
		rewrite * /wildcard_.my.domain.key
		file_server
	}
}

Then on the server which will use the certs I run a service for caddy to make the http request.
This also includes another way to handle the /ask endpoint, since wildcard certificates are not requested with *: caddy asks for each subdomain individually, so the query matcher above can't handle something like domain=*.my.domain.

package main

import (
	"io"
	"net/http"
	"strings"

	"github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()

	e.GET("/ask", func(c echo.Context) error {
		if domain := c.QueryParam("domain"); strings.HasSuffix(domain, "my.domain") {
			return c.String(http.StatusOK, domain)
		}
		return c.String(http.StatusNotFound, "")
	})

	e.GET("/cert.pem", func(c echo.Context) error {
		crtResponse, err := http.Get("https://cert.localhost/cert.crt")
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}
		crtBody, err := io.ReadAll(crtResponse.Body)
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}
		defer crtResponse.Body.Close()
		keyResponse, err := http.Get("https://cert.localhost/cert.key")
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}
		keyBody, err := io.ReadAll(keyResponse.Body)
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}

		return c.String(http.StatusOK, string(crtBody)+string(keyBody))
	})

	e.Logger.Fatal(e.Start(":1323"))
}

And in the Caddyfile, request the certificate from this service:

{
	on_demand_tls {
		ask http://localhost:1323/ask
	}
}

*.my.domain {
	tls {
		get_certificate http http://localhost:1323/cert.pem
	}
}
 

Seems the SSD sometimes heats up and its content disappears from the device, mostly on my router, sometimes on my laptop.
Do you know what I should configure to put the drive to sleep, or something similar to reduce the heat?

I'm starting up my datahoarder journey now that I replaced my internal nvme SSD.

It's just a 500GB one which I attached to my D-Link router running OpenWrt. I configured it with samba and everything worked fine when I finished the setup. I just have some media files on there, which I read from jellyfin.

After a few days the content disappears. It's not a connection problem with the shared drive, since when I ssh into the router the files aren't shown there either.
I need to physically remove the drive and connect it again.
When I do this I notice it's somewhat hot. Not scalding, just hot.

I also tried connecting it directly to my laptop running ubuntu. There the drive sometimes stays cool and the data shows up without issue for days.
But sometimes it also heats up and the data disappears (even when the data was not being used, i.e. I hadn't configured jellyfin to read from the drive).

I'm not sure how to make the SSD sleep for periods of time, or how to throttle it so it can cool off.
Any suggestions?

 

I want to have something similar to a Google Nest Hub to display different types of information, like weather, bus times, my own services' information, a photo gallery, etc.

It's not a problem if I have to manually write plugins for custom integrations.
It'd be better if it's meant to be shown in a web browser.

I remember there were some projects for a digital mirror screen or a kiosk screen, but I can't find a good one to selfhost that extends to my needs.

The ones I've found are focused on showing stats of deployed services and quick links to them.

 

All the guides to deploying with docker mention typing your keys/credentials/secrets into the docker compose file, or using a .env or similar file. I'm wondering how secure that is, and whether there's a better option.

Also, this has the issue of having to get into the server to manage them and remember which file has which credential.

Is there a selfhostable secrets manager? I've only found proprietary/paid ones for large infrastructures and I just need it for a couple of my servers/projects.
