[–] iturnedintoanewt@lemm.ee 1 points 3 days ago (1 children)

Question... I started working on this. How would you go about opening an SSH or RDP port (on some random port such as 12345) on a local machine, so that I can reach it via the Tailscale IP?
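
For context, what I have in mind for the SSH side is something like this (the Tailscale address below is a placeholder; nothing needs to be opened on the router, since Tailscale handles reachability and it's only a question of which port the service listens on):

```bash
# On the target machine: add "Port 12345" to /etc/ssh/sshd_config, then restart sshd
sudo systemctl restart sshd    # the service may be called "ssh" on Debian/Ubuntu

# Show the machine's Tailscale IPv4 address (run on the target machine)
tailscale ip -4

# From any other device on the tailnet:
ssh -p 12345 user@100.101.102.103   # placeholder Tailscale IP
```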

[–] iturnedintoanewt@lemm.ee 1 points 1 week ago* (last edited 1 week ago) (1 children)

...what support? They barely reply to any queries people post in their Google Groups. If you go there you'll see most people trying to reach them either because the servers are down (the main issue at hand) or because of login issues that never get fixed (the longest-standing issue; you're better off creating a new subdomain), from what I've seen. I've also tried repeatedly to reach them about changing the access token, with no luck. It's a free service so I can't complain, but the only support you'll actually get is from other users, and for some scenarios that's not quite enough.

EDIT: Oh wow, right after posting this I saw they actually replied about the SSO/token issue most people have (SSO broke due to the Reddit snafu, so you end up with just the token and no way to make any further changes to your account). This has been an ongoing issue for over two years, and they finally replied (I think for the first time) a couple of weeks ago.

[–] iturnedintoanewt@lemm.ee 2 points 1 week ago (1 children)

Thanks. I'm also seriously considering a paid domain, so it's good to hear about your experience. I might try some other free provider first, though.

[–] iturnedintoanewt@lemm.ee 4 points 1 week ago* (last edited 1 week ago) (3 children)

> Does it matter?

No, it doesn't change, but why is that a concern? The problem is that DuckDNS DOES NOT answer DNS queries, not from my own servers, but from people outside looking up my servers by name. Duck fails to respond to those queries and users get timeouts. I can frequently replicate this with either dig or nslookup, from different machines, both inside my network and on random connections.
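
For reference, this is the kind of check I mean (the subdomain is a placeholder): ask a public resolver first, then query one of DuckDNS's own authoritative nameservers directly.

```bash
# Ask a public resolver
dig example.duckdns.org @1.1.1.1 +short

# List DuckDNS's authoritative nameservers, then query one of them directly
dig NS duckdns.org +short
dig example.duckdns.org @"$(dig NS duckdns.org +short | head -n1)" +short

# Same check with nslookup
nslookup example.duckdns.org 8.8.8.8
```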

Today I managed to run certbot to register two new subdomains that yesterday consistently failed with a long timeout THE WHOLE DAY. Today the same certbot command on the same server went through on the first attempt.

So...yeah. Unreliable.

[–] iturnedintoanewt@lemm.ee 1 points 1 week ago

Great...thanks. I'm going to look them up.

[–] iturnedintoanewt@lemm.ee 5 points 1 week ago* (last edited 1 week ago) (5 children)

Glad it works for you guys. Here it fails to respond at least once a week or so, and sometimes that lasts an hour or more. It's unpredictable, and it makes the server look buggy.

A sample, for the record... there are a lot of these on Reddit:

https://www.reddit.com/r/selfhosted/comments/1cyru6p/duckdns_dns_servers_down/

[–] iturnedintoanewt@lemm.ee 2 points 1 week ago* (last edited 1 week ago)

It will randomly fail to resolve, and then your services go down. And you spend quite a while figuring out what might have failed until the usual "when in doubt, it's DNS" comes up. This also applies when you're trying to add/renew subdomains.

Just a sample...

https://lemmy.world/post/13565617

[–] iturnedintoanewt@lemm.ee 4 points 1 week ago

It seems to frequently stop responding.

 

Hi guys!

I'm considering moving away from duckdns, as it's becoming increasingly unreliable. I'd like to check some other free dynamic DNS alternatives (I'm open to suggestions!).

My idea would be to have the server run under two different domains, both pointing to the same services. Is this possible? What should I change in nginx so it answers to two different domains/names?
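
For reference, what I'm picturing is a single server block with both names listed, if I understand server_name correctly (everything below is a placeholder):

```nginx
server {
    listen 443 ssl;
    # Both names point at the same services; just list them all.
    server_name myserver.duckdns.org myserver.example.net;

    # A certificate covering both names (or two otherwise-identical
    # server blocks, one per certificate).
    ssl_certificate     /etc/letsencrypt/live/myserver/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myserver/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;   # whatever backend is already proxied
        proxy_set_header Host $host;
    }
}
```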

Thanks!

[–] iturnedintoanewt@lemm.ee 5 points 1 week ago (2 children)

The end result is the same, though. The first unlock of the phone is the one a bad actor can't get past.

[–] iturnedintoanewt@lemm.ee 10 points 1 week ago (2 children)

GrapheneOS is the easiest ROM install, bar none. Point a browser (it needs to be Chromium-based) at the install URL, hook up the phone cable, and let it run. It's super straightforward. It's not rooting, though; you don't get root access by default.

[–] iturnedintoanewt@lemm.ee 116 points 1 week ago (41 children)

GrapheneOS also has this. Not sure stock Android includes it.

[–] iturnedintoanewt@lemm.ee 8 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Extremely unlikely, since the rapey behavior of Kamal's actor forced his own on-screen death (and firing). I'm not complaining about the attempt to close the main plots where they did, given the cards they were dealt. I'm happy they tried to give it a decent ending and closure.

I didn't find the final books as amazing as the rest of the series.

 

Hi guys!

I'm trying to create a WebDAV server that shares an NFS mount from my NAS. In short, I'm trying to create/share a backup folder on my NAS so my GrapheneOS phone can run its backups against it.

How can I do this? All the guides talk about sharing /var/www/webdav and chrooting it. How can I share my own folder? Does it need to be /var/www/webdav? Can I share something else instead? Should I just link my NAS mount to /var/www/webdav?
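
For example, would a bind mount like this be the sane way to expose the NAS folder under the existing WebDAV root? (Paths are placeholders; this assumes the NFS share is already mounted and the WebDAV server is already serving /var/www/webdav.)

```bash
# Expose the NAS backup folder under the WebDAV root without moving any data
sudo mkdir -p /var/www/webdav/phone-backup
sudo mount --bind /mnt/nas/backups /var/www/webdav/phone-backup

# To make it persistent across reboots, an /etc/fstab entry along these lines:
#   /mnt/nas/backups  /var/www/webdav/phone-backup  none  bind  0  0
```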

Thanks!

 

Hi guys! I'm looking to monitor/control the power consumption of some old window-mounted aircon units that don't really mind having the power cut at the wall. I'd like to be able to see how much power they consume, and also to turn them on and off at the socket (the IR remote doesn't work all that well to begin with). I was looking at the Tapo P110M, but it seems these don't expose power consumption locally/offline; you need to register them in the app, and they only report it through a Tapo account.

What alternatives do I have?

Important, I guess: since I live in an ex-UK colony, we have UK-style three-pronged sockets, so that's the form factor (Type G, I think?) I'd need.

 

Hi guys!

Back in the day I had a VM holding nginx and all the exposed crap... and I set it up with fail2ban. I moved away from it, as the OS upgrade was getting messy, and rebuilt on an LXC container. How should I use fail2ban/iptables to protect/harden my LXC container/server? Do the same conditions apply, or will I run into limitations/issues because of the container itself?
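
For context, this is the kind of jail I had on the old VM and would presumably carry over (it uses the stock nginx-http-auth filter and the usual log path); my main doubt is whether the container is allowed to manage its own iptables/nftables rules, which is where I'd expect unprivileged LXC to get awkward:

```ini
# /etc/fail2ban/jail.local — minimal jail using the stock nginx-http-auth filter
[nginx-http-auth]
enabled  = true
port     = http,https
logpath  = /var/log/nginx/error.log
maxretry = 5
findtime = 10m
bantime  = 1h
# swap for nftables-multiport if the container/host uses nftables
banaction = iptables-multiport
```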

Thanks!

 

Hi guys! It's been a long while, and I still struggle with Deluge on brand-new movie releases that just about everyone is downloading.

A bit of background: I have a 1 Gbps connection and run Deluge in headless mode (that's why I chose Deluge; for headless you get either Deluge or Transmission... AFAIK those are the only two that support it).

So, whenever my -arr servers catch the latest release of the very latest movie or TV show, Deluge picks it up and immediately faceplants with a download error. I can either "force check" or "resume"; either way (it doesn't matter which), it errors again within a second or two. This resume/error/resume struggle continues for a while until it finally starts downloading a larger chunk... only to error again a minute or two later, after pulling several hundred MB. Then comes another stretch of constant errors. Finally, it gets stuck at 99%, where it really needs a "force check" to find whatever data was corrupted, redownload it, and finish.

Any idea why this happens? Any way to fix or avoid it? I'm not sure whether Deluge is connecting to fake seeders that feed it corrupted data and then failing to catch/fix it. Any help would be very welcome. Thanks!

 

So... yeah. Looking at file size, it clearly beats older x264 or even x265. I don't mind if my server has to transcode for most clients; I think the size difference might be worth it. But I'm not sure which groups to focus on to find these AV1 releases; they still seem quite scarce?

 

Hi! I'm currently looking into perhaps running Jellystat. But the instructions seem to be a bit... lacking? Is there a step-by-step guide on how to get it up and running?
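
For reference, this is roughly the kind of compose setup I'd expect from the README: Jellystat next to its own Postgres container. The image name, variable names, and port below are assumptions from memory, so they need double-checking against the actual project docs.

```yaml
# docker-compose.yml — hedged sketch; values are assumptions/placeholders
services:
  jellystat-db:
    image: postgres:15
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: change-me
    volumes:
      - jellystat-db-data:/var/lib/postgresql/data

  jellystat:
    image: cyfershepard/jellystat:latest
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: change-me
      POSTGRES_IP: jellystat-db
      POSTGRES_PORT: 5432
      JWT_SECRET: some-long-random-string
    ports:
      - "3000:3000"
    depends_on:
      - jellystat-db

volumes:
  jellystat-db-data:
```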

Thanks!

24 points · submitted 4 months ago* (last edited 4 months ago) by iturnedintoanewt@lemm.ee to c/selfhosted@lemmy.world
 

Hi guys!

When I saw this tiny little guy, I had to go and get it, and I received it today. My first impression is... the software is a bit rough at the moment. And now I'm having trouble with keyboard detection: it's no longer working, and I'm not sure what's wrong. Basically, it worked initially, but after I unplugged it to dump some ISOs onto it*, the USB keyboard emulation no longer seems to work.

And since I'm one of the very first users... there's basically no documentation (yay). I see there's a Chinese forum where more people mention a USB keyboard issue, but I don't think it's been sorted out.

Has anyone else tried it? How's your experience so far? Any ideas on how to fix the keyboard issue? Still, for all its initial wonkiness, I clearly see this as the future for a KVM device, rather than a full-blown Raspberry Pi board, which I think is a bit overkill.

*: The "full" version comes with an embedded 32GB microSD, of which 8GB is for the OS; the remainder is a separate partition for ISOs... you connect it to a PC as USB storage and drop your ISOs there. At the moment you don't seem to be able to mount an arbitrary file from your PC via the browser UI, only ISO files it already has in its own storage.

 

Hi guys! So, I have Proton Mail, which also gives me Proton Calendar. I love having an encrypted private calendar, but it bothers me that it doesn't play well with any other app, since it isn't an actual "calendar" as far as Android is concerned. This matters because I use GrapheneOS, mostly without Google services, and I'd like my Gadgetbridge-connected smartwatch to be able to display calendar events, since they're not being shared with anyone else. But I can't, because Proton Calendar isn't really an Android calendar.

There is a way in Proton to permanently share a link to your private calendar. In effect, it's an up-to-date .ics file that, I believe, needs to be checked/downloaded every time there's an update. Is there a way to update this in Proton? Alternatively, I wouldn't mind setting up some CalDAV system that imports it, but I'm not sure whether there's already a guide for that?
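
For example, the crude version of "importing it" that I have in mind is just fetching that shared link on a schedule and pointing whatever consumes it (a CalDAV server, a local calendar subscription, etc.) at the local copy. The URL and path below are placeholders for the permanent share link and wherever the file should land:

```bash
# Crontab entry (every 30 minutes): download the shared Proton Calendar .ics locally
*/30 * * * * curl -fsSL "https://example.invalid/my-proton-calendar.ics" -o /srv/calendars/proton.ics
```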

Thanks so much!

 

Hi guys! This is my first attempt at Immich (...and Docker, while I'm at it). I have set it up successfully (I think), connected the phone, and it started uploading. I have enabled foreground and background backup, and I have only selected the camera album on my Pixel/GrapheneOS phone.

The thing is, after a while (when the screen turns off for a bit, even though the app is unrestricted in Android/GrapheneOS, or when changing apps... or whenever it feels like it), the backup seems to start again from scratch, uploading the first videos from the album over and over (the latest ones, from a couple of days ago) and working its way back to somewhere in December 2023... at which point it decides to go back and redo May 2024. It's been doing this a bunch of times. I've seen it mentioned several times that I should set client_max_body_size in nginx to something large like 5000MB. In my case it's set to 0, which should mean unrestricted. It doesn't skip large videos of several hundred megs; it does seem to go through the upload process... but then it keeps redoing them after a while.

Any idea what might be failing? Why does it keep restarting the backup? By the way, I took a screenshot of the backup a couple of days ago, and both the backed-up asset count and the remainder have stayed the same since (total 2658, backed up 179, remainder 2479). So that's a couple of days now going through what I think are the same files over and over.

SOLVED: It was indeed about adding client_max_body_size to my nginx config. I thought I already had, so I kept ignoring this even though I saw it mentioned multiple times. Mine is set to 0, not the 50000M suggested in other threads, but that should still work (0 means unlimited). The catch: it was in the wrong section, applying to a different service/container, not Immich. Adding it to the Immich block too (with 0, in my case) worked immediately after restarting the nginx service. Thanks everyone for all the follow-ups and suggestions!
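
For anyone finding this later, the gist of the fix looks like this; the server name and upstream port are placeholders, and the point is simply that the directive has to live in the server/location block that actually proxies Immich, not in one belonging to another service:

```nginx
server {
    listen 443 ssl;
    server_name immich.example.com;        # placeholder name

    client_max_body_size 0;                # 0 = no limit on upload size

    location / {
        proxy_pass http://127.0.0.1:2283;  # Immich's default port (assumed)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```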

 

So, the issue at hand: I have a Chromecast 4K with Jellyfin for Android TV on it, and most of my library is x265/HEVC. When playing from this specific device, it will take HEVC natively, but with the ExoPlayer library it plays like a slideshow, at about 5-10 FPS. Choosing VLC is fine, and forcing a transcode results in perfectly playable x264 at 24/30/60 FPS or whatever is needed. But x265 with the default ExoPlayer seems to be a struggle. Is there a way, either in Jellyfin for Android TV or on the server, to specifically disable x265 playback, but only for this device?

 

So... in short: the title. I have a server in a remote location that also happens to be behind CGNAT. I only get to visit this location once a year at best, so if anything goes down, it stays down for the rest of the year until I can go and troubleshoot. I have a main location/home where everything works: I get a fixed IP and can expose as many services as I want.

I'd like to set things up so I could publish internal servers such as HA or similar from this remote location and reach them easily enough that I could install the apps for non-tech users and they could just use them through a normal URL. Is this possible? I already have PiVPN running WireGuard at the main location, and I just tested an LXC container at the remote location: it connects via WireGuard to the main location just fine and can ping/SSH machines correctly. But I can't reach this VPN-connected machine from the main location. Alternatively, I'm happy to hear other solutions/ideas on how to connect this remote location to the main one.
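
For reference, my understanding of the usual first thing to check (keys, IPs, and endpoint below are placeholders): the server-side [Peer] entry for the LXC has to list the LXC's tunnel IP in AllowedIPs, and the LXC should keep the tunnel alive through CGNAT with a persistent keepalive. If the goal is to reach other machines on the remote LAN rather than just the LXC itself, the server-side AllowedIPs would also need that LAN subnet and the LXC would need IP forwarding, but that part is an assumption about the intended topology.

```ini
# On the main-location WireGuard server (e.g. /etc/wireguard/wg0.conf):
[Peer]
# the remote LXC
PublicKey = <remote LXC public key>
AllowedIPs = 10.6.0.10/32            # tunnel IP assigned to the LXC (placeholder)

# On the remote LXC's client config:
[Peer]
# the main-location server
PublicKey = <server public key>
Endpoint = home.example.net:51820    # placeholder public address/port
AllowedIPs = 10.6.0.0/24             # reach the whole tunnel subnet (placeholder)
PersistentKeepalive = 25             # keeps the CGNAT/NAT mapping alive
```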

Thanks!
