Anonymouse

joined 1 year ago
[–] Anonymouse@lemmy.world 2 points 2 months ago

Perhaps I've been naive.

[–] Anonymouse@lemmy.world 1 points 2 months ago (1 children)

I have local incremental backups and rsync to the remote. Doesn't Syncthing support incremental versioning as well? You have a good point about syncing a destroyed disk to your offsite backup. I know S3 has some sort of protection for that, but I haven't played with it.

[–] Anonymouse@lemmy.world 1 points 2 months ago (1 children)

I have Tailscale mostly set up. What's the issue with USB drives? I've got a Raspberry Pi on the other end with a read-only SD card so it won't go bad.

[–] Anonymouse@lemmy.world 2 points 2 months ago

This reminds me that I need to set up alert monitoring. ;-)

[–] Anonymouse@lemmy.world 2 points 2 months ago

I'll have to check this out.

[–] Anonymouse@lemmy.world 1 points 2 months ago

I attended some LUGs before COVID and could see something like this being facilitated there. It also reminds me of the Reddit meetups that I never partook in.

[–] Anonymouse@lemmy.world 3 points 2 months ago

That's something that I hadn't considered!

[–] Anonymouse@lemmy.world 3 points 2 months ago

I wasn't aware of the untrusted setting. That sounds like a good option.

[–] Anonymouse@lemmy.world 3 points 2 months ago (2 children)

Yes. It's the "put a copy somewhere else" part that I'm trying to solve without a lot of cost and effort. So far, keeping a remote copy at a relative's house is good for both cost and being off site, but the time spent supporting it has been less than ideal: the Pi sometimes becomes unresponsive for unknown reasons, and getting the family member to reboot it "is too hard".

 

While reading many of the blogs and posts here about self hosting, I notice that self hosters spend a lot of time searching for and migrating between VPS or backup hosting providers. Being a cheapskate, I have a Raspberry Pi with a large disk attached that I leave at a relative's house, and I rsync my backup drive to it nightly. The problem is that when something happens, I have to walk them through a reboot, troubleshoot over the phone, or worse, wait until a holiday when we all meet.

What would a solution look like for a bunch of random tech nerds who happen to live near each other to cross host each other's offsite backups? How would you secure it, support it or make it resilient to bad actors? Do you think it could work? What are the drawbacks?

[–] Anonymouse@lemmy.world 1 points 3 months ago

Take some time and really analyze your threat model. There are different solutions for each one. For example, protecting against a friend swiping the drives may be as simple as LUKS on the drive and a USB key holding the unlock keys. Another poster suggested leaving the backup computer wide open but encrypting the files that you back up, with symmetric or asymmetric encryption based on your needs. If you're hiding data from the government, check your local laws. You may be guilty until proven innocent, in which case you need "plausible deniability" about what's on the drive. That's a different solution again. Are you dealing with a well-funded nation-state adversary? Maybe keying in the password isn't such a bad idea.

I'm using LUKS with Mandos on a Raspberry Pi. I back up to a Pi at a friend's house over Tailscale where the disk is wide open, but Duplicity will encrypt the backup file. My threat model is a run-of-the-mill thief swiping the computers and script kiddies hacking in.
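A minimal sketch of that "open disk, encrypted archives" arrangement, assuming Duplicity and Tailscale are already installed; the hostname, paths, and passphrase below are placeholders, not real values:

```shell
# Hypothetical sketch: client-side-encrypted backups to an unencrypted
# remote disk reached over Tailscale. All names here are made up.
export PASSPHRASE='use-a-real-secret-here'   # symmetric GPG key for Duplicity

# Duplicity encrypts the archive volumes locally before upload, so the
# remote disk can stay wide open without exposing file contents.
duplicity /srv/data sftp://pi@backup-pi.tailnet-name.ts.net//mnt/backup/data

# Restores pull the volumes back and decrypt locally with the same passphrase.
duplicity restore sftp://pi@backup-pi.tailnet-name.ts.net//mnt/backup/data /tmp/restore
```

The design point matching the threat model above: a thief who walks off with the remote Pi gets only GPG-encrypted volumes, while the local LUKS layer handles theft of the primary machine.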

[–] Anonymouse@lemmy.world 15 points 3 months ago (3 children)

You're doing God's work!

Over my career, it's been sad to see how the technical communications groups are the first to get cut because "developers should document their own code". No, most can't. The lack of good documentation also leads to churn in other areas. It's difficult to measure, but for those in the know, it's painfully obvious.

[–] Anonymouse@lemmy.world 4 points 4 months ago

I'm not as enraged by this as most, but I think the true test will be to see if this feature is disabled by default in future releases. If they actually do listen to their users, that's better than any of the other big players.

I read a bit about the new "feature", and it seems to me that they're trying out a way for ad companies to learn whether an advertisement was effective while still preserving the user's privacy. I can respect that. I did shut it off, but I'm also less concerned because I run multiple ad-removal tools, so this feature is irrelevant to me.

The fact that it's enabled by default isn't comforting, but who would actually turn it on if it were buried in about:config? To prove that a privacy-respecting yet advertiser-friendly mechanism can work, this is what they felt they had to do.

Of course, I could easily be all wrong about this and time will tell.

 

I had a super fast but small SSD and didn't know what to do with it, so I was playing with caching slow spinning drives via LVM. It worked pretty well, but I got interrupted and came back a few weeks later to upgrade the OS. I forgot about the LVM cache, updated the packages in preparation for the OS upgrade, then rebooted. The LVM cache modules weren't in the initramfs image, and it didn't boot.

I should know better. I've been rolling my own kernels since Slackware 1.0. I've had to build initramfs images for performance tweaks. Ugh!
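For anyone hitting the same wall: on Debian-style systems (an assumption on my part; dracut-based distros differ), the fix is to declare the cache modules and regenerate the initramfs before rebooting. Module names can vary by kernel version:

```shell
# Sketch for initramfs-tools (Debian/Raspberry Pi OS style).
# Declare the dm-cache stack so it gets bundled into the initramfs.
printf '%s\n' dm_cache dm_cache_smq dm_persistent_data dm_bufio \
  >> /etc/initramfs-tools/modules

# Rebuild the initramfs for every installed kernel so the cached LV
# can be activated at boot.
update-initramfs -u -k all

# Sanity check: confirm the modules actually landed in the image.
lsinitramfs /boot/initrd.img-"$(uname -r)" | grep dm-cache
```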

Where's my rescue disk?

 

I haven't seen this posted yet here, but anybody self-hosting OwnCloud in a containerized environment may be exposing sensitive environment variables to the public internet. There may be other implications as well.

 

Is anybody using only IPv6 in their home lab? I keep running into weird problems where some services use only IPv6 and are "invisible" to everyone else (I'm looking at you, Java!). I end up disabling IPv6 to force everything onto the same protocol, but I've started wondering: why not disable IPv4 instead? I'd have half as many firewall rules, routes, and configurations. What are the risks?
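On the Java case specifically, the JVM has dual-stack knobs, so a service's protocol can be forced per process without touching the host. A sketch (the jar name is a placeholder; the system properties are real JVM networking properties):

```shell
# Force the JVM to use IPv4 sockets only, making a service visible to
# v4-only clients...
java -Djava.net.preferIPv4Stack=true -jar some-service.jar

# ...or keep dual-stack but prefer IPv6 addresses during name
# resolution, useful when moving toward a v6-first network.
java -Djava.net.preferIPv6Addresses=true -jar some-service.jar
```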

 

Many of the posts I read here are about Docker. Is anybody using Kubernetes to manage their self-hosted stuff? For those who've tried it and gone back to Docker, why?

I'm doing my 3rd rebuild of a K8s cluster after learning from things I've done wrong and wanting to start fresh. When enhancing my Docker setup and deciding between K8s and Docker Swarm, I chose K8s for the learning opportunities and for how it could help me at work.

What's your story?
