Burn1ngBull3t

joined 2 years ago
[–] Burn1ngBull3t@lemmy.world 2 points 1 month ago

Just got back on PlanetCrafter with my mate.

We're having a blast with the Moons update!

[–] Burn1ngBull3t@lemmy.world 15 points 1 month ago

Hell yeah, some DIY Perks on Lemmy.

Great-quality video as always, even though the setup might make adding peripherals cumbersome in the long term.

But still interesting!

[–] Burn1ngBull3t@lemmy.world 3 points 2 months ago

I use Scaleway for some personal services. It does the job really well, I find, and I've had the chance to work with Azure and AWS too.

I find it solid.

[–] Burn1ngBull3t@lemmy.world 1 points 4 months ago (1 children)

Good suggestion actually, I'll head back to the MetalLB docs. Thanks!
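
If it pans out, the Layer 2 setup from the MetalLB docs is roughly this (a sketch; the release tag and address range are assumptions for my LAN):

    # Install MetalLB (check the docs for the current release tag):
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.9/config/manifests/metallb-native.yaml

    # Hand it a pool of LAN IPs and advertise them over L2:
    kubectl apply -f - <<EOF
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: lan-pool
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.0.200-192.168.0.210
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: lan
      namespace: metallb-system
    spec:
      ipAddressPools:
        - lan-pool
    EOF

A LoadBalancer Service for the game server then gets a stable LAN IP that the router can forward, TCP and UDP alike.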

[–] Burn1ngBull3t@lemmy.world 3 points 4 months ago (3 children)

Many issues this week:

  • Broke external-dns on my kube cluster because I updated my Pi-hole to v6
  • Thinking of a way to expose a game server externally (I usually used CF tunnels for specific services, but couldn't get it to work because it's TCP/UDP rather than HTTP traffic)

But at least I got my Velero backups working against a private S3.
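
For the curious, the install against an S3-compatible endpoint looks roughly like this (a sketch following the Velero docs for MinIO-style storage; the plugin version, bucket name, and endpoint URL are placeholders for my setup):

    velero install \
      --provider aws \
      --plugins velero/velero-plugin-for-aws:v1.10.0 \
      --bucket velero-backups \
      --secret-file ./credentials-velero \
      --use-volume-snapshots=false \
      --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://192.168.0.55:9000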

[–] Burn1ngBull3t@lemmy.world 4 points 5 months ago (2 children)

I was looking at using ProtonVPN, but since that story I'd prefer an alternative.

What do you all use for a VPN? (I think I saw Mullvad mentioned in the replies?)

[–] Burn1ngBull3t@lemmy.world 4 points 6 months ago (1 children)

Either Tailscale or Cloudflare Tunnels is the most suitable solution, as other comments said.

For Tailscale, since you've already set it up, just make sure you have an exit node where your services are. I had to do a bit of tinkering to make sure the IPs were resolved: it's just an argument to the tailscale command.
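
Roughly, it looks like this (a sketch; the subnet and node name are placeholders for whatever your network uses):

    # On the machine sitting next to the services:
    sudo tailscale up --advertise-exit-node --advertise-routes=192.168.0.0/24
    # On your partner's device, use that exit node and accept routes/DNS:
    sudo tailscale up --exit-node=my-home-node --accept-routes --accept-dns=true
    # (Advertised routes still need to be approved in the Tailscale admin console.)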

But if you don't want to use Tailscale because it's too complicated for your partner, then Cloudflare Tunnels is the other way to go.

It works by creating a tunnel between your services and Cloudflare, kind of like a VPN would. You usually configure the tunnel with the cloudflared CLI or directly through Cloudflare's website. You should have your DNS imported to Cloudflare, by the way, because you have to set up a binding such as service.mydns.com -> myservice.local so that Cloudflare can resolve your local service and expose it at a public URL.
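
The CLI flow is roughly this (a sketch; the tunnel name, hostname, and local service address are placeholders):

    cloudflared tunnel login
    cloudflared tunnel create home
    cloudflared tunnel route dns home service.mydns.com
    # ~/.cloudflared/config.yml then maps the public hostname to the local service:
    #   tunnel: <tunnel-UUID>
    #   credentials-file: ~/.cloudflared/<tunnel-UUID>.json
    #   ingress:
    #     - hostname: service.mydns.com
    #       service: http://myservice.local:8080
    #     - service: http_status:404
    cloudflared tunnel run home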

Just so you know, Cloudflare Tunnels are free for this kind of usage. However, Cloudflare holds the keys for your SSL traffic, so in theory they could take a look at your requests.

Best of luck with the setup!

[–] Burn1ngBull3t@lemmy.world 2 points 6 months ago

  • Escape from Tarkov - 2500h
  • Elite Dangerous - 800h
  • Kerbal Space Program - 300h
  • Satisfactory - 250h and still going up ^^

 

Hello!

We've been discussing at work hosting (internally) some work-related stories that we find funny.

I've been looking for a tool to do that; it should be quite simple and display one story at a time, nothing fancy.

I couldn't find anything quite like that, so I was wondering if you know of one? If not, I might develop it and share it.

Thanks!

[–] Burn1ngBull3t@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

Hello @theit8514

You are actually spot on ^^

I did look in my exports file, which was like so: /mnt/DiskArray 192.168.0.16(rw) 192.168.0.65(rw)

I added a localhost line in case: /mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.65(rw)

It didn't solve the problem, so I went investigating with the mount command:

  • Will mount on 192.168.0.65: mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test

  • Will NOT mount on 192.168.0.55 (NAS): mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test

  • Will mount on 192.168.0.55 (NAS): mount -t nfs 127.0.0.1:/mnt/DiskArray/mystuff/ /tmp/test

The mount -t nfs 192.168.0.55 one is the mount the cluster actually does. So I either need to find a way for it to use 127.0.0.1 on the NAS machine, or use a hostname that might resolve better.

EDIT:

It was actually WAY simpler.

I just added 192.168.0.55 to my /etc/exports file. It works fine now ^^
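
For anyone finding this later, the fix boils down to this (a sketch of the final state):

    # /etc/exports on the NAS (192.168.0.55) now lists every node, itself included:
    /mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.55(rw) 192.168.0.65(rw)

    sudo exportfs -ra   # re-read /etc/exports without restarting the NFS server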

Thanks a lot for your help, @theit8514@lemmy.world!

 

Hello!

I currently have a problem on my kubernetes cluster.

I have 3 nodes:

  • 192.168.0.16
  • 192.168.0.65
  • 192.168.0.55

I use an NFS storage class (sigs/nfs-subdir-external-provisioner) to consume an NFS share.

The NFS share is actually set up on 192.168.0.55, which is therefore also a worker node.

I noticed that I have problems mounting volumes when a pod is created on the 192.168.0.55 node. If it's on one of the other two, it mounts fine. (The error is actually a permission denied on the 192.168.0.55 node.)

I would guess that something goes wrong when kube tries to mount the NFS share since it's on the same machine?

Any idea how I can fix this? Cheers!

 

Hello selfhosted!

Continuing my journey of setting up my home k3s cluster.

I've been asking myself whether Longhorn might be overkill for my home cluster. Here's what I did:

3 machines, each running k3s. One of them has storage in RAID 5, and I don't want to use any storage from the other two.

Thing is, I had to configure the replica count to 1 in Longhorn for my PV to be green.

Hence my question: since the data is already replicated in the array, shouldn't I just use an NFS storage class instead?
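
For reference, the NFS alternative would be roughly this (a sketch from the nfs-subdir-external-provisioner README; the server and path are assumptions based on my setup):

    helm repo add nfs-subdir-external-provisioner \
      https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    helm install nfs-subdir-external-provisioner \
      nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
      --set nfs.server=192.168.0.55 \
      --set nfs.path=/mnt/DiskArray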

Thanks!