SpiderUnderUrBed

joined 3 months ago
 

Like how on Debian's website you can find their ISOs and other related files in a very simple file-browser layout. It looks kind of old, but that's what I want. Does anyone know of projects or a way to set something like that up? The modern self-hosted stuff just does not seem simple enough, and both aesthetically and functionally I would like something like what Debian does with their own files. I also want it to be reliable: for some reason, with both Immich and Nextcloud, a relative of mine was unable to download a lot of photos. On Nextcloud the download would not even start, and on Immich it stopped 30% of the way through. If reliable downloads necessitate a desktop app with its own file-exchange protocol, I would be OK with that too (willing to compromise on the desired aesthetic and minimalist design).

The ideal thing is exactly what's here: https://cdimage.debian.org/debian-cd/
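To give a concrete idea of roughly what I mean: a plain web server with directory listing turned on gets pretty close to that look. A rough docker-compose sketch (image tag, paths, and ports are just placeholders, not something I am actually running):

services:
  files:
    image: caddy:2
    # Caddy's built-in file server with --browse renders a plain directory
    # index, similar in spirit to the Debian cdimage listing
    command: caddy file-server --browse --root /srv --listen :8080
    volumes:
      - ./public:/srv:ro
    ports:
      - "8080:8080"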

[–] SpiderUnderUrBed@lemmy.zip 1 points 1 week ago* (last edited 1 week ago)

I don't think I ever set AV1 encoding inside OBS; looking at my settings, it just shows H.264:
Picture

[–] SpiderUnderUrBed@lemmy.zip 1 points 2 weeks ago (3 children)

I tried nvidia-offload, since I set up PRIME a while ago, but it didn't help. Here are the logs, if they're useful: https://pastebin.com/CiJ4Zyjw

Idk if OBS would actually respect the GPU being handed to it, or if it'll do something weird with screen capture. It's weird that per-GPU settings are not an option in OBS; if this is an OBS bug, I can file a bug report. Hopefully it can be resolved here.

[–] SpiderUnderUrBed@lemmy.zip 1 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

I have the Hyprland portals installed, and the KDE ones; due to some issue I had to explicitly install them, so idk if that will mess with the way applications handle it (assuming not). And yes, I have two GPUs, one dGPU and one iGPU, and the dGPU is directly connected to my HDMI. Does OBS struggle with 2 GPUs? Still, that sounds like it would be an issue with capturing the monitor managed by my iGPU, not a reason to stop a second PipeWire capture.

What logs do you need? I provided some from running OBS, but I assume that isn't enough. What logs should I collect, or is there a flag I need to run OBS with?
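Is this the kind of thing you mean? (I am guessing at the flag and paths here; this assumes a non-Flatpak install.)

# run OBS from a terminal with more detailed logging and watch the output
obs --verbose
# the full log files OBS writes on Linux usually live here
ls ~/.config/obs-studio/logs/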

 

How come when I try to create a new OBS screen capture, it is black whether or not I toggle the visibility off on Screen Capture? And how do I get it to show the capture settings, like which monitor or what portion of the screen? To be clear, the first capture works; for some reason no other capture I try to create lets me configure or display anything.

^ Image
https://pastebin.com/AzKCZ8Tt
^ Logs
https://imgur.com/a/K7pMA4p
^ Video
There is a chance this might be related to another issue I had, but I don't know a fix (I have to manually add which portals I want to install due to a bug, but I have the Plasma portals, so that should be enough?).
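Related to that: from what I understand, the portal backend can be pinned per interface in ~/.config/xdg-desktop-portal/portals.conf, assuming a new enough xdg-desktop-portal; roughly this sketch (backend names are just examples):

# ~/.config/xdg-desktop-portal/portals.conf (needs xdg-desktop-portal 1.17+)
[preferred]
default=kde
org.freedesktop.impl.portal.ScreenCast=kde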

[–] SpiderUnderUrBed@lemmy.zip 4 points 2 weeks ago

Weird that it's called a clicker when you can do key presses too, but I'll check it out. It looks like it fulfills my use case.

[–] SpiderUnderUrBed@lemmy.zip 2 points 2 weeks ago (1 children)

If it works on Wayland, then yes.

[–] SpiderUnderUrBed@lemmy.zip 3 points 2 weeks ago (1 children)

What's the difference between this and ydotool?

[–] SpiderUnderUrBed@lemmy.zip 3 points 2 weeks ago (4 children)

Unfortunately it ain't a GUI.

 

There is xclicker, which is a Flatpak app, but it only automates mouse clicks; there is nothing for key presses. I am surprised I could not find anything on this, but is there any GUI for this? Also, is this possible on a technical level (in Flatpak especially, I don't know if apps can simulate key presses)? I know of ydotool, but that uses root, and it's not a GUI.
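For completeness, the non-GUI route I already know about looks roughly like this with ydotool (it needs the ydotoold daemon running as root, which is what I want to avoid; the keycode is just an example):

# type a string into the focused window
ydotool type "hello world"
# press and release Enter (Linux keycode 28)
ydotool key 28:1 28:0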

[–] SpiderUnderUrBed@lemmy.zip 9 points 4 weeks ago (1 children)

Would Signal also work?

 

From both a technical perspective and in terms of whether the maintainers of these anti-cheats will consider porting or rewriting kernel-level anti-cheat to work on Linux, is it possible? Do you think the maintainers of kernel-level anti-cheat will be adamant about not doing it, or that the kernel even supports it or will support it? I think that if it ever happens, there will be an influx of people moving to Linux or abandoning their dual boots, and that a lot of people will hate that such a thing is available on Linux.

 

Hello, I am looking for an alternative to HAProxy, as the GUI options for it are both third-party and not very good looking. Also, I just want to know about the alternatives. What I am looking for in a high-availability setup is the ability to detect if a server is offline and route to other servers, as well as other HA goodies.
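To make the requirement concrete, this is roughly the behaviour I mean, sketched in Traefik's dynamic file config (service name and backend URLs are placeholders): active health checks, with traffic only going to servers that pass them.

http:
  services:
    my-app:
      loadBalancer:
        # mark a backend unhealthy if /health doesn't answer in time,
        # and keep routing requests to the remaining servers
        healthCheck:
          path: /health
          interval: 10s
          timeout: 3s
        servers:
          - url: "http://10.0.0.11:8080"
          - url: "http://10.0.0.12:8080"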

[–] SpiderUnderUrBed@lemmy.zip 21 points 1 month ago (2 children)

I feel like GitHub should have verified repositories.

 

Title. I am unsure if games are using my GPU or my CPU, or maybe my GPU through my CPU; I do not know. Something is using my GPU, but I think it's just KDE Plasma, and I would like to know definitively how to find out.
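Would watching per-process GPU usage count as definitive? (These assume an NVIDIA dGPU and that nvtop is installed.)

# per-process view of what is using the NVIDIA GPU, refreshed every second
watch -n 1 nvidia-smi
# nvtop also covers Intel/AMD iGPUs on most setups
nvtop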

[–] SpiderUnderUrBed@lemmy.zip 1 points 2 months ago

Never mind, fixed. This is what I tried applying (or maybe I should have waited a bit and it might have worked regardless); just in case it's useful to anyone:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
            ttl 60
            reload 15s
            fallthrough
        }
        prometheus :9153
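        # forward anything outside the cluster to public resolvers
        # instead of the node's /etc/resolv.conf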
        forward . 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4
        cache 30
        loop
        reload
        loadbalance
    }

The issue is solved now, thanks

[–] SpiderUnderUrBed@lemmy.zip 2 points 2 months ago (2 children)

OK so, I think it was running on the wrong node and using that node's resolv.conf, which I did not update, but I am getting a new issue:

2025-05-02T21:42:30Z INF Starting tunnel tunnelID=72c14e86-612a-46a7-a80f-14cfac1f0764
2025-05-02T21:42:30Z INF Version 2025.4.2 (Checksum b1ac33cda3705e8bac2c627dfd95070cb6811024e7263d4a554060d3d8561b33)
2025-05-02T21:42:30Z INF GOOS: linux, GOVersion: go1.22.5-devel-cf, GoArch: arm64
2025-05-02T21:42:30Z INF Settings: map[no-autoupdate:true]
2025-05-02T21:42:30Z INF Environmental variables map[TUNNEL_TOKEN:*****]
2025-05-02T21:42:30Z INF Generated Connector ID: 7679bafd-f44f-41de-ab1e-96f90aa9cc34
2025-05-02T21:42:40Z ERR Failed to fetch features, default to disable error="lookup cfd-features.argotunnel.com on 10.90.0.10:53: dial udp 10.90.0.10:53: i/o timeout"
2025-05-02T21:43:30Z WRN Unable to lookup protocol percentage.
2025-05-02T21:43:30Z INF Initial protocol quic
2025-05-02T21:43:30Z INF ICMP proxy will use 10.60.0.194 as source for IPv4
2025-05-02T21:43:30Z INF ICMP proxy will use fe80::eca8:3eff:fef1:c964 in zone eth0 as source for IPv6

2025-05-02T21:42:40Z ERR Failed to fetch features, default to disable error="lookup cfd-features.argotunnel.com on 10.90.0.10:53: dial udp 10.90.0.10:53: i/o timeout"

kube-dns usually isn't supposed to give an i/o timeout when going to external domains; I'm pretty sure it's supposed to forward them to another DNS server, or do I have to configure that?
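In case it narrows things down, the two checks I can think of (pod image and name are just placeholders): what CoreDNS forwards external names to, and whether the kube-dns service IP answers an external lookup at all from inside the cluster.

# show the active CoreDNS config, including the forward directive
kubectl -n kube-system get configmap coredns -o yaml
# test an external lookup against the kube-dns service IP from a throwaway pod
kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup cfd-features.argotunnel.com 10.90.0.10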

 

https://pastebin.com/gqPLwSFq

^ output of my resolv.conf and cloudflare logs

kube-system kube-dns ClusterIP 10.90.0.10 <none> 53/UDP,53/TCP,9153/TCP 2d15h

^ my service ip for kubedns

https://pastebin.com/BCBhh8aj

^ my cloudflare config

How come, despite there being no mention of 8.8.8.8 on my system, in any DNS file for kube-dns, or in my resolv.conf, tunnels is now incorrectly trying to use that to resolve internal IPs? It does not make any sense.

I think internal DNS resolution is overall working fine; here is an example of me accessing Traefik from one of my pods:

spiderunderurbed@raspberrypi:~/k8s $ kubectl exec -it wordpress-7767b5d9c4-qh59n -- curl traefik.default.svc.cluster.local 
404 page not found
spiderunderurbed@raspberrypi:~/k8s $ 

^ means Traefik was reached, which makes sense since it's my ingress, and there is nothing about 8.8.8.8 in there; it might be baked into my CF.
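One more thing I can check (assuming the deployment is still named tunnel, as in my earlier config) is which resolvers the cloudflared pod itself ended up with:

kubectl exec -it deploy/tunnel -- cat /etc/resolv.conf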

 

Title. I am in a k8s cluster and I constantly get DNS issues in it. For some reason it is using my resolv.conf, and since I have Tailscale, which tends to overwrite my resolv.conf, I don't think there is a way to fix it. Also, I have multiple clusters and I don't know exactly how to use the proper names. I set up stuff like PowerDNS, but I want some of Cloudflare Tunnel's queries to go through this nameserver while some go through other nameservers. The way k8s handles DNS internally leads to some weird stuff where, if a DNS server says NXDOMAIN, it won't try the next one, and just general buggy behavior.

(To better explain: I don't have direct IPs in my tunnel configuration, as I would have to change them often; I use DNS names and intend to continue using DNS.)
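To sketch what I mean by split DNS (the zone name and the PowerDNS IP here are made up): a CoreDNS Corefile can have one server block per zone, so one zone could go to PowerDNS and everything else upstream.

# hypothetical internal zone served by PowerDNS
homelab.internal:53 {
    errors
    forward . 10.43.0.20
}
# everything else goes to public resolvers
.:53 {
    errors
    forward . 1.1.1.1 1.0.0.1
    cache 30
}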

[–] SpiderUnderUrBed@lemmy.zip 10 points 2 months ago

??? He said he talked to the principal multiple times

 

So I need help with a split-DNS approach, or a direct fix. Normally, when running my tunnel with the simplest configuration, I get this error:


Couldn't resolve SRV record &{region1.v2.argotunnel.com. 7844 1 1}: lookup region1.v2.argotunnel.com. on 10.43.0.10:53: read udp 172.16.91.156:54443->10.43.0.10:53: i/o timeout

When I tried changing the nameserver to Cloudflare to make it reachable, I get this error:

2025-04-07T10:06:38Z ERR  error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: dial tcp: lookup traefik on 1.1.1.1:53: no such host" connIndex=3 event=1 ingressRule=3 originService=http://traefik/
2025-04-07T10:06:38Z ERR Request failed error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: dial tcp: lookup traefik on 1.1.1.1:53: no such host" connIndex=3 dest=https://nextcloud.spidershomelab.xyz/index.php/204 event=0 ip=198.41.200.233 type=http

This is my cloudflared deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tunnel
  labels:
    app: tunnel
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tunnel
  template:
    metadata:
      labels:
        app: tunnel
    spec:
      dnsPolicy: None
      dnsConfig:
        nameservers:
          - 1.1.1.1
          - 10.43.0.10
#        searches:
#          - default.svc.cluster.local
      hostNetwork: true
      containers:
        - name: tunnel
          image: cloudflare/cloudflared:latest
          args:
            - tunnel
            - --no-autoupdate
            - run
          env:
            - name: TUNNEL_TOKEN
              valueFrom:
                configMapKeyRef:
                  name: env
                  key: CLOUDFLARE_TUNNEL_TOKEN
      restartPolicy: Always

Anyone know why cf tunnels is asking the wrong DNS server? I know I specified 1.1.1.1, but it should have also asked kube-dns, since I specified its IP. I do have to specify the 1.1.1.1 nameserver or else it does not work; it won't be able to connect to their argotunnel domain without going through 1.1.1.1.


kube-dns   ClusterIP   10.43.0.10   <none>        53/UDP,53/TCP,9153/TCP   12d

Also, it's the correct IP. If you can't give direct advice, I would like you to try this deployment and add a custom DNS server that, idk, configures it so that the right queries go to 1.1.1.1 and the rest to kube-dns. I tried CoreDNS and other DNS servers and I couldn't get anything to work. I am trying the nameserver 1.1.1.1 because otherwise I get the error mentioned above. And no, I am not running a firewall or anything that should block it outside of k8s, as it runs perfectly fine on the host.
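One thing I have not ruled out, since the deployment uses hostNetwork: true: from what I understand, a hostNetwork pod only keeps using kube-dns if the DNS policy is ClusterFirstWithHostNet, roughly as in this sketch, rather than dnsPolicy: None with a manual nameserver list.

spec:
  template:
    spec:
      hostNetwork: true
      # ClusterFirstWithHostNet lets a hostNetwork pod resolve through kube-dns,
      # which can then forward non-cluster names to whatever upstream it is configured with
      dnsPolicy: ClusterFirstWithHostNet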

 
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: strip-first-prefix
  namespace: default
spec:
#  replacePathRegex:
#    regex: "^/[^/]+(.*)"
#    replacement: "$1"
  stripPrefix:
    prefixes:
      #- "/dashboard"
      #- "/api"
      - "/gitea"
      - "/wordpress"
      - "/vaultwarden"
      - "/pdns"
      - "/glance"
      - "/immich"

So I have an issue. Whenever I accessed my services via 192.168.1.22/wordpress, for example, it forwarded that /wordpress to the actual WordPress domain, leading to a page not found. However, when I strip the initial prefix, I can access the base page, but when, let's say, WordPress wants any CSS or assets, it will look at 192.168.1.22/assets, which won't work. So basically, I need a way to sort of emulate the URL paths, so it won't take actual queries to places that don't exist and try to access resources the incorrect way. I know siteURL exists for WP, but I want a catch-all solution which helps my other services.
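For context, this is roughly how the middleware gets attached (service name and port are placeholders, not my exact routes):

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: wordpress-path
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/wordpress`)
      kind: Rule
      # strip /wordpress before handing the request to the service; the app
      # itself still needs to know its base path for assets (e.g. siteURL in WP)
      middlewares:
        - name: strip-first-prefix
      services:
        - name: wordpress
          port: 80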
