archomrade

joined 1 year ago
[–] archomrade@midwest.social 4 points 16 hours ago

Person A: *expresses a liberal perspective* "Watch as this person calls me a Lib"
Person B: "...yes, that is a common liberal perspective"
Person A: "Called it"

[–] archomrade@midwest.social 3 points 2 days ago

I'm impressed that you can handle that many jellyfin users

[–] archomrade@midwest.social 11 points 2 days ago (2 children)

The range of sophistication in this thread is actually kind of breathtaking

[–] archomrade@midwest.social 3 points 2 days ago (1 children)

I was so close to asking what the hell that thing was

[–] archomrade@midwest.social 1 points 4 days ago

Maybe it's because I've been watching too much of The Office lately, but I expected you to end this comment with:

[–] archomrade@midwest.social 4 points 4 days ago

But those reasons aren't nefarious

[–] archomrade@midwest.social 1 points 1 week ago

> Downloaders can be prosecuted.

They wouldn't go after the users, just the domains and the host servers. Similar to shutting down TPB or other tracker sites, they'd go after the site host. True enough, there wouldn't necessarily be risk to users of those sites, but if they escalated things enough (like if an authoritarian got elected and was so motivated...) they could start taking more severe punitive action. Who knows, they could amend the regulation to go after the users if they wanted - it's a dangerous precedent either way. Especially when the intent is to 'protect children', there's no limit to how far they might take it in the future.

> Blocked servers are inaccessible to adults, too, which raises freedom of information issues.

I'm not familiar with Australian law, but I don't think this really applies. Most countries with internet censorship laws don't have any guaranteed right to uncensored information. At least in the US, they don't have 'censorship' per se, but they do sometimes 'block' an offending site by seizing domains/servers/equipment, and they can force search engines to de-list them if the offense is severe enough. If the server is beyond their reach, they can prosecute or sanction the person hosting the site to pressure them into compliance. I can imagine a social media site that refuses to age-verify and hosts pornographic content (cough cough lemmy cough cough) being pursued like a CSAM site.

> Large scale piracy is illegal pretty much everywhere, meaning that the industry can go after the operators and get the servers offline. Not so here.

That doesn't mean they can't throw their weight around, bully self-hosters/small-time hobbyists, and scare them into compliance. Any western country enacting a law like this could pressure its western trade partners to comply with enforcement efforts. And anyway, it isn't necessarily about the practicality of enforcing the law so much as giving prosecutors a long leash to make a lot of noise and scare small-time hobbyists out of hosting non-compliant sites. Most people can't afford the headache, even if it isn't enforceable where they live.

[–] archomrade@midwest.social 10 points 1 week ago (2 children)

It gets banned/blocked, or sued for noncompliance for allowing Australian users without age verification. They'll play whack-a-mole for decades, just like they have been with P2P file sharing.

Like a lot of post-9/11 legislation, it's anti-privacy surveillance disguised as a way to 'protect the children'. It's absolute shit, and we should be taking measures to further anonymize our open-source social media platforms.

[–] archomrade@midwest.social 3 points 1 week ago

There's no editorial process, anyone can post anything (within the TOS).

A lot of people use it for personal vlogs and such. It might be easier to ask how it's meaningfully different from something like tiktok that makes it not social media.

[–] archomrade@midwest.social 1 points 1 week ago

Yea, no disagreement. I'm more curious whether the federated nature is what helps mitigate that risk, or if there's some other systemic distinction that has helped.

I also just don't know what the others were like long-term - did they peter out? Would I realize it if lemmy were in the same decline?

[–] archomrade@midwest.social 2 points 1 week ago (1 children)

I'm not actually sure comments get sorted by vote tally by default here.

I've always just ignored downvotes - I know when my opinion is unpopular, and I don't see the votes as validating. I'd be fine if there were no visible votes at all.

[–] archomrade@midwest.social 6 points 1 week ago (2 children)

If i could do this without my wife noticing, I'd be golden.

Unfortunately, she took to lurking some reddit communities right as I was exiting

 

edit: a working solution is proposed by @Lifebandit666@feddit.uk below:

> So you're trying to get 2 instances of qbt behind the same Gluetun vpn container?
>
> I don't use Qbt but I certainly have done in the past. Am I correct in remembering that in the gui you can change the port?
>
> If so, maybe what you could do is set up your stack with 1 instance in, go into the GUI and change the port on the service to 8000 or 8081 or whatever.
>
> Map that port in your Gluetun config and leave the default port open for QBT, and add a second instance to the stack with a different name and addresses for the config files.
>
> Restart the stack and have 2 instances.
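
For anyone finding this later, here's a minimal sketch of what that approach ends up looking like (untested as pasted; the second webui port is arbitrary, and it has to be changed once in the second instance's GUI so it matches the env var):

services:
  gluetun:
    # ...same as the config in the post below, plus one extra published port...
    ports:
      - "8080:8080" # qbittorrent #1 webui
      - "8081:8081" # qbittorrent #2 webui

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: "service:gluetun"
    environment:
      - WEBUI_PORT=8080
    volumes:
      - /docker/appdata/qbittorrent:/config
      - /media/nas_share/data:/data

  qbittorrent2:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: "service:gluetun"
    environment:
      - WEBUI_PORT=8081 # has to match the extra port published on gluetun
    volumes:
      - /docker/appdata/qbittorrent2:/config # separate config dir so the instances don't clobber each other
      - /media/nas_share/data:/data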


Has anyone run into issues with docker port collisions when trying to run images behind a bridge network (I think I got those terms right)?

I'm trying to run the arr stack behind a VPN container (gluetun, for those familiar), and I would really like to duplicate a container image within the stack (e.g. a separate download client for different types of downloads). As soon as I set network_mode to 'service' or 'container', I lose the ability to set the public/internal port of the service, which means any image that doesn't allow setting ports from an environment variable is stuck with whatever the default port is within the application.

Here's an example .yml:

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=[redacted]
      - WIREGUARD_PRIVATE_KEY=[redacted]
      - WIREGUARD_ADDRESSES=[redacted]
      - SERVER_COUNTRIES=[redacted]
    ports:
      - "8080:8080" # qbittorrent
      - "6881:6881"
      - "6881:6881/udp"
      - "9696:9696" # Prowlarr
      - "7878:7878" # Radarr
      - "8686:8686" # Lidarr
      - "8989:8989" # Sonarr
    restart: always

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: "qbittorrent"
    network_mode: "service:gluetun"
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago # CST/CDT
      - WEBUI_PORT=8080
    volumes:
      - /docker/appdata/qbittorrent:/config
      - /media/nas_share/data:/data

Declaring ports in the qbittorrent service raises an error saying you cannot set ports when using the service network mode. Linuxserver.io has a WEBUI_PORT environment variable, but using it without also publishing the matching port on the gluetun container breaks the webui (their documentation says this is due to CSRF issues and port mapping, but then why even include it as a variable?).

The only workaround I can think of is doing a local build of the image that needs duplication so that ports can be configured from the environment variables, OR running duplicate gluetun containers for each client, which seems dumb and not at all worthwhile.

Has anyone dealt with this before?

 

Anyone else get this email from Leviton about their Decora light switches and the changes to their ToS expressly permitting them to collect and use behavioral data from your devices?

FUCK Leviton, long live Zigbee and Z-Wave and all open-source standards


My Leviton

At Leviton, we’re committed to providing an excellent smart home experience. Today, we wanted to share a few updates to our Privacy Policy and Terms of Service. Below is a quick look at key changes:

We've updated our privacy policy to provide more information about how we collect, use, and share certain data, and to add more information about our users' privacy under various US and Canadian laws. For instance, Leviton works with third-party companies to collect necessary and legal data to utilize with affiliate marketing programs that provide appropriate recommendations. As well, users can easily withdraw consent at any time by clicking the links below.

The updates take effect March 11th, 2024. Leviton will periodically send information regarding promotions, discounts, new products, and services. If you would like to unsubscribe from communications from Leviton, please click here. If you do not agree with the privacy policy/terms of service, you may request removal of your account by clicking this link.

For additional information or any questions, please contact us at dssupport@leviton.com.

French translation of this Leviton email

Copyright © 2024 Leviton Manufacturing Co., Inc., All rights reserved. 201 North Service Rd. • Melville, NY 11747

Unsubscribe | Manage your email preferences

 

Pretend your only other hardware is a repurposed HP ProDesk and your budget is bottom-barrel

46
submitted 9 months ago* (last edited 9 months ago) by archomrade@midwest.social to c/linux@lemmy.ml
 

I'm currently watching the progress of a 4TB rsync file transfer, and I'm curious why the speeds are less than the theoretical read/write maximum speeds of the drives involved in the transfer. I know there's a lot that can affect transfer speeds, so I guess I'm not asking why my transfer itself isn't going faster. I'm more just curious what the typical bottlenecks are.

Assuming a file transfer between 2 physical drives, and:

  • Both drives are internal SATA III drives with ~~5.0GB/s~~ ~~5.0Gb/s read/write~~ 210MB/s (this was the mistake: I was reading the SATA III protocol speed as the disk speed - the 6Gb/s link works out to roughly 600MB/s, well above what the disks themselves can sustain)
  • files are being transferred using a simple rsync command
  • there are no other processes running

What would be the likely bottlenecks? Could the motherboard/processor be limiting the speed? The available memory? Or the file structure of the files themselves (whether they're fragmented on the volumes or not)?
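
If it helps anyone reading along, here's roughly how I'd go about isolating it (standard tools; device names are examples, adjust for your disks):

# raw sequential read speed of each disk, bypassing the filesystem
sudo hdparm -t /dev/sda
sudo hdparm -t /dev/sdb

# watch per-device utilization while the rsync runs; whichever
# disk sits near 100% util is the bottleneck
iostat -dxm 2

# compare rsync against a plain cp of one large file, to see how much
# of the gap is rsync's own checksumming/bookkeeping overhead
rsync --progress /mnt/src/bigfile /mnt/dst/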

 
  • Edit - I set the machine to work last night running memtester and badblocks (read-only); both tests came back clean, so I assumed I was in the clear. Today, wanting to be extra sure, I ran a read-write badblocks test and watched dmesg while it worked. I got the same errors, this time on ata3.00. Given that the memory test came back clean, and smartctl came back clean as well, I can only assume the problem is with the ata module, or somewhere between the CPU and the ata bus. I'll be doing a BIOS update this morning and then trying again, but it seems to me like this machine was a bad purchase. I'll see what options I have for replacement.

  • Edit 2 - I retract my last statement. It appears that only one of the drives is still having issues, which is the SSD from the original build. All write interactions with the SSD produce I/O errors (including re-partitioning the drive), while there appear to be no errors reading or writing to the HDD. Still unsure what caused the earlier issue on the HDD. Still conducting testing (running badblocks rw on the HDD; might try to reproduce the issue under heavy load). Safe to say the SSD needs repair or to be pitched. I'm curious if the SSD got damaged, which would explain why the issue remains after being zeroed out and re-written, and why the HDD now seems fine. Or maybe multiple SATA ports have failed now? (The commands I leaned on are below, for anyone curious.)
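
For anyone landing here from a search, the testing above boiled down to roughly these commands (device names are examples; the badblocks write test destroys all data on the disk):

# follow the kernel log for ata / I/O errors while exercising the disk
sudo dmesg -wT | grep -iE 'ata|i/o error'

# SMART health, attributes, and error log
sudo smartctl -a /dev/sda

# destructive read-write surface test (wipes the disk!)
sudo badblocks -wsv /dev/sda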


I have no idea if this is the right forum for these types of questions, but it felt like a murder mystery that would be a little fun to solve. Please let me know if this type of post is unwelcome and I will immediately take it down and return to lurking.

Background:

I am very new to linux. Last week I purchased a cheap refurbished headless desktop so I could build a home media server and play around with VMs and programming projects. This is my first ever exposure to linux, but I consider myself otherwise pretty tech-savvy (I dabble in python scripting in my spare time, but not much beyond that).

This week, I finally got around to getting the server software installed and operating (see details of the build below). Plex was successfully pulling from my media storage and streaming with no problems. As I was getting the docker containers up, I started getting "not enough storage" errors for new installs. I tried purging docker a couple of times and still couldn't proceed, so I attempted to expand the virtual storage in the VM. I definitely messed this up: immediately Plex stopped working, and no files were visible on the share anymore. To me, it looked as if it had attempted to take storage from the SMB share to add to the system-files partition. I/O errors on the OMV virtual machine for days.
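
(In hindsight, for anyone who finds this later: I believe the safe way to grow a VM's disk is from the Proxmox side first, then the partition and filesystem inside the guest. The VM ID and device names below are examples from my setup, so double-check yours.)

# on the proxmox host: grow VM 130's first scsi disk by 20G
qm resize 130 scsi0 +20G

# inside the guest: grow the partition, then the filesystem
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1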

Take two.

I got a new HDD (so I could keep working as I tried recovery on the SSD). I got everything back up (created whole new VMs for docker and OMV). I gave the docker VM more storage this time (I think I was just reckless with my package downloads anyway) and made sure the SMB share was properly mounted. As I got the download client running (it made a few downloads), I noticed the OMV virtual machine redlining on memory from the proxmox window. Thought: uh oh, I should fix that. I tried taking everything down so I could reboot the OMV VM with more memory allocated, but the shutdown process hung on it. I made sure all my devices on the network were disconnected, then stopped the VM from the proxmox window.

On OMV reboot, I noticed all kinds of I/O errors on both the virtual boot drive and the mounted SSD. I could still see files in the share from my LAN devices, but any attempt to interact with the folder stalled and errored out.

I powered down all the VMs and now I'm trying to figure out where I went wrong. I'm tempted to just abandon the VMs and install it all on a plain Ubuntu OS, but I like the flexibility of having the VMs to spin up new OSes and try things out. The added complexity is obviously over my head, but if I can understand it better I'll give it another go.

Here's the build info:

Build:

  • HP ProDesk 600 G1
  • intel i5
  • upgraded 32GB aftermarket DDR3-1600 Patriot RAM
  • KingFlash 250GB SSD
  • WD 4TB SSD (originally an NTFS drive from my windows PC with ~2TB of existing data)
  • WD 4TB HDD (bought this after the SSD corrupted, so I could get the server back up while I dealt with the SSD)
  • 500Mbps ethernet connection

Hypervisor

  • Proxmox (latest), Ubuntu kernel
  • VM110: Ubuntu-22.04.3-live-server amd64, OpenMediaVault 6.5.0
  • VM130: Ubuntu-22.04.3-live, docker engine, portainer
    • Containers: Gluetun, qBittorrent, Sonarr, Radarr, Prowlarr
  • LXC101: Ubuntu-22.04.3, Plex Server
  • Allocations:
    • VM110: 4GB memory, 2 cores (ballooning and swap ON)
    • VM130: 30GB memory, 4 cores (ballooning and swap ON)

Shared Media Architecture (attempt 1)

  • Direct-mounted the WD SSD to VM110. Partitioned and formatted the file system inside the GUI, created a folder share, set permissions for my share user, and shared it over SMB/CIFS
  • bind-mounted the shared folder to a local folder in VM130 (/media/data)
  • passed the mounted folder to the necessary docker containers as volumes in the docker-compose file (e.g. - volumes: /media/data:/data, etc.; the mount itself is sketched below)
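
For completeness, the share mount inside VM130 amounted to something like this (IP, share name, and credentials file are hypothetical here):

# mount the OMV SMB share where the containers expect it
sudo mkdir -p /media/data
sudo mount -t cifs //192.168.1.110/data /media/data \
    -o credentials=/root/.smbcred,uid=1000,gid=1000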

No shame in being told I did something incredibly dumb; I'm here to learn, anyway. Maybe just not learn in a way that destroys 6 months of dvd rips in the process.
