[–] corroded@lemmy.world 13 points 1 week ago (2 children)

It's actually surprising how much just having a person in the room can alter the temperature and humidity levels. In my master bathroom, I have my bathroom fan set to activate when the dew point reaches a certain level (I've found that dew point produces better results than just humidity); the idea is that the bathroom will be ventilated when someone takes a shower and for however long it takes for the humidity to dissipate after they're done. The funny thing is that every so often, I'll take an excessively long poop (let's be honest, I'm scrolling on my phone), and the fan will kick on. Just being in the bathroom will alter the dew point enough that it triggers the fan.
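
For the curious: the dew point itself is easy to compute from temperature and relative humidity with the Magnus approximation. A rough Python sketch of the logic my automation boils down to (the threshold and inputs here are example values, not my real settings):

```python
import math

def dew_point_c(temp_c: float, rel_humidity: float) -> float:
    """Magnus-formula approximation of the dew point in degrees Celsius."""
    b, c = 17.62, 243.12
    gamma = math.log(rel_humidity / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

# Example threshold; the real trigger value gets tuned to the room.
FAN_ON_THRESHOLD_C = 13.0

if dew_point_c(24.0, 65.0) > FAN_ON_THRESHOLD_C:
    print("dew point above threshold -> run the exhaust fan")
```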

I also have a room that contains all my server/networking equipment. It's climate-controlled, and I'm constantly monitoring temperatures. The times that I'm in the room working, I can see a noticeable spike in the temperature graph, even though the only variable that's changed is that there's a person in the room.

So my point is: OP might not have been having fun that night; it's entirely possible someone just came in and went to bed.

[–] corroded@lemmy.world 19 points 3 weeks ago (11 children)

Why is kernel-level anti-cheat even a thing?

If I were trying to prevent cheating, I'd hash the relevant game files, encrypt the values, and hard-code them into the executable. Then, when the game is launched, calculate the hash of the existing files and compare it to the saved values.
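
A rough user-mode sketch of what I mean (the file names and hashes are placeholders, not from any real game):

```python
import hashlib
from pathlib import Path

# Placeholder paths and hashes; in practice these would be baked into the
# executable (and obfuscated/encrypted) at build time.
EXPECTED_HASHES = {
    "data/weapons.dat": "<sha256-of-weapons.dat>",
    "data/maps.pak": "<sha256-of-maps.pak>",
}

def files_unmodified(game_dir: str) -> bool:
    """Return True only if every tracked file still matches its stored hash."""
    for rel_path, expected in EXPECTED_HASHES.items():
        actual = hashlib.sha256(Path(game_dir, rel_path).read_bytes()).hexdigest()
        if actual != expected:
            return False
    return True

if not files_unmodified("."):
    raise SystemExit("Game files failed the integrity check.")
```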

What is gained by running anti-cheat in kernel mode? I only play single-player games, so I assume I'm missing something.

[–] corroded@lemmy.world -4 points 1 month ago* (last edited 1 month ago) (13 children)

If you're sick with something that's non-transmissible, then it's on you to decide if you want to go to work or not.

If you're sick with something contagious, then I don't care who you are, you're a horrible excuse for a human being if you go to work.

[–] corroded@lemmy.world 4 points 1 month ago (1 children)

At least for me, the whole "made by devs, for devs" thing isn't really the major downfall. It's the fact that it can't be trusted to remain functional in a dynamic environment. I like using the command line, but sometimes that's just not enough.

If I need a specific software package, I can download the source and compile it, along with the 100 libraries they chose not to include in the .tar.gz file, and eventually get it running.

However, when I do an "apt update" and enough changes, the binary I compiled earlier stops working. Then I spend hours trying to recompile it along with its dependencies, only to find that it doesn't support some obscure sub-version of a package that got installed along with the latest security updates.

In a static environment, where I will never change settings or install software (like my NAS), it's perfect. On my desktop PC, I just want it to work well enough so I can tinker with other things. I don't want to have to troubleshoot why Gnome or KDE isn't working with my video drivers when all I want to do is launch remote desktop so I can tinker with stuff on a server that I actually want to tinker with.

[–] corroded@lemmy.world 14 points 1 month ago* (last edited 1 month ago) (14 children)

I can only speak for myself, but I have always had bad luck with Linux on desktop. Something always breaks, isn't compatible, or requires a lengthy installation process involving compiling multiple libraries because no .deb or .rpm is available.

On servers, it's fantastic. If you count VMs, I have far more Linux installations than Windows. In general, I use Win10 LTSC for anything that requires a GUI and Ubuntu Server for anything that only needs CLI or hosts a web interface.

[–] corroded@lemmy.world 25 points 1 month ago (34 children)

Win10 LTSC still has quite a few years left.

[–] corroded@lemmy.world 12 points 1 month ago

Having my status turn yellow when I so much as look away from my screen is bad enough. I really hope this "feature" stays off.

[–] corroded@lemmy.world 8 points 1 month ago (11 children)

How does Teams give away your location? I've used it extensively, but I've never seen someone's location unless they enter it manually.

[–] corroded@lemmy.world 27 points 2 months ago (2 children)

When it comes to writing code, there is a huge difference between code that works and code that works *well*. Let's say you're tasked with writing a function that takes an array of RGB values and converts them to grayscale. ChatGPT is probably going to give you two nested loops that iterate over the X and Y values, applying a grayscale transformation to each pixel. This will get the job done, but it's slow, inefficient, and generally not well-suited for production code. An experienced programmer is going to take into account possible edge cases (what if a color value is outside the 0-255 range?), apply SIMD functions and parallel algorithms, factor in memory management (do we need a new array, or can we write back to the input array?), etc.
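
As a toy illustration of the difference (assuming 8-bit RGB input, with NumPy vectorization standing in for proper SIMD): the first function is the kind of thing an LLM tends to hand you, the second is closer to what you'd actually want to ship.

```python
import numpy as np

def grayscale_loops(img: np.ndarray) -> np.ndarray:
    """The naive nested-loop version: works, but slow for large images."""
    h, w, _ = img.shape
    out = np.empty((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            r, g, b = img[y, x]
            out[y, x] = int(0.299 * r + 0.587 * g + 0.114 * b)
    return out

def grayscale_vectorized(img: np.ndarray) -> np.ndarray:
    """Same luminance weights, applied to the whole array at once, with the
    out-of-range edge case handled by clipping to 0-255."""
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    gray = img.astype(np.float32) @ weights
    return np.clip(gray, 0, 255).astype(np.uint8)
```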

ChatGPT is great for experienced programmers to get new ideas; I use it as a modern version of "rubber ducky" debugging. The problem is that corporations think that LLMs can replace experienced programmers, and that's just not true. Sure, ChatGPT can produce code that "works," but it will fail at edge cases and will generally be inefficient and slow.

[–] corroded@lemmy.world 1 points 2 months ago

There are really two reasons ECC is a "must-have" for me.

  • I've had some variant of a "homelab" for probably 15 years, maybe more. For a long time, I was plagued with crashes, random errors, etc. Once I stopped using consumer-grade parts and switched over to actual server hardware, these problems went away completely. I can actually use my homelab as the core of my home network instead of just something fun to play with. Some of this improvement is probably due to better power supplies, storage, server CPUs, etc, but ECC memory could very well play a part. This is just anecdotal, though.
  • ECC memory has saved me before. One of the memory modules in my NAS went bad; ECC detected the error, corrected it, and TrueNAS sent me an alert. Since most of the RAM in my NAS is used for a ZFS cache, this likely would have caused data loss had I been using non-error-corrected memory. Because I had ECC, I was able to shut down the server, pull the bad module, and start it back up with maybe 10 minutes of downtime as the worst result of the failed module.

I don't care about ECC in my desktop PCs, but for anything "mission-critical," which is basically everything in my server rack, I don't feel safe without it. Pfsense is probably the most critical service, so whatever machine is running it had better have ECC.

I switched from bare-metal to a VM for largely the same reason you did. I was running Pfsense on an old-ish Supermicro server, and it was pushing my UPS too close to its power limit. It's crazy to me that yours only pulled 40 watts, though; I think I saved about 150-175W by switching it to a VM. My entire rack contains a NAS, a Proxmox server, a few switches, and a couple of other miscellaneous things. Total power draw is about 600-650W, and jumps over 700W under a heavy load (file transfers, video encoding, etc). I still don't like the idea of having Pfsense on a VM, though; I'd really like to be able to make changes to my Proxmox server without dropping connectivity to the entire property. My UPS tops out at 800W, though, so if I do switch back to bare-metal, I realistically only have 50-75W to spare.

[–] corroded@lemmy.world 2 points 2 months ago (3 children)

Social media companies, adult websites, whatever, can try to find ways to block children from accessing their content, but kids will always find a way around it.

It's the parents' responsibility to control their children. I've said it 1000 times: children don't need access to smartphones and tablets. A desktop PC or laptop with strict parental controls is more than adequate for schoolwork, learning about technology, and some basic entertainment.

When a child is old enough to work and pay for a smartphone themselves, then they're old enough to have a smartphone. A prepaid flip phone with basic voice and SMS is more than enough for a 15-year-old.

[–] corroded@lemmy.world 4 points 2 months ago (2 children)

I have a few services running on Proxmox that I'd like to switch over to bare metal. Pfsense for one. No need for an entire 1U server, but running on a dedicated machine would be great.

Every mini PC I find is always lacking in some regard. ECC memory is non-negotiable, as is an SFP+ port or the ability to add a low-profile PCIe NIC, and I'm done buying off-brand Chinese crap on Amazon.

If someone with a good reputation makes a reasonably-priced mini PC with ECC memory and at least some way to accept a 10Gb DAC, I'll probably buy two.

 

This is more "home networking" than "homelab," but I imagine the people here might be familiar with what I'm talking about.

I'm trying to understand the logic behind ISPs offering asymmetrical connections. From a usage standpoint, the vast majority of traffic goes to the end-user instead of from the end-user. From a technical standpoint, though, it seems like it would be more difficult and more expensive to offer an asymmetrical connection.

While consumers may be connected via fiber, cable, DSL, etc, I assume that the ISP has a number of fiber links to "the internet." Those links are almost surely some symmetrical standard (maybe 40 or 100Gb). So if they assume that they can support 1000 users at a certain download speed, what is the advantage of limiting the upload? If their incoming trunks can support 1000 users at 100Mb download, shouldn't it also support 1000 users at 100Mb upload since the trunks themselves are symmetrical?
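
Here's the back-of-the-envelope math I'm doing in my head, with made-up numbers (a 100Gb symmetric trunk and 1000 users on 100Mb plans; none of this is based on a real ISP):

```python
# All numbers here are assumptions, not anything I know about a real ISP.
TRUNK_GBPS = 100          # one symmetric upstream trunk
USERS = 1000
PLAN_DOWN_MBPS = 100
PLAN_UP_MBPS = 100        # what a symmetric plan would look like

down_demand_gbps = USERS * PLAN_DOWN_MBPS / 1000
up_demand_gbps = USERS * PLAN_UP_MBPS / 1000

print(f"download demand vs trunk: {down_demand_gbps / TRUNK_GBPS:.1f}x")
print(f"upload demand vs trunk:   {up_demand_gbps / TRUNK_GBPS:.1f}x")
# Both ratios come out identical, which is why capping the upload doesn't
# look like it frees up any download capacity on the trunk itself.
```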

Limiting the upload speed to a different rate than download seems like it would just add a layer of complexity. I don't see a financial benefit either; if their links are already saturated for download, reducing upload speed doesn't help them add additional users. Upload bandwidth doesn't magically turn into download bandwidth.

Obviously there's some reason for this, but I can't think of one.

 

A few months ago, I upgraded all my network switches. I have a 16-port SFP+ switch and a 1Gb switch (LAGGed to the SFP+ switch with two DACs). These work perfectly, and I'm really happy with the setup so far.

My main switch ties into a remote switch in another building over a 10Gb fiber line, and this switch ties into another switch of the same model (on a different floor) over a Cat6e cable. These switches are absolute garbage: https://www.amazon.com/gp/product/B084MH9P8Q

I should have known better than to buy a cheap off-brand switch, but I had hoped that Zyxel was a decent enough brand that I'd be okay. Well, you get what you pay for, and that's $360 down the toilet. I constantly have dropped connections, generally resulting in any attached devices completely losing network connectivity, or if I'm lucky, dropping down to dial-up speeds (I'm not exaggerating). The only way to fix it is to pull the power cable to the switch. Even under virtually no load, the switch gets so hot that it's painful to touch. The fact that my connection is far more stable when the switch is sitting directly in front of an air conditioner tells me just about all I need to know.

I'm trying to find a pair of replacement switches, but I'm really striking out. I have two ancient Dell PowerConnect switches that are rock solid, but they're massive, they sound like jet engines, and they use a huge amount of power. Since these are remote from my homelab and live in occupied areas, they just won't work. All I need is a switch that has:

  • At least 2 SFP+ ports (or 1 SFP+ port for fiber and a 10Gb copper port)
  • At least 4 1Gb ports (or SFP ports; I have a pile of old 1Gb SFP adapters)
  • Management/VLAN capability

Everything I find online is either Chinese white-label junk or is much larger than what I need. A 16-port SFP+ switch would work, but I'd never use most of the ports, and I'd be wasting a lot of money on overkill hardware. As an example, one of these switches is in my home office; it exists solely so I have a connection between my server rack, two PCs, and a single WAP. I am never going to need another LAN connection in my home office; any hardware is going to go in the server rack, but I do need 10Gb connectivity on at least one of those PCs.

Does anyone have a suggestion for a small reliable switch that has a few SFP+ ports, is made by a reputable brand, and isn't a fire hazard?

 

This isn't strictly "homelab" related, but I'm not sure if there's a better community to post it.

I'm curious what kind of real-world speeds everyone is getting over their wireless network. I was testing tonight, and I'm getting a max of 250Mbit down/up on my laptop. I have 4 UniFi APs, each set to 802.11ac/80MHz, and my laptop supports 2x2 MIMO. Testing on my phone (Galaxy S23) gives basically the exact same result.

The radio spectrum around me is ideal for WiFi; on 5GHz, there is no AP in close enough range for me to detect. With an 80MHz channel width, I can space all 4 of my APs so that there's no interference (using a non-DFS channel for testing, btw).
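
For reference, here's my rough back-of-the-envelope for what the link should be capable of, assuming the laptop negotiates VHT MCS9 with a short guard interval (these are assumed link parameters, not values I've read off the APs):

```python
# Assumed 802.11ac link parameters, not measured values.
DATA_SUBCARRIERS = 234    # 80MHz channel
BITS_PER_SYMBOL = 8       # 256-QAM (MCS9)
CODING_RATE = 5 / 6
SPATIAL_STREAMS = 2
SYMBOL_TIME_US = 3.6      # 3.2us symbol + 0.4us short guard interval

phy_mbps = (DATA_SUBCARRIERS * BITS_PER_SYMBOL * CODING_RATE
            * SPATIAL_STREAMS) / SYMBOL_TIME_US
print(f"theoretical PHY rate: {phy_mbps:.0f} Mbps")         # ~867 Mbps
print(f"rough real-world TCP: {phy_mbps * 0.55:.0f} Mbps")  # ~50-60% of PHY
```

So ~450-480Mbit of real throughput would be the optimistic ceiling, which makes my 250Mbit result look low but not absurd.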

Am I wasting my time trying to chase higher speeds with my current setup? What kind of speeds are you getting on your WiFi network?

 

I've noticed recently that my network speed isn't what I would expect from a 10Gb network. For reference, I have a Proxmox server and a TrueNAS server, both connected to my primary switch with DAC. I've tested the speed by transferring files from the NAS with SMB and by using OpenSpeedTest running on a VM in Proxmox.

So far, this is what my testing has shown:

  • Using a Windows PC connected directly to my primary switch with Cat6: OpenSpeedTest shows around 2.5-3Gb to Proxmox, which is much slower than I'd expect. Transferring a file from my NAS hits a max of around 700-800MB/s (bytes, not bits), which is about what I'd expect given hard drive speed and overhead.
  • Using a Windows VM on Proxmox: OpenSpeedTest shows around 1.5-2Gb, which is much slower than I would expect. I'm using VirtIO network drivers, so I should realistically only be limited by CPU; it's all running internally in Proxmox. Transferring a file from my NAS hits a max of around 200-300MB/s, which is still unacceptably slow, even given the HDD bottleneck and SMB overhead.

The summary I get from this is:

  • The slowest transfer rate is between two VMs on my Proxmox server. This should be the fastest transfer rate.
  • Transferring from a VM to a bare-metal PC is significantly slower than expected, but better than between VMs.
  • Transferring from my NAS to a VM is faster than between two VMs, but still slower than it should be.
  • Transferring from my NAS to a bare-metal PC gives me the speeds I would expect.

Ultimately, this shows that the bottleneck is Proxmox. The more VMs involved in the transfer, the slower it gets. I'm not really sure where to look next, though. Is there a setting in Proxmox I should be looking at? My server is old (two Xeon 2650v2); is it just too slow to pass the data across the Linux network bridge at an acceptable rate? CPU usage on the VMs themselves doesn't get past 60% or so, but maybe Proxmox itself is CPU-bound?
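
One thing I can try next, to take SMB and the disks out of the equation entirely, is a bare TCP push between two VMs. A minimal sketch of that kind of test (addresses and port are placeholders for my own network):

```python
# Minimal raw TCP throughput test: run with --server on one VM, then point
# --client at its IP from another. Measures the network path only.
import argparse
import socket
import time

PORT = 5201
CHUNK = 1 << 20          # 1 MiB per send/recv
DURATION = 10            # seconds the client transmits for

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total = 0
        start = time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            total += len(data)
        elapsed = time.time() - start
        print(f"received {total * 8 / elapsed / 1e9:.2f} Gbit/s from {addr}")

def client(host: str) -> None:
    payload = b"\x00" * CHUNK
    total = 0
    start = time.time()
    with socket.create_connection((host, PORT)) as conn:
        while time.time() - start < DURATION:
            conn.sendall(payload)
            total += len(payload)
    elapsed = time.time() - start
    print(f"sent {total * 8 / elapsed / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("--server", action="store_true")
    p.add_argument("--client", metavar="HOST")
    args = p.parse_args()
    server() if args.server else client(args.client)
```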

The bulk of my network traffic is coming in and out of the VMs on Proxmox, so it's important that I figure this out. Any suggestions for testing or for a fix are very much appreciated.

 

I just set up a local instance of Invidious. I created an account, exported my YouTube subscriptions, and imported them into Invidious. The first time I tried, it imported 5 subscriptions out of 50 or so. The second time I tried, it imported 9.

Thinking there might be a problem with the import function, I decided to manually add each subscription. Every time I click "Subscribe," the button switches to "Unsubscribe," then immediately switches back to "Subscribe." If I look at my subscriptions, the new subscription was never added.

My first thought was a problem with the PostgreSQL database, but that wouldn't explain why some subscriptions work when I import them.

I tried rebooting the container, and it made no difference. I'm running Invidious in an Ubuntu 22.04 LXC container on Proxmox. I installed it manually (not with Docker). It has 100GB of HDD space, 4 CPU cores, and 8GB of memory.

What the hell is going on?

 

The majority of my homelab consists of two servers: A Proxmox hypervisor and a TrueNAS file server. The bulk of my LAN traffic is between these two servers. At the moment, both servers are on my "main" VLAN. I have separate VLANs for guests and IoT devices, but everything else lives on VLAN2.

I have been considering the idea of creating another VLAN for storage, but I'm debating if there is any benefit to this. My NAS still needs to be accessible to non-VLAN-aware devices (my desktop PC, for instance), so from a security standpoint, there's not much benefit; it wouldn't be isolated. Both servers have a 10Gb DAC back to the switch, so bandwidth isn't really a factor; even if it was, my switch is still only going to switch packets between the two servers; it's not like it's flooding the rest of my network.

Having a VLAN for storage seems like it's the "best practice," but since both servers still need to be accessible outside the VLAN, the only benefit I can see is limiting broadcast traffic, and as far as I know (correct me if I'm wrong), SMB/NFS/iSCSI are all unicast.

 

I have a decent amount of video footage that I'd like to share with friends and family. My first thought was YouTube, but this is all home videos that I really don't want to share publicly.

A large portion of my video footage is 4K/60, so I'm ideally looking for a solution where I can send somebody a link and they get a "similar to YouTube" experience when they click on it. And by "similar to YouTube," I mean that the player automatically adjusts the video bitrate and resolution based on their internet speed. Trying to explain to extended family how to lower the bitrate if the video starts buffering isn't really an option. It needs to "just work" as soon as the link is clicked; some of the individuals I'd like to share video with are very much not technically inclined.
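
From what I understand, that "just works" behavior comes from serving HLS/DASH-style renditions plus a master playlist that the player picks from on its own. A rough sketch of the transcoding side, assuming ffmpeg is available (the filenames, bitrates, and resolutions are placeholders):

```python
import subprocess

SOURCE = "home_video_4k60.mp4"            # placeholder filename
RENDITIONS = [                            # placeholder bitrates/resolutions
    ("1080p", "1920x1080", "6000k"),
    ("720p", "1280x720", "3000k"),
    ("480p", "854x480", "1200k"),
]

# Transcode each rendition into its own HLS playlist and segments.
for name, res, bitrate in RENDITIONS:
    subprocess.run([
        "ffmpeg", "-i", SOURCE,
        "-c:v", "libx264", "-b:v", bitrate, "-s", res,
        "-c:a", "aac", "-b:a", "128k",
        "-hls_time", "6", "-hls_playlist_type", "vod",
        f"{name}.m3u8",
    ], check=True)

# Master playlist: a browser player (hls.js, Safari, etc.) reads this and
# switches between renditions automatically based on available bandwidth.
with open("master.m3u8", "w") as f:
    f.write("#EXTM3U\n")
    for name, res, bitrate in RENDITIONS:
        bandwidth = int(bitrate.rstrip("k")) * 1000
        f.write(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={res}\n")
        f.write(f"{name}.m3u8\n")
```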

I'd like to host it on my homelab, but my internet connection only has a 4Mbit upload, which is orders of magnitude lower than my video bitrate, so I'm assuming I would need to either use a 3rd-party video hosting service or set up a VPS with my hosting software of choice.

Any suggestions? I prefer open-source self-hosted software, but I'm willing to pay for convenience.
