corroded

joined 2 years ago
[–] corroded@lemmy.world 63 points 2 weeks ago (10 children)

The last I read, de minimis still applied. I didn't know until now that it had been done away with.

As an avid collector of vinyl records: FUCK! I've got no problem sending $50 to a European artist who's selling a limited run of records out of their living room. Hell, if it's an artist I really like, I'll spend $70. But I'm not about to spend $70 only for the artist to get half of it.

Spending ludicrous amounts of cash on 12-inch pieces of plastic is totally fine with me, but I want my money going to the artist who's making the music I love, not to a government I voted against.

[–] corroded@lemmy.world 12 points 3 weeks ago (1 children)

Improve your what and do what? I have no idea what that means.

[–] corroded@lemmy.world 17 points 1 month ago* (last edited 1 month ago)

I know what one of the three words in the title actually means. If you want to know what a word means, you consult a dictionary. If people are actually using these words, it kind of makes sense to add their definitions.

[–] corroded@lemmy.world 3 points 1 month ago (1 children)

So many people completely miss the mark when it comes to AI and coding. It's great for code reviews on code you wrote yourself, and it can be handy when you're developing code for a domain you don't have much experience in.

What it is not good for is writing code on its own. Not if you want your code to be efficient, performant, correct, or even to compile.

[–] corroded@lemmy.world 82 points 1 month ago (27 children)

If you don't want your conversations to be public, how about you don't tick the checkbox that says "make this public"? This isn't OpenAI's problem; it's an idiot-user problem.

[–] corroded@lemmy.world 9 points 1 month ago (4 children)

What's the deal with gaming videos? Do game streamers tend to be Nazis? Seems like a strange place to push right-wing propaganda.

[–] corroded@lemmy.world 22 points 1 month ago (5 children)

This makes me think that the Starlink system is very poorly designed. I know there are thousands of satellites and a large number of base stations.

Even if a large chunk of the satellites were taken out and a few base stations failed, shouldn't the system keep working, just over a different path?

This sounds less like a hardware failure and more like somebody fucked up.

[–] corroded@lemmy.world -5 points 2 months ago

The biggest problem I have with what this woman did is that she used bear mace. Should have been a handgun, or even a machete, or perhaps a flamethrower.

[–] corroded@lemmy.world 16 points 2 months ago (2 children)

I really don't understand this. What does the Army gain by commissioning tech execs as reserve officers? Wouldn't it be far more effective to just hire their companies as contractors, or to commission high-level engineers as officers? A tech exec's skill set is running a company. Sure, offer commissions to their most skilled employees, but why the execs themselves?

[–] corroded@lemmy.world 60 points 3 months ago (2 children)

Not really. While I don't have the exact numbers, the output of an infrared LED is (usually) no higher than that of an LED in the visible range. My security cameras have an array of 10 or so LEDs.

So looking at a security camera would be roughly equivalent to staring at a light bulb.

[–] corroded@lemmy.world 10 points 4 months ago (2 children)

Why? If everyone does poorly, everyone should fail, provided the opportunity to learn was there.

[–] corroded@lemmy.world -1 points 4 months ago (6 children)

This has always seemed overblown to me. If students want to cheat on their coursework, who cares? As long as exams are given in a controlled environment, it's going to be painfully obvious who actually studied the material and who had ChatGPT do it for them. Re-taking a course is not going to be fun or cheap.

Maybe I'm oversimplifying this, but it feels like proctored testing solves the entire problem.

 

This is more "home networking" than "homelab," but I imagine the people here might be familiar with what I'm talking about.

I'm trying to understand the logic behind ISPs offering asymmetrical connections. From a usage standpoint, the vast majority of traffic goes to the end-user instead of from the end-user. From a technical standpoint, though, it seems like it would be more difficult and more expensive to offer an asymmetrical connection.

While consumers may be connected via fiber, cable, DSL, etc., I assume that the ISP has a number of fiber links to "the internet." Those links are almost surely some symmetrical standard (maybe 40 or 100Gb). So if they assume they can support 1000 users at a certain download speed, what is the advantage of limiting the upload? If their incoming trunks can support 1000 users at 100Mb download, shouldn't they also support 1000 users at 100Mb upload, since the trunks themselves are symmetrical?

Limiting the upload speed to a different rate than download seems like it would just add a layer of complexity. I don't see a financial benefit either; if their links are already saturated for download, reducing upload speed doesn't help them add additional users. Upload bandwidth doesn't magically turn into download bandwidth.
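To put numbers on my own reasoning, here's the contention math as I understand it (all figures hypothetical, just to illustrate the point):

```python
# Back-of-the-envelope contention math. All figures hypothetical.
TRUNK_GBPS = 100  # symmetric trunk: 100Gb/s of capacity in EACH direction

def users_supported(per_user_mbps, contention_ratio=1.0):
    """Users the trunk can sell at a given rate and contention ratio."""
    return int(TRUNK_GBPS * 1000 * contention_ratio / per_user_mbps)

# Uncontended, the trunk carries 1000 users at 100Mb/s in either direction,
# since each direction has its own 100Gb/s:
print(users_supported(100))        # 1000
# Even at a 20:1 contention ratio (a common but hypothetical figure),
# the symmetry doesn't change; the ratio applies equally to upload:
print(users_supported(100, contention_ratio=20))    # 20000
```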

Obviously there's some reason for this, but I can't think of one.

 

A few months ago, I upgraded all my network switches. I have a 16-port SFP+ switch and a 1Gb switch (LAGGed to the SFP+ switch with two DACs). These work perfectly, and I'm really happy with the setup so far.

My main switch ties into a remote switch in another building over a 10Gb fiber line, and this switch ties into another switch of the same model (on a different floor) over a Cat6e cable. These switches are absolute garbage: https://www.amazon.com/gp/product/B084MH9P8Q

I should have known better than to buy a cheap off-brand switch, but I had hoped that Zyxel was a decent enough brand that I'd be okay. Well, you get what you pay for, and that's $360 down the toilet. I constantly have dropped connections, generally resulting in any attached devices completely losing network connectivity or, if I'm lucky, dropping down to dial-up speeds (I'm not exaggerating). The only way to fix it is to pull the power cable to the switch. Even under virtually no load, the switch gets so hot that it's painful to touch. The fact that my connection is far more stable when the switch is sitting directly in front of an air conditioner tells me just about all I need to know.

I'm trying to find a pair of replacement switches, but I'm really striking out. I have two ancient Dell PowerConnect switches that are rock solid, but they're massive, they sound like jet engines, and they use a huge amount of power. Since these are remote from my homelab and live in occupied areas, they just won't work. All I need is a switch that has:

  • At least 2 SFP+ ports (or 1 SFP+ port for fiber and a 10Gb copper port)
  • At least 4 1Gb ports (or SFP ports; I have a pile of old 1Gb SFP modules)
  • Management/VLAN capability

Everything I find online is either Chinese white-label junk or much larger than what I need. A 16-port SFP+ switch would work, but I'd never use most of the ports, and I'd be wasting a lot of money on overkill hardware. As an example, one of these switches is in my home office; it exists solely to connect my server rack, two PCs, and a single WAP. I'm never going to need another LAN connection in my home office; any new hardware will go in the server rack, but I do need 10Gb connectivity on at least one of those PCs.

Does anyone have a suggestion for a small reliable switch that has a few SFP+ ports, is made by a reputable brand, and isn't a fire hazard?

 

This isn't strictly "homelab"-related, but I'm not sure there's a better community to post it in.

I'm curious what kind of real-world speeds everyone is getting over their wireless network. I was testing tonight, and I'm getting a max of 250Mbit/s down and up on my laptop. I have 4 Unifi APs, each set to 802.11ac with an 80MHz channel width, and my laptop supports 2x2 MIMO. Testing on my phone (Galaxy S23) gives basically the same result.

The radio spectrum around me is ideal for WiFi; on 5GHz, there is no other AP in close enough range for me to detect. With an 80MHz channel width, I can space all 4 of my APs so that there's no interference (using a non-DFS channel for testing, by the way).
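For reference, here's my back-of-the-envelope math on what 2x2 802.11ac at 80MHz should give, assuming the link actually negotiates MCS9 with a short guard interval (which isn't guaranteed):

```python
# 802.11ac (VHT) PHY rate = data subcarriers * bits/symbol * coding rate
# * spatial streams / OFDM symbol duration. Constants are from the standard.
DATA_SUBCARRIERS_80MHZ = 234
SYMBOL_US_SHORT_GI = 3.6  # 3.2us symbol + 0.4us short guard interval

def vht_phy_rate_mbps(streams, bits_per_symbol, coding_rate):
    bits_per_ofdm_symbol = (DATA_SUBCARRIERS_80MHZ * bits_per_symbol
                            * coding_rate * streams)
    return bits_per_ofdm_symbol / SYMBOL_US_SHORT_GI  # bits/us == Mbit/s

mcs9 = vht_phy_rate_mbps(streams=2, bits_per_symbol=8, coding_rate=5/6)  # 256-QAM 5/6
print(f"MCS9 2x2 80MHz PHY rate: {mcs9:.0f} Mbit/s")  # ~867
print(f"Realistic TCP goodput (~50-65% of PHY): "
      f"{0.5 * mcs9:.0f}-{0.65 * mcs9:.0f} Mbit/s")
```

If that math is right, 250Mbit/s would mean I'm negotiating a much lower MCS or losing a spatial stream, which is exactly what I'm trying to figure out.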

Am I wasting my time trying to chase higher speeds with my current setup? What kind of speeds are you getting on your WiFi network?

 

I've noticed recently that my network speed isn't what I would expect from a 10Gb network. For reference, I have a Proxmox server and a TrueNAS server, both connected to my primary switch with DACs. I've tested the speed by transferring files from the NAS over SMB and by using OpenSpeedTest running on a VM in Proxmox.

So far, this is what my testing has shown:

  • Using a Windows PC connected directly to my primary switch with Cat6: OpenSpeedTest shows around 2.5-3Gbit/s to Proxmox, which is much slower than I'd expect. Transferring a file from my NAS hits a max of around 700-800MB/s (bytes, not bits), which is about what I'd expect given hard drive speed and overhead.
  • Using a Windows VM on Proxmox: OpenSpeedTest shows around 1.5-2Gbit/s, which is much slower than I would expect. I'm using VirtIO network drivers, so I should realistically only be limited by CPU; it's all running internally in Proxmox. Transferring a file from my NAS hits a max of around 200-300MB/s, which is still unacceptably slow, even given the HDD bottleneck and SMB overhead.

The summary I get from this is:

  • The slowest transfer rate is between two VMs on my Proxmox server. This should be the fastest transfer rate.
  • Transferring from a VM to a bare-metal PC is significantly slower than expected, but better than between VMs.
  • Transferring from my NAS to a VM is faster than between two VMs, but still slower than it should be.
  • Transferring from my NAS to a bare-metal PC gives me the speeds I would expect.

Ultimately, this points to Proxmox as the bottleneck: the more VMs involved in the transfer, the slower it gets. I'm not really sure where to look next, though. Is there a setting in Proxmox I should be looking at? My server is old (dual Xeon E5-2650 v2); is it just too slow to pass the data across the Linux network bridge at an acceptable rate? CPU usage on the VMs themselves doesn't get past 60% or so, but maybe Proxmox itself is CPU-bound?

The bulk of my network traffic flows in and out of the VMs on Proxmox, so it's important that I figure this out. Any suggestions for testing or for a fix are very much appreciated.
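My next step is to take SMB and the disks out of the equation entirely with a raw TCP test between each pair of endpoints. Something like this minimal sketch (Python 3.8+ on both ends; iperf3 would do the same job more rigorously if it's available):

```python
# Minimal raw TCP throughput test: run "server" on one end,
# "client <host>" on the other. Removes SMB and disk from the picture.
import socket
import sys
import time

PORT = 5201               # arbitrary; same idea as iperf3's default
CHUNK = b"\x00" * 65536   # 64KiB payload
SECONDS = 10

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total = 0
        with conn:
            while (data := conn.recv(len(CHUNK))):
                total += len(data)
        print(f"received {total * 8 / 1e9:.2f} Gbit from {addr[0]}")

def client(host):
    sent, start = 0, time.monotonic()
    with socket.create_connection((host, PORT)) as conn:
        while time.monotonic() - start < SECONDS:
            conn.sendall(CHUNK)
            sent += len(CHUNK)
    elapsed = time.monotonic() - start
    print(f"sent at {sent * 8 / elapsed / 1e9:.2f} Gbit/s over {elapsed:.1f}s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

If VM-to-VM is still slow with this, the bridge or CPU is the problem; if it's fast, the blame shifts back to SMB or the disks.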

 

I just set up a local instance of Invidious. I created an account, exported my YouTube subscriptions, and imported them into Invidious. The first time I tried, it imported 5 subscriptions out of 50 or so. The second time I tried, it imported 9.

Thinking there might be a problem with the import function, I decided to manually add each subscription. Every time I click "Subscribe," the button switches to "Unsubscribe," then immediately switches back to "Subscribe." If I look at my subscriptions, the channel was never added.

My first thought was a problem with the PostgreSQL database, but that wouldn't explain why some subscriptions work when I import them.

I tried rebooting the container, and it made no difference. I'm running Invidious in an Ubuntu 22.04 LXC container in Proxmox. I installed it manually (not with Docker). The container has 100GB of HDD space, 4 CPU cores, and 8GB of memory.

What the hell is going on?
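For anyone who wants to help me dig: my next step is to query PostgreSQL directly and watch whether the subscription count actually changes when I click Subscribe. A sketch of what I have in mind; the table/column names and credentials are my assumption from skimming the Invidious schema, so correct me if they're wrong:

```python
# Check subscription counts straight from Invidious's PostgreSQL database.
# ASSUMPTION: a "users" table with a "subscriptions" text[] column, and the
# default "kemal"/"invidious" credentials; adjust (and add password=) to match
# your install. Requires psycopg2 (pip install psycopg2-binary).
import psycopg2

conn = psycopg2.connect(dbname="invidious", user="kemal", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute("SELECT email, cardinality(subscriptions) FROM users;")
    for email, count in cur.fetchall():
        print(f"{email}: {count} subscriptions")
conn.close()
```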

 

The majority of my homelab consists of two servers: a Proxmox hypervisor and a TrueNAS file server. The bulk of my LAN traffic is between these two servers. At the moment, both servers are on my "main" VLAN. I have separate VLANs for guests and IoT devices, but everything else lives on VLAN2.

I have been considering the idea of creating another VLAN for storage, but I'm debating if there is any benefit to this. My NAS still needs to be accessible to non-VLAN-aware devices (my desktop PC, for instance), so from a security standpoint, there's not much benefit; it wouldn't be isolated. Both servers have a 10Gb DAC back to the switch, so bandwidth isn't really a factor; even if it were, my switch is still only going to switch packets between the two servers; it's not like they're flooding the rest of my network.

Having a VLAN for storage seems like it's the "best practice," but since both servers still need to be accessible outside the VLAN, the only benefit I can see is limiting broadcast traffic, and as far as I know (correct me if I'm wrong), SMB/NFS/iSCSI are all unicast.
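If I wanted to verify that unicast assumption before building anything, I figure I could just sniff for broadcast/multicast frames coming from the NAS during a large transfer. A quick sketch using scapy (the MAC address is hypothetical; needs root):

```python
# Flag any broadcast/multicast frames sourced from the NAS while a large
# SMB copy is running. Requires scapy (pip install scapy) and root.
from scapy.all import sniff

NAS_MAC = "aa:bb:cc:dd:ee:ff"  # hypothetical: replace with the NAS's real MAC

def classify(pkt):
    dst = pkt.dst.lower()
    if dst == "ff:ff:ff:ff:ff:ff":
        print("broadcast frame from NAS:", pkt.summary())
    elif int(dst.split(":")[0], 16) & 1:  # low bit of first octet = multicast
        print("multicast frame from NAS:", pkt.summary())

# Sniff for 60 seconds; silence means the storage traffic is all unicast.
sniff(filter=f"ether src {NAS_MAC}", prn=classify, store=False, timeout=60)
```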

 

I have a decent amount of video footage that I'd like to share with friends and family. My first thought was YouTube, but these are all home videos that I really don't want to share publicly.

A large portion of my video footage is 4K/60, so I'm ideally looking for a solution where I can send somebody a link and they get a "similar to YouTube" experience when they click on it. And by "similar to YouTube," I mean that the player automatically adjusts the video bitrate and resolution based on their internet speed. Trying to explain to extended family how to lower the bitrate if the video starts buffering isn't really an option; it needs to "just work" as soon as the link is clicked, since some of the people I'd like to share video with are very much not technically inclined.

I'd like to host it on my homelab, but my internet connection only has 4Mbit/s of upload, which is orders of magnitude lower than my video bitrate, so I'm assuming I would need to either use a third-party video hosting service or set up a VPS with my hosting software of choice.

Any suggestions? I prefer open-source self-hosted software, but I'm willing to pay for convenience.
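For what it's worth, my understanding is that the "adjusts automatically" behavior means HLS (or DASH) adaptive streaming, so whatever I pick needs to produce a rendition ladder. Here's a rough sketch of what I'd expect the encoding side to look like (assuming ffmpeg with libx264 on PATH; filenames and bitrates are illustrative):

```python
# Encode an HLS rendition ladder with ffmpeg and write a master playlist
# that lets the player switch bitrates on the fly.
import subprocess
from pathlib import Path

SOURCE = "home_video_4k60.mp4"   # hypothetical input file
LADDER = [  # (name, height, video kbit/s)
    ("1080p", 1080, 6000),
    ("720p", 720, 3000),
    ("480p", 480, 1200),
]

out = Path("hls")
out.mkdir(exist_ok=True)
master = ["#EXTM3U"]
for name, height, kbps in LADDER:
    playlist = f"{name}.m3u8"
    subprocess.run([
        "ffmpeg", "-i", SOURCE,
        "-vf", f"scale=-2:{height}",
        "-c:v", "libx264", "-b:v", f"{kbps}k",
        "-c:a", "aac", "-b:a", "128k",
        "-f", "hls", "-hls_time", "6", "-hls_playlist_type", "vod",
        str(out / playlist),
    ], check=True)
    # BANDWIDTH is video-only here (illustrative); width approximates 16:9.
    master.append(f"#EXT-X-STREAM-INF:BANDWIDTH={kbps * 1000},"
                  f"RESOLUTION={height * 16 // 9}x{height}")
    master.append(playlist)

(out / "master.m3u8").write_text("\n".join(master) + "\n")
# Point any HLS-capable player (hls.js, Safari, VLC) at hls/master.m3u8.
```

If a ready-made self-hosted platform (PeerTube, for example) handles all of this for me, that's even better.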
