corroded

joined 2 years ago
[–] corroded@lemmy.world 16 points 6 days ago (2 children)

I really don't understand this. What does the Army gain by commissioning tech execs as reserve officers? Wouldn't it be far more effective to just hire their companies as contractors, or to commission high-level engineers as officers? A tech exec's skill set is running a company. Sure, offer commissions to their most skilled employees, but why to the execs themselves?

[–] corroded@lemmy.world 60 points 1 month ago (2 children)

Not really. While I don't have the exact numbers, the output power of an infrared LED is (usually) no higher than that of an LED in the visible range. My security cameras have an array of 10 or so LEDs.

So looking at a security camera would be roughly equivalent to staring at a light bulb.

[–] corroded@lemmy.world 10 points 1 month ago (2 children)

Why? If everyone does poorly, everyone should fail, provided the opportunity to learn was there.

[–] corroded@lemmy.world -1 points 1 month ago (6 children)

This has always seemed overblown to me. If students want to cheat on their coursework, who cares? As long as exams are given in a controlled environment, it's going to be painfully obvious who actually studied the material and who had ChatGPT do it for them. Re-taking a course is not going to be fun or cheap.

Maybe I'm oversimplifying this, but it feels like proctored testing solves the entire problem.

[–] corroded@lemmy.world 160 points 1 month ago (3 children)

As the article mentions, this isn't a security "feature"; it's anti-competitive. The worst part is that Nextcloud isn't even really in competition with Google. Setting up a Nextcloud server isn't hard, but it's not a trivial task. Sharing it outside your local network also requires a bit of skill, especially if done securely. That is to say, Nextcloud users probably tend to be more tech-savvy.

The people using Nextcloud aren't going to suddenly decide to switch over to Google Drive. I'll get it from F-Droid before I downgrade to Google Drive. If that weren't an option, I'd set up an FTP server or even WebDAV.

[–] corroded@lemmy.world 13 points 2 months ago (3 children)

I totally get how this would be useful in imaging systems, but I'm not understanding how it applies to communications.

The only thing I can think of is perhaps carrying more modes through a multimode fiber? I never understood amplifier bandwidth to be a limiting factor, though.

What communications systems push such a wide band of light (300nm is a LOT) through a single amplifier?

[–] corroded@lemmy.world 11 points 2 months ago (4 children)

Windows 10 IoT LTSC has support until 2032. Just saying...

[–] corroded@lemmy.world 3 points 3 months ago (18 children)

Isn't dying poor a good thing? I don't want to live poor, but you can't take it with you. I'd ideally spend my last dollar right before taking my last breath.

[–] corroded@lemmy.world 1 points 4 months ago

I believe you're correct. I didn't realize that I had my containers set to privileged. That would explain why I've never had issues with mounting shares.

[–] corroded@lemmy.world 1 points 4 months ago (1 children)

I'm sorry, I think I gave you bad information. I have my containers set to unprivileged=no. I forgot about the "double negative" in how that flag was described.

So apparently my containers are privileged, which means I don't think I've ever tried to do what you're doing.
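For reference, the flag I was going by lives in the container config on the Proxmox host (path assumed from a standard install):

    # /etc/pve/lxc/<vmid>.conf
    # "unprivileged: 1" means the container is unprivileged;
    # if the line is absent (or set to 0), the container is privileged, which is my case.
    unprivileged: 0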

[–] corroded@lemmy.world 1 points 4 months ago* (last edited 4 months ago) (5 children)

I'm leaving this here for continuity, but don't follow what I said here. I have my containers set as privileged. I was wrong.

I have a server that runs Proxmox and a server that runs TrueNAS, so a very similar setup to yours. As long as your LXC is tied to a network adapter that has access to your file server (it almost certainly is unless you're using multiple NICs and/or VLANs), you should be able to mount shares inside your LXC just like you do on any other Linux machine.

Can you ping your fileserver from inside the container? If so, then the issue is with the configuration in the container itself. Privileged or unprivileged shouldn't matter here. How are you trying to mount the CIFS share?

Edit: I see that you're mounting the share in Proxmox and mapping it to your container. You don't need to do this. Just mount it in the container itself.
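If it helps, this is roughly all it takes inside a privileged container (the server IP, share name, and credentials below are placeholders, adjust for your setup):

    # run inside the LXC, not on the Proxmox host
    apt install cifs-utils
    mkdir -p /mnt/nas
    mount -t cifs //192.168.1.10/tank /mnt/nas -o username=youruser,password=yourpass,vers=3.0

    # or, to make it persistent, an /etc/fstab entry:
    //192.168.1.10/tank  /mnt/nas  cifs  credentials=/root/.smbcreds,vers=3.0  0  0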

[–] corroded@lemmy.world 8 points 4 months ago (1 children)

I feel like the vast majority of people just want to log onto ChatGPT and ask their questions, not host an open-source LLM themselves. I suppose other organizations could host DeepSeek, though.

Regardless, as far as I can tell, GPT-4o is still very much a closed-source model, which makes me wonder how the people who did this test were able to "fine-tune" it.

 

This is more "home networking" than "homelab," but I imagine the people here might be familiar with what in talking about.

I'm trying to understand the logic behind ISPs offering asymmetrical connections. From a usage standpoint, the vast majority of traffic goes to the end-user instead of from the end-user. From a technical standpoint, though, it seems like it would be more difficult and more expensive to offer an asymmetrical connection.

While consumers may be connected via fiber, cable, DSL, etc, I assume that the ISP has a number of fiber links to "the internet." Those links are almost surely some symmetrical standard (maybe 40 or 100Gb). So if they assume that they can support 1000 users at a certain download speed, what is the advantage of limiting the upload? If their incoming trunks can support 1000 users at 100Mb download, shouldn't it also support 1000 users at 100Mb upload since the trunks themselves are symmetrical?

Limiting the upload speed to a different rate than download seems like it would just add a layer of complexity. I don't see a financial benefit either; if their links are already saturated for download, reducing upload speed doesn't help them add additional users. Upload bandwidth doesn't magically turn into download bandwidth.

Obviously there's some reason for this, but I can't think of one.

 

A few months ago, I upgraded all my network switches. I have a 16-port SFP+ switch and a 1Gb switch (LAGGed to the SFP+ switch with two DACs). These work perfectly, and I'm really happy with the setup so far.

My main switch ties into a remote switch in another building over a 10Gb fiber line, and this switch ties into another switch of the same model (on a different floor) over a Cat6e cable. These switches are absolute garbage: https://www.amazon.com/gp/product/B084MH9P8Q

I should have known better than to buy a cheap off-brand switch, but I had hoped that Zyxel was a decent enough brand that I'd be okay. Well, you get what you pay for, and that's $360 down the toilet. I constantly have dropped connections, generally resulting in the attached devices completely losing network connectivity or, if I'm lucky, dropping down to dial-up speeds (I'm not exaggerating). The only way to fix it is to pull the power cable to the switch. Even under virtually no load, the switch gets so hot that it's painful to touch. The fact that my connection is far more stable when the switch is sitting directly in front of an air conditioner tells me just about all I need to know.

I'm trying to find a pair of replacement switches, but I'm really striking out. I have two ancient Dell PowerConnect switches that are rock solid, but they're massive, they sound like jet engines, and they use a huge amount of power. Since these are remote from my homelab and live in occupied areas, they just won't work. All I need is a switch that has:

  • At least 2 SFP+ ports (or 1 SFP+ port for fiber and a 10Gb copper port)
  • At least 4 1Gb ports (or SFP ports; I have a pile of old 1Gb SFP adapters)
  • Management/VLAN capability

Everything I find online is either Chinese white-label junk or much larger than what I need. A 16-port SFP+ switch would work, but I'd never use most of the ports, and I'd be wasting a lot of money on overkill hardware. As an example, one of these switches is in my home office; it exists solely so I have a connection between my server rack, two PCs, and a single WAP. I am never going to need another LAN connection in my home office; any new hardware is going to go in the server rack, but I do need 10Gb connectivity on at least one of those PCs.

Does anyone have a suggestion for a small reliable switch that has a few SFP+ ports, is made by a reputable brand, and isn't a fire hazard?

 

This isn't strictly "homelab" related, but I'm not sure if there's a better community to post it.

I'm curious what kind of real-world speeds everyone is getting over their wireless network. I was testing tonight, and I'm getting a max of 250Mbit down/up on my laptop. I have 4 UniFi APs, each set to 802.11ac/80MHz, and my laptop supports 2x2 MIMO. Testing on my phone (Galaxy S23) gives basically the exact same result.

The radio spectrum around me is ideal for WiFi; on 5GHz, there is no AP close enough for me to detect. With an 80MHz channel width, I can space all 4 of my APs so that there's no interference (using a non-DFS channel for testing, btw).
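For reference, my back-of-the-envelope expectation, assuming the laptop actually links at the top 802.11ac rate:

    2 spatial streams x 80MHz @ MCS 9 (short GI) = 866.7Mbit PHY rate
    real-world TCP throughput is typically ~50-60% of PHY, so roughly 430-520Mbit

250Mbit is well under that, which is what prompted the question.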

Am I wasting my time trying to chase higher speeds with my current setup? What kind of speeds are you getting on your WiFi network?

 

I've noticed recently that my network speed isn't what I would expect from a 10Gb network. For reference, I have a Proxmox server and a TrueNAS server, both connected to my primary switch with DAC. I've tested the speed by transferring files from the NAS with SMB and by using OpenSpeedTest running on a VM in Proxmox.

So far, this is what my testing has shown:

  • Using a Windows PC connected directly to my primary switch with Cat6: OpenSpeedTest shows around 2.5-3Gbit to Proxmox, which is much slower than I'd expect. Transferring a file from my NAS hits a max of around 700-800MB/s (bytes, not bits), which is about what I'd expect given hard drive speeds and overhead.
  • Using a Windows VM on Proxmox: OpenSpeedTest shows around 1.5-2Gbit, which is much slower than I would expect. I'm using VirtIO network drivers, so I should realistically only be limited by CPU, since it's all running internally in Proxmox. Transferring a file from my NAS hits a max of around 200-300MB/s, which is still unacceptably slow, even given the HDD bottleneck and SMB overhead.

The summary I get from this is:

  • The slowest transfer rate is between two VMs on my Proxmox server. This should be the fastest transfer rate.
  • Transferring from a VM to a bare-metal PC is significantly slower than expected, but better than between VMs.
  • Transferring from my NAS to a VM is faster than between two VMs, but still slower than it should be.
  • Transferring from my NAS to a bare-metal PC gives me the speeds I would expect.

Ultimately, this shows that the bottleneck is Proxmox. The more VMs involved in the transfer, the slower it gets. I'm not really sure where to look next, though. Is there a setting in Proxmox I should be looking at? My server is old (dual Xeon E5-2650 v2); is it just too slow to pass the data across the Linux network bridge at an acceptable rate? CPU usage on the VMs themselves doesn't get past 60% or so, but maybe Proxmox itself is CPU-bound?
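My next step is to take the disks and SMB out of the equation entirely and measure raw throughput between the endpoints with iperf3, something like this (IPs are placeholders):

    # on one VM
    iperf3 -s
    # on another VM (or on the bare-metal PC)
    iperf3 -c 192.168.1.20 -P 4      # 4 parallel streams
    iperf3 -c 192.168.1.20 -P 4 -R   # same test in the reverse direction

If that also tops out around 1.5-2Gbit, the bottleneck is the virtual networking or CPU rather than storage.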

The bulk of my network traffic is coming in-and-out of the VMs on Proxmox, so it's important that I figure this out. Any suggestions for testing or for a fix are very much appreciated.

 

I just set up a local instance of Invidious. I created an account, exported my YouTube subscriptions, and imported them into Invidious. The first time I tried, it imported 5 subscriptions out of 50 or so. The second time I tried, it imported 9.

Thinking there might be a problem with the import function, I decided to manually add each subscription. Every time I click "Subscribe," the button switches to "Unsubscribe," then immediately switches back to "Subscribe." If I check my subscriptions, nothing was actually added.

My first thought was a problem with the PostgreSQL database, but that wouldn't explain why some subscriptions work when I import them.

I tried rebooting the container, and it made no difference. I'm running Invidious in an Ubuntu 22.04 LXC container on Proxmox. I installed it manually (not with Docker). The container has 100GB of disk space, 4 CPU cores, and 8GB of memory.
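The next thing I plan to try is tailing the service log while clicking Subscribe and confirming the database is reachable (the unit and database names below assume the standard manual install, so adjust as needed):

    journalctl -u invidious.service -f
    sudo -u postgres psql -d invidious -c '\dt'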

What the hell is going on?

 

The majority of my homelab consists of two servers: A Proxmox hypervisor and a TrueNAS file server. The bulk of my LAN traffic is between these two servers. At the moment, both servers are on my "main" VLAN. I have separate VLANs for guests and IoT devices, but everything else lives on VLAN2.

I have been considering the idea of creating another VLAN for storage, but I'm debating if there is any benefit to this. My NAS still needs to be accessible to non-VLAN-aware devices (my desktop PC, for instance), so from a security standpoint, there's not much benefit; it wouldn't be isolated. Both servers have a 10Gb DAC back to the switch, so bandwidth isn't really a factor; even if it was, my switch is still only going to switch packets between the two servers; it's not like it's flooding the rest of my network.

Having a VLAN for storage seems like it's the "best practice," but since both servers still need to be accessible outside the VLAN, the only benefit I can see is limiting broadcast traffic, and as far as I know (correct me if I'm wrong), SMB/NFS/iSCSI are all unicast.

 

I have a decent amount of video footage that I'd like to share with friends and family. My first thought was YouTube, but these are all home videos that I really don't want to share publicly.

A large portion of my video footage is 4K/60, so I'm ideally looking for a solution where I can send somebody a link, and it gives a "similar to YouTube" experience when they click on it. And by "similar to YouTube," I mean that the player automatically adjusts the video bitrate and resolution based on their internet speed. Trying to explain to extended family how to lower the bitrate if the video starts buffering isn't really an option. It needs to "just work" as soon as the link is clicked; some of the people I'd like to share video with are very much not technically inclined.

I'd like to host it on my homelab, but my internet connection only has a 4Mbit upload, which is orders of magnitude lower than my video bitrate, so I'm assuming I would need to either use a 3rd-party video hosting service or set up a VPS with my hosting software of choice.

Any suggestions? I prefer open-source self-hosted software, but I'm willing to pay for convenience.
