marauding_gibberish142

joined 1 month ago

Just lol at Synology trying to do an Nvidia

[–] marauding_gibberish142@lemmy.dbzer0.com 6 points 4 days ago (3 children)

There are plenty of N100/N350 motherboards with 6 SATA ports on AliExpress; grab them while you can.

[–] marauding_gibberish142@lemmy.dbzer0.com 28 points 4 days ago (3 children)

Synology is like Ubiquiti in the self-hosted community: sure, it's self-hosted, but it's definitely not yours. At the end of the day, you're stuck dealing with their decisions.

TerraMaster lets you run your own OS on their machines. That's basically what a homelabber wants: a good chassis and components. I couldn't see a reason to buy a Synology after TerraMaster and Ugreen started ramping up their product lines, which let you run whatever OS you want. Synology at this point is for people who either don't know what they're doing or want to remain hands-off with storage management (which is valid; you don't want to do more work when you get home from work). Unfortunately, such customers are now left in the lurch, so it's either TrueNAS or trusting some other company to keep your data safe.

Alpine isn't exactly fortified either; it needs some work too. Ideally you'd use a deblobbed kernel with the KSPP recommendations applied, enable MAC, harden permissions, and install hardened_malloc. I don't recall whether there are CIS Benchmarks or STIGs for Alpine, but those are very important too. These are my basic steps for hardening anything. Alpine does have the advantage of being lean from the start. Ideally you'd also compile your packages with hardened flags like on Gentoo, but for a regular container and VM host that might be too much (or not; it depends on your appetite for this stuff).
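As a rough illustration of the sysctl part of that kind of hardening, something like the sketch below; the values and file path are examples, not a vetted policy, so check each knob against your kernel and the KSPP recommendations before applying:

```python
#!/usr/bin/env python3
# Sketch: write a handful of KSPP-style sysctl hardening knobs to a drop-in file.
# Example values only, not a complete or audited policy.

SETTINGS = {
    "kernel.kptr_restrict": 2,             # hide kernel pointers in /proc
    "kernel.dmesg_restrict": 1,            # restrict dmesg to CAP_SYSLOG
    "kernel.kexec_load_disabled": 1,       # block loading a new kernel at runtime
    "kernel.unprivileged_bpf_disabled": 1, # no unprivileged eBPF
    "net.core.bpf_jit_harden": 2,          # harden the BPF JIT for all users
    "kernel.perf_event_paranoid": 2,       # limit perf to privileged users
    "fs.protected_symlinks": 1,            # classic link/FIFO protections
    "fs.protected_hardlinks": 1,
    "fs.protected_fifos": 2,
    "fs.protected_regular": 2,
}

CONF = "/etc/sysctl.d/99-hardening.conf"  # placeholder path

with open(CONF, "w") as f:
    for key, value in SETTINGS.items():
        f.write(f"{key} = {value}\n")

print(f"wrote {CONF}; apply it with your sysctl service or 'sysctl -p {CONF}'")
```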

[–] marauding_gibberish142@lemmy.dbzer0.com 3 points 4 days ago (2 children)

I'm looking at Buildbot.

[–] marauding_gibberish142@lemmy.dbzer0.com 22 points 4 days ago* (last edited 4 days ago) (2 children)

I don't get it. Where does the idea that "Fedora focuses on security" come from? Fedora requires about the same amount of hardening work as any other distro.

I personally use Alpine because I trust BusyBox to have a smaller attack surface than the usual GNU userland.

I wish they did. I can't believe non-profits are suing each other.

Oh, I get it. Auto-pull the repos to the master node's local storage in case something bad happens, and when it does, use the automatically pulled (and hopefully current) code to fix what broke.

Good idea

[–] marauding_gibberish142@lemmy.dbzer0.com 1 points 5 days ago (1 children)

Well, it's a tougher question to answer for an active-active config than for a master-slave config, because the former needs the lowest latency possible as requests are bounced all over the place. For the latter, I'll probably set the replicas to pull every 5 minutes, so up to 5 minutes of lag (assuming someone doesn't push right as the master node is going down).
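For the pull-based option, I'm imagining something as simple as a script cron'd every 5 minutes on each replica that refreshes local mirror clones; the path and schedule below are placeholders:

```python
#!/usr/bin/env python3
# Sketch: refresh locally held mirror clones from the primary.
# Meant to be run from cron every 5 minutes on each replica.

import subprocess
from pathlib import Path

MIRROR_ROOT = Path("/srv/git")  # bare mirrors created with 'git clone --mirror'

for repo in MIRROR_ROOT.glob("*.git"):
    # 'git remote update --prune' fetches all refs and drops deleted branches
    result = subprocess.run(
        ["git", "remote", "update", "--prune"],
        cwd=repo,
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(f"{repo.name}: sync failed: {result.stderr.strip()}")
```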

I don't think the likes of GitHub work on a master-slave configuration. They're probably on the active-active side of things for performance. I'm surprised I couldn't find anything on this from Codeberg, though; you'd think they'd have solved this problem already and published something. Maybe I missed it.

I didn't find anything in the official Git book either. Which one do you recommend?

[–] marauding_gibberish142@lemmy.dbzer0.com 1 points 5 days ago* (last edited 5 days ago) (3 children)

Thanks for the comment. There's no special use case: it'll just be me and a couple of friends using it anyway, but I would like to make it highly available. It doesn't need to be 5 nodes; 2 or 3 would be fine too, and I don't think the number changes the concept.

Ideally I'd want all servers to be updated in real time, but it's not strictly necessary. I simply want to run it this way because I want to experience something like what the big cloud providers run for their distributed Git services.

Thanks for the idea about update hooks; I'll read more about them.

Well, the other choice was Reddit, so I decided to post here (Reddit flags my IP and doesn't let me create an account easily). I might ask on a couple of other forums too.

Thanks

This is a fantastic comment. Thank you so much for taking the time.

I wasn't planning to run a GUI for my Git servers unless really required, so I'll probably just use SSH. Thanks, yes, that makes the reverse-proxy part a lot easier.

Your idea of having a designated "master" (server 1) and rolling updates out to the rest of the servers is brilliant. The replication procedure becomes a lot easier this way, and it also removes the need for the reverse proxy: I can just use Keepalived and set up weights to make one of them the master and the rest slaves for failover. It also won't do round-robin, so no special handling for sticky sessions. This is great news from the networking side of this project.

Hmm, you suggested pushing repos out to the remote Git servers instead of having them pull? I was going to create a WireGuard tunnel and make it accessible from my network for some things anyway, so I guess that makes sense.
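If I go the push route, I assume the hook on the master would look roughly like the sketch below; the replica URLs are placeholders and would go over the WireGuard tunnel:

```python
#!/usr/bin/env python3
# Sketch of a post-receive hook (hooks/post-receive in the bare repo on the master)
# that mirrors every accepted push out to the replicas.

import subprocess
import sys

REPLICAS = [
    "ssh://git@replica1.example.internal/srv/git/myrepo.git",
    "ssh://git@replica2.example.internal/srv/git/myrepo.git",
]

failed = []
for url in REPLICAS:
    # --mirror pushes all refs (branches, tags) including deletions
    result = subprocess.run(["git", "push", "--mirror", url])
    if result.returncode != 0:
        failed.append(url)

if failed:
    # Don't fail the original push; just warn the person pushing
    print(f"warning: replication failed for: {', '.join(failed)}", file=sys.stderr)
```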

Thanks again for the wonderful comment.

 

Edit: it seems like my explanation turned out to be too confusing. In simple terms, my topology would look something like this:

I would have a reverse proxy in front of multiple Git server instances (let's say 5 for now). When a client performs an action, like pulling from or pushing to a repo, it would go through the reverse proxy to one of the 5 instances. The changes would then be synced from that instance to the rest, achieving a highly available architecture.

Basically, I want a highly available git server. Is this possible?


I have been reading GitHub's blog posts on Spokes, their distributed system for Git. It's a great idea, except I can't find anywhere to pull it from and self-host it.

Any ideas on how I can run a distributed cluster of Git servers? I'd like to run it across 3+ VMs plus a VPS in the cloud, so that if something dies I still have a Git server running somewhere to pull from.

Thanks

 

Is there some sort of comprehensive guide on hardening RHEL clones like Alma and Rocky?

I have read Madaidan's blog, and I plan to go through the CIS Benchmarks, the Alma and Rocky documentation, and other general stuff like KSPP, musl, LibreSSL, hardened_malloc, etc.

But I feel like this is not enough and I will likely face problems that I cannot solve. Instead of trying to reinvent the wheel by myself, I thought I'd ask if anyone has done this before so I can use their guide as a baseline. Maybe there's a community guide on hardening either of these two? I'd contribute to its maintenance if there is one.

Thanks.

 

The problem is simple: consumer motherboards don't have that many PCIe slots, and consumer CPUs don't have enough lanes to run 3+ GPUs at full PCIe gen 3 or gen 4 speeds.

My idea was to buy 3-4 cheap computers, slot a GPU into each of them, and use them in tandem. I imagine this will require some sort of agent running on each node, all connected over a 10GbE network. I can get a 10GbE network running for this project.

Does Ollama or any other local AI project support this? Getting a server motherboard and CPU gets expensive very quickly, so this would be a great alternative.

Thanks

 

Sorry for being such a noob. My networking knowledge is not very strong, so I thought I'd ask the fine folks here.

Let's say I have a Linux box working as a router and a dumb switch (i.e. L2 only). I have 2 PCs that I would like to keep separated, without letting them talk to each other.

Can I plug these two PCs into the switch, configure their interfaces with IPs from different subnets, and configure the relevant sub-interfaces and ACLs on the Linux router to prevent inter-subnet communication?
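To make it concrete, here's roughly what I'm picturing on the router box, as a sketch with made-up subnets and interface name: two gateway addresses on the same LAN interface, plus nftables rules that drop forwarded traffic between the two subnets.

```python
#!/usr/bin/env python3
# Sketch: two subnets on one router interface, forwarding between them blocked.
# Interface name and subnets are placeholders; needs root.

import subprocess

IFACE = "eth1"
SUBNET_A = "192.168.10.0/24"
SUBNET_B = "192.168.20.0/24"

commands = [
    # Router holds the gateway address for both subnets on the same interface
    ["ip", "addr", "add", "192.168.10.1/24", "dev", IFACE],
    ["ip", "addr", "add", "192.168.20.1/24", "dev", IFACE],
    # nftables: drop traffic routed from one subnet to the other
    ["nft", "add", "table", "inet", "filter"],
    ["nft", "add", "chain", "inet", "filter", "forward",
     "{ type filter hook forward priority 0 ; policy accept ; }"],
    ["nft", "add", "rule", "inet", "filter", "forward",
     "ip", "saddr", SUBNET_A, "ip", "daddr", SUBNET_B, "drop"],
    ["nft", "add", "rule", "inet", "filter", "forward",
     "ip", "saddr", SUBNET_B, "ip", "daddr", SUBNET_A, "drop"],
]

for cmd in commands:
    subprocess.run(cmd, check=True)
```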

What I'm asking is: do I really need VLANs? I do need to segregate the networks, but I don't trust the operating systems running on switches that can do L3 routing.

If you have a better solution than what I described which can scale with the number of computers, please let me know. Unfortunately, networking below L3 is still fuzzy in my head.

Thanks!

 

It's been a while since I visited this topic, but a few years back Xen (and by extension XCP-ng) was better known for security, whilst KVM (and thus Proxmox) was considered better for performance (yes, I've heard the rumours of AWS moving from Xen to KVM for some of its instances).

I would like to ask the community about the security measures you've taken to harden default Proxmox and XCP-ng installations. Have you run the CIS benchmarks and hardened things that way? Did you enable 2FA?

I'm also interested in hearing from people who run either of these in production: what steps did you take? Did you patch the Debian base (for PVE) or the CentOS base (I think that's what XCP-ng uses)?

Thank you for responding!

 

This is coming from a general perspective of wanting more privacy and seeing news of Mozilla creating an email service "which will definitely not train AI on your email". Sure Mozilla, whatever you say.

Rant aside, here's my question: is it possible to store all of your email on your own infrastructure (a VPS or even a NAS at home) and simply use an encrypted relay to send mail out to the public internet? My thinking is that this takes the problem of keeping your IP off blocklists away from the consumer, while the relay provider never actually holds your mailboxes. Your email remains completely in your control, and you don't have to worry about being unable to send mail to other people, as long as your storage backend is alive.

I don't know enough about email to say what this would take. I think something similar is already possible with an SMTP relay from most email providers, but the problem is that my email then also resides on their servers. I don't like that. I want my email to live on my servers alone.
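The sending leg is the only part I can picture clearly; something like the sketch below, where the relay hostname and credentials are made up and the mailbox itself stays on my own infrastructure:

```python
#!/usr/bin/env python3
# Sketch: submit an outbound message through an authenticated relay over STARTTLS,
# while the mail store lives on my own server. Relay details are placeholders.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@mydomain.example"
msg["To"] = "friend@example.com"
msg["Subject"] = "Test through the relay"
msg.set_content("Body composed and stored on my own server.")

with smtplib.SMTP("relay.example.net", 587) as smtp:
    smtp.starttls()                                       # encrypt the session to the relay
    smtp.login("me@mydomain.example", "app-password-here")
    smtp.send_message(msg)
```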

Do you think this is possible? Does any company already do this?

Thanks

49
Consumer GPUs to run LLMs (lemmy.dbzer0.com)
submitted 2 weeks ago* (last edited 2 weeks ago) by marauding_gibberish142@lemmy.dbzer0.com to c/selfhosted@lemmy.world
 

Not sure if this is the right place, if not please let me know.

GPU prices in the US have been a horrific bloodbath with the scalpers recently. So for this discussion, let's keep it to MSRP and to the lucky people who actually managed to afford those insane MSRPs and find the GPU they wanted.

Which GPU are you using to run which LLMs? How is the performance of the models you've selected? On average, what size of LLM can you run smoothly on your GPU (7B, 14B, 20-24B, etc.)?

What GPU do you recommend for a decent amount of VRAM versus price (at MSRP)? If you're using a top-of-the-line RX 7900 XTX/4090/5090 with 24+ GB of VRAM, comment below with some performance estimates too.

My use case: code assistants for Terraform plus general shell and YAML, plain chat, and some image generation. And being able to still pay rent after spending all my savings on a GPU with a pathetic amount of VRAM (LOOKING AT BOTH OF YOU, BUT ESPECIALLY YOU, NVIDIA, YOU JERK). I would prefer GPUs under $600 if possible, but I also want to run models like Mistral Small, so I suppose I don't have a choice but to spend a huge sum of money.
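For context, here's my rough back-of-the-envelope on why 24 GB keeps coming up; the bits-per-weight figure is an assumption for a typical 4-bit quant, and this ignores the KV cache and runtime overhead:

```python
# Very rough VRAM estimate for the weights of a quantized model
params = 24e9          # ~24B parameters (roughly Mistral Small territory)
bits_per_weight = 4.8  # assumption for a typical 4-bit-ish quant

weights_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weights_gb:.1f} GB for the weights alone")  # ~14.4 GB, before KV cache
```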

Thanks


You can probably tell that I'm not very happy with the current consumer PC market, but I decided to post in case we find any gems in the wild.

 

I'm looking at quad-port 2.5GbE Intel PCIe cards. These cards seem to be mostly x4 physically (usually PCIe gen 3), whilst I have a PCIe gen 4 x1 slot, which provides more theoretical bandwidth than the card actually needs. At most the card needs PCIe gen 3 x2, which is roughly equal to PCIe gen 4 x1 in bandwidth.
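Here's my quick sanity check on the numbers (approximate; it only accounts for the 128b/130b line encoding and ignores other protocol overhead):

```python
# Approximate usable bandwidth per lane, in GB/s
pcie3_lane = 8e9 * 128 / 130 / 8    # ~0.98 GB/s (8 GT/s, 128b/130b)
pcie4_lane = 16e9 * 128 / 130 / 8   # ~1.97 GB/s (16 GT/s, 128b/130b)

nic_total = 4 * 2.5e9 / 8           # four 2.5GbE ports fully loaded: 1.25 GB/s

print(f"PCIe gen3 x2: {2 * pcie3_lane / 1e9:.2f} GB/s")
print(f"PCIe gen4 x1: {1 * pcie4_lane / 1e9:.2f} GB/s")
print(f"4 x 2.5GbE : {nic_total / 1e9:.2f} GB/s")
```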

How do I fit the card into a PCIe x1 slot? Won't it lose performance if not all of its pins are connected to the physical PCIe connector? Is there a PCIe x1 riser the community likes that is somewhat affordable?

Thanks

 

This is not a troll post. I'm genuinely confused as to why SELinux gets so much hate. I have to say, I feel it's a fairly robust system. The few times I had issues with it, I created a custom policy in the relevant directory and things were fixed; maybe a couple of modules here and there at most. It took me about 15 minutes max to figure out which permissions were being blocked and copy the commands from Red Hat's guide.

So yeah, why do we hate SELinux?

 

I would understand if Canonical wanted a new cow to milk, but why are developers even agreeing to this? Are they out of their minds? Do they actually want companies to steal their code? Or is this some reverse-Uno move I don't see yet? I cannot fathom any FOSS project not using the AGPL anymore. It's like they're painting their faces with "here, take my stuff and don't contribute anything back, that's totally fine".
