this post was submitted on 27 Jan 2025
225 points (97.1% liked)

Selfhosted

41554 readers
591 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
 

Tl;dr

I have no idea what I’m doing, and the desire for a NAS and local LLM has spun me down a rabbit hole. Pls send help.

Failed Attempt at a Tl;dr

Sorry for the long post! Brand new to home servers, but am thinking about building out the setup below (Machine 1 to be on 24/7, Machine 2 to be spun up only when needed for energy efficiency); target budget cap ~ USD 4,000; would appreciate any tips, suggestions, pitfalls, flags for where I’m being a total idiot and have missed something basic:

Machine 1: TrueNAS Scale with Jellyfin, Syncthing/Nextcloud + Immich, Collabora Office, SearXNG if possible, and potentially the *arr apps

On the drive front, I’m considering 6x Seagate IronWolf 8TB in RAIDZ2 for 32TB usable space (waaay more than I think I’ll need, but I know it’s a PITA to upgrade a vdev, so I'm trying to future-proof). I’m also thinking I want to add an L2ARC cache (which I think should be something like a 500GB-1TB M.2 NVMe SSD). I’d read somewhere that the back-of-the-envelope RAM requirement was 1GB RAM per 1TB of storage (the TrueNAS Scale hardware guide definitely does not say this, but with the L2ARC cache and all of the other things I’m trying to run I probably get to the same number anyway), so I’d be looking for around 48GB. Though I am under the impression that an odd number of DIMMs isn’t great for performance, so that might bump up to 64GB across 4x16GB? I’m ambivalent on DDR4 vs. DDR5 (and unless there’s a good reason not to, would be inclined to just use DDR4 for cost), but am leaning ECC, even though it may not be strictly necessary.
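Back-of-envelope math for the capacity and RAM numbers above (note: the 1GB-per-TB rule is a community heuristic, not official TrueNAS guidance, and the 8GB baseline for OS/apps is my own guess):

```python
# Rough capacity / RAM sizing for the proposed pool (heuristics, not official guidance).

def raidz_usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    """Approximate usable capacity: (drives - parity) * size, before ZFS overhead."""
    return (drives - parity) * drive_tb

pool_tb = raidz_usable_tb(drives=6, drive_tb=8, parity=2)  # RAIDZ2 = 2 parity drives
print(f"usable: ~{pool_tb:.0f} TB")  # ~32 TB, matching the plan above

# Old community rule of thumb: ~1 GB RAM per TB of storage, plus headroom for apps.
# (L2ARC also eats some ARC RAM for its headers, so extra RAM helps there too.)
ram_gb = pool_tb * 1 + 8  # + an assumed 8 GB baseline for the OS and containers
print(f"rule-of-thumb RAM: ~{ram_gb:.0f} GB")
```

Real-world usable space will be a bit lower once ZFS metadata and recommended free-space margins are accounted for.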

Machine 2: Proxmox with LXC for Llama 3.3, Stable Diffusion, Whisper, OpenWebUI; I’d also like to be able to host a heavily modded Minecraft server (something like All The Mods 9 for 4 to 5 players) likely using Pterodactyl

I am struggling with what to do about GPUs here. I’d love to be able to run the 70B Llama 3.3; it seems like that will require something like 40-50GB of VRAM to run comfortably at a minimum, but I’m not sure of the best way to get there. I’ve seen some folks suggest 2x 3090s as the right balance of value and performance, but plenty of other folks seem to advocate for sticking with the newer 4000 architecture (especially with the 5000 series around the corner and the expectation that prices might finally come down). On the other end of the spectrum, I’ve also seen people advocate for going back to P40s.
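For context on where the 40-50GB figure comes from, here's a crude sketch of weight memory at different quantizations (this ignores KV cache and runtime overhead, and the 4.5-bits figure is an approximation for 4-bit quants with their metadata):

```python
def weights_vram_gb(params_b: float, bits_per_param: float) -> float:
    """Approximate VRAM for model weights alone (no KV cache / overhead).
    params (billions) * bytes per parameter ~= GB."""
    return params_b * bits_per_param / 8

for name, bits in [("FP16", 16), ("8-bit", 8), ("~4-bit quant", 4.5)]:
    print(f"Llama 3.3 70B @ {name}: ~{weights_vram_gb(70, bits):.0f} GB")
# FP16 ~140 GB, 8-bit ~70 GB, ~4-bit ~39 GB: which is why 2x 24GB cards
# (48 GB total) are the common target for running 70B at 4-bit.
```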

Am I overcomplicating this? Making any dumb rookie mistakes? Does 2 machines seem right for my use cases vs. 1 (or more than 2)? Any glaring issues with the hardware I mentioned or suggestions for a better setup? Ways to better prioritize energy efficiency (even at the risk of more cost up front)? I was targeting something like USD 4,000 as a soft price cap across both machines, but does that seem reasonable? How much of a headache is all of this going to be to manage? Is there a light at the end of the tunnel?

Very grateful for any advice or tips you all have!


Hi all,

So sorry again for the long post. Just including a little bit of extra context here in case it’s useful about what I am trying to do (I feel like this is the annoying part of an online recipe where you get a life story instead of the actual ingredient list; I at least tried to put that first in this post.) Essentially I am a total noob, but have spent the past several months lurking on forums, old Reddit and Lemmy threads, and have watched many hours of YouTube videos just to wrap my head around some of the basics of home networking, and I still feel like I know basically nothing. But I felt like I finally got to the point where I felt that I could try to articulate what I am trying to do with enough specificity to not be completely wasting all of your time (I’m very cognizant of Help Vampires and definitely do not want to be one!)

Basically my motivation is to move away from non-privacy respecting services and bring as much in-house as possible, but (as is frequently the case), my ambition has far outpaced my skill. So I am hopeful that I can tap into all of your collective knowledge to make sure I can avoid any catastrophic mistakes I am likely to blithely walk myself into.

Here are the basic things I am trying to accomplish with this setup:

• A NAS with a built in media server and associated apps
• Phone backups (including photos) 
• Collaborative document editing
• A local ChatGPT 4 replacement 
• Locally hosted metasearch
• A place to run a modded Minecraft server for myself and a few friends

The list in the tl;dr represents my best guesses for the right software and (partial) hardware to get all of these done. Based on some of my reading, it seemed that a number of folks recommend running TrueNAS bare-metal as opposed to in Proxmox for when there is an inevitable stability issue, and that got me thinking more about how it might be valuable to split out these functions across two machines: one to handle heavier workloads when needed but to be turned off when not (e.g. game server, all local AI), and a second machine to function as a NAS with all the associated apps that would hopefully be more power efficient and run 24/7.

There are a few things that I think would be very helpful to me at this point:

  1. High level feedback on whether this strategy sounds right given what I am trying to accomplish. I feel like I am breaking the fundamental Keep It Simple Stupid rule and will likely come to regret it.
  2. Any specific feedback on the right hardware for this setup.
  3. Any thoughts about how to best select hardware to maximize energy efficiency/minimize ongoing costs while still accomplishing these goals.

Also, above I mentioned that I am targeting around USD 4,000, but I am willing to be flexible on that if spending more up front will help keep ongoing costs down, or if spending a bit more will lead to markedly better performance.

Ultimately, I feel like I just need to get my hands on something and start screwing things up to learn, but I’d love to avoid any major costly screw ups before I just start ordering parts, thus writing up this post as a reality check before I do just that.

Thanks so much if you read this far down the post, and for all of you who share any thoughts you might have. I don’t really have folks IRL I can talk to about these sorts of things, so I am extremely grateful to be able to reach out to this community.

Edit: Just wanted to say a huge thank you to everyone who shared their thoughts! I posted this fully expecting to get no responses and figured it was still worth doing just to write out my plan as it stood. I am so grateful for all of your thoughtful and generous responses sharing your experience and advice. I have to hop offline now, but look forward to responding to any comments I haven’t had a chance to turn to tomorrow. Thanks again! :)

top 50 comments
[–] IllNess@infosec.pub 42 points 2 days ago (2 children)

Reading the title and looking at the thumbnail, I was thinking, "sure I'll do a good deed and help out a noob." Then I read your post and I realized you know what you're doing better than me.

HomerInBushes.gif

[–] sunzu2@thebrainbin.org 7 points 2 days ago

OP sharing decent DD tbh

source: i am regarded

[–] libretech@reddthat.com 7 points 2 days ago (1 children)

Thank you for this! Honestly, maybe it's just been all of the YouTubers I watch, but I constantly feel like I have no idea how to make things work (and also, to be fair, basically everything I wrote is just me reading what other people who seem to know what they're talking about think and then trying to fit all the pieces together. I sort of feel like a monkey at a typewriter in that way.) Really appreciate you commenting though! It's given me a little more confidence :)

[–] LandedGentry@lemmy.zip 10 points 2 days ago* (last edited 2 days ago) (1 children)

It’s easy to feel like you know nothing when A) there’s seemingly infinite depth to a skill and B) there are so many options.

You’re letting perfection stop you from starting my dude. Dive in!

[–] libretech@reddthat.com 6 points 2 days ago (1 children)

Thank you! I think I am just at the "Valley of Despair" portion of the Dunning-Kruger effect lol, but the good news is that it's hopefully mostly up from here (and as you say, a good finished product is infinitely better than a perfect idea).

[–] freebee@sh.itjust.works 7 points 2 days ago (1 children)

Honestly, why not just use an old laptop you have lying around to test 1 or 2 of your many projects/ideas and see how it goes, before going $4,000 deep?

[–] libretech@reddthat.com 1 points 1 day ago

This is definitely good advice. I tend to run my laptops into the ground before I replace them, but a lot of the feedback here has made me think experimenting with something much less expensive first is probably the right move instead of trying to do everything all at once (so that when I inevitably screw up, it at least won't be a $4k screw up.) But thanks for the sanity check!

[–] Krill@feddit.uk 1 points 1 day ago

Pretty sure TrueNAS Scale can host everything you want, so you might only need one server. Use Epyc for the PCIe lanes, and with a Fractal Design R7 XL you could even escape needing a rack mount if you wanted. Use a PCIe-to-M.2 adapter and you could easily host apps on a mirrored pool, and use a special vdev to speed up the HDD storage pool.

The role of the Proxmox server would essentially be filled by apps and/or VMs you could turn on or off as needed.

[–] possiblylinux127@lemmy.zip 11 points 2 days ago (1 children)

$4,000 seems like a lot to me. Then again, my budget was like $200.

I would start by setting yourself a smaller budget. Learn with cheaper investments before you screw up big. Obviously $200 is probably a bit low, but you could build something simple for around $500. Focus on upgradability. Once you have a stable system, upskill and reflect on what you learned. Once you have a bit more knowledge, build a second and third system and then complete a Proxmox cluster. It might be overkill, but having three nodes gives a lot of flexibility.

One thing I will add: make sure you get quality enterprise storage. Don't cheap out, since the lower-tier drives will have performance issues with heavier workloads. Ideally you should get enterprise SSDs.

[–] Tablaste@linux.community 2 points 2 days ago (1 children)

I did a double take at that $4000 budget as well! Glad I wasn't the only one.

[–] libretech@reddthat.com 1 points 1 day ago

You are both totally right. I think I anchored high here just because of the LLM stuff I am trying to get running at around a GPT4 level (which is what I think it will take for folks in my family to actually use it vs. continuing to pass all their data to OpenAI) and it felt like it was tough to get there without spending an arm and a leg on GPUs alone. But I think my plan is now to start with the NAS build, which I should be able to accomplish without spending a crazy amount and then building out iteratively from there. As you say, I'd prefer to screw up and make a $500 mistake vs. a multiple thousand dollar one. Thanks for the sanity check!

[–] DaGeek247@fedia.io 12 points 2 days ago (1 children)

I know most of the less expensive used hardware is going to be server-shaped/rackmount. Don't go for it unless you have a garage or shed that you can stuff them in. They put out jet-engine levels of noise and require god tier soundproofing in order to quiet them. The ones that are advertised as quiet are quiet as compared to other server hardware.

You can grab an epyc motherboard that is ATX and will do all you want, and can then move it to a rackmount later if you end up going that way.

The NVIDIA launch has been a bit of a paper one. I don't expect the prices of anything else to adjust down, rather the 5090 may just end up adjusting itself up. This may change over time, but the next couple of months aren't likely to have major deals worth holding out for.

[–] libretech@reddthat.com 5 points 2 days ago

Thanks for this! The jet-engine sound level and higher power draw were both what made me a little wary of used enterprise stuff (plus jumping from never having a home server straight to rack-mounted felt like flying a little too close to the sun). And thanks also for the Epyc rec; based on other comments it sounds like maybe pairing that with dual 3090s is the most cost-effective option (especially because I fear you're right on prices not being adjusted downward; not sure if the big hit Nvidia took this morning because of DeepSeek might change things, but I suppose that ultimately, unless underlying demand drops, why would they drop their prices?). Thanks again for taking the time to respond!

[–] Waryle@jlai.lu 5 points 2 days ago (1 children)

ZFS RAIDZ expansion was released a few days ago in OpenZFS 2.3.0: https://www.cyberciti.biz/linux-news/zfs-raidz-expansion-finally-here-in-version-2-3-0/

It might help you with deciding how much storage you want

[–] libretech@reddthat.com 1 points 1 day ago

Woah, this is big news!! I'd been following some of the older articles saying this was pending, but had no idea it just released, thanks for sharing! I'll just need to figure out how much of a data hoarder I'm likely to become, but it might be nice to start with fewer than 6 of the 8TB drives and expand up (though I think 4 drives is the minimum that makes sense; my understanding is also that energy consumption is roughly linear with the number of drives, though that could be very wrong, so maybe I'd even start with 4x 10-12TB drives if I can find them for a reasonable price). But thanks for flagging this!
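Back-of-envelope, the linear-with-drive-count assumption works out to something like this (the ~7 W idle per 3.5" drive and $0.15/kWh figures are assumptions; substitute your own):

```python
# Rough annual running cost per pool layout, assuming drives spin 24/7.
def annual_drive_cost(drives: int, watts_per_drive: float = 7.0,
                      usd_per_kwh: float = 0.15) -> float:
    """Estimated yearly electricity cost for the drives alone."""
    kwh_per_year = drives * watts_per_drive * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

for n in (4, 6):
    print(f"{n} drives: ~${annual_drive_cost(n):.0f}/yr")
# So going from 6x8TB to 4x12TB saves on the order of $15-20/yr in drive power,
# before counting heat/idle differences between specific models.
```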

[–] cm0002@lemmy.world 9 points 2 days ago (3 children)

Stick with DDR4 ECC for a server environment. If you don't want to be limited to 70b models, I'd dump more money into trying to snag more GPUs; otherwise you'd probably be fine with the 3000 series as long as you meet the VRAM requirements.

Have you considered secondary variables? Where are you going to run this? If you're running it in your house, this is going to be noisy and power hungry. What room are you running it in? What's the amperage of the lines going to the outlets there? Is your house older? If so, it's probably a 20-amp shared circuit that's really easy to overload and cause a fire.

This is what happens when you overload a home's circuit.

[–] aberrate_junior_beatnik@midwest.social 10 points 2 days ago (2 children)

How did the breaker not trip on that? It had one job

[–] cm0002@lemmy.world 7 points 2 days ago (3 children)

The way the electrician explained it to me at the time was that I didn't technically exceed 20 amps, but I was running close to it for sustained periods, heating up the wire in the wall and outlet, slowly melting it over time until it finally buckled, caused a small fire, and then tripped the breaker.
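For the curious, this is roughly the 80% continuous-load rule from the US electrical code (NEC); the 120 V figure is a US assumption:

```python
# US code limits continuous loads (3+ hours) to 80% of the breaker rating,
# precisely because sustained near-limit current heats wiring without tripping.
def max_continuous_watts(breaker_amps: float, volts: float = 120.0) -> float:
    """Safe sustained load on a circuit per the 80% continuous-load rule."""
    return breaker_amps * volts * 0.8

for amps in (15, 20):
    print(f"{amps} A circuit: {amps * 120:.0f} W peak, "
          f"{max_continuous_watts(amps):.0f} W safe continuous")
# A dual-GPU LLM box under load plus a NAS can plausibly approach 1,400+ W,
# which is already past a 15 A circuit's safe continuous limit.
```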

[–] Andres4NY@social.ridetrans.it 7 points 2 days ago (1 children)

@cm0002 @aberrate_junior_beatnik That looks like a 15A receptacle (https://www.icrfq.net/15-amp-vs-20-amp-outlet/). If it was installed on a 20A circuit (with a 20A breaker and wiring sized for 20A), then the receptacle was the weak point. Electricians often do this with multiple 15A receptacles wired together for Reasons (https://diy.stackexchange.com/questions/12763/why-is-it-safe-to-use-15-a-receptacles-on-a-20-a-circuit) that I disagree with for exactly what your picture shows. That said, overloading it is not SUPER likely to cause a fire - just destroy the outlet and appliance plugs.

[–] cm0002@lemmy.world 2 points 2 days ago

Makes sense, this was also years ago so small details are being forgotten, could have also been a 15 or possibly 20. It was one circuit split between 2 rooms, which was the norm apparently for the time it was built in the early 80s (and not a damn thing was ever upgraded, including the outlets)

It was also a small extinguisher handleable fire, but it was enough to be scary AF LMAO

[–] bradd@lemmy.world 2 points 2 days ago

This could also be caused by a bad connection or poor contact between the wire and the receptacle. Notice the side is melted, where the terminal screws would be; that's where the heat would be generated. When you put a load on it and electrons have to jump the gap, it arcs and generates heat. Load is also a factor, on this receptacle or any downstream, but the melting on the side might be caused by arcing.

[–] Blisterexe@lemmy.zip 5 points 2 days ago (1 children)

You seem pretty on track, and being broke, I haven't looked at the expensive stuff you're considering, so I can't give you any value tips.

However, I would like to point out that if you're just going to be hosting Minecraft game servers, Crafty Controller is a much easier tool to set up and use than Pterodactyl.

[–] Estebiu@lemmy.dbzer0.com 3 points 2 days ago (3 children)

For Llama 70B I'm using an RTX A6000; slightly older, but it does the job magnificently with its 48GB of VRAM.

[–] libretech@reddthat.com 2 points 1 day ago (1 children)

Wow, that sounds amazing! I think that GPU alone would probably exceed my budget for the whole build lol. Thanks for sharing!

[–] Estebiu@lemmy.dbzer0.com 1 points 7 hours ago

You can still run smaller models on cheaper GPUs, no need for the greatest GPU ever. Btw, I use it for other things too, not only LLMs.

[–] sntx@lemm.ee 4 points 2 days ago (2 children)

I'm also on p2p 2x3090 with 48GB of VRAM. Honestly it's a nice experience, but still somewhat limiting...

I'm currently running deepseek-r1-distill-llama-70b-awq with the aphrodite engine. Though the same applies for llama-3.3-70b. It works great and is way faster than ollama for example. But my max context is around 22k tokens. More VRAM would allow me more context, even more VRAM would allow for speculative decoding, cuda graphs, ...

Maybe I'll drop down to a 35b model to get more context and a bit of speed, but I can't really justify the possible decrease in answer quality.
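The context ceiling is mostly KV-cache arithmetic. A rough sketch, assuming Llama-70B-class architecture numbers (80 layers, 8 KV heads with GQA, head dim 128, FP16 cache; these are assumptions, check your model config):

```python
# Why context length is VRAM-bound: every token in context stores a K and V
# vector per layer per KV head for the lifetime of the conversation.
def kv_cache_gb(ctx_tokens: int, layers: int = 80, kv_heads: int = 8,
                head_dim: int = 128, bytes_per_val: int = 2) -> float:
    """Approximate KV-cache size in GiB for a given context length."""
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_val  # K and V
    return ctx_tokens * per_token / 1024**3

print(f"22k ctx:  ~{kv_cache_gb(22_000):.1f} GiB")   # roughly the VRAM left after 4-bit 70B weights
print(f"128k ctx: ~{kv_cache_gb(128_000):.1f} GiB")  # why full context needs far more cards
```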

[–] Estebiu@lemmy.dbzer0.com 1 points 7 hours ago

Uhh, a lot of big words here. I mostly just play around with it... Never used LLMs for anything more serious than a couple of tests, so I don't even know how many tokens my setup can generate.

[–] libretech@reddthat.com 2 points 1 day ago

This is exactly the sort of tradeoff I was wondering about, thank you so much for mentioning this. I think ultimately I would probably align with you in prioritizing answer quality over context length (but it sure would be nice to have both!!) My plan for now, based on some of the other comments, is to go ahead with the NAS build and keep my eyes peeled for any GPU deals in the meantime (though honestly I am not holding my breath). Once I've proved to myself I can run something stable without burning the house down, I'll move on to something more powerful for the local LLM. Thanks again for sharing!

[–] bradd@lemmy.world 3 points 2 days ago (1 children)

I'm running 70b on two used 3090s and an A6000, with NVLink. I think I got these for $900 each, and maybe $200 for the NVLink. Also works great.

[–] libretech@reddthat.com 2 points 1 day ago

Thanks for sharing! Will probably try to go this route once I get the NAS squared away and turn back to localLLMs. Out of curiosity, are you using the q4_k_m quantization type?

[–] atzanteol@sh.itjust.works 3 points 2 days ago* (last edited 2 days ago) (2 children)

Am I overcomplicating this?

I fear that you may be overthinking things a bit. For a home server I wouldn't worry about things like min/maxing memory to storage sizes. If you're new to this then sizing can be tricky.

For a point of reference - I'm running a MD RAID5 with 4TiB x 4 disks (12TiB usable) on an old Dell PowerEdge T110 with 8GiB of RAM. It's a file server (NFS) that does little else (just a bind9 server and dhcpd). I've had disks fail in the RAID but I've never had a 2 disk failure in 10+ years. I always keep my fileserver separate so that I can keep it simple and stable since everything else depends on it. I also do my backups to and from it so it's a central place for all storage.

That's just a file server. I have 3 Proxmox servers of widely variable specs from acquired machines: an old System76 laptop with 64GiB RAM (and an NVidia 1070 GTX that is used by Jellyfin), a Lenovo ThinkServer with 16GiB RAM, and an old Dell Z740 with 128GiB RAM (long story).

None of these servers are speed demons by any current standard, but they support a variety of VMs comfortably (Home Assistant, Jellyfin, web server, DNS, DHCP, a 3-node microk8s cluster running SearXNG, Subsonic, a Docker registry, etc.)

RAM has always mattered more to me for servers. The laptop is the most recent and has 8 cores, the Lenovo only has 4.

Could things be faster? Sure. Do they perform "well enough for me?" Absolutely. I'm not as worried about low-power as you seem to be but my point is that you can get away with pretty modest hardware for MOST of the types of things you're looking to do.

The AI one is the thing to worry about - that could be your entire budget. VRAM is king for LLMs and gets pricey quick. My personal laptop's NVidia 3070 with 8GiB VRAM runs models that fit in that low amount of memory just fine. But I'm restricted to models that fit...

[–] ArbiterXero@lemmy.world 3 points 2 days ago* (last edited 2 days ago) (8 children)

Given the price of P40s on eBay vs. the price you can get 3090s for, fuck the P40s; I'm rocking quad 3090s and they kick ass.

Also, Pascal is the OLDEST hardware supported... for how long?

Also, you'll want to look for strangely specific things to host multiple 3090s etc. on your motherboard. You want a lot of PCIe lanes from your chip and board, and you want Above 4G Decoding (fairly common in newer hardware).

[–] calamityjanitor@lemmy.world 3 points 2 days ago (1 children)

Would you consider making the LLM/GPU monster server a gaming desktop? Depending on how you plan to use it, you could have a beast gaming PC that can do LLM/Stable Diffusion stuff when not gaming. You can install loads of AI stuff on Windows, arguably more easily.

[–] libretech@reddthat.com 3 points 2 days ago (2 children)

This is a great point and one I sort of struggled with, tbh. I think you're right that if I built it out as a gaming PC I would probably use Windows (not to say I'm not very excited about the work Steam is doing for Linux gaming; it's just hard to beat the native OS). I was leaning toward a Linux build for the server though, just to try to embrace a bit more FOSS (and because I am still a little shocked that Microsoft could propose the Recall feature with a straight face). Maybe I could try a gaming setup that uses some flavor of Linux as a base, though then I'm not sure I'd take advantage of the ability to use the AI stuff more easily. Will definitely think more on it though, thanks for raising this!

[–] zox@lemmy.world 3 points 2 days ago

That's the approach I take. I use Proxmox for a Windows VM which runs Ollama. That VM can then be used for gaming on the off chance an LLM isn't loaded (it usually is). I use only one 3090 due to the power load of my two servers on top of my [many] HDDs. The extra load of two isn't something I want to worry about.

I point to that machine through LiteLLM* which is then accessed through nginx which allows only Local IPs. Those two are in a different VM that hosts most of my docker containers.

*I found using Ollama and Open WebUI causes the model to get unloaded since they send slightly different calls. LiteLLM reduces that variance.

[–] AdrianTheFrog@lemmy.world 2 points 2 days ago* (last edited 2 days ago) (1 children)

For high-VRAM AI stuff, it might be worth waiting to see how the 24GB B580 variant turns out.

Intel has a bunch of translation-layer sort of stuff, though, that I think generally makes it easy to run most CUDA AI things on it, but I'm not sure if common AI software supports multi-GPU with it.

IDK how cash-limited you are, but if it's just the VRAM you need and not necessarily the tokens/sec, it should be a much better deal when it releases.

Not entirely related, but I have a full half-hourly snapshotted computer backup going to a large HDD in my home server using Kopia; it's very convenient, and you don't need to install anything on the server except a large drive and the ability to use ssh/sftp (or another method, it supports several). It supports many compression formats and also avoids storing duplicate data. I haven't needed to use it yet, but I imagine it could become very useful in the future. I also have the same setup in the CLI on the server, largely so I can roll back in case some random person happens upon my Minecraft server (which is public and doesn't have a whitelist...) and decides to destroy everything. It's pretty easy to set up, and since it can back up over the internet, it's something you could easily use for a whole family.

My home server (a bunch of used parts plus a computer from the local university surplus store) was probably about ~$170 in total (i7-6700, 16GB DDR4, 256GB SSD, 8TB HDD) and is enough to host all of the stuff I have (very lightly modded MC with Geyser, a GitLab instance, and the backup) very easily. But it is very much not expandable (the case is quite literally tiny and I don't have space to leave it open; I could get a PCIe storage controller, but the PSU is weak and there aren't many SATA ports), probably not all that future-proof either, and definitely isn't something I would trust to perform well with AI models.

This (sold out now) is the HDD I got; I did a lot of research and they're supposed to be super reliable. I was worried about noise, but after getting one I can say that as long as it isn't within 4 feet of you, you'll probably never hear it.

Anyways, it's always nice to really do something the proper way and have something fully future proof, but if you just need to host a few light things you can probably cheap out on the hardware and still get a great experience. It's worth noting that a normal Minecraft server, backups, and a document editor for example are all things that you can run on a Raspberry Pi if you really wanted to. I have absolutely no experience using a NAS, metasearch, or heavy mods however, those might be a lot harder to get fast for all I know.

[–] libretech@reddthat.com 1 points 1 day ago (1 children)

Thank you so much for all of this! I think you're definitely right that probably starting smaller and trying a few things out is more sensible. At least for now I think I am going to focus on putting something together for the lower-hanging fruit by focusing on the NAS build first and then build up to local AI once I have something stable (but I'll definitely be keeping an eye out for GPU deals in the meantime, so thanks for mentioning the B580 variant, it wasn't on my radar at all as an option). But I think the thread has definitely given me confidence that splitting things out that way makes sense as a strategy (I had been concerned when I first wrote it out that not planning out everything all at once was going to cause me to miss some major efficiency, but I feel like it turns out that self-hosting is more like gardening than I thought in that it sort of seems to grow organically with one's interest and resources over time; sort of sounds obvious in retrospect, but I was definitely approaching this more rigidly initially). And thank you for the HDD rec! I think the Exos are the level above the Ironwolf Pro I mentioned, so will definitely consider them (especially if they come back online for a reasonable price at serverpartdeals or elsewhere). Just out of curiosity, what are you using for admin on your MC server? I had heard of Pterodactyl previously, but another commenter mentioned CraftyController as a bit easier to work with. Thank you again for writing all of this up, it's super helpful!

[–] AdrianTheFrog@lemmy.world 1 points 1 day ago

I'm just using basic fabric stuff running through a systemd service for my MC server. It also basically just has every single performance mod I could find and nothing else (as well as geyser+floodgate) so there isn't all that much admin stuff to do. I set up RCON (I think it's called) to send commands from my computer but I just set up everything through ssh. I haven't heard of either pterodactyl or crafty controller, I'll check those out!
