- ALWAYS avoid partial upgrades, lest you end up bricking your system: https://wiki.archlinux.org/title/System_maintenance#Partial_upgrades_are_unsupported
- The Arch Wiki is your best friend. You can also use it offline; take a look at `wikiman`: https://github.com/filiparag/wikiman
- It doesn't hurt to have the LTS kernel installed as a backup option (assuming you use the standard kernel as your default) in case you update to a newer kernel version and a driver here or there breaks. It's happened to me on Arch a few times: one update completely borked my internet connection, another would freeze any game I played via WINE/Proton because I didn't have Resizable BAR enabled in the BIOS. Sometimes switching to the LTS kernel can get around these temporary hiccups, at least until the maintainers fix those issues in the next kernel version (see the command sketch just after this list).
- The AUR is not vetted as much as the main package repositories, as it's mostly community-made packages. Don't install AUR packages you don't 100% trust. Always check the PKGBUILD if you're paranoid.
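To put a few of those points into commands, here's a minimal sketch, assuming GRUB and yay (adjust for your bootloader and AUR helper of choice):

```
# Full system upgrade; never run `pacman -Sy somepackage` on its own (that's a partial upgrade):
sudo pacman -Syu

# Keep the LTS kernel around as a fallback (headers are only needed for DKMS modules),
# then regenerate the boot menu so the -lts entry shows up:
sudo pacman -S linux-lts linux-lts-headers
sudo grub-mkconfig -o /boot/grub/grub.cfg

# Reading a PKGBUILD before building from the AUR is cheap insurance
# ("some-aur-package" is just a placeholder name):
yay -G some-aur-package && less some-aur-package/PKGBUILD
```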
Agreed, and one particular example I can think of is Terraria's Steam Workshop tools. If I try and publish a texture pack using the Linux-native version of the game, it crashes, but when I use the Windows version of the game via Proton, it works just fine. Not sure if the developers have gotten around to fixing this yet.
Edit: Now that I think of it, it's a similar story with Half-Life 2 now that they added Steam Workshop support for its 20th anniversary. Crashes on native, works fine under Proton.
Yep, been self-hosting it locally for a while now. To put it simply, I archive anything within my personal realm of interest that I think has a chance of being deleted and is important to keep a copy of. It could be troubleshooting tips for specific tech issues, things that may be under threat of takedown, or just an article I like and want a local copy of. It's a wonderful tool.
A friendly reminder to everyone to check out ArchiveBox if you're looking for a self-hosted archiving solution. I've been using it for a while now and it works great; it can be a little rough around the edges at times, but I think it's a wonderful tool. It's allowed me to continue saving pages during the Internet Archive's outage.
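For anyone curious what the workflow actually looks like, here's a rough sketch of the CLI side; double-check the commands against the official docs, and note that pip is just one of several install options (there's also a Docker image):

```
pip install archivebox                      # or run it via their Docker image instead
mkdir ~/archive && cd ~/archive
archivebox init                             # set up the collection in this folder
archivebox add 'https://example.com/page'   # snapshot a page you want to keep
archivebox server 0.0.0.0:8000              # browse everything through the web UI
```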
Sorry for the late response, but yes, I believe you can. There is an option in the config called `allow_public_upload` which can be changed to true or false.
YaCy, Mwmbl, Alexandria, Stract, and Marginalia, to name a few.
I would try what the other commenter here said first. If that doesn't fix your issue, I would try using the Forge version of WebUI (a fork of that WebUI with various memory optimizations, native extensions and other features): https://github.com/lllyasviel/stable-diffusion-webui-forge. This is what I personally use.
I use a 6000-series GPU instead of a 7000-series one, so the setup may be slightly different for you, but I'll walk you through what I did for my Arch setup.
Me personally, I skipped that Wiki section on AMD GPUs entirely, and the WebUI still seems to respect and utilize my GPU just fine. Simply running the `webui.sh` file will do most of the heavy lifting for you (you can see in the `webui.sh` file that it uses specific configurations and ROCm versions for different AMD GPU series like Navi 2 and 3).
- Git clone that repo: `git clone https://github.com/lllyasviel/stable-diffusion-webui-forge stable-diffusion-webui` (the `stable-diffusion-webui` directory name is important; `webui.sh` seems to reference that directory name specifically).
- From my experience it seems `webui.sh` and `webui-user.sh` are in the wrong spot, so make links to them so they sit at the same level as the `stable-diffusion-webui` directory you created: `ln stable-diffusion-webui/webui.sh webui.sh` (ditto for `webui-user.sh`; add `-s` if you'd rather have symlinks).
- Edit the `webui-user.sh` file. You don't really have to change much in here, but I would recommend `export COMMANDLINE_ARGS="--theme dark"` if you want to save your eyes from burning.
- Here's where things get a bit tricky: you will have to install Python 3.10, and there are warnings that newer versions of Python will not work. I tried running the script with Python 3.12 and it failed trying to grab specific pip dependencies. I use the AUR for this: `yay -S python310` or `paru -S python310`, or whatever method you use to install packages from the AUR. Once you do that, edit `webui-user.sh` so that `python_cmd` looks like this: `python_cmd="python3.10"`.
- Run the `webui.sh` file: `chmod u+x webui.sh`, then `./webui.sh`.
- Setup will take a while; it has to download and install all dependencies (including a model checkpoint, which is multiple gigabytes in size). If you notice it erroring out at some point, try deleting the entire `venv` directory from within the `stable-diffusion-webui` directory and running the script again. That actually worked in my case; I'm not really sure what went wrong...
- After a while, the web UI will launch. If it doesn't automatically open your browser, you can check the console for the URL; it's usually `http://127.0.0.1:7860`. Select the proper checkpoint in the top left, write a test prompt, and hopefully it should be pretty speedy, considering your GPU.
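Putting it all together, the whole process condenses down to something like this (same steps as above; yay is just my AUR helper of choice):

```
git clone https://github.com/lllyasviel/stable-diffusion-webui-forge stable-diffusion-webui
ln stable-diffusion-webui/webui.sh webui.sh
ln stable-diffusion-webui/webui-user.sh webui-user.sh
yay -S python310                 # Python 3.10 from the AUR

# In webui-user.sh, set:
#   python_cmd="python3.10"
#   export COMMANDLINE_ARGS="--theme dark"

chmod u+x webui.sh
./webui.sh                       # first run downloads all dependencies + a model checkpoint
# If it errors out partway: rm -rf stable-diffusion-webui/venv and run ./webui.sh again
```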
Yes, I torrent on the same machine where all my personal stuff is. The biggest reason for this is that I don't have a dedicated machine to torrent 24/7, though I'd definitely like to set that up at some point. I like being able to seed niche torrents to those who need them, and a machine seeding 24/7 would definitely help with that. Also, having easy access to the downloaded files is always a plus, but there's a myriad of ways to do that over a local network (pretty sure some torrent clients even have an option to torrent over LAN).
My torrent client is bound to my VPN’s network interface, and my VPN has a killswitch as well, so I’m not paranoid that things will suddenly leak. Been running this setup for months now without issues.
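In case anyone wants to replicate the binding part, it's roughly this; qBittorrent is just an example client, the interface name depends on your VPN, and the menu path is from memory:

```
# Find the VPN's network interface (commonly tun0 for OpenVPN, wg0 for WireGuard):
ip -brief addr show

# With the VPN up, your public IP should be the VPN's, not your ISP's:
curl ifconfig.me

# In qBittorrent: Tools -> Preferences -> Advanced -> "Network interface" -> pick tun0/wg0.
# Once bound, torrent traffic simply stops if the VPN interface goes down.
```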
`zellij attach --create-background`
Nice; this was the only thing preventing me from making a full switch from tmux to zellij.
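For anyone else making the jump, usage looks roughly like this (the session name is just an example):

```
# Start a session in the background without attaching to it:
zellij attach --create-background work

# Later, see what's running and hop in when you actually need it:
zellij list-sessions
zellij attach work
```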
I think that is completely normal. I run Arch on my main desktop, OpenSUSE Tumbleweed on my laptop and Debian on any and all servers I host. And I think they all work wonderfully. Even outside of these distros, I can still see the use case for many other distros. I think many popular distros each have a specific goal in mind and they execute it well.
I'm not sure how familiar you are with computers in general, but I think the best way to explain Docker is to explain the problem it's looking to solve. I'll try and keep it simple.
Imagine you have a computer program. It could be any program; the details aren't important. What is important, though, is that the program runs perfectly fine on your computer, but constantly errors or crashes on your friend's computer.
Reproducibility is really important in computing, especially if you're the one actually programming the software. You have to be certain that your software is stable enough for other people to run without issues.
Docker helps massively simplify this dilemma by running the program inside a 'container', which is basically a way to run the exact same program with the exact same operating system and 'system components' installed (if you're more tech savvy: packages, libraries, dependencies, etc.), so that, best-case scenario, your program runs on as many different computers as possible. You wouldn't have to worry about whether your friend forgot to install some specific system component to get the program running, because Docker handles it for you. There is nuance here of course, like CPU architecture, but for the most part, Docker solves this 'reproducibility' problem.
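To make that concrete, here's a tiny sketch: you and your friend can both run the exact same image and get the exact same environment, no matter what's installed on either host (the image here is just an arbitrary example):

```
# Same image, same libraries, same behavior on any machine with Docker installed:
docker run --rm python:3.10-slim python --version
# Both machines report the identical Python (and underlying Debian userland),
# regardless of what is or isn't installed on the host itself.
```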
Docker is also nice when it comes to compiling the software, in addition to running it. You might have a program that requires 30 different steps to compile, and messing up even one step means the program won't compile. Then you'd run into the same exact problem where it compiles on your machine, but not your friend's. Docker can help solve this too: not only can it boil a 30-step process down to one or two commands for your friend to run, it also makes compiling the code much less prone to failure. This is usually what the `Dockerfile` accomplishes, if you ever happen to see those out in the wild in all sorts of software.

Also, since Docker puts things in 'containers', it limits what resources that program can access on your machine (which can be very useful). You can set it so that all the files it creates are saved inside the container and don't affect your 'host' computer. Or maybe you only want to give it permission to a few very specific files. Maybe you want to do something like share your computer's timezone with a Docker container, or prevent your Docker containers from being directly exposed to the internet.
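To illustrate the build side, here's a toy sketch; the app, file names, and base image are all hypothetical:

```
# Pin the environment in a Dockerfile, then build and run it the same way anywhere.
cat > Dockerfile <<'EOF'
# Exact OS + Python version every machine will use
FROM python:3.10-slim
WORKDIR /app
# Same dependency versions everywhere
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
EOF

docker build -t myapp .   # the whole multi-step build boils down to one command
docker run --rm myapp     # and it runs with the same environment on your friend's machine
```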
There are plenty of other things that make Docker useful, but I'd say those are the most important ones: reproducibility, ease of setup, containerization, and configurable permissions.
One last thing: Docker is comparable to something like a virtual machine, but the reason you'd want Docker over a virtual machine is much lower resource overhead. A VM might require you to allocate gigabytes of memory, multiple CPU cores, even a GPU, whereas Docker is designed to be much more lightweight in comparison.
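As a rough illustration of those last points (permissions and overhead), the knobs look something like this; the image name and paths are placeholders:

```
# What each flag below does:
#   -v ...:/data             only this one host folder is visible inside the container
#   -v /etc/localtime ...    share the host's timezone, read-only
#   --network none           no direct network exposure at all
#   --memory / --cpus        cap resources; far lighter than dedicating a whole VM
docker run --rm \
  -v "$HOME/myapp-data:/data" \
  -v /etc/localtime:/etc/localtime:ro \
  --network none \
  --memory 512m --cpus 1 \
  myapp
```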