this post was submitted on 22 Dec 2023

AI Generated Images

And how much do they cost? And how do you like them?

[–] meteokr@community.adiquaints.moe 11 points 11 months ago (2 children)

I use Stable Diffusion with automatic1111's webui, run locally on an AMD GPU. I use the card for gaming and encoding too, so the cost for just AI is basically free. The webui is excellent, and I learn about new things it can do every time I use it. Setting it up took some time, but nothing beyond what I'm familiar with. I do loathe that so much data science/AI stuff is Python based, because Python's dependency management is an unruly beast, but oh well.

[–] tal@lemmy.today 2 points 11 months ago* (last edited 11 months ago) (1 children)

because python’s dependency management is an unruly beast

Note that Automatic1111 is, by default, set up to run in a venv -- a sort of little isolated Python install -- so it won't smack into the system packages, at any rate.
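
For anyone curious what that isolation looks like, here's a minimal sketch of the venv pattern itself (not Automatic1111's actual launcher, just the general mechanism):

```shell
# Create an isolated Python install in ./venv; anything installed with
# ./venv/bin/pip stays inside that folder instead of the system site-packages.
python3 -m venv venv

# The venv's interpreter reports its own prefix, separate from the base install.
./venv/bin/python -c 'import sys; print(sys.prefix != sys.base_prefix)'  # True
```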

I think that the current version of Automatic1111 -- I'm running off the dev branch -- also pulled down the appropriate ROCm pytorch that AMD wants into its little venv, but I'm pretty sure that I recall needing to manually install that some months back, on an older version of Automatic1111. Other than that, I don't think that I had to do anything significant with Python packages; there's just a script that one runs to launch the package, and it also automatically downloads anything it needs the first time.
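
For reference, that manual step looked roughly like this; the ROCm version tag below is illustrative, so check pytorch.org for the wheel that matches your setup:

```shell
# Inside the webui's venv: pull the ROCm build of PyTorch instead of the
# default CUDA one. The rocm5.6 tag is an example, not a recommendation.
./venv/bin/pip install torch torchvision \
    --index-url https://download.pytorch.org/whl/rocm5.6
```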

[–] meteokr@community.adiquaints.moe 1 points 11 months ago

by default, set up to run in a venv

It does, but since I'm running inside a container, I disable that behavior, and run it as a user package. Some extensions also require additional libraries, but they don't pull the correct ROCm dependencies and I have to modify part of the install scripts to manually define the correct versions.

The main webui code is excellent, even if the documentation is sometimes out of step because of how fast everything moves. It's the extensions, which aren't always up to the same level of quality, that make fiddling with Python dependencies a bit of extra work.

[–] AnonStoleMyPants@sopuli.xyz 1 points 11 months ago (1 children)

You use it on Linux? I have used it on Windows (6700xt) and it is slow af (2 it/s or even in the s/it range). Apparently it should be a lot faster on Linux, but I haven't tested that.

[–] meteokr@community.adiquaints.moe 3 points 11 months ago* (last edited 11 months ago) (1 children)

I run it in a container on a NixOS host, yes. Eventually I'll learn how to do it in a flake, but my Nix skills aren't quite there yet. EDIT: I use a 6900xt, and some quick runs I did give me roughly 10 it/s, which feels reasonably fast: only a couple of seconds per image.

[–] Reverendender@sh.itjust.works 3 points 11 months ago (1 children)

I recognize that those are words and numbers…

[–] meteokr@community.adiquaints.moe 5 points 11 months ago (1 children)

No worries, I'll link to some Arch Wiki stuff to help explain. Containers are a very cool system for isolating environments. Similar to how Python uses a venv to contain all the dependencies for a Python program, containers let you have a full environment beyond just the Python stuff. I use podman to actually run the container on my computer. You use a Containerfile to define what you want this environment to look like, and docker/podman does all the hard work for you by making an image file that holds the whole thing in one place, separate from your real OS.
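
As a hypothetical sketch (not my actual Containerfile), the build side of that workflow looks something like this:

```shell
# Write a minimal Containerfile and build it into a named image.
# The base image and contents are placeholders for illustration.
cat > Containerfile <<'EOF'
FROM docker.io/library/python:3.11
WORKDIR /dockerx
# ...clone stable-diffusion-webui and install its dependencies here...
EOF

# Build the image and tag it with the name the start script refers to.
podman build -t localhost/stablediffusion:latest .
```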

This is my start script.

#!/usr/bin/env bash

podman run -it --rm --name stablediff2 -p 7860:7860 \
        -e COMMANDLINE_ARGS="--api --listen --port 7860 --enable-insecure-extension-access --medvram-sdxl --cors-allow-origins *" \
        --device /dev/dri:/dev/dri \
        --device /dev/kfd:/dev/kfd \
        -v ./models:/dockerx/stable-diffusion-webui/models:z \
        -v ./repos:/dockerx/stable-diffusion-webui/repositories:z \
        -v ./extensions:/dockerx/stable-diffusion-webui/extensions:z \
        -v ./embeddings:/dockerx/stable-diffusion-webui/embeddings:z \
        -v ./outputs:/dockerx/stable-diffusion-webui/outputs:z \
        -v ./inputfiles:/dockerx/stable-diffusion-webui/inputfiles:z \
        localhost/stablediffusion:latest

This is just telling podman to start the container, give it an interactive terminal to connect to, remove the container when it stops running, give it a name, and publish port 7860 so the webui is reachable from the host.

podman run -it --rm --name stablediff2 -p 7860:7860

These are the arguments passed to the webui start script itself, mostly for my own convenience. The medvram-sdxl isn't required, since my card has enough VRAM, but without it I can't be doing anything else with the card. So I sacrifice a bit of generation speed for more free memory for the rest of my computer. I'm running this locally and I'm the only one using it, so the insecure extension access also doesn't matter; it just makes it possible to install extensions directly from the webui.

-e COMMANDLINE_ARGS="--api --listen --port 7860 --enable-insecure-extension-access --medvram-sdxl --cors-allow-origins *" \

These are just the device files that correspond to my GPU, so that the container has access to it. Without this, the container would only be able to do CPU-based generation. Everything else is just the folders that hold my models, extensions, etc. You have to give the container exactly what you want it to have, because it's isolated away from your normal files unless you tell it otherwise.

--device /dev/dri:/dev/dri \
--device /dev/kfd:/dev/kfd \
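
A quick way to sanity-check this on the host (illustrative; those paths are the standard ROCm and DRI device nodes):

```shell
# Report whether the GPU device nodes the container needs actually exist.
for dev in /dev/kfd /dev/dri; do
    if [ -e "$dev" ]; then
        echo "$dev: present"
    else
        echo "$dev: missing"
    fi
done
```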

This is iterations per second. It's basically a measure of how fast Stable Diffusion is running a particular image generation, and it lets people compare performance across different software and hardware configurations.

10 it/s
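
The back-of-the-envelope math, with an illustrative step count:

```shell
# At 10 iterations/second, a 20-step generation takes about 2 seconds.
# The step count is an example; it depends on the sampler and settings.
steps=20
rate=10   # it/s
echo "$((steps / rate)) seconds per image"   # prints: 2 seconds per image
```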

NixOS is the name of the GNU/Linux operating system I'm using; similar to how MacOS is different from Windows, NixOS is another type of operating system. I've only been using it for a few months, but it's extremely cool. Before that I mostly used Debian and Fedora, and the main difference between NixOS and them is that you can define your whole OS as configuration files, and then the tools it's designed around build your system for you. So instead of, say, installing a program, opening it up, going into settings, and changing everything to be how you like it, you can just make a file that lists everything the way you want it from the start, and Nix installs the program and sets it all up in one go. It has a pretty big learning curve, and its features are so numerous that I have yet to take full advantage of them. Probably not the best to start with if you are new to GNU/Linux systems, but once you see the benefits of why it does things differently, it's awesome.
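
Concretely, the day-to-day loop is: edit the system config file, then ask Nix to build and activate it. These are the standard NixOS commands; the package example is illustrative:

```shell
# After adding e.g. podman to environment.systemPackages in
# /etc/nixos/configuration.nix, one command rebuilds and activates everything:
sudo nixos-rebuild switch

# If the new system misbehaves, roll back to the previous generation:
sudo nixos-rebuild switch --rollback
```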

Hopefully that explains most of the words I used. Pardon my formatting, as I don't know markdown very well, but I think I separated everything okay. :)

[–] Reverendender@sh.itjust.works 3 points 11 months ago

Wow, this is fascinating. Looks like I will need to devote some time to this.