this post was submitted on 15 Aug 2024
10 points (100.0% liked)

AI Generated Images

Hey, all:

I'm pretty new to AI image creation. I've mucked around in Dalle and gotten some decent results but recently found Fal.ai.

My question is related to LoRA models. While I get how to call them in general, e.g. `lora:dark_fantasy:1.3`, I can't seem to get them to work at all here:

https://fal.ai/models/fal-ai/flux-general
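
(For anyone who wants to poke at that endpoint from code rather than the web form, a rough, untested sketch with fal's Python client is below. The shape of the `loras` parameter is my guess from the model page, so treat it as an assumption rather than the documented API.)

```python
# Rough, untested sketch of calling the flux-general endpoint with a LoRA from Python.
# Assumptions: the fal-client package ("pip install fal-client", FAL_KEY in the env),
# and that the endpoint takes LoRAs as a "loras" list with "path" and "scale" fields
# rather than a webui-style "lora:dark_fantasy:1.3" prompt tag -- check the model page.
import fal_client

result = fal_client.subscribe(
    "fal-ai/flux-general",
    arguments={
        "prompt": "a dark fantasy castle at dusk, volumetric light",
        "loras": [
            {"path": "https://example.com/dark_fantasy.safetensors", "scale": 1.3},
        ],
    },
)
# The result shape is also an assumption; flux endpoints usually return a list of images.
print(result["images"][0]["url"])
```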

There's an overwhelming number of websites and models and blah blah out there, and I could really use a crash course in how to use these models.

Also, how do I train my own?

[–] Scipitie@lemmy.dbzer0.com 4 points 3 months ago (1 children)

No offense intended, it's possible that I misread your experience level:

I hear a user asking developer questions. Either you go the route of using the publicly available services (DALL-E and co.) or you start digging into hosting the models yourself. The page you linked hosts trained models for use in your own context, not for a "click a button and it works" experience.

As a starting point for self-hosting image generation I suggest https://github.com/AUTOMATIC1111/stable-diffusion-webui.
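
If the webui feels like too much as a first step, the bare-minimum way to generate an image from Python is only a handful of lines with Hugging Face's diffusers library. A rough sketch, with the model id purely as an example and a CUDA-capable GPU assumed:

```python
# Minimal local image generation with diffusers (a sketch, not a tuned setup).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint from the Hugging Face hub
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # needs an NVIDIA GPU and a CUDA build of PyTorch

image = pipe("a dark fantasy castle at dusk, highly detailed").images[0]
image.save("castle.png")
```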

For the training part, I'll be very blunt: if you don't intend to spend five- to six-digit sums on hardware or processing power, forget it. And even then you'd need the raw training data to pull it off.

Perhaps what you want to do is fine-tune a pretrained model; that's something I only have a bit of experience with for LLMs, though (and even there I don't have the hardware to get beyond a personal proof of concept).
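
On the LLM side, the parameter-efficient setup I mean looks roughly like this with the peft library; treat the base model, target modules and ranks as illustrative placeholders, not a recipe:

```python
# Sketch of a LoRA fine-tuning setup with Hugging Face peft -- illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # placeholder; pick whatever base model fits your hardware
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                        # low-rank dimension: keeps the trainable weights tiny
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; differs per architecture
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model

# ...then train with your usual Trainer / training loop on your own data.
```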

[–] Track_Shovel@slrpnk.net 3 points 3 months ago (2 children)

Yeah, I'm pretty much thinking of using pre-trained models to generate images in SD. I'm hugely inexperienced at all of this and very lost. There are a bunch of devs/power users who have gone to the nines writing this stuff and guiding other devs/super users, but for a schmoe like me it may as well be in Sanskrit.

A starting point on "hey, here's how to use SD, and a few models it has" would be very much appreciated. I've been trying to get LoRA models working, but no luck.

[–] Ziggurat@sh.itjust.works 2 points 3 months ago

Do you have a PC with a gaming GPU? If yes, you can run ComfyUI, AUTOMATIC1111 or Easy Diffusion on your PC. It's not that complicated (but it requires being familiar with installing and configuring software, which, considering Lemmy's audience, is most likely the case).

If not, there are tons of online services: some free (with restrictions), some with a subscription, some where you pay for image credits. You can check the FAQ of each service for details. Not all online SD services let you use LoRAs, don't ask me why.
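
For what it's worth, if you do go local, applying a LoRA on top of a base checkpoint from plain Python looks roughly like this with diffusers. The repo and file names are placeholders, and the exact scaling knob varies between diffusers versions, so take it as a sketch:

```python
# Sketch: loading a LoRA on top of a base Stable Diffusion checkpoint with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Roughly what the webui's "lora:dark_fantasy:1.3" prompt tag does for you:
pipe.load_lora_weights("path/to/dark_fantasy.safetensors")  # placeholder file

image = pipe(
    "a dark fantasy castle at dusk",
    cross_attention_kwargs={"scale": 1.3},  # LoRA strength, like the :1.3 above
).images[0]
image.save("castle_lora.png")
```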

[–] Scipitie@lemmy.dbzer0.com 1 points 3 months ago

I'd try ChatGPT for that! :)

But to give you a very brief rundown: if you have no experience in any of these aspects and are self-learning, you should expect a long ramp-up phase! Perhaps there is an easier route, but if there is, I'm not familiar with it.

First, familiarize yourself with server setups. If you only want to host this, you won't have to go into the network details, but they could become a source of errors at some point, so be warned! The usual tip here is to get familiar enough with Docker that you can read and understand docker-compose files. The de facto standard for self-hosting is a Linux machine, but I have read of people who used macOS and even Windows successfully.

One aspect quite unique to the model landscape is the hardware requirements. As much as it hurts my NVIDIA-despising heart, at this point in time they are the de facto monopolist. Get yourself a card with 12 GB of VRAM or more (everything below that will be painful, if you get things running at all; I've tried smaller models on an 8 GB card but experienced a lot of waiting time and crashes). Read a bit about CUDA on your chosen OS and what the drivers need.
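
A quick way to check what your card actually reports, assuming you already have a CUDA build of PyTorch installed:

```python
# Check GPU visibility and VRAM with PyTorch.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    if vram_gb < 12:
        print("Below the ~12 GB comfort zone -- expect slow runs or out-of-memory errors.")
else:
    print("No CUDA device visible -- check the NVIDIA driver and your PyTorch install.")
```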

Once you understand the whole port, container, path-mapping and environment-variable business, the rest is going to the GitHub page linked above, following their guide and starting a container. Loading models is actually the easier part once you have the infrastructure running.