GlowHuddy

joined 1 year ago
[–] GlowHuddy@lemmy.world 14 points 7 months ago (1 children)

Could be both of those things as well.

[–] GlowHuddy@lemmy.world 13 points 8 months ago (1 children)

Yeah, I'm currently using that one, and I would happily stick with it, but it seems AMD hardware just isn't up to par with NVIDIA when it comes to ML.

Just take a look at the benchmarks for Stable Diffusion:
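If you'd rather sanity-check your own card than trust published numbers, a rough matmul throughput test is enough to see the gap. A minimal sketch (the matrix size, dtype, and iteration count are arbitrary choices for illustration, not an official benchmark); it runs unchanged on both vendors, since the ROCm build of PyTorch reuses the `torch.cuda` API:

```python
# Minimal sketch: rough GPU matmul throughput check with PyTorch.
# Works on both CUDA and ROCm builds of PyTorch.
import time
import torch

assert torch.cuda.is_available(), "No CUDA/ROCm-capable GPU visible to PyTorch"
device = torch.device("cuda")
print("Device:", torch.cuda.get_device_name(device))

n = 4096
a = torch.randn(n, n, device=device, dtype=torch.float16)
b = torch.randn(n, n, device=device, dtype=torch.float16)

# Warm up so one-time kernel compilation/caching doesn't skew the timing.
for _ in range(3):
    a @ b
torch.cuda.synchronize()

iters = 20
start = time.perf_counter()
for _ in range(iters):
    a @ b
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

flops = 2 * n**3 * iters  # multiplies + adds in an n x n matmul
print(f"~{flops / elapsed / 1e12:.1f} TFLOPS (fp16 matmul)")
```

The warm-up and the explicit `synchronize()` calls matter: GPU kernels launch asynchronously, so timing without synchronizing would measure launch overhead instead of actual compute.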

[–] GlowHuddy@lemmy.world 4 points 8 months ago (1 children)

Now I'm actually considering that one as well. Or I'll wait a generation, I guess; maybe by then Radeon will at least be comparable to NVIDIA in terms of compute/ML.

Damn you, NVIDIA

[–] GlowHuddy@lemmy.world 7 points 8 months ago (6 children)

Yeah, I was just reading about it, and it kind of sucks: one of the main reasons I wanted to go Wayland was multi-monitor VRR, and I can see that's also an issue without explicit sync :/

74
submitted 8 months ago* (last edited 8 months ago) by GlowHuddy@lemmy.world to c/linux@lemmy.ml
 

I currently have an RX 6700 XT and I'm quite happy with it for gaming and regular desktop usage, but I was recently doing some local ML stuff and was made aware of the huge gap NVIDIA has over AMD in that space.

But yeah, going back to NVIDIA (I used to run a 1080) after going AMD... feels kinda dirty to me ;-; I was very happy to move to AMD and finally be free of the walled garden.

My first thought was to just buy a second GPU and keep my 6700 XT for gaming while using the NVIDIA card for ML, but unfortunately my motherboard doesn't have two PCIe slots I could use for GPUs, so I need to choose. I could get a used RTX 3090 for a fair price; I don't want to go for the current gen because of its pricing.

So my question is: how is NVIDIA nowadays? I specifically mean Wayland compatibility, since I switched just recently and it would suck to go back to Xorg. Other than that, are there any hurdles, issues, or annoyances, or is it smooth and seamless nowadays? Would you upgrade in my case?

EDIT: Forgot to mention, I'm currently using GNOME on Arch (btw), since that might be relevant.
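In case it helps anyone answering: a quick way to confirm which session type a machine actually landed in. A minimal sketch that only inspects the standard `XDG_SESSION_TYPE` and `WAYLAND_DISPLAY` environment variables, nothing GNOME- or NVIDIA-specific:

```python
# Minimal sketch: confirm whether the current session is Wayland or Xorg.
# XDG_SESSION_TYPE is set by the session manager; WAYLAND_DISPLAY is only
# set inside a running Wayland session.
import os

session = os.environ.get("XDG_SESSION_TYPE", "unknown")
wayland_display = os.environ.get("WAYLAND_DISPLAY")

print(f"XDG_SESSION_TYPE = {session}")
print("WAYLAND_DISPLAY =", wayland_display or "not set (likely Xorg or a TTY)")
```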

[–] GlowHuddy@lemmy.world 4 points 10 months ago

Interesting thought; maybe it's a mix of both of those factors? I mean, I remember using AI to work with images a few years back when I was still studying, though that was mostly detection and segmentation. Generation seems like a natural next step.

But improving image generation certainly doesn't suffer from a lack of funding and resources nowadays.

[–] GlowHuddy@lemmy.world 10 points 10 months ago (2 children)

I mean, we didn't choose it directly; it just turns out that's what AI seems to be really good at. Companies firing people because it's 'cheaper' this way (despite the fact that the tech is still not perfect) is another story, though.