Yeah, I'm currently using that one, and I would happily stick with it, but it just seems AMD hardware isn't on par with Nvidia when it comes to ML
Just take a look at the Stable Diffusion benchmarks:
Now I'm actually considering that one as well. Or I'll wait a generation, I guess; maybe by then Radeon will at least be comparable to NVIDIA in terms of compute/ML.
Damn you NVIDIA
Yeah, I was just reading about it and it kind of sucks, since one of the main reasons I wanted to go Wayland was multi-monitor VRR, and I can see that's also an issue without explicit sync :/
Interesting thought; maybe it's a mix of both of those factors? I mean, I remember using AI to work with images a few years back when I was still studying. It was mostly detection and segmentation though, but generation seems like a natural next step.
And image generation definitely doesn't suffer from a lack of funding and resources nowadays.
I mean, we didn't choose it directly; it just turns out that's what AI seems to be really good at. Companies firing people because it's 'cheaper' this way (despite the fact that the tech is still not perfect) is another story tho.
Could be both of those things as well.