this post was submitted on 09 Dec 2024
71 points (96.1% liked)
Technology
AMD GPUs are just as good as Nvidia GPUs. Intel's suck, but they're in the market too. The gaming market is irrelevant to Nvidia's and AMD's market caps and profits.
Nvidia's main advantage is their proprietary CUDA software, which means the majority of AI software runs only on Nvidia GPUs and is incompatible with AMD or Intel GPUs.
Exactly: too many people confuse the monopoly aspect with the consumer gaming stuff, which isn't even pocket change at this point.
CUDA and AI are the whales in the room, and nVidia has a stranglehold on that market and should be investigated.
(Even though, IMO, this is because AMD did their usual shitty job of software, and basically gave the market away.)
Yes, AMD completely overslept here and ROCm is far inferior. But regulators could at least force NVIDIA to open up the CUDA libraries and permit translation layers like ZLUDA.
Though I suspect they'd play the same card Microsoft did: obfuscating things and making them confusing enough to hinder portability.
I don't believe there's anything stopping AMD from re-implementing the CUDA APIs; in fact, I'm pretty sure this is exactly what HIP is for, even though it's not 100% automatic. AMD probably can't link against the CUDA libraries like cuDNN and cuBLAS, but I don't know that it would be useful to do that anyway, since I'm fairly certain those libraries have GPU-specific optimizations. AMD makes their own replacements for them anyway.
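To make the HIP point concrete, here's a minimal sketch of what the porting path looks like, assuming the standard HIP toolchain (hipcc, hip_runtime.h). The kernel source is hypothetical; the point is that device code compiles unchanged under both compilers, and only the host-side runtime prefixes change — which is essentially what AMD's hipify tools automate:

```cuda
// Vector-add kernel: this same source compiles under both nvcc (CUDA)
// and hipcc (HIP) -- the device-side language is shared.
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// Host side, CUDA:                 Host side, HIP (after hipify):
//   #include <cuda_runtime.h>        #include <hip/hip_runtime.h>
//   cudaMalloc(&d_a, bytes);         hipMalloc(&d_a, bytes);
//   cudaMemcpy(d_a, a, bytes, ...);  hipMemcpy(d_a, a, bytes, ...);
//   vec_add<<<grid, block>>>(d_a, d_b, d_c, n);  // same launch syntax
//   cudaDeviceSynchronize();         hipDeviceSynchronize();
//   cudaFree(d_a);                   hipFree(d_a);
```

So the translation is mostly mechanical renaming of runtime calls; the "not 100% automatic" part is things like inline PTX, CUDA-only library calls, and code tuned to NVIDIA-specific hardware behavior.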
IMO, the biggest annoyance with ROCm is that the consumer GPU support is very poor. On CUDA you can use any reasonably modern NVIDIA GPU and it will "just work." This means if you're a student, you have a reasonable chance of experimenting with compute libraries or even GPU programming if you have an NVIDIA card, but less so if you have an AMD card.