wewbull

joined 2 years ago
[–] wewbull@feddit.uk 4 points 5 months ago

It's the coal they're burning.

[–] wewbull@feddit.uk 9 points 5 months ago* (last edited 5 months ago) (6 children)

UK for comparison (Average over year)

Source         GW     %
Coal           0.18   0.6
Gas            8.31   27.7
Solar          1.52   5.1
Wind           9.36   31.1
Hydroelectric  0.41   1.4
Nuclear        4.36   14.5
Biomass        2.15   7.1

Edit: Imports are the remainder
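
For anyone checking the arithmetic on that edit, a throwaway sketch (the shares are copied from the table above; the ~12.5% import figure just falls out of them):

```cpp
// Sum the listed generation shares; whatever is left of 100% is imports.
#include <cstdio>

int main() {
    // % shares from the table: coal, gas, solar, wind, hydro, nuclear, biomass
    double shares[] = {0.6, 27.7, 5.1, 31.1, 1.4, 14.5, 7.1};
    double total = 0.0;
    for (double s : shares) total += s;
    std::printf("Listed: %.1f%%  ->  imports: ~%.1f%%\n", total, 100.0 - total);
    return 0;
}
```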

[–] wewbull@feddit.uk 5 points 6 months ago

> …enabling unmodified CUDA applications to run on AMD GPUs at near-native performance. The ZLUDA atop AMD HIP code was made available and open-source following the end of the AMD contract.

Trouble is... HIP doesn't support all of AMD's GPUs. In the consumer line-up it's only the 7900 series.
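
If you want to see what the runtime thinks of your card, here's a minimal sketch, assuming a working ROCm install (compile with hipcc). It prints each GPU's architecture string so you can check it against ROCm's support matrix (gfx1100 is the RX 7900 series):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
        std::printf("No HIP-capable devices found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;
        // gcnArchName is the LLVM target, e.g. "gfx1100" on an RX 7900 XTX
        if (hipGetDeviceProperties(&prop, i) == hipSuccess)
            std::printf("Device %d: %s (%s)\n", i, prop.name, prop.gcnArchName);
    }
    return 0;
}
```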

[–] wewbull@feddit.uk 65 points 6 months ago (25 children)

Not sure I'd be trusting Musk's communication network at this point. Especially not in Ukraine.

[–] wewbull@feddit.uk 6 points 6 months ago (1 children)

> Matrix math is just stupid for whatever you pipe through it. It does the input, and gives an output.

Indeed.

> That is exactly what all these "NPU" co-processing cores are about from AMD, Intel, and, in a further subset, Amazon and Google with whatever they're calling their chips now. They are all about an input and output for math operations as fast as possible.

Yes, they are all matrix math accelerators, and none of them have any FPGA aspects.
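
For anyone wondering what that "input in, output out" actually means, here's a toy version of the one operation all these chips are built around. Real hardware runs this over matrices thousands of elements on a side, at low precision, but the arithmetic is no smarter than this:

```cpp
// Naive (m x k) * (k x n) matrix multiply, row-major. The triple loop of
// multiply-accumulates is the entire workload an NPU exists to accelerate.
#include <vector>
#include <cstdio>

std::vector<float> matmul(const std::vector<float>& a,
                          const std::vector<float>& b,
                          int m, int k, int n) {
    std::vector<float> c(m * n, 0.0f);
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j) {
            float acc = 0.0f;
            for (int p = 0; p < k; ++p)
                acc += a[i * k + p] * b[p * n + j];  // multiply-accumulate
            c[i * n + j] = acc;
        }
    return c;
}

int main() {
    std::vector<float> a = {1, 2, 3, 4, 5, 6};     // 2x3
    std::vector<float> b = {7, 8, 9, 10, 11, 12};  // 3x2
    auto c = matmul(a, b, 2, 3, 2);
    std::printf("%g %g\n%g %g\n", c[0], c[1], c[2], c[3]);  // 58 64 / 139 154
    return 0;
}
```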

[–] wewbull@feddit.uk 10 points 6 months ago* (last edited 6 months ago) (4 children)

I know exactly what they are. I design CPUs for a living, use FPGAs to emulate them, and have worked on GPUs and many other ASICs in the past.

FPGAs can accelerate certain functions, yes, but neural net evaluation is basically massive matrix multiplies, and that's something GPUs are already highly optimised for. Hence why I asked what circuit you'd put on the FPGA. Unless you can accelerate the algorithmic evaluation by several orders of magnitude, the inefficiency of FPGAs vs. ASICs will cripple you.
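
Rough numbers to show the size of that gap. Every figure below is an illustrative assumption for "a big GPU" vs. "a big FPGA", not a spec for any real part:

```cpp
// Back-of-envelope only: throughput ~= clock speed * parallel MAC units.
#include <cstdio>

int main() {
    // Hypothetical GPU with tensor units: ~2 GHz, ~100k low-precision MACs/clock.
    double gpu  = 2.0e9 * 100000;  // ~2e14 MAC/s
    // Hypothetical large FPGA: ~0.4 GHz, ~6k DSP slices doing one MAC each.
    double fpga = 0.4e9 * 6000;    // ~2.4e12 MAC/s
    std::printf("GPU/FPGA: ~%.0fx\n", gpu / fpga);  // ~83x, i.e. roughly two orders of magnitude
    return 0;
}
```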

[–] wewbull@feddit.uk 12 points 6 months ago (7 children)

I don't really see how an FPGA has a role to play here. What circuit are you going to put on it? If it's tensor multipliers, even at low precision, a GPU will be an order of magnitude faster just on clock speed, and another in terms of density.

What we've got right now has almost nothing to do with Python, and everything to do with the compute density of GPUs crossing a threshold. FPGAs are lower density and slower.
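
To be concrete about what "tensor multipliers at low precision" means, each output element is just this. A sketch only, but int8 inputs with a widened int32 accumulator is the shape of operation GPU tensor cores run natively:

```cpp
#include <cstdint>
#include <cstdio>

// One output element of a low-precision tensor multiply: int8 inputs,
// int32 accumulator so the running sum doesn't overflow.
int32_t dot_int8(const int8_t* a, const int8_t* b, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; ++i)
        acc += int32_t(a[i]) * int32_t(b[i]);  // multiply-accumulate
    return acc;
}

int main() {
    int8_t a[] = {1, -2, 3, 4};
    int8_t b[] = {5, 6, -7, 8};
    std::printf("%d\n", (int)dot_int8(a, b, 4));  // 5 - 12 - 21 + 32 = 4
    return 0;
}
```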

[–] wewbull@feddit.uk 7 points 6 months ago

Well, Nvidia and AMD will get their money (and possibly some other investors' money too) straight back in equipment purchases.

[–] wewbull@feddit.uk 17 points 6 months ago

And LLMs themselves.

[–] wewbull@feddit.uk 1 points 6 months ago

They're not dead. Not yet. I think the next 6 months will be interesting for them.

The current batch of stories seems to be coming from disclosures made during a tribunal case over the unfair dismissal of the CEO at the time. I think there's a lot of pearl-clutching going on in the reporting here. Their IP just isn't on the same scale as that of companies like Nvidia and AMD, so I don't see how there could possibly be much dirty laundry here.

[–] wewbull@feddit.uk 6 points 6 months ago

Thanks for coming back and letting others know what your solution was.

[–] wewbull@feddit.uk 11 points 6 months ago (2 children)

Boot from a USB stick with a Live environment on it. See if you get the same issue.
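
If you've never made one: roughly this, assuming a downloaded ISO. `/dev/sdX` is a placeholder; triple-check it with lsblk first, because dd will overwrite whatever you point it at.

```sh
lsblk                                  # find the stick, e.g. /dev/sdb
sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress conv=fsync
```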
