wewbull

joined 1 year ago
[–] wewbull@feddit.uk 4 points 2 months ago (2 children)

Memory ownership isn't the only source of vulnerabilities. It's a big issue, sure, but don't think Rust code is invulnerable.

[–] wewbull@feddit.uk 4 points 2 months ago (1 children)

That's disingenuous though.

  • We're not forcing you to learn Rust. We'll just place code in your security-critical project in a language you don't know.

  • Rust is a second-class citizen, but we feel Rust is the superior language and all code should eventually benefit from its memory safety.

  • We're not suggesting that code needs to be rewritten in Rust, but Linux kernel development must internalise the need for memory-safe languages.

No other language community does what the Rust community does. Haskellers don't go to the Emacs project and say "We'd like to write Emacs modules, but we think Haskell is a much nicer and safer functional language than Lisp, so how about we add the capability of using Haskell alongside Lisp?". Pythonistas didn't add Python support to Rails alongside Ruby.

Rusties seem to want to convert everyone by Trojan-horsing their way into communities. It's extremely damaging, both to those communities and to Rust itself.

[–] wewbull@feddit.uk 1 points 2 months ago (1 children)

The question was "How do you define GPL compatible?". The answer to that question has nothing to do with code being split between files. Two licenses are incompatible if they can't both apply at the same time to the same thing.

[–] wewbull@feddit.uk 5 points 2 months ago (3 children)

...because they are incompatible licenses.

[–] wewbull@feddit.uk 3 points 2 months ago (5 children)

Not under a license which prohibits also licensing under the GPL, i.e. one that imposes no conditions beyond what the GPL specifies.

[–] wewbull@feddit.uk 3 points 2 months ago

Glad it's not just my mind that saw a suicide pact in the making.

[–] wewbull@feddit.uk 4 points 3 months ago

My experience is that AMD's virtual memory system for VRAM is buggy and those bugs cause kernel crashes. A few tips:

  1. If running both cards is overstressing your PSU you might be suffering from voltage drops when your GPU draws maximum power. I was able to run games absolutely fine on my previous PSU, but running diffusion models caused it to collapse. Try just a single card to see if it helps stability.

  2. Make sure your kernel is as recent as possible. There have been a number of fixes in the 6.x series, and I have seen stability go up. Remember: docker images still use your host OS kernel.

  3. If you can, disable the desktop (e.g. systemctl isolate multi-user.target) and run the web GUI over the network from another machine. If you're running ComfyUI, that means adding --listen to the command-line options. It's normally the desktop environment that causes the crashes, when it tries to access something in VRAM that has been swapped out to normal RAM to make room for your models. Giving the whole GPU to the one task boosts stability massively. It's not the desktop environment's fault; the GPU driver should handle the situation.

  4. When you get a crash, often it's just the GPU that has crashed and not the machine (this won't be true of a power supply issue). SSHing in and shutting down cleanly can save your filesystems the trauma of a hard reboot. If you don't have another machine, grab an SSH client for your phone, like JuiceSSH on Android. (Not affiliated, it just works for me.)

  5. Using rocm-smi to reset the card after a crash might bring things back, but not always. Obviously you have to do this over the network as your display has gone.

  6. Be aware of your VRAM usage (amdgpu_top) and try to avoid overcommitting it. It sucks, but everything goes better if you can avoid swapping VRAM. Low-memory modes in the tools can help: ComfyUI has --lowvram, for example, which more aggressively frees things from VRAM when it has finished using them. It slows down generations a bit, but that's better than crashing.
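The headless workflow in tips 3–5 can be sketched as a shell session. This is only an illustration: the ComfyUI path, hostname, and GPU index are assumptions, and the exact rocm-smi flags can vary between ROCm versions.

```shell
# On the GPU box: stop the desktop so nothing else competes for VRAM
sudo systemctl isolate multi-user.target

# Launch ComfyUI bound to all interfaces so another machine can reach the web GUI
# (--lowvram is optional; it trades some speed for fewer VRAM overcommits)
cd ~/ComfyUI && python main.py --listen 0.0.0.0 --lowvram

# From a second machine (or a phone SSH client) after a GPU hang:
ssh user@gpu-box
amdgpu_top                  # see what VRAM usage looks like, if the card still responds
rocm-smi --gpureset -d 0    # try resetting GPU 0; may or may not bring it back
sudo shutdown now           # a clean shutdown beats a hard power cycle
```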

With this I've been running SDXL on an 8GB RX 7600 pretty successfully (~1s per iteration). I've been thinking about upgrading but I think I'll wait for the RX 8000 series now. It's possible the underlying problem is something in the GPU hardware, as AMD are definitely improving things with software changes but not solving it once and for all. I'm also hopeful that they will upgrade the VRAM across the range. The 16GB 7600XT says to me that they know <16GB isn't practical anymore, so the high-end also has to go up, right?

[–] wewbull@feddit.uk 5 points 3 months ago

It's meant to be that malloc fails and the application handles it.

Trouble is, applications are written expecting it never to fail.

[–] wewbull@feddit.uk 1 points 3 months ago

Training data is the source. Not the 20 lines of python that get supplied with a model.

[–] wewbull@feddit.uk 2 points 3 months ago

A generative AI's only purpose is to generate "works", so its only purpose in consuming "works" is to use them as reference. It exists to produce derivative works. Therefore the person feeding the original work into the machine is the one making the choice about how that work will be used.

A human can consume a "work" simply to admire it, to be entertained by it, to be educated by it, to have an emotion evoked, or, finally, to produce another work based on it. Here the consumer of the work is the one deciding how it will be used; they are the ones responsible.

[–] wewbull@feddit.uk 0 points 3 months ago (1 children)

I would disagree, because I don't see the research into AI as something of value to preserve.

[–] wewbull@feddit.uk 6 points 3 months ago

Breaking back.
