FaceDeer

joined 8 months ago
[–] FaceDeer@fedia.io 7 points 5 months ago (3 children)

Except it's not a bribe. It's entirely above-board: the money they're paying is a fine. They're not "getting out of a ticket"; they're paying the ticket.

[–] FaceDeer@fedia.io 4 points 5 months ago

I think you've misinterpreted. The trial is going ahead; it's just not going to be a jury trial, because the only thing the jury would be there for is to determine damages. Since Google is preemptively paying the full fine the prosecution was asking for, there'd be literally nothing for the jury to do. It'd be a complete waste of time. The trial will instead be a bench trial, decided by a judge alone.

They weren't "getting out of" anything with this payment. The "hell" that Google was facing was exactly the fine that they paid.

[–] FaceDeer@fedia.io 3 points 5 months ago (7 children)

Did you read the article this thread is about? The sub-headline is:

The tool will be opt-in, so Copilot+ PCs won’t screenshot your activity without permission.

[–] FaceDeer@fedia.io 3 points 5 months ago

Windows Update is automatic by default.

[–] FaceDeer@fedia.io 76 points 5 months ago* (last edited 5 months ago) (4 children)

Sounds a bit unusual, but not unfair - Google just preemptively paid all of the damages that the government was seeking in this particular case, which is the only thing the jury would have been needed to determine. So having a jury would be a complete waste of the jury's time. The rest of the case would be up to the judge anyway.

If the prosecutor thinks they could get more now, maybe they should have asked for more earlier. I think this may have been a miscalculation on the prosecution's side.

[–] FaceDeer@fedia.io 5 points 5 months ago

Frankly, this is one of the areas where I'm most looking forward to seeing what integrated AI can do for Windows. A couple of months ago I was having trouble getting my printer to work, and what I ended up doing was taking a screenshot of the printer settings and pasting the literal image of the screen into Bing Chat to ask what I was doing wrong. It was able to parse my settings out of the image and figure out what I needed to change to make the printer work.

Having a troubleshooting AI like that, one that can actually "read" the entire state of my machine, would be great.
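
Bing Chat doesn't expose a public API, but the same trick is easy to script against any vision-capable chat model. Here's a rough sketch using OpenAI's Python SDK as a stand-in (the model name, file path, and prompt are just examples, not anything Microsoft ships):

```python
# Rough sketch: send a settings screenshot to a vision-capable chat model
# and ask it what's wrong. OpenAI's SDK stands in for Bing Chat here;
# the model name and file path are placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("printer_settings.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "These are my printer settings. What am I doing wrong?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```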

[–] FaceDeer@fedia.io 25 points 5 months ago (3 children)

Don't be so sure. This forum is a bubble; 99% of Windows users have never heard of this feature in the first place, let alone any of the details about how it works.

[–] FaceDeer@fedia.io -1 points 5 months ago (1 children)

We had a good solid enraged mob going here, and Microsoft is ruining it! The bastards!

[–] FaceDeer@fedia.io 3 points 5 months ago (2 children)

I've actually had those troubleshooters work for me several times in recent years. Mostly fixing networking issues.

[–] FaceDeer@fedia.io 15 points 5 months ago (1 children)

Aside from it not really working, though.

Glaze attempts to "poison" AI training by adding adversarial noise that tricks image-recognition AIs into perceiving the image as something it's not, so that any description generated for the image will be incorrect and the AI will be trained on bad labels. There are a couple of problems with this, though. The adversarial noise is tailored to specific image-recognition AIs, so it's not future-proof. It also isn't going to have an impact on the AI unless a large portion of the training images are "poisoned", which isn't the case for typical training runs with billions of images. And it's relatively fragile against post-processing such as rescaling, which is commonly done as an automatic part of preparing data for training. On top of that, it adds noticeable artefacts to the image, making it look a bit worse to the human eye as well.
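
To illustrate that last point about post-processing: here's a minimal sketch (assuming Pillow) of the kind of preprocessing a training pipeline routinely applies. Carefully tuned per-pixel perturbations generally don't survive steps like these:

```python
# Minimal sketch of routine dataset preprocessing (assuming Pillow).
# Per-pixel adversarial perturbations rarely survive these steps.
from PIL import Image

img = Image.open("glazed_artwork.png")

# Rescaling resamples and averages neighbouring pixels,
# smearing out carefully tuned per-pixel noise.
img = img.resize((512, 512), Image.LANCZOS)

# Lossy JPEG re-encoding discards high-frequency detail,
# which is exactly where the adversarial noise lives.
img.convert("RGB").save("prepared_for_training.jpg", quality=85)
```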

There's a more recent algorithm called Nightshade, but I'm less familiar with its details since it got a lot less attention than Glaze, and IIRC the authors tried to keep some of its details secret so that AI trainers couldn't develop countermeasures. There was a lot of debate over whether it even worked in the first place, since it's not easy to test something like this when there's little information about how it functions, and training a model just to see if it breaks is expensive. Given that these algorithms have been available for a while now but image AIs keep getting better, I think that shows that, whatever the details, they're not having the desired effect.

Part of the reason Cara is probably facing such financial hurdles is that it's computationally expensive to apply these things. They were also automatically running "AI detectors" on images, which are expensive and unreliable. It's an inherently expensive site to run even if they were doing it efficiently.
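
For a sense of scale, every detector check is a full model inference per upload. A sketch of what that looks like with the Hugging Face transformers pipeline (the model name is hypothetical; any image-classification checkpoint is invoked the same way):

```python
# Sketch of a per-upload "AI detector" check via the Hugging Face
# transformers pipeline. The model name is hypothetical.
from transformers import pipeline

detector = pipeline("image-classification",
                    model="some-org/ai-image-detector")

# One full forward pass of a vision model for every single upload,
# and the output is still just a fuzzy confidence score.
for prediction in detector("new_upload.png"):
    print(prediction["label"], prediction["score"])
```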

IMO they would have been much better served just adding "No AI-generated images allowed" to their ToS and relying on their users to police themselves and each other. Though given the witch-hunts I've seen and the increasing quality of AI art itself, I don't think that would really work for very long either.

[–] FaceDeer@fedia.io 20 points 5 months ago (3 children)

I get the sense that a federated image hosting/sharing system would run counter to their goal, which is to lock their art away from AI trainers. An AI trainer could just federate with them and they'd be handing their images over on a silver platter.

Of course, any site that's visible to humans is also visible to AIs in training, so it's not really any worse than their current arrangement. But I don't think they want to hear that either.
