This is what Tumblr did too after they banned porn. It couldn't tell the difference between the Sahara Desert and boobs.
EldritchFeminity
Affinity is also on Windows - at least Designer is. I've been using it for a couple of years now.
That's what I was thinking. Deep fakes have existed since photo manipulation was invented, and Adobe hasn't cared one iota about it before. The only reason I can see for them to care now is if they think they can get in legal trouble for what people create with their products.
That's what I was thinking. Apart from the porn locked up in the Disney vault, big companies aren't in the business of making porn. And the companies that do aren't going to be interested in deep fakes. The people who are using Photoshop to create porn are small fries to Adobe. Deep fake porn has been around as long as photo manipulation has, and Adobe hasn't cared before.
Bearing that in mind, I don't think this policy has anything to do with AI deep fakes or porn. I think it's more likely about some new revenue source, like farming data for LLM training or something. They could go the Tumblr route and use AI to censor content, but considering Tumblr couldn't tell the difference between the Sahara Desert and boobs, I think that's one fuck-up involving a major company away from litigation hell. The only deep-fake-related reason that would make sense to me is if Adobe believes governments are going to start holding them liable for the content people make with their products.
What do you mean by "at the corporate/software level"? What corporations are drawing furry porn?
They're not threatened by its potential. They, like artists, are threatened by management who think that LLMs are good enough today to replace part or all of their staff.
There was a story from earlier this year about a company that owns 12-15 different gaming news outlets and fired about 80% of its writers and journalists - replacing 100% of the staff at the majority of the outlets with LLMs and leaving a skeleton crew at the rest.
What you're seeing isn't some slant trying to discredit LLMs. It's the result of management using them wrong.
Yet another case of the medical industry not caring one iota about women and women's ability to identify what is going on with their own bodies. The number of times I've heard of doctors dismissing women's pain and issues makes me want to scream.
My favorite part about that is, if we have to fact-check its answers against a secondary source anyway, why wouldn't we just skip the AI and go to that source first?
Not that the people making this stuff, nor the people who blindly trust its answers, ever think of that, of course.
It's conjecture based on evidence from the way previous companies have handled AI data as well as the way Microsoft themselves generally handle things.
I'd rather prepare for the corporate greed and be pleasantly surprised than be disappointed when Microsoft does something that will negatively impact their userbase in the name of profits again (or MAUs or whatever else looks good on the quarterly report).
Gods, I hope you're right. I hope it's so bad that it scares every other AI company. Because they get away with this kind of crap all the time with no repercussions, since your average person doesn't have the money to bring them to court over it.
The same could be said about Windows 11, since it demands a TPM 2.0 chip. Not that I'm complaining, since all I had to do was disable TPM in the BIOS to keep 11 away for good.
I think you're right that they were Apple-only for at least a few years.