FaceDeer

joined 8 months ago
[–] FaceDeer@fedia.io 8 points 5 months ago

One of the important features of Mastodon is that you can choose what your feed is. Everyone's feed has an algorithm determining what's in it, even if it's just a simple "list the posts of everyone I've subscribed to in chronological order."
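Even that simplest case is still an algorithm. A toy sketch in Python of the "reverse-chronological posts from everyone I follow" feed (all names here are illustrative, not any real Mastodon API):

```python
from datetime import datetime

def chronological_feed(posts, followed):
    """Return followed accounts' posts, newest first.

    This is the 'no algorithm' feed: filter to accounts you
    follow, then sort by timestamp in reverse order.
    """
    mine = [p for p in posts if p["author"] in followed]
    return sorted(mine, key=lambda p: p["posted_at"], reverse=True)

# Hypothetical example data
posts = [
    {"author": "alice", "posted_at": datetime(2024, 6, 1), "text": "hi"},
    {"author": "bob",   "posted_at": datetime(2024, 6, 3), "text": "yo"},
    {"author": "carol", "posted_at": datetime(2024, 6, 2), "text": "hm"},
]
feed = chronological_feed(posts, followed={"alice", "bob"})
```

A curated feed just swaps in a different filter and sort key; the structure is the same.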

If someone else wants to see a feed of content that is curated and sorted in a different way, why get angry at them? They're not forcing you to see that feed.

[–] FaceDeer@fedia.io 5 points 5 months ago* (last edited 5 months ago)

Now that everyone is no longer waiting in anticipation of SD3, perhaps we'll start seeing attention diversify to other models.

[–] FaceDeer@fedia.io 10 points 5 months ago

There are a lot of fine-tunes of earlier Stable Diffusion models (SD1.5 and SDXL) that are better than this, and will continue to see refinement for some time yet to come. Those were released with more permissive licenses so they've seen a lot of community work built on them.

[–] FaceDeer@fedia.io 56 points 5 months ago (3 children)

There's a reason that artists in training often practice by drawing nudes, even if they don't intend for that to be the main subject of their art. If you don't know what's going on under the clothing you're going to have a hard time drawing humans in general.

[–] FaceDeer@fedia.io 10 points 5 months ago (6 children)

It sounds like they weren't "being fed into an AI model" as in being used as training material, they were just being evaluated by an AI model. However...

Have you spent more than 4 seconds on Mastodon and noticed their (our?) general attitude towards AI?

Yeah, the general attitude of wild witch-hunts and instant zero-to-11 rage at the slightest mention of it. It doesn't matter what you're actually doing with AI; the moment the mob scents blood, the avalanche starts rolling.

It sounds like Maven wants to play nice, but if the "general attitude" means that playing nice is impossible why should they even bother to try?

[–] FaceDeer@fedia.io 3 points 5 months ago (1 children)

You're on slrpnk.net, I assume it's not implementing any of this stuff. As long as you don't sign up for Maven I don't see how this is going to affect you.

[–] FaceDeer@fedia.io 13 points 5 months ago

Looks like it.

In addition to pulling in posts, the import process seems to be running AI sentiment analysis to add tags and relational data after content reaches Maven’s servers. This is a core part of Maven’s product: instead of follows or likes, a model trains itself on its own data in an attempt to surface unique content algorithmically.

But of course, that news doesn't give the reader those lovely rage endorphins or draw clicks.

This is the Fediverse, having the content we post get spread around to other servers is the whole point of all this. Is this a face-eating leopard situation? People are genuinely surprised and upset that the stuff we post here is ending up being shown in other places?

There is one thing I see here that raises my eyebrows:

Even more shocking is the revelation that somehow, even private DMs from Mastodon were mirrored on their public site and searchable. How this is even possible is beyond me, as DM’s are ostensibly only between two parties, and the message itself was sent from two hackers.town users.

But that sounds to me like a hackers.town problem; it shouldn't be sending out private DMs to begin with.

[–] FaceDeer@fedia.io 4 points 5 months ago* (last edited 5 months ago)

Exactly, which is why I've objected in the past to calling the mistakes in Google's AI Overviews "hallucinations." The AI itself is performing correctly: it's giving an accurate overview of the search results it's been told to summarize. It's just being fed incorrect information.

[–] FaceDeer@fedia.io 2 points 5 months ago

If they get kicked out of the Russian market then those extensions wouldn't be available there anyway.

[–] FaceDeer@fedia.io 7 points 5 months ago

We're not, though. The word "enshittification" was coined to describe a very specific kind of shittiness, not just a general "I don't like this development."

Now that the word is being used in the more general sense, though, we've lost a useful way of referring to just that very specific kind of shittiness. We already had plenty of ways to say "I don't like this development," so this is a net loss for the descriptiveness of language.

[–] FaceDeer@fedia.io 29 points 5 months ago (2 children)

The problem with AI hallucinations is not that the AI was fed inaccurate information, it's that it's coming up with information that it wasn't fed in the first place.

As you say, this is a problem that humans have too. But I'm not terribly surprised these AIs have it, because they're built in mimicry of how aspects of the human mind work. And in some cases it's desirable behaviour, for example when you're using an AI as a creative assistant. You want it to come up with new stuff in those situations.

It's just something you need to keep in mind when coming up with applications.

[–] FaceDeer@fedia.io 6 points 5 months ago

I would expect that Apple has hired some of those experts and they told him.
