[–] lvxferre@mander.xyz 6 points 1 month ago (1 children)

I like this piece. Well thought out, and well laid out.

I do believe that mods getting weathered, as OP outlined, is part of the issue. I'm not sure of good ways to solve this, but introducing a few barriers to entry here and there might alleviate it. We just need to be sure that those barriers actually sort good newbies in and bad newbies out, instead of simply locking everyone out. Easier said than done.

Another factor is that moderator work grows faster than community size: you get more threads, each with more activity, users spend more time in your community, they come from more diverse backgrounds so they're more likely to disagree, forest fires spread faster, and so on. This is relevant here because communities nowadays tend to be considerably bigger than in the past; and, well, when you've got more stuff to do, you tend to do things in a sloppier way.

You can recruit more mods, of course; but mod team size is also a problem, as it's harder to get everyone on the same page and enforce rules consistently. If one mod is rather lax and another is strict, some people get away with worse than what someone else got banned for, and that makes the whole mod team look like it's power-tripping and picking favourites, when it isn't. (I'm not sure how to solve this problem besides encouraging people to migrate to smaller communities once they feel like the ones they're in are too big.)

[–] lvxferre@mander.xyz 5 points 1 month ago

I think that it would be theoretically possible with a modified client. But in practice you'd filter a lot of genuinely active users out, and still let a lot of those suspicious accounts in. Sadly I think that blocking them individually is a better approach, even if a bit more laborious.

On a lighter note, this sort of user isn't a big deal here on Lemmy. It's simply more efficient to manipulate a larger userbase, like Twitter or Reddit.

[–] lvxferre@mander.xyz 21 points 1 month ago

You're right. And IMO they should be legally banned from doing so - because the people who signed up for this crap agreed to 23andMe's ToS, not to someone else's.

But, well... as you said, capitalism gonna capitalism. The "right thing to do" is often off the table.

[–] lvxferre@mander.xyz 30 points 1 month ago (3 children)

I have a relative who considered doing this test. I'm glad that the family talked him out of it. (Surprisingly enough, not just me.)

Anyway, my [hopefully not "hot"] take: for the most part the data should be destroyed, as it involves private matters. If there's data that cannot be reasonably associated with an individual or a well-defined group of individuals, perhaps it could be released into the public domain, but I'm not sure about that.

[–] lvxferre@mander.xyz 15 points 1 month ago

That's a better way, I agree.

[–] lvxferre@mander.xyz 36 points 1 month ago (2 children)

The fun part isn't even what Apple said - that the emperor is naked - but why it's saying it. It's a nice bullet against all four of its GAFAM competitors.

[–] lvxferre@mander.xyz 2 points 1 month ago

It’ll likely turn out that the more dispassionate people in the middle, who are neither strongly for nor against it, will be the ones who had the most accurate view on it.

I believe that some of the people in the middle will have more accurate views on the subject, indeed. However, note that there are multiple ways to be in the "middle ground", and some are sillier than the extremes.

For example, consider the following views:

  1. That LLMs are genuinely intelligent, but useless.
  2. That LLMs are dumb, but useful.

Both positions are middle grounds - and yet they can't be accurate at the same time.

[–] lvxferre@mander.xyz 1 points 1 month ago* (last edited 1 month ago)

Here's a simple test showing the lack of logic skills in LLM-based chatbots.

  1. Pick some public figure (politician, celebrity, etc.) whose parents are known by name but are not themselves public figures.
  2. Ask the bot of your choice "who is the [father|mother] of [public person]?", to check if the bot contains that piece of info.
  3. If it does, start a new chat.
  4. In the new chat, ask the opposite question - "who is the [son|daughter] of [parent mentioned in the previous answer]?" - and watch the bot lose its shit.

I'll exemplify it with ChatGPT-4o (as provided by DDG) and Katy Perry (parents: Mary Christine and Maurice Hudson).

Note that step #3 is not optional. You must start a new chat; plenty of bots are able to retrieve tokens from their previous output within the same chat, and that would taint the test.

Failure to consistently output correct information shows that those bots are unable to perform simple logic operations like "if A is the parent of B, then B is the child of A".
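For anyone who wants to automate the check, here's a minimal sketch in Python. It assumes access to an OpenAI-compatible API via the official `openai` client; the model name and the Katy Perry questions are just placeholders taken from the example above, and each question is sent in its own fresh chat so no context is shared.

```python
# Minimal sketch of the test above - assumptions: an OpenAI-compatible API,
# OPENAI_API_KEY set in the environment, and "gpt-4o" as a placeholder model.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"

def ask(question: str) -> str:
    """Send a single question in a fresh chat, with no shared context."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Step 2: the forward question, in its own chat.
print("Forward:", ask("Who is the mother of Katy Perry?"))

# Step 4: the reverse question, in a separate chat (that's step 3), so the
# bot can't simply reuse tokens from its own previous output.
print("Reverse:", ask("Who is the daughter of Mary Christine Hudson?"))
```

If the second answer doesn't point back to the public figure named in the first one, the bot failed the "if A is the parent of B, then B is the child of A" check.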

I'll also pre-emptively address some ad hoc idiocy that I've seen sealions lacking basic reading comprehension (i.e. the sort of people who claim that those systems are able to reason) using against this test:

  • "Ackshyually the bot is forgerring it and then reminring it. Just like hoominz" - cut off the crap.
  • "Ackshyually you wouldn't remember things from different conversations." - cut off the crap.
  • [Repeats the test while disingenuously = idiotically omitting step 3] - congrats for proving that there's a context window and nothing else, you muppet.
  • "You can't prove that it is not smart" - inversion of the burden of the proof. You can't prove that your mum didn't get syphilis by sharing a cactus-shaped dildo with Hitler.
[–] lvxferre@mander.xyz 1 points 1 month ago (1 children)

I still fail to see how people expect LLMs to reason. It’s like expecting a slice of pizza to reason. That’s just not what it does.

This text provides a rather good analogy between people who think that LLMs reason and people who believe in mentalists.

[–] lvxferre@mander.xyz 4 points 1 month ago

But are they competent enough for that?

Fuck - you're right, they aren't.

Never mind my conjecture then; it's probably as you said.

[–] lvxferre@mander.xyz 3 points 1 month ago* (last edited 1 month ago) (2 children)

Right, because a hacker getting vengeance for those abuses totally isn’t the narrative people would prefer.

Maybe, in the short term. But once people feel like the vengeance was successful, the topic gets its emotional conclusion. Then the focus shifts from how that leak popped up to the contents of the leak:

  • code and map editors for really old (more than a decade old) games
  • tidbits of info that might excite people about new games

Of course, I might be 100% wrong, and the leak might actually be the result of someone getting undue access to that content, or of some insider getting pissed and leaking the info that they had at hand. I just think that Nintendo+GF+TPC are scummy enough to fake being leaked for their own benefit.

[–] lvxferre@mander.xyz 90 points 1 month ago* (last edited 1 month ago) (1 children)

As I mentioned in another thread, about the same topic:

First Zendesk dismissed the report. Then, as hackermondev (the hunter) contacted Zendesk's customers, the issue "magically" became relevant again, so they reopened the report and bossed the hunter around, telling him not to disclose it to the affected parties.

Hackermondev did the morally right thing - from his PoV it was clear that Zendesk wasn't giving a flying fuck, so he contacted the affected parties.

All this "ackshyually it falls outside the scope of the hunt" boils down to a "not our problem lol". When you know that your services/goods have a flaw caused by a third party not doing the right thing (mail servers not dropping spoofed mails), and you can reasonably solve the flaw through your craft, not doing so is irresponsible. Doubly true if it the flaw is related to security, as in this case.

I'm glad that Zendesk likely lost way more than the 2k that it would've paid hackermondev for the hunt. And also that hackermondev got many times that value from the affected companies.
