That's fair, I suppose, though in practice I'd assume that spinning up a whole bunch of individual instances is probably more difficult than making a bunch of accounts on one instance that you control, and thus vote manipulation in this manner should have a higher barrier to entry?
If one wanted to ensure that external content is still easily visible, one could always set things up so that posts on local communities appear only in Local and Subscribed, and only posts from outside appear in All (though that feed might need to be renamed to better fit such a layout, I suppose). A rough sketch of that split is below.
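To make the split concrete, here's a minimal sketch of the visibility rule, assuming made-up field and feed names rather than Lemmy's actual data model:

```python
# Hypothetical sketch of the feed split described above; the Post fields and
# feed names are made up for illustration, not Lemmy's actual API.
from dataclasses import dataclass

@dataclass
class Post:
    community_is_local: bool  # the post's community is hosted on this instance
    subscribed: bool          # the viewing user subscribes to its community

def visible_in(post: Post, feed: str) -> bool:
    if feed == "local":
        return post.community_is_local
    if feed == "subscribed":
        return post.subscribed
    if feed == "all":  # would want renaming: it now shows only remote content
        return not post.community_is_local
    return False
```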
Honestly, I've begun to think the upvote/downvote model is a bad fit for the fediverse in general:
* Different instances have different rules around it, and in some cases (for example, an instance disabling downvoting) this might give content on that instance a modest advantage in the sorting
* Instances have to trust votes from other instances, and while an instance engaging in obvious manipulation could be defederated, the manipulation has to be noticed first
* Votes are more publicly visible than on a place like Reddit, potentially leading to something like a downvote becoming a catalyst for incivility towards the downvoter by whoever posted the thing
Honestly, what I would do with Lemmy voting is just make vote counts mostly not federate. Have instances send a single up, down, or neither signal depending on whether the net number on their instance passes a certain upvote or downvote threshold, just so people on private instances have something to sort by, and have the score of a post or comment otherwise just go off of whatever the users within an instance vote. Then an individual instance could have whatever rules or restrictions on voting it wanted, without worrying about its votes getting drowned out by the wider network or being seen as vote manipulation.
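As a rough illustration of that thresholding idea, here's a minimal sketch; the function names and the threshold value are hypothetical, not anything from Lemmy's actual codebase:

```python
# Hypothetical sketch of thresholded vote federation as described above.

def federated_signal(local_ups: int, local_downs: int,
                     threshold: int = 5) -> int:
    """Collapse an instance's local tally into a single federated signal:
    +1, -1, or 0, emitted only once the net score passes the threshold."""
    net = local_ups - local_downs
    if net >= threshold:
        return 1
    if net <= -threshold:
        return -1
    return 0

def displayed_score(local_ups: int, local_downs: int,
                    remote_signals: list[int]) -> int:
    """Sorting is driven mainly by the instance's own users' votes; each
    remote instance contributes at most one point either way."""
    return (local_ups - local_downs) + sum(remote_signals)
```

Under this scheme, a botnet of accounts on one hostile instance can only ever shift a post's score elsewhere by a single point, since the whole instance collapses to one signal.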
I think that the general idea of artificial intelligence in education holds some promise, in the sense that if you could construct a machine that can do much of the work of a teacher, it should enable kids to be taught in an individual way currently only possible for those rich enough to afford a private tutor, and such a machine would be labeled as an AI of some kind. The trouble is, like with so many other things in AI, that our AI technology just doesn't seem to be up to the task, and probably won't be without some new approach. We have AI just smart enough for people to try it on all the things one could use an AI for, but not smart enough for the AI to actually do the job well.
Tbf, one of the use cases for display technologies with high pixel density is VR headsets.
Well no, but I was referring more to the general statement than to the notion that it applies to Musk. Having the self-awareness to see the harm in what he's done and in those he's supported doesn't seem particularly in character for him in any event.
I mean, if someone wanted to argue about that topic they could probably examine, like, Oskar Schindler or John Rabe or such, but that's beside the point I suppose.
Something I do wonder about these laws: could a person self-hosting a private fedi instance that only they have an account on, argue that they meet age verification requirements by virtue of personally knowing the age of the only user? Or at that point would the whole network of federated servers count as the "platform" rather than the instance?
While I don't think this scenario likely, something I can't help but think when this sort of statement comes up is, well, how do we know what it's doing isn't thinking? Like, I get that it's ultimately just using a bunch of statistics to predict the next word or token or whatever, but my understanding was that we have fairly limited knowledge of how our own consciousness and thinking work, and so I keep getting the nagging feeling of "what if what our brains are doing is similar somehow, using a physical system with statistical effects to predict stuff about the world, and that's what thinking ultimately is?"
While I expect that it probably isn't, and that creating proper AGI will require something fundamentally more complicated than what we've been doing with these language models and such, the fact that I can't prove that to my own satisfaction makes me very uneasy about them, considering what the ethical ramifications of being wrong might be.
Probably because they didn't go through the government, which takes a long time to move on anything, and instead put pressure on some profit-seeking corporations that just want a bother to go away, but which have also unfortunately been put in a position of practical power equal to some types of legislation.
I think the reasoning is something like this: these companies employ call center staff for a reason; either they legally have to, for one reason or another, or they've determined that in some way it's more profitable to have the capacity for people to call them than not. If the call centers are swamped, they still cost the company money, but their benefit to the company is reduced, because the "real" calls can't get through in a timely fashion. As such, it's in the company's interest to avoid having people spam them, and if the policy those people want changed won't really cost the company anything, then just changing it might be the most profitable option.
Hmm, that is fair. I would suggest making votes not federate at all in that case, except that doing so would leave single-person or very small instances effectively limited to sorting by New.