You misrepresented or misunderstood my argument
Flumpkin
Comrade pinko barbie!
There’s no such thing as 100% objective morality.
Maybe not, or maybe there is an infinity of variations of objective morality. There will always be broken people with pathologies like sociopathy or narcissism who wouldn't agree. But the vast majority, say 95% of people, would agree for example on universal human rights - at least if they had the rights and freedoms to express themselves, and the education to understand and not be brainwashed. Basically, given a variety of moralities to choose from and the right circumstances (safety, a modicum of prosperity, education), you would get an overwhelming consensus on a large body of human rights or "truths". The argument would be that even if a complex machine is forever running badly, there can still be an inherent objective ideal of how it should run - even if perfection isn't desirable, or the machine and the ideal have to be constantly improved.
There is another way to argue for a moral starting point: a civilization that is on the way to annihilating itself is "doing something wrong", because any ideology or morality that argues for annihilation (even if that is not the intention, merely the likely outcome) is at the very least nonsensical, since it destroys meaning itself. You cannot argue for the elimination of meaning without using meaning, and after the fact your arguments would have been shown to be meaningless. So any ideology or philosophy that "accidentally" leads to extermination is at least partly nonsensical. There would still be an infinity of possible configurations for a civilization that "works" in that sense, but at least you can exclude another infinity of nonsense.
"Who watches the watchers" is of course the big practical problem, because every system so far has been corrupted over time - objectively perverted from its original setup and intended outcome. But that does not mean it cannot be solved, or at least improved. A basic problem is that those who desire power and money above all else, and focus solely on maximizing those two, are statistically the most likely to achieve them. That is adaptive or natural sociopathy. We have very few words or concepts for this, and we completely ignore it in our systems. But you could design government systems that rely on pure random sampling of the population (a "randocracy"). This could eliminate much of the political selection filtering, bias, and manipulation. Yet there seems to be very little discussion on how to improve our democracies.
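Mechanically, a "randocracy" is just sortition: drawing the assembly by uniform random sampling from the citizen register, so no campaigning or selection filter is involved. A minimal sketch (the register, seed, and assembly size below are all invented for illustration):

```python
import random

def draw_assembly(population, size, seed=None):
    """Draw a governing assembly by uniform random sampling (sortition).

    Every citizen in the register has an equal chance of selection -
    the property that bypasses career-politician selection filters.
    A seed makes the draw reproducible and publicly auditable.
    """
    rng = random.Random(seed)
    return rng.sample(population, size)

# Toy register of 10,000 citizens; a real one would be the electoral roll.
citizens = [f"citizen-{i}" for i in range(10_000)]
assembly = draw_assembly(citizens, size=99, seed=42)
print(len(assembly), len(set(assembly)))  # 99 99 (99 distinct members)
```

A real implementation would add stratification (by age, region, etc.) so small samples still mirror the population, but the core mechanism is this simple.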
Another rather hypothetical argument could come from scientific observation of other intelligent (alien) civilizations. Just as certain physical phenomena like stars, planets, and organic life emerge naturally from physical laws, philosophical and moral laws could emerge naturally from intelligent life (e.g. curiosity, education, rules that allow stability and advancement). Unfortunately it would take a million years for any scientific study of that to conclude.
Nick Bostrom talks a bit about the idea of a singleton here, but of course there be dragons too.
It is quite possible that it's too late now, or practically impossible to advance our social progress because of the current overwhelming forces at work in our civilization.
Hmm. It would definitely have helped if you could reply with emoticons like "lol" to classify jokes, not just with a thumbs up.
Advances in AI could then also tweak the content sorting so that people are always kept in the optimal engagement mood. I mean they try to do that now.
Not sure what you're trying to say either, but fascist speech using lies is fascist recruitment. That is why autonomous anti-fascism is right to disrupt fascist recruitment events in universities. Because the state or moderates care more about maintaining order. So you have to disrupt the recruiting by any means.
So if your argument is that "sunlight is the best disinfectant" then no, it definitely isn't. There is historical evidence.
Ideally the AI could actually learn to differentiate unhinged from reasonable posts - to learn whether a post is progressive, libertarian, or fascist. This could be used for evil of course, but it could also help stem the tide of bots, of fascists brigading, of Russia's or China's troll farms, and of all the special interests trying to promote their shit. Instead of tracing IPs, you could have the AI learn to identify networks of shitposters.
Obviously this could also be used to suppress legitimate dissenters. But the potential to use it for good, e.g. on Lemmy to add tags to posts and downrank them, could be amazing.
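As a toy illustration of the kind of tagging meant here - a real system would use a large trained language model, and the labels and training posts below are entirely invented - even a tiny bag-of-words Naive Bayes classifier can assign tags to posts:

```python
import math
from collections import Counter, defaultdict

class TinyPostClassifier:
    """Multinomial Naive Bayes over bag-of-words, with Laplace smoothing.

    A deliberately minimal stand-in for a real moderation model:
    it only counts word frequencies per label.
    """
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of posts
        self.vocab = set()

    def train(self, text, label):
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.label_counts[label] += 1
        self.vocab.update(words)

    def classify(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best, best_lp = None, float("-inf")
        for label, n in self.label_counts.items():
            lp = math.log(n / total)  # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                # +1 Laplace smoothing so unseen words don't zero out a label
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Invented one-post-per-label training data, purely for demonstration.
clf = TinyPostClassifier()
clf.train("universal healthcare and workers rights", "progressive")
clf.train("free markets and small government", "libertarian")
clf.train("blood and soil purge the traitors", "unhinged")

print(clf.classify("workers deserve healthcare"))  # progressive
```

Detecting coordinated networks of shitposters would be a different, harder task (clustering accounts by behavior rather than classifying single posts), but per-post tagging like this is the building block.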
Maybe that is what we need to do: "decide" certain moral questions based on the best scientific data, our values, and sound arguments - and then stop debating them, unless new scientific evidence challenges those moral edicts.
Somehow we keep going round in circles as a civilization.
There is nothing to keep you from using factors of 1024 (except the slightly ludicrous prefixes "kibi" and "mebi"), but outside low-level stuff like disk sectors or BIOS code, where you might want to use bit logic instead of division, it's rather rare. I too started at a time when a division op was more costly than bit-level logic.
I'd argue that any user-facing application is better off with base 1000, except where convention dictates otherwise. The majority of users don't know, care, or need to care what bits or bytes do. It's programmers who like the beauty of the bit logic, not users. @mb_@lemm.ee
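The difference between the two conventions is just the divisor. A small sketch of a human-readable size formatter that supports both (the function name and unit cutoff are my own choices, not any standard API):

```python
def format_bytes(n, base=1000):
    """Human-readable size.

    base=1000 -> kB/MB/GB, the decimal prefixes most user-facing
                 software (and drive vendors) use.
    base=1024 -> KiB/MiB/GiB, the IEC binary prefixes that match
                 how memory and low-level structures are sized.
    """
    units = (["B", "kB", "MB", "GB", "TB"] if base == 1000
             else ["B", "KiB", "MiB", "GiB", "TiB"])
    size = float(n)
    for unit in units:
        if size < base or unit == units[-1]:
            return f"{size:.1f} {unit}"
        size /= base

print(format_bytes(1_500_000))             # 1.5 MB
print(format_bytes(1_500_000, base=1024))  # 1.4 MiB
```

The same 1.5 million bytes reads as "1.5 MB" or "1.4 MiB" - which is exactly the discrepancy users notice on drive capacities.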
You forgot the journalists who frame narratives and the intellectuals who secrete the ideology that makes it all possible.
Oh wow it has eye tracking! I have high hopes for that feature.
But what I really want is to see and use my keyboard in VR, and have an optimized desktop environment to quickly pull up a text document or website. I felt a bit trapped the last time I used VR and had to refer to documentation.
There is a very interesting film called "Professor Marston and the Wonder Women" about how they created her in 1940 as a feminist superhero.
William Moulton Marston, a psychologist already famous for inventing the polygraph, struck upon an idea for a new kind of superhero, one who would triumph not with fists or firepower, but with love. "Fine," said Elizabeth. "But make her a woman."
Not even girls want to be girls so long as our feminine archetype lacks force, strength, and power. Not wanting to be girls, they don't want to be tender, submissive, peace-loving as good women are. Women's strong qualities have become despised because of their weakness. The obvious remedy is to create a feminine character with all the strength of Superman plus all the allure of a good and beautiful woman.
I'm not arguing for "one single 100% objective morality". I'm arguing for social progress - maybe towards one of an infinite number of meaningful, functioning moralities that are objectively better than what we have now. Like optimizing or approximating a function that we know has no precise solution.
And "objective" can't mean some kind of ground truth handed down by e.g. a divine creator. But you can have objective statistical measurements, for example of happiness or suffering, or an objective determination of whether something is likely to lead to extinction or not.