Because you said that was not the point of the article and I asked you to clarify why you think it wasn't. But never mind. This is going nowhere.
Why not? If the starting point of the article is that we can't design interfaces based on our elitist 5-percenter knowledge, then the remedy for that would be...?
I'm using a Le Potato for Home Assistant. It has worked very well for months now, but I'm a bit worried about long-term distro support.
I wonder why user tests aren't mentioned even once in the article. If you design an interface, you have to test it with your audience.
Did you know you can edit your posts? That could be helpful for other readers, since you were incorrectly claiming in several messages that Wine needs root access.
The video's title "worst car ever reviewed" was not as balanced though :D
This happened just this morning. Probably not the dumbest thing ever, and I blame Snap for putting things where they don't belong: I deleted stuff from the /run/user/1000/doc directory. It turns out the files there are in fact hard links to files that actually reside somewhere else. Well, they were, until I deleted them forever.
Background: Firefox (as an Ubuntu snap package) downloads files in some kind of sandbox mode and references them there for some obscure reason. That was my weekly reminder to get rid of snap packages, because snap sucks in a myriad of ways.
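For anyone who wants to check before hitting delete: here's a minimal sketch (plain Python stdlib, assuming the /run/user/1000/doc path from my story; adjust the uid for your system) that prints each entry's inode and hard-link count, so you can see whether a file's data is shared with a name somewhere else.

```python
import os

# Walk the portal directory and print inode numbers and link counts.
# st_nlink > 1 suggests the same data is reachable under another name,
# so deleting here is not as harmless as it looks.
doc_dir = "/run/user/1000/doc"  # adjust 1000 to your own uid

for root, dirs, files in os.walk(doc_dir):
    for name in files:
        path = os.path.join(root, name)
        try:
            st = os.stat(path)
        except OSError:
            continue  # portal entries can vanish or deny access
        print(f"{path}  inode={st.st_ino}  links={st.st_nlink}")
```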
Media corporations should not have a say in disconnecting users from the internet over copyright infringement. The right to social participation is part of a basic human right: self-determination. Today, the majority of interactions with society involve communication via the internet in one way or another, so access to the internet is vital for enabling social participation.
Having a dedicated technical architect who hovers above the dev team and hands architectural decisions down is also not universally seen as an ideal setup in software development.
OK, maybe it helps to be more specific. We have an LLM that is trained on a broad range of human data: news, internet chatter, and stories, but also books of all kinds, including those about philosophy, diplomacy, altruism, and so on. But if the topic at hand is "conflict resolution", the overwhelming majority of that data will be about violent solutions. It's true that humans have developed means for peaceful conflict resolution. But at the same time, they also have a natural tendency to focus on "bad news", so there is far more data available about the shitty things that happen in the world, and that is what gets fed to the chatbot.
To fix this, you would have to train an LLM specifically to have a bias towards educational resources and a moral code based on established principles.
But current implementations (like ChatGPT) don't work that way. Quite the opposite, in fact: in training, we first ingest all the data we can get our hands on (including all the atrocities in the world), and then, in a second step, we fine-tune the LLM to make it "better".
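Just to make that two-step shape concrete, here's a toy sketch in PyTorch. It is not any real pipeline: the "model" is a placeholder linear layer and both datasets are random tensors; only the order of the phases is the point.

```python
import torch
from torch import nn

# Placeholder for a real language model and its training setup.
model = nn.Linear(8, 8)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

def run_phase(batches, lr):
    # One training pass over a list of (input, target) batches.
    for group in opt.param_groups:
        group["lr"] = lr
    for x, y in batches:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Step 1: ingest everything we can get our hands on.
everything = [(torch.randn(4, 8), torch.randn(4, 8)) for _ in range(100)]
run_phase(everything, lr=0.1)

# Step 2: fine-tune on a much smaller curated set, typically with a
# lower learning rate, to nudge the model towards the values we want.
curated = [(torch.randn(4, 8), torch.randn(4, 8)) for _ in range(10)]
run_phase(curated, lr=0.01)
```

The curation happens after the fact, which is exactly the "quite the opposite" I mean: the bias of the bulk data is baked in first, and the values are patched on top.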
Don't want to spoil your little circlejerk here, but that should not surprise anyone, considering chatbots are trained on vast amounts of human data. Humans have a rich history of violence, with only brief excursions into "collaborating for the good of mankind and the planet we live on". So unless you build a chatbot that focuses on those values, the result will inevitably be a mirror image of us human shitbags.
Are you sure your Facebook friends have posted anything at all lately? Most of my contacts left Facebook long ago (as did I), but a lot of them never deleted their accounts.