Good God, of all the ways one could gamble, this sounds like playing Russian roulette with five bullets loaded in the revolver.
Akuchimoya
Thanks for articulating it this way for me. It's the hypocrisy that gets me, at least in my situation. I reported my boss to the directors. He was (still is) planning to cut a guy from our team, claiming he costs too much. But when I did the math, I found that my boss' personal expenses on food, gas, phone, vehicle, etc. (with increasingly unaccounted-for amounts) are more than that guy's pay.
My boss isn't worried about the financial health of the organization, he's worried he won't be able to keep spending it on himself if he has to pay the workers.
I know you didn't say they are (or can be) racist; I'm the one saying that. I'm disagreeing that they are uninformed: a lot of Asians actively side with Trump and Musk. I know this because I am Asian myself and hear it from my parents and their friends, who are pro-Trump, anti-immigration, and racist against Black people, Hispanic people, Brown people, Muslims, LGBT people...
My dad defends every accusation against Trump. He thinks every bad thing said about him is a lie made up by his enemies. My mom's best friend loves him and says he's so smart, and everyone who disagrees with him is too stupid to understand.
A lot of Asian people are racist against non-white people and against Asians from other countries. There are pro-Trump Asians just as there are pro-Trump Latinos who think they're "one of the good ones" and are all about pulling up the ladder behind them. The people buying Cybertrucks at best don't care and at worst are positively for it.
Truly, I don't understand why, but there are fully grown adults who believe that anything an LLM says is true. Maybe they think computers are unbiased (which is only as true as the programmers and data are unbiased); maybe it's the confidence with which LLMs deliver information; maybe they believe the program actually searches for and verifies information; maybe it's all of the above and more.
I know a guy who routinely says, "I asked ChatGPT...", and even after I've explained how LLMs are complex word predictors and are not programmed for factual truth, he still goes to ChatGPT for everything. It's a total refusal to believe otherwise, and I can't fathom why.
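To make the "word predictor" point concrete, here's a toy sketch in Python. It's nothing like a real transformer (real LLMs use neural networks over huge corpora), but the principle is the same: the model only learns which words tend to follow which, and "is it true?" never enters the computation anywhere.

```python
from collections import defaultdict

# Toy "language model": count which word follows which in some training text.
# Note it has seen both a true and a false claim about the moon.
corpus = ("the moon is made of rock . "
          "some say the moon is made of cheese .").split()

follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training text."""
    candidates = follows[word]
    return max(candidates, key=candidates.get) if candidates else None

# The continuation is chosen purely by frequency in the training data;
# factual accuracy is not represented in the model at all.
print(predict_next("moon"))  # whatever most often followed "moon"
print(predict_next("of"))    # "rock" vs. "cheese" is decided only by counts
```

If the training text had said "cheese" more often than "rock", the model would confidently "predict" cheese. Scaling this idea up doesn't add a truth check; it just makes the patterns much more fluent.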
All assignments are submitted electronically now, and if he's in philosophy, he will also have to follow formatting requirements like font, font size, margins, and spacing. Practically, he's doing as much as he is allowed off-computer.
If they had, they'd know there was a 13th disciple named Matthias. I'm not even kidding, it's in the Book of Acts. He was selected to replace Judas, and is described as having been with them since the beginning.
I tried out a bunch, including Babbel, Busuu, Language Transfer, Mango, and Memrise. I didn't like them for one reason or another. I finally landed on Lingodeer. It's similar to Duolingo, but it is a paid app. (You can try level 1 of any language for free.)
The regular subscription price is definitely not worth it. It's okay (not great, but not awful) when they do their sales. But I felt okay about paying human workers.
This kind of learning is a great start, but will only get you so far. If your local library has access to Kanopy, look for the Great Courses series on Spanish. I thought that was an excellent series after a little bit of Duolingo.
Duolingo got me enough vocabulary in Spanish to put the simplest sentences together, and then follow more robust lessons. I still think it was a good starting point, but I won't use it anymore on principle.
Librarians go to school to learn how to manage information, whether it is in book format or otherwise. (We tend to think of libraries as places with books because, for so much of human history, that's how information was stored.)
They are not supposed to have more information in their heads than anyone else; they are supposed to know how to find (source) information, catalogue and categorize it, distinguish good information from bad, good information sources from bad ones, and teach others how to do so as well.
I had to tell a bunch of librarians that LLMs are literally language models made to mimic language patterns, not made to be factually correct. They understood it when I put it that way, but librarians are supposed to be "information professionals". If they, as a slightly better-trained subset of the general public, don't know that, the general public has no hope of knowing it.
It seems probable that she misunderstood or misheard what was being said to her as "you need to finish the drink," and complied with the request she thought was being made.
Heck, even as a hearing person, if someone told me I can't have an open beverage in a space (alcoholic or not), finishing it seems like a reasonable way to be rid of it.