That's why I said
So as long as the training data is well selected for your problem...
It's clear that in the training data for LLMs, 4chan, reddit, etc. are over-represented, so that explains why chatgpt might be more awful than an average person. Having an LLM decide on, e.g., college admissions would be like letting a Twitter poll decide who a company's next CEO should be. Like that's obviously stupid, nobody would ever do that, right?
The problem is that, in the college admission example, the models were trained on previous admission decisions made by college employees, and these models are still biased.
I think you're being very optimistic here. I very much hope you're right about the humans. I have a feeling that a lot of these types of decisions also result from implicit biases that the humans themselves might not even recognize or acknowledge. Few sexists or racists will admit to being sexist or racist.
I agree with your point about the "computer says no" issue. That's also addressed in the video, and it fits well into her wider point that large parts of the population not understanding how so-called AI works is a huge problem.