I don't disagree, I was just pointing out that "each word is generated independently of each other" isn't strictly accurate for LLMs: each token is sampled conditioned on everything generated so far, not in isolation.
It's part of the reason they're so convincing to some people: they can hold threads semi-coherently through essay-length passages without obvious internal lapses of logic.
I think it getting tripped up on riddles that people often fail at, or getting factual things wrong, matters less for "believability", which is probably a word closer to what I meant than "coherence."
No one was worried about misinformation coming from r/SubredditSimulator, for example, because Markov chains have much, much less believability: they condition on only the last word or two, so they lose the thread almost immediately. "Just guessing words" is a bit of an over-simplification for neural nets, which are a powerful technology even if the utility of turning it toward language is debatable.
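To make that distinction concrete, here's a toy Python sketch (my own illustration, not how any real system works): the Markov generator picks the next word from a table keyed only on the previous word, while the "autoregressive" stand-in makes each choice depend on the whole prefix, the way an LLM conditions every token on all preceding context. The no-repeat penalty is just a crude, hypothetical stand-in for a learned scoring function.

```python
import random
from collections import defaultdict

CORPUS = "the cat sat on the mat and the dog sat on the rug".split()

# Bigram Markov chain: the next word depends ONLY on the single previous word.
bigrams = defaultdict(list)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    bigrams[prev].append(nxt)

def markov_generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = bigrams.get(out[-1])
        if not options:
            break
        # Everything before out[-1] is forgotten at each step.
        out.append(random.choice(options))
    return " ".join(out)

# Toy "autoregressive" generator: the next word is chosen as a function of the
# ENTIRE prefix (here, crudely, by avoiding words already used), standing in
# for how an LLM conditions each token on all preceding tokens.
def autoregressive_generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = bigrams.get(out[-1])
        if not options:
            break
        fresh = [w for w in options if w not in out]  # depends on the full prefix
        out.append(random.choice(fresh or options))
    return " ".join(out)

print("markov:        ", markov_generate("the"))
print("autoregressive:", autoregressive_generate("the"))
```

A real transformer replaces that crude penalty with attention over the whole context window, which is where the "holding a thread" behavior comes from.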
And if LLMs weren't so believable we wouldn't be having so many discussions about the misinformation or misuse they could cause. I don't think we're disagreeing; I'm just trying to add more detail to your "each word is generated independently" quote, which is patently wrong and detracts from your overall point.