LLMs don’t solve problems. That’s the point being made here. Plenty of other algorithms do solve problems, but those are very niche, because the algos were explicitly designed for exactly those situations.
While yes, humans excel at pattern recognition, sometimes to the point of it being a problem, many of the things we do have little to do with patterns beyond being tangentially involved. Emotions, for instance, don’t inherently follow patterns; they can, but they aren’t directly tied to them. Exploration doesn’t come from pattern recognition either.
If you need an example of why people flat out say LLMs aren’t solving problems, look at the recent “how many r’s in strawberry” failure, which has admittedly been “fixed” in many models.
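The contrast is worth spelling out: the task that famously tripped up LLMs is a one-liner in ordinary code, because code operates on the exact characters of a string rather than on tokenized statistical input. A trivial Python sketch:

```python
# Counting letters is a solved problem for explicit code:
# strings are exact data, so the answer is deterministic.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # 3
```

That’s the kind of narrow, explicitly designed solution being contrasted with LLM behavior here.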
Granted, things are getting to the point where it may well become possible. But these kinds of claims have been around for over a decade, and today my voice-control devices still fail to understand me fairly regularly.
Not to mention, a song is usually extremely easy to pick out. In a loud bar with background music, your brain tends to pick up the beat and start grooving along with it, even if you don’t realize it most of the time. Compare that to the person sitting across the table trying to yell things to you, while you have to resort to lip reading and guesswork.