Unironically yes, sometimes. A lot of the highest-quality works its training samples are drawn from cite the original author's qualifications, and this filters into the model: asking for the right qualifications directly can nudge it toward the patterns of those high-quality samples when generating its response.
But it's still not perfect, obviously. It doesn't make the model stop hallucinating.
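If you want to see what that looks like in practice, here's a minimal sketch of persona-style prompting. It assumes the OpenAI Python SDK purely for illustration; the model name, the persona wording, and the example question are all placeholders, not anything from the original post.

```python
# Minimal sketch of persona prompting, assuming the OpenAI Python SDK (openai>=1.0).
# Model choice and wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works; this is just an example
    messages=[
        # Framing the assistant as a credentialed expert nudges generation
        # toward the register of higher-quality sources in the training data.
        {
            "role": "system",
            "content": "You are a board-certified cardiologist answering a colleague's question.",
        },
        {
            "role": "user",
            "content": "When is a stress echo preferred over a nuclear stress test?",
        },
    ],
)

print(response.choices[0].message.content)
```

Same caveat as above applies: the persona shifts the style and sometimes the quality of the answer, but it's not a correctness guarantee.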
Training from scratch and retraining are expensive. Also, they want to avoid training on ML-generated outputs as samples; they want primarily human-made works, and after the initial public release of LLMs it has become much harder to build large datasets without machine-generated content mixed in.