AI learning isn't the issue; it's not something we'll be able to put a lid on either way. It will either destroy or save the world, and it doesn't need to learn much to do so beyond evolving actual self-agency and sovereign thought.
What is a huge issue is the secretive, non-consensual mining of people's identities and expressions.
And then acting all normal about it.
I sort of misread your comment as saying the basilisk is inevitable, which is a thought I would describe as at least oopsie-issue-level.
Still, there are many other people bent on directly poisoning AI to counteract the learning, but I fear that will get us to a dangerously rogue, incoherent AI faster than if we aimed for maximum coherent intelligence and hoped that benevolence emerges from it.
But more at hand: if we build AI by grossly exploiting our own fellow humans, how do we expect it to treat us once it reaches a state of independent learning?