Ah yes, Hanlon's razor. Genuinely a great one to keep in mind at all times, along with its corollary, Clarke's law: "Any sufficiently advanced incompetence is indistinguishable from malice."
But in this particular case I think we need the much less frequently cited version by Douglas Hubbard: "Never attribute to malice or stupidity that which can be explained by moderately rational individuals following incentives in a complex system."
To me this implies that the navigation AI is going to hallucinate parts of its model of the world, because it's basing that model on what's statistically most likely to be there rather than what's actually there. What could go wrong?
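To make that failure mode concrete, here's a toy sketch (purely hypothetical, not the actual navigation stack): a world model that fills gaps in sensor coverage with whatever was statistically most common in its training data, instead of flagging the gap as unknown. All names and data here are made up for illustration.

```python
from collections import Counter

# Hypothetical training observations for one type of grid cell.
training_observations = ["open water", "open water", "open water", "buoy", "open water"]

# The statistically most likely occupant becomes the model's prior.
prior = Counter(training_observations).most_common(1)[0][0]  # "open water"

def fused_world_model(sensor_reading):
    """Return the model's belief about what occupies a cell."""
    if sensor_reading is None:
        # No data, so the model "hallucinates" the statistically likely
        # answer instead of reporting uncertainty.
        return prior
    return sensor_reading

print(fused_world_model("sandbar"))  # reports what's actually there
print(fused_world_model(None))       # confidently reports "open water"
```

The incentive-driven part is that a system tuned to minimize average prediction error is rewarded for exactly this behavior: guessing the common case looks great in aggregate, right up until the rare case is the one you run into.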