justaderp
I'm not actually asking for good-faith answers to these questions. Asking seems the best way to illustrate the concept.
Does the programmer fully control the extent of human meaning as the computation progresses, or is the value in leveraging ignorance of what the software will choose?
Shall we replace our judges with an AI?
Does the software understand the human meaning in what it does?
The problem with the majority of the AI projects I've seen (and I've rejected many offers) is that the stakeholders believe they have significantly more influence over the human meaning of the results than the quality and nature of the data they have access to allows. The scope of the data limits the scope of the resulting information, which in turn limits the scope of meaning. Stakeholders want to break these rules with "AI voodoo". Then someone comes along and sells the suckers their snake oil.
We're living in a late-stage capitalist hellhole and you're advocating faith in the free market.
What. The. Fuck.
Monopolies don't care about the user experience, only profit. The AI doesn't understand the former, only the latter. Continued degradation of the user experience is a likely indicator of an increase in revenue as a function of the successful application of AI.
Your question is good. You're missing an understanding of time dilation and frames of reference. A full explanation of the theory of relativity is at least several pages long, but the core relation fits on one line (see below).
The first book I ever read on the subject, and IMO the best introductory text for any non-physicist, is Stephen Hawking's "A Brief History of Time". But any introduction to relativity should answer your question.
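As a rough sketch of the part you're missing (this is the standard special-relativity result, not anything specific to Hawking's book): a clock moving at speed v relative to an observer ticks slower by the factor sqrt(1 - v^2/c^2), so

    t' = t \sqrt{1 - v^2/c^2}

where t is the interval the observer measures, t' is the interval the moving clock records, and c is the speed of light. For example, at v = 0.6c the factor is sqrt(1 - 0.36) = 0.8, so the moving clock records 0.8 seconds for every second of the observer's time. Which clock is "moving" depends entirely on the frame of reference, which is why the frame matters as much as the formula.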
There's an incredible story behind it. But the short form is that Proton is more expensive because they're not harvesting your private information. In a few months the law will prevent them from doing so for as long as the core fiscal law and Proton exist (at least decades).