Yeah, agreed. The dream of AI is that it understands what you want and offers it, or even does it for you. But this inherently requires a computer system to know everything about what you want. Perfect for big tech companies hungry for your advertising data. This is why we need more open source model projects!
Your claim is that life demands the desire to live. I think ignoring the everyday cases where that's not true gives your critical thinking a bad foundation. I also provided many other examples. Every person is built on the backs of thousands of people: my brain was developed by thousands of ancestors and filled with the knowledge of millions of other humans. Yet I'm capable of not fearing death. But that aside, an artificial consciousness will be a whole new ballgame. I don't think we should assume that the way we are is the way consciousness has to be, or that any consciousness will think the same way we do.
Take someone who has grown up in our world, learning from our history, and even having the genetics produced by our evolution. There are people who are suicidal, people who are hedonistic or adrenaline-seeking to the point of fatal danger, and people who live to serve, even to the point of committing suicide if their masters ask it of them. Look up seppuku. Are these people not alive? Are soldiers not alive? Living means a great many different things to a great many beings. Mostly they have in common the desire to live, but that's by no means a prerequisite for life, or even a result of it. Many consider some purpose or meaning in their life more important than life itself. And that's with evolution constantly putting us back on track. If anything, the safety rails of modern society have made people more prone to stray from the desire to live for life's own sake.
We may be at an "agree to disagree" point here. But I don't think the will to live is inherent to life; I think it's inherent to evolved life. There are plenty of things that live with a weak to nonexistent sense of self-preservation. We call this a mental disability, like suicidality, or an evolutionary maladaptation. But those traits are inherently weeded out and erased from the gene pool. You think of life as wanting to live because that's what evolution has selected for so far.
Totally agree that a lot of what people assume about AI comes from pop culture. I think consolidation of resources will for sure be an issue. But unless everyone who doesn't have resources dies off, there's going to be an unprecedented number of people with nothing of value to offer in exchange for the power to live (currently: money). Then there either has to be an extermination of those people (read: 90% of humanity) or a revolution that offers them some facsimile of a universal basic income.
Though I think there's a dark third option, where tech companies start downplaying AI and secretly use it to push 90% of people into extreme poverty for their own gain, without pushing them past the point of revolution.
But as far as AI motivation goes, I think training can ingrain certain systemic behaviors, like racist undertones. But in the same way I don't become genocidal after reading too much WWII history, knowledge of something doesn't create motivation. I think one of the things that annoys people about AIs is how unopinionated they are. So motivation WILL be programmed in eventually, but that will take effort and direction. I think accidentally creating a genocidal AI is another pop-culture-based concept. Though it's possible if done by bad actors.
We evolved to have self-preservation and the desire for security. We naturally don't want to be under the thumb of someone who controls our food and safety. That's why we question authority. What makes you think AI will have any of that, unless someone explicitly builds it in?
It's wild to me that I hear so many people bemoan the idea of having to work under someone's thumb, but when we finally invent automation, everyone clings to their jobs. I mean, I understand. What comes next is uncertain and likely to be painful. But when it's over, I can't imagine there will be a place left for capitalism.
Oh thanks! That sounds fascinating.
Yeah! That's precisely what I mean. Scooters is making an impact because they understand what people want and are providing a reasonable alternative that makes those people happy. They're not just saying, "Starbucks is bad, don't go there."
Yeah, put another way: make something controversial and people will pick sides and stop thinking then and there. If anyone, even a part of themselves, thinks "Starbucks sucks," then they're the enemy and must be disproven.
I'd argue there's a great solution: respect the people who go to Starbucks and their opinion. Understand it. And then, from a place of compassion and understanding, see how you can help them. People respond a lot better to that. But I'll admit that in this climate everyone is turning things into an us-vs-them controversy, so it'll be hard when others are trying to create that divide and you're trying to bridge it.
I think the point being made here is that many people clearly enjoy what Starbucks offers. So saying they suck is preaching to the choir; the only people listening are the ones you aren't trying to convince. If you want to make an impact, suggest an alternative that will make those people happy. To do that, start by understanding the value Starbucks brings them. Failing that, you're just signaling that your thinking isn't for them, and they'll ignore you and continue happily giving Starbucks their money.
Many people's entire thought process is an internal monologue. Do you think that voice is magic? It takes input and generates a conceptual internal dialogue based on what it has previously experienced (training data for the long term, context for the short term). What do you mean when you say you understand something? What is the mechanism your brain undergoes that's defined as understanding?
Because for me it's an internal conversation that asserts an assumption based on previous data, then systematically attacks it with the next most probable counterargument until what I consider a "good idea" emerges, one that's been reasonably vetted. Then I test it in the real world via the scientific method, and the results are added to my long-term memory (training data).
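For what it's worth, that loop is concrete enough to sketch in code. This is a toy illustration only, not a claim about how brains or any real model actually work; every function here (propose, strongest_counter, refine) is a hypothetical stand-in I made up for the example:

```python
# Toy sketch of the propose-and-critique loop described above.
# All helpers are hypothetical stand-ins, not a real cognitive or ML API.

long_term_memory = [  # "training data": accumulated prior experience
    "coffee chains succeed by meeting a real demand",
    "insulting people rarely changes their minds",
]

def propose(topic: str, memory: list) -> str:
    """Assert an initial assumption grounded in previous data."""
    return f"hypothesis about {topic!r}, informed by {len(memory)} memories"

def strongest_counter(idea: str) -> "str | None":
    """Return the most probable counterargument, or None if the idea holds."""
    # Stand-in logic: a real critic would search memory/context for conflicts.
    return None if "revised" in idea else "does the evidence support it?"

def refine(idea: str, counter: str) -> str:
    """Update the idea so it answers the counterargument."""
    return f"revised({idea}) addressing {counter!r}"

def vet(topic: str, memory: list, max_rounds: int = 5) -> str:
    """Attack the assumption systematically until a vetted 'good idea' emerges."""
    idea = propose(topic, memory)
    for _ in range(max_rounds):
        counter = strongest_counter(idea)
        if counter is None:  # no strong counter left: reasonably vetted
            break
        idea = refine(idea, counter)
    return idea

idea = vet("why people go to Starbucks", long_term_memory)
# "Test it in the real world," then fold the outcome back into memory.
long_term_memory.append(f"outcome of testing: {idea}")
print(idea)
```

The point of the sketch is just that nothing in the loop requires magic: a proposal step conditioned on stored data, a critique step, and a memory update are all mechanical operations.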
Ah yes, it must be the scientists who specialize in machine learning and study these models full time who don't understand them.