don't call my tesla cars swastikars...
... that's reductive, they have so much MORE potential!
but why am I soft in the middle? The rest of my life is so hard!
but... but.... reasoning models! AGI! Singularity! Seriously, what you're saying is true, but it's not what OpenAI & Co are trying to peddle, so these experiments are a good way to call them out on their BS.
Congrats then, you write better than an LLM!
Interestingly, your original comment is not much longer and I find it much easier to read.
Was it written with the help of an LLM? Not being sarcastic, I'm just trying to understand if the (perceived) deterioration in quality was due to the fact that the input was already LLM-assisted.
In order to make sure they were wealthy enough, I'm sure he personally tested them one by one, challenging them to send him a big donation in cryptocurrency.
That's what a committed President-slash-genius looks like!
A 60% success rate sounds like a very optimistic take. Investing in an AI startup with a 60% chance of success? That's a VC's wet dream!
"Eventually" might be a long time with radiation.
20 years after the Chernobyl disaster, the level of radiation was still high enough to give you a good chance of cancer if you went to live there for a few years.
https://www.chernobylgallery.com/chernobyl-disaster/radiation-levels/
I don't know how much radiation these "tactical" weapons release, but if it's comparable to Chernobyl's, then even if the buildings were not originally damaged, I don't know how fit for living they would be after being abandoned for 30 or 40 years.
It was Anthropic that ran this experiment.
The rest of Tokyo is mostly intact.
And housing becomes much more accessible too, when the buildings are intact but their inhabitants have much shorter lives because of radiation.
Quick recap for future historians:
- For a really brief part of its history, humanity tried to give kindness a go. A half-hearted attempt at best, but there were things like DEI programs, for instance, attempting to create a gentler, more accepting world for everyone. At the very least, trying to appear human to the people they managed was seen as a good attribute for Leaders.
- Some people felt that their God-given right to be assholes to everyone was being taken away (it's right there in the Bible: be a jerk to your neighbor, take away his job and f##k his wife).
- Assholes came back in full force, with a vengeance. Not that they had ever disappeared, but now they relished the opportunity to be openly mean for no reason again. Once again, True Leaders were judged by their ability to drain every drop of blood from their employees and take their still-beating hearts as an offering to the Almighty Shareholders.
The article makes a good point that it's less about replacing a knowledge worker completely and more about industrializing what some categories of knowledge workers do.
Can one professional create a video with AI in a matter of hours, instead of it taking days and needing actors, scriptwriters, and professional equipment? Apparently yes. And AI can even translate it into multiple languages without translators and voice actors.
Are they "great" videos? Probably not. Good enough and cheap enough for several uses? Probably yes.
Same for programming. The completely independent AI coder doesn't exist, and many are starting to doubt it ever will with the current technology. But if GenAI can speed up development, even modestly, to the point that it takes maybe 8 developers to do the work of 10, that is a 20% drop in demand for developers, which puts downward pressure on salaries too.
It's like in agriculture. It's not that technology produced completely automated ways to plow fields or harvest crops. But one guy with a tractor can now work a field in a few hours by himself.
With AI, all this is mostly hypothetical, in the sense that OpenAI and co are all still burning money and resources at a pace that looks hard to sustain (let alone grow), and it's unclear what the cost to consumers will look like when the dust settles and these companies need to make a profit.
But still, when we laugh at all the failed attempts to make AI truly autonomous in many domains, we might be missing the point.