There are some numbers in this blog post https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the-tech-industry-2/ (and in a couple of others on the same blog), and they really don't suggest OpenAI is going to last the couple of years it would take to reach profitability.
taladar
The difficult question about AGI destroying humanity is deciding whether to be afraid of that outcome or to cheer it on, and LLM enthusiasts are certainly among the people pushing me heavily towards the 'cheer it on' option.
As a standalone thing, LLMs are awesome.
They really aren't, though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unusable for 99% of practical applications.
The difference between AI companies and most other tech companies is that AI companies have significant expenses that scale with the number of customers.
It goes so far that a lot of the very same people vilifying open relationships are the ones cheating on their partners.
On the other hand, that is also one of those things that annoys me about romance culture: the whole notion of your girlfriend/boyfriend/wife/husband being "stolen" by someone else, as if your partner were just a passive object. Your partner is the actual person in the cheating who made promises to you (which might or might not include sexual exclusivity, depending on what everyone in the relationship mutually agreed upon) and should either keep those promises or break up with you, no matter what any third person tempts them with.
It is really not a big change to the way we work unless you work in a language with very low expressiveness, like Java or Go, and we have been able to generate the boilerplate in those automatically for decades.
The main problem is that it really does not produce genuinely useful or beneficial results, and yet everyone keeps telling us it does while being unable to point to a single GitHub PR or similar source as an example of a good piece of AI-created code that didn't need heavy manual post-processing. It also completely ignores that reading and fixing other people's (or worse, an AI's) code is orders of magnitude harder than writing the same code yourself.
Probably not going to go belly-up for a while
Don't be so sure about that; the numbers look incredibly bad for them in terms of money burned per unit of actual revenue, never mind profit. They can't even cover inference alone (never mind training, staff, rent, ...) from the subscriptions.
In fact, Daggerfall was almost nothing but quests and other content like that.
They also don't apply the same skeptical attitude to the random sources they use instead. That is really the biggest problem with their approach. Even a literal "you can't trust anyone any more" would be better than what they do.
True, but my point was that even many of the commercial websites that have other products do not depend on ads. Amazon and all the other stores would still be there, every company offering a paid service would still be there, and every company providing a service related to their real-world goods (e.g. specs, drivers, product descriptions, lists of stores where you can buy them, ...) would still be there.
Advertising does not finance a very large percentage of the useful parts of the internet. And among the ad-financed websites that are useful, a lot are essentially duplicates grabbing a chunk of the ad revenue without doing much work (e.g. almost all news websites that just republish AP, Reuters, ... content).
Name a single task you would trust an LLM to solve for you, where you feel confident the output would be correct without checking it. Because that is my definition of "perfectly", and AI falls very, very far short of it.