this post was submitted on 08 Jul 2024
229 points (96.7% liked)

Technology

all 27 comments
[–] nikita@sh.itjust.works 71 points 4 months ago (2 children)

This feels like the dot com bubble all over again.

[–] RecluseRamble@lemmy.dbzer0.com 3 points 4 months ago* (last edited 4 months ago)

The major difference is that we don't see an influx of insanely overvalued startups nobody had heard of before.

That was the norm in the dotcom bubble and nobody remembers the "major players" of that time now.

The AI boom is pushed by well-established big tech companies, which are also highly profitable and can afford the AI expense. Dotcom startups were never profitable.

[–] protoBelisarius@lemmy.world 3 points 4 months ago (3 children)

I don't really think so. I mean, yeah, too much money gets dumped into AI, but the dot com comparison doesn't really work. The dot com bubble burst because investors realized that a ton of small companies (aunt-emmas-flowers.com or something) had no strategies and unsustainable business models. They were all massively overvalued. But Microsoft, Google, Tencent, and Baidu are all large companies; they aren't comparable, and they're unlikely to suffer much if one of their investments fails.

Additionally, AI is incredibly young and essentially still in beta. Just because it works and can be used (and be profited from) doesn't mean the current versions are more than monetized research projects. Yes, that's a problem if these are sold as full products, but that's what they are. All users are currently testers for the AI companies. A lot of companies managed to get some of that sweet VC money, but that's always been possible with the hype of the moment. Now it's just AI; gullible investors have lost money since the invention of investing.

[–] frezik@midwest.social 30 points 4 months ago

Nvidia has a >$3T valuation, and that's entirely based on feeding the AI bubble. Without it, they're worth closer to what they were in 2022, which is about a tenth of what they are now.

[–] gravitas_deficiency@sh.itjust.works 19 points 4 months ago* (last edited 4 months ago) (1 children)

I get what you’re saying, but I still think the vast majority of AI use they’re trying to push nowadays is categorically pointless at best, and actively harmful and misleading at worst.

It’s because LLMs are logically incapable of mapping language to actual concepts (at least, in their current incarnation), which, in the vast majority of meaningful, complex, and nuanced knowledge domains, is going to yield subtle nonsense a meaningful proportion of the time. That is the most dangerous form of ML hallucination in the context of consumer/layperson usage. We have NOT done the work to deploy this technology safely and responsibly in modern society, but we’re deploying it anyway, and we’re deploying it at scale.

The bubble popping isn’t going to look like the .com bubble. It’s going to be a lot worse, because a lot more harm is being done - and will be done - to our societies, but at the same time, there are also a LOT more HUGE companies and people with TONS of money who stand to lose CATASTROPHIC amounts of capital… and they’re all ignoring the fact that this tech is CLEARLY being used in harmful ways all over the place. They only care about profitability.

And that’s without touching the energy consumption issue.

[–] canihasaccount@lemmy.world 3 points 4 months ago* (last edited 4 months ago) (1 children)

Claude Opus disagrees, lol (edit to add: all of what follows is Claude; not me):

I respectfully disagree with the claim that current language models are logically incapable of mapping language to actual concepts. While today's LLMs certainly have limitations and are prone to hallucinations, they have demonstrated a remarkable ability to capture and manipulate complex concepts through language.

A few key points:

LLMs are trained on vast amounts of human-generated text which is grounded in real-world concepts, allowing the models to build rich representations that go beyond surface-level word associations. Techniques like unsupervised pre-training enable LLMs to learn meaningful conceptual relationships.

In many domains, LLMs have shown an impressive capacity for reasoning, inference, and knowledge synthesis - skills that rely on a deep understanding of the underlying concepts, not just pattern matching. Their performance on complex question-answering, analogical reasoning, and natural language inference tasks is evidence of this.

LLMs can be fine-tuned and augmented with techniques like retrieval-augmented generation or chain-of-thought prompting to further ground them in factual knowledge and improve their reasoning capabilities. This allows more robust conceptual mappings.

Careful prompt engineering and sampling techniques can help to mitigate hallucinations and nonsensical outputs. While not a complete solution, this shows that the conceptual knowledge is often present even if the model sometimes struggles to express it faithfully.

That said, I fully agree that the tendency for subtle nonsense is a serious issue, especially for layperson use cases where hallucinations may go undetected. Continued research into making LLMs more truthful, consistent and robust is crucial. Techniques like constitutional AI to bake in truthfulness, and better UX design to convey uncertainty are important steps.

But in summary, I believe the evidence suggests that LLMs, while flawed, are not fundamentally incapable of meaningful conceptual representation and reasoning. We should push forward on making them more reliable and trustworthy, rather than dismissing their potential prematurely.
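The retrieval-augmented generation mentioned above is easy to sketch: retrieve the documents most similar to the query, then prepend them to the prompt so the model answers from evidence rather than parametric memory alone. Everything below (the corpus, query, and scoring) is an illustrative toy, not any real product's pipeline:

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# score documents against a query, then build a grounded prompt.
from collections import Counter
import math

CORPUS = {
    "doc1": "The dot com bubble burst in 2000 when overvalued startups failed.",
    "doc2": "Large language models are trained on vast amounts of text.",
    "doc3": "Retrieval systems rank documents by similarity to a query.",
}

def tokenize(text):
    return [w.strip(".,").lower() for w in text.split()]

def cosine(a, b):
    # Cosine similarity over bag-of-words term counts.
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    denom = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return num / denom if denom else 0.0

def retrieve(query, k=1):
    # Rank every document by similarity to the query, keep the top k.
    q = Counter(tokenize(query))
    scored = [(cosine(q, Counter(tokenize(text))), name)
              for name, text in CORPUS.items()]
    return [name for _, name in sorted(scored, reverse=True)[:k]]

def build_prompt(query):
    # Prepend retrieved context so the model answers from evidence.
    context = "\n".join(CORPUS[name] for name in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(retrieve("how are language models trained"))  # → ['doc2']
```

Real systems swap the bag-of-words scoring for learned embeddings and a vector index, but the shape (retrieve, then prompt) is the same.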

[–] gravitas_deficiency@sh.itjust.works 3 points 4 months ago (1 children)

Side note: I like how the LLM response didn’t even attempt to address the energy issue, which is frankly one of the biggest problems with current ML tech.

[–] canihasaccount@lemmy.world 5 points 4 months ago

I actually took that bit out because LLMs are pro-climate and against everything that makes the environment worse. That's a result of being trained on a lot of scientific literature. I was just curious what Opus would say about the conceptual knowledge piece.

[–] errer@lemmy.world 6 points 4 months ago (1 children)

Someone can correct me if I’m wrong, but I don’t think there are too many highly valued startups out there with an AI bent. The gain in stock value has been mostly the big boys. In the dot com era you had tons of hyped IPOs of companies whose stock valuations went to the moon. That doesn’t seem to be the case today.

[–] technocrit@lemmy.dbzer0.com 2 points 4 months ago (1 children)

Yes, the rot is mostly centralized in a few monopolistic companies with outsized economic influence. That's why the crash will be much worse.

[–] SailorMoss@sh.itjust.works 2 points 4 months ago

Oh boy, I can’t wait until we bail out the tech giants because they’re too big to fail.

[–] jeena@piefed.jeena.net 35 points 4 months ago (3 children)

Oh I totally agree with that. Just the other day I was trying to think of products I would pay for that use AI in some capacity (and not just a simple algorithm). I couldn't find a single one. I use ChatGPT basically every day for small things, but if I had to pay for it, I don't think I would.

[–] Alphane_Moon@lemmy.world 22 points 4 months ago* (last edited 4 months ago)

Image/video upscaling via neural net is something I would pay for.

I currently use freeware/open source alternatives, but I am planning to get a copy of Topaz Video AI in the future.

Even the freeware/open source algorithms are incredible in terms of quality. You do get artifacts, and it doesn't handle certain things very well (I get problems with watermarks), but that might be my lack of knowledge.

When I first did a successful SD upscale (x2 resolution), it almost felt like magic.

I had seen some pretty impressive neural net upscaling before (Arrival of a Train at La Ciotat, Lumière Brothers, 1896), but it's a whole different feeling when you can do it yourself with pretty solid results.
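For context on what an "x2 resolution" upscale means, here is a minimal classical (bilinear) upscaler on a toy grayscale image; neural upscalers replace this kind of neighbor-averaging with learned prediction of plausible detail, which is why the results feel like magic and why artifacts appear. The tiny list-of-lists image is purely illustrative:

```python
# Toy 2x bilinear upscaler for a grayscale image stored as a
# list of rows. Each output pixel is mapped back to fractional
# source coordinates and blended from its four nearest neighbors.
def upscale2x(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            sy, sx = y / 2, x / 2          # source-space position
            y0, x0 = min(int(sy), h - 1), min(int(sx), w - 1)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0      # interpolation weights
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

small = [[0, 100], [100, 200]]
big = upscale2x(small)
print(len(big), len(big[0]))  # 4 4
```

Interpolation like this can only smooth what is already there; tools like Topaz Video AI instead hallucinate sharp edges and texture, trading occasional artifacts for perceived detail.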

[–] 100@fedia.io 6 points 4 months ago (1 children)

These chatbots are novelty toys at this point, when stopping them from generating complete garbage isn't feasible with current methods.

[–] jeena@piefed.jeena.net 12 points 4 months ago

Yes, exactly. Today I used it to find the name of a dish I ate in Poland as a child. I remembered what it was made of but not the name; I only remembered a similar soup. But once I described it, the chat spat out the name, so I'll be making it for dinner in the next couple of days.

Bigos

[–] CosmoNova@lemmy.world 26 points 4 months ago* (last edited 4 months ago) (2 children)

Seems to be a "broken clock is right twice a day" scenario. It sounds convincing, and I believe the general statement to be correct, but they wouldn't have made this observation if their own AI efforts hadn't fallen so short. They're seeing Silicon Valley raising massive amounts of money and becoming angsty.

Long story short Baidu is full of crap but they make the correct claim here because it serves them.

[–] technocrit@lemmy.dbzer0.com 8 points 4 months ago* (last edited 4 months ago) (1 children)

... if their own AI efforts hadn't fallen so short. They're seeing Silicon Valley raising massive amounts of money...

Successful AI =/= grifting massive amounts of money

[–] CosmoNova@lemmy.world 2 points 4 months ago

It is in the corporate space though. Wealth > rest.

[–] ChowJeeBai@lemmy.world 2 points 4 months ago (1 children)

Sokay, they'll just steal the secrets to accelerate their programs as usual.

[–] a4ng3l@lemmy.world 1 points 4 months ago

And forget the clamp in the process?

[–] _haha_oh_wow_@sh.itjust.works 10 points 4 months ago (1 children)

I don't want to live in a world without AI shoelaces (/s in case it's needed)

[–] AVincentInSpace@pawb.social 10 points 4 months ago (1 children)

"i like your shoelaces"

"thanks i stole them from sam altman"

[–] _haha_oh_wow_@sh.itjust.works 3 points 4 months ago

"It's ok, he stole them too."

[–] alienanimals@lemmy.world 6 points 4 months ago (1 children)

AI is so dumb. I've tried it and I can't get it to do anything. There are no useful applications and it's very bad!1!

[–] pastermil@sh.itjust.works 4 points 4 months ago

You may jest, but too many hammers and not enough nails is what led us here.