this post was submitted on 24 May 2024
197 points (98.0% liked)

Technology


Would you like a spicy spaghetti dish? Just use some gasoline.

top 18 comments
[–] TommySoda@lemmy.world 107 points 6 months ago* (last edited 6 months ago) (2 children)

It's almost like LLMs aren't the solution to literally everything, like companies keep trying to tell us they are. Weird.

I honestly can't wait for this to blow up in a company's face in a very catastrophic way.

[–] youngalfred@lemm.ee 50 points 6 months ago (1 children)

Already has - Air Canada was held liable for its AI chatbot giving a customer wrong information that he used to buy bereavement tickets. They tried to claim they weren't responsible for what it said, but the tribunal found otherwise, and they had to pay damages.

[–] barsquid@lemmy.world 8 points 6 months ago

That's not catastrophic yet. It only cost them money that would otherwise have been margin on top of a low-priced ticket.

[–] Max_P@lemmy.max-p.me 24 points 6 months ago (2 children)

AI is basically like an early access game, except the entirety of big tech is rushing to roll it out to as many people as possible first.

[–] sp3tr4l@lemmy.zip 27 points 6 months ago* (last edited 6 months ago)

Hah, remember when games and software used to be tested to ensure they would function correctly before release?

At least with Early Access games you know it's in development.

What has it been, nearly a decade now that we just expect nearly everything to be broken on launch?

[–] tsonfeir@lemmy.world 11 points 6 months ago

It’s like computer game box art in the 80s. The game might be fun, but it really looks like PONG. It doesn’t look at all like the fantasy art they had painted for the box.

AI can be a great tool for business. It can help you think, work, and produce a higher-quality product. But people don’t understand its limitations, or that its success depends heavily on the user and on how it was trained.

[–] essteeyou@lemmy.world 32 points 6 months ago (3 children)

It's not going to be long before this seriously injures or kills someone.

[–] dojan@lemmy.world 19 points 6 months ago* (last edited 6 months ago)

Yeah. My mother is getting phishing emails and genuinely believes that Nancy Pelosi is sending her emails asking for monetary support. We’re not even American. Like, not even the same continent.

Not everyone is as critical as they ought to be when reading stuff on the internet. It doesn’t help that LLMs have a tendency to state things confidently or matter-of-factly.

People not familiar with the tech will read it and take it at face value, ignoring the “this is AI generated and might be wrong” disclaimer because, to some people, that sounds so technical that their brain doesn’t even process it.

[–] TimeSquirrel@kbin.social 19 points 6 months ago

I can't imagine all the shit it's picked up from 4chan's /b/.

[–] Icalasari@fedia.io 7 points 6 months ago

Man, who'd have guessed that the thing that would potentially slow eventual AI dominance is companies rushing to use it? All the horror and sci-fi stories implied rushing would be what CAUSES it.

[–] AmbiguousProps@lemmy.today 27 points 6 months ago (2 children)

I hope Google gets sued once this inevitably backfires.

[–] demonsword@lemmy.world 10 points 6 months ago (1 children)

they can outspend basically anyone on lawyers' fees

[–] essteeyou@lemmy.world 9 points 6 months ago

Once again, we're probably relying on the EU to do something.

[–] Blue_Morpho@lemmy.world 1 points 6 months ago

You can't sue Google. They don't have a number to reach anyone and their email is canned bot responses.

[–] Gsus4@mander.xyz 24 points 6 months ago* (last edited 6 months ago)

This is such a disinfo nightmare. Imagine if it were trained (prompting would actually be easier) to spread high-quality data with strategically planted lies, maximizing harmful, confident incorrectness.

[–] brsrklf@jlai.lu 18 points 6 months ago* (last edited 6 months ago)

The most baffling part of it is that it looks like zero attempt was made to weigh the credibility of sources.

Using Reddit as a source was bad enough (of course, they paid for it, so now they must feel like they need to use this crap). But one of the examples in the article is just parroting stuff from The Onion.

Edit: I've since learned that the Onion article was probably seen as "trustworthy" by the AI because it was linked on a fracking company's website (as an obvious joke, in a blog article).

If all it takes for a source to be validated is one link with no regard for context, I think the point stands.
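
To make that concrete, here's a minimal, purely hypothetical sketch of the kind of "one link from a trusted site validates the source" check I'm describing. The function, domains, and example URL below are invented for illustration and don't reflect how Google's system actually works:

```python
# Hypothetical illustration of naive "one link = trusted" source validation.
# None of these names or domains reflect any real system.

TRUSTED_DOMAINS = {"frackingcompany.example"}

def naively_credible(source_url: str, linked_from: list[str]) -> bool:
    """Mark a source as credible if any 'trusted' domain links to it,
    with no regard for whether the link was a joke or satire."""
    return any(domain in TRUSTED_DOMAINS for domain in linked_from)

# A satire piece passes the check because a fracking company's blog
# linked to it as an obvious joke:
print(naively_credible("theonion.example/some-satire", ["frackingcompany.example"]))  # True
```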

[–] towerful@programming.dev 3 points 6 months ago

People hate having their favorite brand associated with vile or unethical things.

True. But not ads, which is what this quote is talking about. People hate ads; it's the ads themselves they hate, not the context they appear next to.
If your favourite brand hired some neo-nazi as their new spokesperson, that's a bit different than some garbage ad sitting beside some garbage AI content.
The only reason "ads beside garbage content" is ever leveraged (ie a news story) is as a way to either hurt the garbage content or hurt the company the ad is for.

Like with shitty Twitter content: consumers can pressure Twitter to deal with it by alerting companies that their ads are being shown next to that content. Companies then leverage the fact that they are paying Twitter to get their ads away from it. If enough companies do this, Twitter might change its content policy to prevent that kind of shitty content.
Like with YouTube: it has loads of demonetization policies to ensure companies that advertise there don't get negative press from association with the content, which means YouTube should end up with a majority of quality content.

But, no. (The majority of) People don't hate their brand advertising next to particular content. People just hate ads.

[–] Monument@lemmy.sdf.org 2 points 6 months ago

This almost makes me wish I didn’t overwrite some of my shittier shitposts on Reddit.

If I’m ever bored enough, I’m going to re-edit the like, top 10 posts in my old account with authoritative nonsense. Maybe I’ll use AI to write it!