this post was submitted on 02 Nov 2024
948 points (96.6% liked)


"Translation: all the times Tesla has vowed that all of its vehicles would soon be capable of fully driving themselves may have been a convenient act of salesmanship that ultimately turned out not to be true."

Another way to say that is that Tesla scammed all of its customers, since, you know, everyone saw this coming...

[–] MajorHavoc@programming.dev 31 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

That out of the way, FSD sucks, and it's getting worse, not better.

It's almost like they bet on the AI to teach the AI, rather than continuing to pay for skilled engineers.

Buckle up folks, we're going to see a lot more of this, across every industry, before the lawsuits go into high gear and anything gets better.

[–] capital@lemmy.world 6 points 2 weeks ago (4 children)

Since the first time I heard about FSD, I’ve been wondering why Tesla (or others) doesn’t set up a system where drivers opt in (no opt-in by default) to sending anonymized driving data to help train the model. The vast majority of the time, it's probably modeling OK driving, or at least driving with no accidents. But the shitty driving and accidents are also useful as data about what to avoid.

Maybe they’re already doing this? But then I wonder why their FSD is getting shittier rather than improving. One would think more driving data, with both good and bad examples, would only help.
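
A minimal sketch of what that opt-in pipeline might look like (hedged: every name below is invented for illustration; none of this is a real Tesla API):

```python
# Hypothetical opt-in telemetry sketch for the idea above.
from dataclasses import dataclass, field
import hashlib

@dataclass
class DrivingSample:
    vehicle_id: str                      # raw identifier, never transmitted
    sensor_frames: list = field(default_factory=list)
    had_incident: bool = False           # bad driving is useful training data too

def anonymize(sample: DrivingSample) -> dict:
    # One-way hash so uploaded samples can't be tied back to a driver.
    anon_id = hashlib.sha256(sample.vehicle_id.encode()).hexdigest()[:16]
    label = "avoid" if sample.had_incident else "imitate"
    return {"anon_id": anon_id, "frames": sample.sensor_frames, "label": label}

def maybe_upload(sample: DrivingSample, opted_in: bool = False) -> dict | None:
    # Opt-in must be explicit: by default, nothing leaves the car.
    return anonymize(sample) if opted_in else None
```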

[–] Tarquinn2049@lemmy.world 6 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Not enough paid humans sorting the data into examples of good behaviour and examples of bad behaviour. Not saying that's what is happening, since we don't even know whether there is data, but that would be the weakness in the plan if it were run the way Elon would run it.

[–] capital@lemmy.world 3 points 2 weeks ago
[–] MajorHavoc@programming.dev 1 points 2 weeks ago* (last edited 2 weeks ago)

Yeah. Current-generation learning models can do impressive things in the hands of a skilled engineer, but Elon is leading a round of class warfare against skilled engineers right now.

Shareholders need to decide which they really want to bet on to win.

[–] SoleInvictus@lemmy.blahaj.zone 5 points 2 weeks ago

I’ve been wondering why Tesla (or others) doesn’t set up a system where drivers opt in (no opt-in by default) to sending anonymized driving data to help train the model.

That's exactly how they train the model, but every Tesla is opted in with, to my knowledge, no option to opt out.

[–] bitchkat@lemmy.world 3 points 2 weeks ago

That's what they do except for the opt in part.

[–] GenosseFlosse@feddit.org 5 points 2 weeks ago (1 children)

I don't believe you can use traditional algorithms to teach the car street driving, because there are too many different variations of intersections, traffic signs, and special conditions like accidents, heavy rain or fog, road closures, or construction sites to get it right every time. Even if your autopilot is 99% correct and you drive 20,000 km a year, you still drive 200 km of it wrong.
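
To make that arithmetic concrete (a trivial check of the numbers above):

```python
# Back-of-envelope: a 99%-correct autopilot over 20,000 km a year
# still drives 200 km of it incorrectly.
annual_km = 20_000
accuracy = 0.99

error_km = annual_km * (1 - accuracy)
print(f"Driven incorrectly per year: {error_km:.0f} km")  # -> 200 km
```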

This doesn't mean that AI will be better, because then you don't even have source code to track down where it went wrong and correct it in future updates.

[–] MajorHavoc@programming.dev 1 points 2 weeks ago* (last edited 2 weeks ago)

I don't believe you can use traditional algorithms to teach the car street driving, because there are too many different variations... Even if your autopilot is 99% correct and you drive 20,000 km a year, you still drive 200 km of it wrong.

Exactly!

And this is why, if the problem is solvable, it must be solved by learning models shepherded by expert engineers. The learning models can take care of the long, boring stretches, freeing skilled-engineer time to fine-tune a learning-model/algorithm hybrid for the tricky bits.
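
As a hedged sketch of that hybrid (every function and threshold here is hypothetical, just the shape of the idea, not anyone's actual stack):

```python
# Hypothetical hybrid planner: a learned model handles routine driving,
# hand-written rules take the long tail. Names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Scene:
    confidence: float       # learned model's self-reported confidence
    is_construction: bool   # example of a known-hard condition
    visibility_m: float     # estimated visibility in metres

def learned_policy(scene: Scene) -> str:
    return "model-planned trajectory"           # stands in for the neural planner

def rule_based_policy(scene: Scene) -> str:
    return "conservative fallback trajectory"   # engineer-authored rules

def plan(scene: Scene) -> str:
    # Easy, common cases go to the learned model; low confidence,
    # bad weather, and construction fall back to the rules.
    if scene.confidence < 0.95 or scene.is_construction or scene.visibility_m < 50:
        return rule_based_policy(scene)
    return learned_policy(scene)
```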

I'm inclined to believe the problem is solvable, but since I'm not selling anything, I'm allowed to say "if". Heh.