this post was submitted on 02 May 2024
A serious self-driving vehicle must be able to see its surroundings with different sensors. But then it needs a lot of computing power on board to merge the data streams coming from those different sensors, on top of the computing power required to properly predict the trajectories of the dozens of other objects moving around the vehicle. I don't know about the latest models, but I know the Google cars of a few years ago had the boot occupied by big computers with several CUDA cards.
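To give a sense of the prediction workload, here is a minimal sketch (my own illustration, not taken from any actual stack) of extrapolating tracked objects with a constant-velocity model; real systems use far richer motion models and fuse multiple sensor streams:

```python
import numpy as np

def predict_trajectories(positions, velocities, horizon=3.0, dt=0.1):
    """Predict future positions for each tracked object under a
    constant-velocity assumption: p(t) = p0 + v * t."""
    steps = int(horizon / dt)
    times = np.arange(1, steps + 1) * dt          # shape: (steps,)
    # positions: (N, 2), velocities: (N, 2) -> trajectories: (N, steps, 2)
    return positions[:, None, :] + velocities[:, None, :] * times[None, :, None]

# Two objects: one moving along x, one along y
pos = np.array([[0.0, 0.0], [10.0, 5.0]])
vel = np.array([[2.0, 0.0], [0.0, -1.0]])
traj = predict_trajectories(pos, vel)
print(traj.shape)  # (2, 30, 2)
```

Even this toy version is a per-object, per-timestep computation; doing it for dozens of objects, against fused sensor data, at tens of hertz, is where the compute budget goes.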
That's not something you can put in a commercial car sold to the public. What you get instead is a car that relies on a single camera to look around and has a sensor in the bumper that cuts the engine when activated, but doesn't create an additional data stream. Maybe there is a second camera looking down at the lines on the road, but its data stream isn't merged with the other; it is only used to adjust the driving commands. I don't even know whether the little onboard computer these cars have can compute the trajectories of all the objects around the car. Few sensors and little processing power: that is not enough, and it is not a self-driving car.
When Tesla sells a car with driving assistance, they tell the customer that it is not a self-driving car, but they fail to explain why: where the difference lies and how big the gap is. That's one of the reasons we have had so many accidents.
It starts from the same news, but, taking the idea from the book in the link, it asks something different.
But those were prototypes. These days you can get an NVIDIA H100: several inches long, a few inches wide, one inch thick. It has 80GB of memory running at 3.5TB/s and 26 teraflops of compute (for comparison, Tesla's Autopilot runs on a 2-teraflop GPU).
The H100 is designed to be run in clusters, with eight GPUs on a single server, but I don't think you'd need that much compute. You'd have two or maybe three servers, with one GPU each, and they'd be doing the same workload (for redundancy).
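A minimal sketch of the redundancy idea: several units run the same workload and their outputs are compared, so a single faulty unit can be outvoted. The `majority_vote` helper here is hypothetical, purely to illustrate the scheme:

```python
from collections import Counter

def majority_vote(outputs):
    """Pick the most common result from redundant compute units.
    With two units a disagreement can only be detected, not resolved;
    with three, a single faulty unit can be outvoted."""
    counts = Counter(outputs)
    result, n = counts.most_common(1)[0]
    agreed = n > len(outputs) // 2   # strict majority
    return result, agreed

# Three redundant servers, one faulty
print(majority_vote(["brake", "brake", "accelerate"]))  # ('brake', True)
```

This is why three servers is a more natural choice than two: a 2-way split tells you something is wrong but not which unit to trust.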
They're not cheap... you couldn't afford to put one in a Tesla that only drives 1 or 2 hours a day. But a car/truck that drives 20 hours a day? Yeah that's affordable.
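A back-of-envelope check of that claim. The GPU price and service life below are my own assumptions, not quoted figures:

```python
# Amortization sketch. The ~$30,000 H100 price and 5-year service
# life are illustrative assumptions, not quoted figures.
gpu_price = 30_000        # USD, assumed
service_years = 5
hours_per_day = {"private car": 1.5, "commercial truck": 20}

for vehicle, h in hours_per_day.items():
    total_hours = h * 365 * service_years
    print(f"{vehicle}: ${gpu_price / total_hours:.2f} per driving hour")
```

Under those assumptions the cost per driving hour differs by more than an order of magnitude between the two usage patterns, which is the commenter's point.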
Real self-driving software must do a lot of things in parallel; computer vision is just one of its many tasks. I don't think a single H100 will be enough. The fact that current self-driving vehicles don't use that much processing power doesn't mean a lot: they are prototypes running in controlled environments or under strict supervision.