Not a fan of Tesla or Musk, but I think it always bears repeating in these conversations that AI driving will be much safer than human driving if it isn’t already.
Unfortunately, accidents will happen, but when an accident happens with an AI, ALL the other AIs get to learn from that failure going forward.
I’m very happy that in my old age, I’ll have some future version of this driving me around… or more likely, taking the wheel from me if I do something stupid.
AI driving is only as good as its sensors.
While most other companies use LIDAR, Musk switched to video cameras because they’re cheaper.
Which is why Tesla “FSD” is worse than competitors.
This isn’t necessarily accurate. More sensors mean more raw data that needs to be parsed and computed, and you can run into issues where the two systems disagree and the computer doesn’t know which one to trust. Additionally, things like rain and snow can confuse LIDAR.
It may very well be that LIDAR is a required component for autonomous driving, but no company has a fully functional system yet, so none of us can do more than speculate on what sensors are necessary.
Computers don’t require two systems to agree. They just need good algorithms to analyse the data from both sensors.
The human body has sight and hearing sensors. Sometimes our sensors disagree (lightning vs thunder from a distance), but we have the algorithms to analyse the input and come up with the correct conclusions.
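To make that concrete, here’s a minimal sketch (entirely hypothetical, not any vendor’s actual stack) of how an algorithm can reconcile two sensors that disagree: weight each reading by how much you currently trust it, instead of treating disagreement as a dead end. The function name and numbers are made up for illustration.

```python
def fuse_ranges(camera_m: float, camera_var: float,
                lidar_m: float, lidar_var: float) -> tuple[float, float]:
    """Fuse two noisy distance estimates (metres) into one, weighting each
    by the inverse of its current error variance (Kalman-style fusion).

    A perception stack might, for example, inflate lidar_var in heavy rain
    or snow so the camera estimate dominates.
    """
    w_cam = 1.0 / camera_var
    w_lid = 1.0 / lidar_var
    fused = (w_cam * camera_m + w_lid * lidar_m) / (w_cam + w_lid)
    fused_var = 1.0 / (w_cam + w_lid)
    return fused, fused_var

# Example: camera says the obstacle is 22 m away, LIDAR says 20 m, but we
# distrust the LIDAR reading more right now (say, because it's snowing).
print(fuse_ranges(camera_m=22.0, camera_var=4.0, lidar_m=20.0, lidar_var=9.0))
```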
With good enough algorithms, you don’t even need two systems. Humans can drive perfectly fine off vision alone.
“This thing that does not exist, and that nobody has any idea how to make,” will totally be safer than human driving.
You know what is safer than human driving and we know how to make? Trains.
I’m still waiting for my train from LA to SF. It’s been in the works since I was in college. I’ve already graduated, had multiple jobs, early retired, and there’s still no sign of it.
Sorry, but that’s just silly.
Plenty of people have great ideas on how to make self-driving cars, and we’re seeing them come into play.
If you don’t understand that computer reaction time is ludicrously faster than human reaction time, and what that means for safety, I really can’t help you, though.
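To put some rough numbers on it (assumed figures, purely back-of-the-envelope): at highway speed, the distance covered before braking even begins dominates the difference.

```python
# Back-of-the-envelope stopping distances. The numbers (1.5 s human reaction,
# 0.05 s computer reaction, 7 m/s^2 braking) are assumptions for illustration,
# not measurements of any specific system.

def stopping_distance(speed_mps: float, reaction_s: float,
                      decel_mps2: float = 7.0) -> float:
    """Distance travelled during the reaction time plus the braking distance."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

speed = 30.0  # ~108 km/h
print(f"human:    {stopping_distance(speed, 1.5):.0f} m")   # ~109 m
print(f"computer: {stopping_distance(speed, 0.05):.0f} m")  # ~66 m
```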
We all understand the benefits of computer reaction time; computer-assisted safety features are being included in cars all over the world.
But those are “stop” features, which make the car refrain from doing something harmful. The problem is the “go” features, which give the car decision power.
We tend to forget about all the lives saved by the “stop” features and focus on one life lost through a “go” feature. It may be a shortcoming of human nature, but we are what we are, and this is why “go” features don’t have a future.
I’m pretty sure people get hit by trains on a daily basis.
Autopilot is terrible and the fact that they advertise it as a reputable system is abhorrent. And yes, I own a Tesla.
I’m pretty happy with autopilot in our cars, especially on road trips. It really helps with driving fatigue.
A straight, flat interstate with well-painted lines in clear conditions is the only time I trust it anymore. Waaaaaay too many close calls in every other situation.
The FSD stack understands the road MUCH better than any other car I’ve used out there. But its decision-making can still be dumb when deciding which lane to be in.
I believe statistically FSD is already a better driver than a human. Of course there are situations that confuse the AI and it makes errors a human wouldn’t, but this kind of stuff gets slowly ironed out over time. People also seem to forget that human drivers make pretty fucking stupid mistakes too. Enough so that 40,000 of them die every year in the US alone. 100% safety is probably impossible to achieve, and 99.99% safety still means 33,000 accidents per year.
It’s easy to pick on Tesla due to the CEO being quite unpopular, so every time a Tesla does something it’s not supposed to, it gets so much media attention that it seems way more common than it really is.
Nevertheless, self-driving cars are here to stay, and there will come a time when wanting to drive by yourself will be considered irresponsible and unsafe. And I say this as someone with zero interest in owning such a car.
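For what it’s worth, here is one guess at the arithmetic behind that 33,000 figure (my assumption, not something the comment states): 0.01% of roughly 330 million US residents per year.

```python
# Assumed reading of the 99.99% figure: ~330 million US residents, each with
# a 0.01% (= 1 - 99.99%) chance of being in an accident per year.
us_population = 330_000_000
failure_rate = 1 - 0.9999                   # 0.01%
print(round(us_population * failure_rate))  # 33000
```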
AI driving will probably be safer one day, but there is no real data today demonstrating that its current state is. At the same time, we’re getting lots of examples where it fails at the most basic stuff.