Just to be clear, “Full Self Driving” is the marketing name for the product. You are instructed to keep your hands on the wheel at all times and Tesla accepts no responsibility at all if it screws up (unlike Mercedes, who takes responsibility for their level 3 autonomous driving service).
And for other people who happen to read this, the only reason Tesla may seem ahead with their technology is that they just don’t care about safety. Tesla won’t have a safe product until they actually accept responsibility for their product’s failings.
Their infotainment system and app are pretty great compared to some other brands.
I’m currently driving a VW ID.5 and it’s like they’ve never designed any kind of software interface at all.
Examples:
the VW app can tell me the car is unlocked, but can’t lock it for me.
it can’t show me the VIN, even though I had to use it to register the car (it was hidden in my user profile on the site somewhere)
I can let it pre-heat and such, but only on two schedules.
can’t schedule appointments through the app
that weird sliding thingy for switching between speed limiter and cruise control is unintuitive AF
every other time I’m driving it gives me a pop-up saying “there are new updated user settings for your account”, with only an OK and a Cancel button. Where are they? What are they? Where can I find them? Did clicking “OK” accept them? Not a clue. And when does it show this message that blocks the rest of the UI? After 1 minute of driving.
Not to hate on VW engineers, but goddamnit guys, get your shit together and hire a UX expert. I briefly drove a BMW 1 Series before the VW and its infotainment was a lot more practical to use.
FSD beta is level 2, which still counts as a driver assist system. That’s why it’s the driver’s responsibility. Level 3 means you can do other stuff while the car drives itself. If Tesla was marketing FSD beta as level 3, then by definition they would need to take responsibility when it fails. So far there’s only one death linked to FSD beta, so I don’t quite get where the “they don’t care about safety” is coming from. I’m pretty sure V12 is already a safer driver than a human. When FSD beta fails it generally means it got stuck somewhere, not that it crashed and killed the passengers.
This is the key. I’ve actually been saved a few times now by FSD catching something I didn’t see, like some deer. I’m collecting videos of the things it does that impress me to share when my trial is over.
Like sure fuck Elon, but why do you think FSD is unsafe? They publish the accident rate, it’s lower than the national average.
There are times where it will fuck up, I’ve experienced this. However, there are times where it sees something I physically can’t because of either blind spots or pillars in the car.
Having the car drive and you intervene is statistically safer than the national average. You could argue the inverse is better (you drive and the car intervenes), but I’d argue that system would be far worse, as you’d be relinquishing final say to the computer, and we don’t have a legal system set up for that regardless of how good the software is (e.g. you’re still responsible as the driver).
You can call it a marketing term, but in reality it can and does successfully drive point to point with no interventions most of the time. The places it does fuck up are consistent fuck-ups (e.g. bad road markings that convey the wrong thing, which you only know because you’ve been on that road thousands of times). It’s not human, but it’s far more consistent than a human, in both the ways it succeeds and fails. If you learn these patterns you can spend more time paying attention to what other drivers are doing and to novel things that might be dangerous (people, animals, etc.) and less time on trivial things like mechanically staying inside two lines or adjusting your speed. Looking in your blind spot or to the side isn’t nearly as dangerous, for example, so you can get more information.