Tesla's FSD Beta's Driving Modes Bring Up Interesting Ethical Issues We Should Talk About

Image: Tesla, JDT

Ever rolled through a stop sign? Of course you have. I think I did it this morning, in fact. If you’re a driver who hasn’t, then I hope being a liar is working out for you, because I bet you have. As far as illegal things go, rolling through a stop sign is about as minor as you can get, though it is still illegal, and I suppose for a decent reason, since stop signs are generally placed at locations where stopping fully is at the very least a pretty good idea. So, with that in mind, should we be programming self-driving car AI to commit this admittedly minor crime? Tesla seems to have already decided that it’s okay.

Tesla’s current version of its Full Self-Driving (FSD) Beta software contains a feature known as “FSD Profiles,” introduced in version 10.3, which was released in October 2021 (then pulled over issues and quickly re-released as 10.3.1).

As noted in other articles about these updates, a big feature was the introduction of three “profiles” for FSD’s behavior: Chill, Average, and Assertive.

In Average and Assertive modes, the little description text reveals a bit about how the software will make the car behave:

Image: Tesla, JDT

In both these modes, the description states that the car may perform a rolling stop. Let’s be absolutely clear about what this is: Yes, it’s incredibly minor and perhaps even trivial, but this is the car telling you that its programming may cause it to decide to perform an illegal act.

The reason I’m making a Big Thing out of this is that we’re still early enough in humanity’s development of what we hope will one day be actually, fully self-driving cars that we can still look at what we’re able to do and really ask ourselves if this is the path we want to take.

Is it? I’m honestly not certain.

The specific act, rolling through a stop sign, is less important than the bigger implication here: we have traffic laws on our books that are routinely broken, because we’re human beings and we feel the overall experience of driving can be improved in some way by willfully ignoring some of those laws.

Almost all of us speed at times, too. And while you can get Tesla’s (and others’, of course) driver-assist systems to speed as well, it’s always been the human’s decision to do so. If you set the upper speed limit of your cruise control or Level 2 semi-automated driving system to 95 mph, that’s what the car will do, but that was your choice, not the car’s.

This situation is different, because the driver is not part of the decision-making process that could result in the Tesla rolling through a stop sign and breaking the law. If a cop sees you do this and pulls you over, who is to blame?

Is it the driver’s fault, because they were informed that the car might pull off such a crime when they selected the driving mode? Or should the cop send the ticket to Tesla HQ, since it was the company’s software that willfully decided to roll through the stop sign?

Do we want our eventual self-driving cars to be willing to break laws? Does this mean we need to take a realistic look at our traffic laws and adapt them a bit better to real-world behaviors and situations? Should we just legalize a slow-rolling stop at certain intersections and under certain conditions, and maybe have more flexible speed laws?

Or should we just program our cars to follow the law? Isn’t part of what makes the possibility of computer-controlled driving so appealing the idea that computers will always do the safe thing and won’t ever be tempted to break laws or run stop signs or speed, because they’re not burdened with our flawed, impulsive, horny, hungry human brains?

It may sound minor, but the line of thinking shown in these driving profiles isn’t conceptually different from what it would be if, say, Tesla actually managed to develop and sell its humanoid robot (stop laughing, this is a thought experiment) and included a Shoplift Mode that let the robot attempt to shoplift items whenever it thought it could get away with it.

Image: Tesla, JDT

Of course, Shoplift Mode doesn’t exist, but it’s not really any different from a slider that lets the car decide to violate a traffic law.

We need to think all this through now and decide what we want for our future. Do we want total law-and-rule following? Do we want certain exceptions? The ability to override as needed and permit law-breaking behavior? Or do we hand the decision over to the machine, possibly with a sliding set of acceptable parameters?

Honestly, I’m not sure exactly how this should play out. What I am sure about is that we, collectively, as a society, need to take the time and do the admittedly hard work to decide on a standard set of rules, before we just start trying shit out and seeing how far we can push it.

Because, remember, we’re humans, and part of that deal is that we’ll always push it, maybe too far.