A few months ago (October 2016, to be precise), I came across an article that caught my attention. I was planning on doing something with it earlier, but it ended up falling through the cracks. Life became chaotic on almost all fronts, and so this fell to the bottom of the pile. Until I came across it once again today.
The article, posted on LinkedIn by John Battelle (journalist and co-founder of a number of media platforms, including Wired magazine), focused on the long and messy transition period between the driver-operated and autonomous vehicle eras.
The article took a decidedly common approach to the topic. Outside of fascination, the next most common approach to futuristic technology (such as autonomous vehicles and artificial intelligence) is often what I call the Boo factor. Essentially, contemplating the absolute worst case scenario.
For AI, this tends to be the possibility of our enslavement (or extermination) at the hand of our own creation (I have touched on this subject before). As for autonomous vehicle technology, the highlighted issue is usually the trolley problem.
For me personally, the answer is incredibly obvious. If the goal of preserving life requires a sacrifice, the choice with the fewest fatalities is always the right choice. In fact, the only thing that makes this a problem to begin with is emotion. Which is why it’s important to keep emotions in check when pondering these things. If we’re using the trolley scenario, an emotional response (which could well end in inaction) will end in maximum fatalities.
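Stripped of emotion, the rule above is almost mechanical. Here is a minimal sketch in Python; the option names and casualty figures are hypothetical illustrations, not part of the canonical problem:

```python
# Minimal sketch of the "fewest fatalities" rule described above.
# Option names and casualty figures are hypothetical illustrations.

def least_harm(options):
    """Return the option with the fewest expected fatalities.

    `options` maps an action name to its expected fatality count.
    Inaction is just another option, so an emotional freeze-up
    ("do nothing") is scored like any other choice.
    """
    return min(options, key=options.get)

choices = {
    "do nothing": 5,      # trolley continues into the group
    "pull the lever": 1,  # trolley diverted onto the side track
}
print(least_harm(choices))  # -> pull the lever
```

Note that "do nothing" carries a score too, which is the whole point: inaction is a choice with consequences, not a neutral default.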
A better way of looking at the trolley problem is to scale it up. In fact, not only scale it up but make the situation potentially realistic. Use situations that real deciders may very well face in real life. For example, rather than a mere trolley, let’s switch it up a bit.
Instead of an empty trolley on a path towards five non-existent people (and a non-existent fat man), I give you an aircraft filled with potentially hundreds of innocent bystanders.
Everyone remembers 9/11. With that in mind, consider this scenario.
It’s a busy travel day at any large international airport around the world. One of these massive intercontinental birds takes to the sky and initially sets out on its long journey, filled to capacity with passengers and fuel. All is well for the first part of the journey, the plane is traveling and checking in as anticipated. However, at some point, something changes.
The plane stops answering radio calls. Worse yet, its transponder disappears (but it is still trackable). And then you see the plane change course. It goes from its original heading to a heading that will eventually bring it to a population center. If the aircraft were to crash in this city, you are looking at a MINIMUM of hundreds of casualties, with a distinct possibility of thousands.
The aircraft is carrying just over 500 people of all backgrounds, ages, and cohorts. Fighter jets scrambled to the scene observe chaos in the cabin, and suspicious persons at the aircraft’s controls. All radio calls to the aircraft are going unanswered, and no attempt is being made to acknowledge the communications in any way (the plane’s equipment is unlikely to be faulty). By all accounts, it looks like the hijackers are not looking to cooperate.
There you have it. The trolley problem, on steroids. Knowing these details, what would you do?
Do you do nothing and hope (pray?) for the best possible outcome, ignoring past instances of such situations ending in disaster? Or do you take down the aircraft, guaranteeing the death of over 500, but potentially saving the lives of thousands?
Are you glad that you are not the one who might have to make this decision in a split second? Me too.
This is why I fail to take the trolley problem all that seriously. It is not as much a problem as it is a test of judgment. Does reason prevail in your decision-making processes no matter what variables are at play? Or can they be clouded by emotion?
We can call that part 1.
For part 2, I will move on to something that seems to be missing from these conversations. That is, the fact that we have had at least semi-autonomous vehicles around for decades. The technology has just been primarily deployed on vehicles that average people generally do not have operational access to. Once more, we come to aircraft.
Planes of all sizes and uses have been increasingly autonomous (or at least semi-autonomous) for decades now. And as the percentage of overall human input has dropped, the safety of the industry as a whole has improved. These systems have improved so much, in fact, that pilots have increasingly grown TOO reliant on this technology. A fact that became all too clear after two modern Airbus aircraft (Airbus being known as a leader in fly-by-wire technology) crashed due to their operators’ overreliance on automated safeguards. One (an A320) crashed off the coast of France in 2008 after some of the protections were disabled for a test flight. The other (an A330) was Air France 447, which went down in 2009 due to its operators’ lack of skill in hand-flying the aircraft. A glaring oversight on the part of airlines, considering that the automated systems cede control to the human operators when abnormalities (such as conflicting data about important external parameters) are detected.
While human observation is an integral part of aviation (at least for now), an argument could be made in some instances for giving the machines even more control. One instance is collision avoidance.
Technology already exists to warn pilots when their aircraft is dangerously close to another. Called TCAS (Traffic Collision Avoidance System), it issues a climb request to one aircraft and a descend request to the other in order to safely separate them. However, this only helps if both operators take the proper evasive maneuvers.
A lesson learned after a midair collision over Germany in 2002 took 71 lives. While one aircraft obeyed the TCAS-generated request (descend), the other did not, due to confusion caused by conflicting instructions from Air Traffic Control. As a result, both aircraft descended and collided.
Another easily avoidable problem is a stall. In this context (an aerodynamic stall), it means the point at which the aircraft’s wings can no longer generate enough lift to stay airborne, typically because the angle of attack has grown too steep or the airspeed has fallen too low. While many accidents have been caused by stalls over the years, one of the more prominent is AF447.
Improper inputs ended up putting the aircraft into a stall that could have been easily remedied. The stall resulted from the aircraft’s nose being held up, causing the wings to lose lift. While one pilot was aware of this and was attempting to correct the angle, the other continued to hold the nose up. Neither properly communicated their actions to the other, and so their conflicting inputs essentially canceled out. The opposing pilot eventually realized his mistake and took corrective action, but by then it was too late. The aircraft had fallen below 10,000 feet and didn’t have room to recover.
In both of the above circumstances, I can’t help but think that more aircraft autonomy may have produced very different results. Systems already in place had detected the problems and issued the proper warnings. It was the human inputs that culminated in tragedy.
In the case of the midair, if both aircraft are communicating and one detects that the other is taking an opposing action, would it not be logical for the automation to take over and reduce the threat? In fact, for the automation to take over on both aircraft until the threat is alleviated?
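The kind of coordinated takeover described above can be sketched as a toy rule. To be clear, this is not the real TCAS protocol (which negotiates complementary advisories over a transponder datalink); the function and scheme here are my own illustration of the core idea, namely that two conflicting aircraft must commit to opposite maneuvers:

```python
# Toy sketch of coordinated collision avoidance, loosely inspired by
# how TCAS pairs complementary resolution advisories. The names and
# negotiation scheme are illustrative, not the actual protocol.

def coordinate(alt_a, alt_b):
    """Assign opposite vertical maneuvers to two conflicting aircraft.

    The higher aircraft climbs and the lower one descends, so the
    failure mode of the 2002 midair (both aircraft descending)
    cannot occur: the assignments are opposite by construction.
    """
    if alt_a >= alt_b:
        return {"a": "climb", "b": "descend"}
    return {"a": "descend", "b": "climb"}

advisories = coordinate(alt_a=36000, alt_b=35900)
print(advisories)  # -> {'a': 'climb', 'b': 'descend'}
```

The point of handing this to the automation on both aircraft is exactly that construction-by-opposites: no amount of confusion in either cockpit can produce two descents.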
And how about AF447?
The aircraft detected an imminent stall long before anyone on board had pieced things together. The aircraft also, at least in theory, detected the conflicting pilot inputs that were causing the stall. So if it is that cut and dried, shouldn’t the aircraft take corrective action of its own accord?
It is indeed complicated. While humans are fallible creatures, machines do not always have the most accurate picture either, so even they cannot be entrusted alone. I guess the big question is: what is the right balance between man and machine? It has been a conundrum in aviation for decades. And it will increasingly be a conundrum for car designers and manufacturers going forward.
And on that note, I will get to the article, as promised some 1,400 words ago.
Our most current case in point is the autonomous vehicle. The received wisdom in the Valley is that the technology for self-driving cars is already here — we just have to wait a few years while the slowpokes in Washington get with the program. Within five years, we’ll all be autopiloted around — free to spend our otherwise unproductive driving time answering email, Snapchatting, or writing code.
Except, come on, there’s no way that’s gonna happen. Not in five years, anyway.
I can agree with this sentiment. While automation and software in the workplace will make big advances in the coming years, I doubt that autonomy in vehicles is going to keep pace. There are WAY too many variables to consider.
The most obvious is the other human drivers still using the roadways. First, because of the flawed nature of the human (which these machines have to work around). But also because most people are tied to their vehicles for at least 5 to 10 years (leases, and the typical lifespan of many modern vehicles). It should also be considered that it may be very difficult (if not impossible), in terms of both practicality and expense, to outfit many vehicles currently on the road with the modifications necessary to make autonomy possible.
We will get there, I have no doubt (well, barring some calamity that renders it all irrelevant). But I would ballpark 20 or 30 years. Because not only will all the equipment (vehicles) have to be updated, but so too will human psychology. Not just people who enjoy the thrill of driving, but also people who don’t want to part with valued, obsolescent vehicles and don’t have the funds to cover the necessary upgrades (at least 2 people in my circle come to mind).
Reprogramming a computer is one thing. Reprogramming a human mind geared to a given status quo is quite another.
It’s the messy human bits which will slow it all down. Sure, the technology is pretty good, and will only get better. But self-driving cars raise major questions of social and moral agency — and it’s going to take us a long, long time to resolve and instrument the answer to those questions. And even when we do, it’s not clear we’re all going to agree, meaning that we’ll likely have different sets of rules for various polities around the world.
It will indeed be an interesting conversation, though I don’t think it will differ much from country to country. It seems to me a conversation of drivers vs. passengers. Once the technology is more widely deployed and proven (with the reduced accident and fatality rates to buttress the reasoning for its mass deployment), it seems that a choice will have to be made.
Do you allow both types of vehicle to operate? Do you designate each to separate corridors? Or do you outlaw human piloted vehicles altogether?
One thing is for sure . . . libertarian party platforms EVERYWHERE will get a whole lot more interesting. The days of fretting about whether or not driver licensing is government overreach will be gone.
It will also be interesting to see how this could be deployed, particularly in very interconnected places like Europe. For big countries like Canada and the US, who will take the lead?
Actually, for the US, I don’t need to ask. New technology is automatically put under the jurisdiction of the federal government. So they will be the decider of sorts.
For Canada and others, however . . . I am not sure. Particularly for Europe.
While it would be interesting to see how the Canadian provinces tackle this problem, Europe will be even more interesting. While Canada will have the federal government to ensure at least a minimum amount of consistency between provinces, all the independent nations of Europe are another ball game altogether.
A mess indeed.
At the root of our potential disagreement is the Trolley Problem.
We have been through this (and then some), so the explanation is unnecessary.
The following quotes are slightly paraphrased.
Our current model of driving places agency — or social responsibility — squarely on the shoulders of the driver. If you’re operating a vehicle, you’re responsible for what that vehicle does.
But autonomous vehicles relieve drivers of that agency, replacing it with algorithms that respond according to pre-determined rules. Exactly how those rules are determined, of course, is where the messy bits show up.
In a modified version of the Trolley Problem, imagine you’re cruising along in your autonomous vehicle, when a team of Pokemon Go playing kids runs out in front of your car. Your vehicle has three choices: Swerve left into oncoming traffic, which will almost certainly kill you. Swerve right across a sidewalk and you dive over an embankment, where the fall will most likely kill you. Or continue straight ahead, which would save your life, but most likely kill a few kids along the way.
What to do? Well if you had been driving, I’d wager your social and human instincts may well kick in, and you’d swerve to avoid the kids. I mean, they’re kids, right?!
I like that I am not the only one that thought of modifying the trolley problem in creative and morbid ways to get a point across. I still like mine better though, because it’s far more realistic.
The first things that come to mind are the brakes and the horn. Whether human or machine, these tools are available to both. But yes, that is a very simplistic way to view the scenario. Especially considering that I was once in a similar (but very real) situation due to my own negligence as a child.
I was riding my bike with a couple friends one evening. We were headed to a store to buy candy, which involved crossing a fairly busy road. It seems that in my excitement, I didn’t look both ways, putting me right in the path of a motorist. Rather than hit me, they ended up swerving and hitting a utility pole.
I remember (and will likely always remember) the impact. Riding along and out of nowhere (right beside me!) a “BANG!”, followed by a cloud of dust and plastic debris from the vehicle flying in the air. It scared the holy hell out of me.
At the time, I told the authorities that the car came out of nowhere and that I thought the street was clear (a claim contested by the driver). I don’t think I was lying (I can still picture a clear street). But either way, despite the vehicle involved no doubt being written off, as far as I know, no one was hurt. I do wonder what became of the driver, however. That was at least 20 years ago (likely longer).
While a conveniently handy anecdote, it is an anecdote nonetheless. While anecdotes do not tell the whole story, they can provide color and context. In this case, it could be seen as a literal real-life interpretation of the trolley problem. Involving a VERY real child and a VERY real adult. In this real-life scenario, the adult made the decision that is generally accepted as the right one. The driver swerved and hit the pole as an alternative to more than likely killing me. Though my feelings on life have fluctuated in the years since that incident, I am still grateful that this person allowed me this chance. Possibly even at their own life’s expense.
Yet, there is an overlooked factor here. A quite substantial one at that. That is, the fact that the risk profile (for lack of a better term) is not equal for both parties in the equation.
The author of the article (and really, many people tackling this subject) seems to assign the same level of risk (fatality) to both sides of the coin. If no corrective measures are taken, I, and the children in the scenario presented by the author, would perish. If corrective measures are taken, the drivers in both scenarios more than likely perish.
This does not jibe with reality.
There once was a time when many (if not all) cars on the road could be considered death traps in an accident (particularly a head-on collision). Becoming a projectile due to the lack of seatbelts was one reason. Becoming (literally) impaled on the steering column was another. However, those days are long gone.
Seatbelts are generally mandatory, aside from some grandfathered exceptions. Airbags reduce the risk of fatality, as do vehicles designed to crumple and collapse in order to dissipate impact forces. While the driver may end up getting HURT, assuming fairly low speeds, the driver will more than likely come away alive. As opposed to anyone outside the protective bubble that is the vehicle.
In short, using the trolley problem in this context is problematic. It may be justifiable in some contexts. But those should be considered carefully.
Not doing so risks placing too much weight on extremes and exceptional circumstances. Which is unfortunate, since vehicular autonomy in the long term will do far more to mitigate vehicle travel risks than pretty much any innovation that predated it.
But Mercedes Benz, which along with just about every other auto manufacturer runs an advanced autonomous driving program, has made a different decision: It will plow right into the kids. Why? Because Mercedes is a brand that for more than a century has meant safety, security, and privilege for its customers. So its automated software will choose to protect its passengers above all others. And let’s be honest — who wants to buy an autonomous car that might choose to kill you in any given situation?
I am going to come right out and call this ridiculous, bordering on fear mongering.
I have to add a small caveat here.
When composing this piece, I did not even notice (or take into consideration) the link to the Car and Driver article above, which was the basis for the author’s stance. Having said that, however, I still stand by what I had written. Which was, essentially, that I doubt public outcry or governing bodies would ever allow vehicles with such programming to grace public roadways.
Yes, people are self-serving creatures. And big business will do almost anything to cater to damn near any cohort that will help to pad its bottom line. But designing algorithms that essentially condone murder for the sake of the occupants of the given vehicle?
Not a chance.
For one thing, this kind of hysteria is almost certain to cause governments to enact laws against this sort of thing. And for another . . . what vehicle manufacturer would WANT to be known as the one with the deadly software?
It may be a selling point for some. But I am fairly certain they will not outnumber those who are turned off by that programming choice.
And this isn’t even taking social implications into account. Short of a contrarian that loves to do anything and everything to mess with the masses, would people really want to be associated with such a death machine?
I could be wrong. Humans are known to release technology onto the world en masse, only to discover problems and flaws later. However, in this case, I doubt such an oversight would occur.
With the lay public consuming only awe- or fear-inducing information about the world of vehicle autonomy, it would be total and absolute stupidity on a MONUMENTAL level for vehicles with such programming to ever be released into the wild.
It’s pretty easy to imagine that every single automaker will adopt Mercedes’ philosophy. Where does that leave us? A fleet of autonomous robot killers, all making decisions that favor their individual customers over societal good?
To be fair, the author dials it back a bit after this statement. It feels a bit dishonest not to bring that up, so there you have it.
This is a perfect example of one of my biggest pet peeves about futuristic technology conversation. This seeming necessity to take the threat right to the VERY edge of reasonable possibility, and run with it.
Humans are dumb, don’t get me wrong. It will be our undoing. But we are also largely self-serving. Even if I can’t trust that ethics or morality will keep killer software out of autonomous vehicles, I can count on selfishness to do it.
I doubt there would be much long term desire for inherently murderous vehicles.
And yes, I did overlook the possibility of all manufacturers embracing such software (thus making contact with it unavoidable). Mainly because I figure that bad press will keep this out of the realm of possibility to begin with. Or if nothing else, the laws of all our lands will.
Ralph Nader got us all seatbelts (among other consumer protections). I don’t doubt that someone will step up to the plate to tackle this issue, if necessary.
It sounds far fetched, but spend some time considering this scenario, and it becomes abundantly clear that we have a lot more planning to do before we can unleash this new form of robot agency on the world. It’s messy, difficult work, and it most likely requires we rethink core assumptions about how roads are built, whether we need (literal) guardrails to protect us, and whether (or what kind of) cars should even be allowed near pedestrians and inside congested city centers. In short, we most likely need an entirely new plan for transit, one that deeply rethinks the role automobiles play in our lives.
That’s going to take at least a generation.
There is going to be a transition period, I have no doubt. But I don’t like the emphasis on the negative (“whether we need (literal) guardrails to protect us”).
All that is changing is the driver of the vehicle. The same precautions that apply now are also going to apply in the future. My way of hinting that we will ALWAYS have to be careful in potentially dangerous situations. If I walk into the street without looking because I am busy texting, it does not matter if I am hit by an ordinary or an autonomous bus . . . I am still the idiot that was not paying attention.
I know “Don’t be stupid!” is a red pill in some circles. But it’s a good rule of thumb to stay in one piece, in the journey of life. In many contexts.
And as President Obama noted at a technology event last week, it’s going to take government.
…government will never run the way Silicon Valley runs because, by definition, democracy is messy. This is a big, diverse country with a lot of interests and a lot of disparate points of view. And part of government’s job, by the way, is dealing with problems that nobody else wants to deal with. — President Obama
Otherwise known as, essentially, what I have alluded to previously.
I do have one criticism of President Obama, however. Well, of the author of the article (given that the quote is used in the context of his work). I disagree that no one else wants to tackle these types of problems (with the exception of government, which is tasked with them by design). Even if heavy issues like this are generally beyond what most ordinary folks want to delve into, there are many philosophers and other wise minds who live for this stuff. People who could well provide helpful insights.
We are here. We have always been here, attempting to nudge the hopelessly misguided towards the beacon of reason. All you have to do is ask.
Governance takes time. The real world is generally a lot messier than the world of our technological dreams. When we imagine a world of self-driving cars, we imagine that only one thing changes: the driver shifts from a human to a generally competent AI. Everything else stays the same: The cars drive on the same roads, follow the same rules, and act just like they did when humans were in charge of them. But I’m not convinced that vision holds. Are you?
I am convinced. Not only am I convinced that things will not be all that different from today (aside from the vehicle operator anyway), I think they will be BETTER.
Roadways worldwide are a disaster. There are some good and proficient drivers. But there are many, MANY that are lacking in various ways. The human element. Even the best drivers may not be able to avoid disaster if they come across one of these problematic drivers. Making terrible drivers (and really, humans in general) not just a danger to themselves, but also a danger to everyone sharing the roadway with them.
Automation has a proven track record of reliably replacing often-flawed human inputs in all manner of contexts. Be it on the factory floor (where humans are both costlier and less productive) or in the aviation industry (which has seen significantly fewer accidents as automation became more prominent).
More vehicular automation and autonomy could make common trips even faster. Consider traffic devices like stop signs and lights. If every vehicle on the road were keeping track of where every other nearby vehicle was at any moment (like TCAS does for aircraft), the need to stop (or yield) at many intersections would vanish. Though traffic flow or pedestrian crossings may preserve the need for traffic lights (or some flow-control mechanism, anyway), I sense that these intersections would flow much more quickly than they currently do. Instead of being timed to clear a mixture of aggressive and passive motorists, autonomy should ensure a standardized speed for all vehicles in the queue. Which means that the turnaround rate should be faster overall.
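One way to picture such a stop-sign-free intersection is a simple slot-reservation scheme. This is entirely hypothetical (no real V2X standard works exactly like this); the sketch only illustrates the idea that interconnected vehicles can book crossing windows instead of queueing at a light:

```python
# Hypothetical sketch of slot-based intersection management for
# interconnected autonomous vehicles. Real V2X coordination is far
# more involved; this only illustrates the "no stop sign" idea.

def assign_slots(arrivals, crossing_time=2):
    """Give each vehicle a crossing slot no earlier than its arrival.

    `arrivals` maps vehicle id -> estimated arrival time in seconds.
    Vehicles are served in arrival order; each occupies the
    intersection for `crossing_time` seconds. A vehicle arriving at
    an empty intersection crosses without stopping at all.
    """
    schedule = {}
    free_at = 0
    for vid, eta in sorted(arrivals.items(), key=lambda kv: kv[1]):
        start = max(eta, free_at)   # wait only if the box is occupied
        schedule[vid] = start
        free_at = start + crossing_time
    return schedule

print(assign_slots({"car1": 0, "car2": 1, "car3": 5}))
# -> {'car1': 0, 'car2': 2, 'car3': 5}
```

Note car3 in the example: it arrives after the earlier traffic has cleared, so it sails through at its own ETA with zero delay, which is exactly the standardized-flow benefit described above.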
Another context that will be greatly helped by vehicle autonomy is emergency response times. I cannot count the number of times that I have seen emergency vehicles caught behind people who don’t do what is mandated of them (GET THE HELL OUT OF THE WAY!).
Having a transmitter in emergency vehicles that changes traffic lights within 2 blocks of them to green would help even now. But automation would make things even better. All opposing traffic would automatically clear and deviate as necessary.
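A crude sketch of that preemption idea, with city blocks modeled as positions along a straight line (the function name and the two-block radius are just the illustration from the paragraph above, not any real signal-preemption system):

```python
# Hypothetical sketch of emergency-vehicle signal preemption.
# Blocks are positions on a line; real systems are more complex.

def preempt(lights, emergency_pos, radius=2):
    """Turn every light within `radius` blocks of the emergency
    vehicle green; lights farther away keep their current state.

    `lights` maps block position -> current state ("red"/"green").
    """
    return {
        pos: ("green" if abs(pos - emergency_pos) <= radius else state)
        for pos, state in lights.items()
    }

lights = {0: "red", 1: "red", 3: "green", 6: "red"}
print(preempt(lights, emergency_pos=2))
# -> {0: 'green', 1: 'green', 3: 'green', 6: 'red'}
```

With full autonomy, the same broadcast that flips the lights could also command surrounding vehicles to pull aside, removing the human compliance problem entirely.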
I do concede that my interpretation of autonomous vehicles is based around an interconnected fleet. All vehicles would need to be fully compatible with one another, which would likely require manufacturer cooperation (or a government mandate). Not to mention possible infrastructure upgrades to add external control inputs either under or near all existing roadways. This is seemingly in stark contrast to the individual islands of autonomy and automation presented in this (and most other) articles on the subject.
It is my hypothesis of a future that could go anywhere from here. But I arrived at this conclusion for what seem to me rational reasons.
While self-driving vehicles may be able to exist as stand-alone entities, adding a component of communication and interconnection opens up the potential of the technology even more. From travel time to overall safety, interconnected deployment would be the best option for maximizing this technology’s full potential.
I haven’t a clue where the future of autonomous vehicles is headed. I don’t even know where the conversation is headed. What I do know, however, is that there are interesting things to come.