Tesla Crash: Subtle Insights Into The Human-AI Relationship
Expectations, reality, and why artificial intelligence will continue to revolutionize safety in the wake of Tesla's crash.
Last week it emerged that a self-driving Tesla had been involved in a fatal crash in Williston, Florida on May 7. The car was in Autopilot mode when its on-board system failed to recognize a left-turning truck crossing its path, and the car passed underneath the trailer, shearing off its roof. There are already numerous theories about what happened, the most likely being that the car's cameras could not distinguish the pale trailer against the bright Florida sky. Another possibility is that the system registered the trailer as an overhead road sign – Tesla cars are programmed to ignore overhead signs to avoid excessive and unnecessary braking – which is plausible because the view beneath the high trailer was clear to the cameras. Whatever the cause, the crash resulted in the death of 40-year-old Joshua Brown from Ohio, and official investigations have begun.
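To make the "overhead sign" theory concrete, here is a minimal, purely hypothetical sketch of the kind of filtering heuristic involved – none of the names or thresholds below come from Tesla; they are assumptions for illustration only:

```python
# Hypothetical sketch of an overhead-object filter, illustrating how a
# perception system might suppress braking for overhead road signs.
# None of these names or thresholds come from Tesla; all are assumed.

from dataclasses import dataclass

VEHICLE_CLEARANCE_M = 2.5  # assumed minimum safe clearance above the road


@dataclass
class Detection:
    label: str               # e.g. "sign", "trailer", "car"
    bottom_height_m: float   # estimated height of the object's lowest edge


def is_braking_relevant(det: Detection) -> bool:
    """Return True if the object should trigger braking.

    The failure mode described above: a high trailer whose underside
    sits above the clearance threshold looks, to this heuristic, like
    an overhead sign and is (wrongly) ignored.
    """
    return det.bottom_height_m < VEHICLE_CLEARANCE_M


if __name__ == "__main__":
    overhead_sign = Detection("sign", bottom_height_m=5.0)
    crossing_car = Detection("car", bottom_height_m=0.2)
    high_trailer = Detection("trailer", bottom_height_m=2.8)  # the edge case

    for det in (overhead_sign, crossing_car, high_trailer):
        print(det.label, "-> brake" if is_braking_relevant(det) else "-> ignore")
```

In this toy version the high trailer falls on the wrong side of a single threshold – which is exactly why real systems fuse cameras with radar and other sensors rather than relying on one cue.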
Since the incident, Tesla has released a blog post expressing its condolences and outlining the safety procedures all Tesla cars are equipped with. For example, Autopilot is disabled by default and requires the driver to confirm they understand that it is new technology in a “public beta phase” – when it is enabled, on-screen information advises the driver that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle”. It is not yet clear whether Brown followed these instructions; that question is part of the ongoing investigation. See the Florida Highway Patrol diagram below for a visual depiction of what happened.
Image: Florida Highway Patrol
Expectations vs. reality
This kind of technology is relatively new in terms of public access – self-driving cars started surfacing in the 1980s, but have only recently become a popular and achievable concept for public use. In 2012 there was a jump in investment (more than doubling, from $6BN in 2011 to $15BN in 2012), with AI garnering $34BN in investment across 2014 and 2015 combined. Aside from the prospect of ‘robots’ taking 30% of jobs by 2025, there’s also the impending topic of safety, a discussion that raises many questions. The Tesla crash is the first time an autonomous vehicle has been involved in a fatality, and it creates all kinds of new challenges – the need for new legislation being one among many. The issue applies to artificial intelligence as a whole. Autonomous vehicles are leading the way, but as we see further automation in manufacturing, financial services, construction, food production and oil and gas – to name a few sectors – new laws will have to be carefully considered to ensure AI compliance across the board, and up-to-date guidelines will be needed for investigations and prosecutions. What’s important to remember, now and for at least the next 10 years, is that autonomous machines and AI technology are in their infancy. We must consider the relationship between humans and AI with an open mind, but with reasonable expectations and a certain level of caution.
Cause, effect and confidence
Whilst considering this tragic event, it’s important to bear in mind the track record of autonomous vehicles to date. Tesla’s Autopilot had logged 130 million miles without a fatality, compared to the US average of one fatal accident per 94 million miles and the global figure of one per 60 million miles. On those numbers, Autopilot already looks safer than the average vehicle. However, true to human form, the fatality has inspired all sorts of reactions, and for many of us the first subtle insights into how our relationship with AI technology will have to progress are dawning in the distance. Looking at the broader landscape of artificial intelligence, two contradictory yet intertwined challenges present themselves:
- Many people will struggle to trust technology until it is 100% guaranteed safe (despite it already being safer than existing human procedures)
- Users may expect more from the technology than it is capable of
There is a fine line between the two, with the latter fueling the former – over-confidence causes accidents, and accidents cause wobbles in public trust. Swedish car manufacturer Volvo has said that when it releases its ‘Drive Me’ autopilot technology in 2017, it will take full legal liability for all of its cars operating in fully autonomous mode. This might seem a bold move, but is it rather a reasonable one, given the much-better-than-average safety record of such technology? As mere humans bound by time, we will have to wait to discover whether the company's confidence is justified; what can be said is that Volvo taking responsibility (it has a lot to lose) translates to high expectations for the future of autonomous vehicles, which can only be a good thing for EHS.
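For a back-of-the-envelope feel for those safety figures, the comparison can be expressed directly (a sketch only – one fatality in 130 million miles is a single data point, not a statistically robust rate):

```python
# Back-of-the-envelope comparison of fatality rates per mile, using the
# figures quoted above. One fatality in ~130M Autopilot miles is a single
# data point, so this is indicative only, not statistically conclusive.

MILES_PER_FATALITY = {
    "Tesla Autopilot (1 fatality so far)": 130e6,
    "US average": 94e6,
    "Global average": 60e6,
}

baseline = MILES_PER_FATALITY["Global average"]
for name, miles in MILES_PER_FATALITY.items():
    rate = 1 / miles  # fatalities per mile
    print(f"{name}: {rate * 1e9:.1f} fatalities per billion miles "
          f"({miles / baseline:.2f}x the global miles-per-fatality figure)")
```

Run that and Autopilot comes out at roughly 7.7 fatalities per billion miles against 10.6 for the US and 16.7 globally – better on paper, but the sample is far too small to justify certainty either way.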
Ethics: how will AI make decisions?
Bill Ford, executive chairman of Ford Motor Company, believes the main issue at hand lies in the hardware outrunning society, with humans resistant to change or standing in the way of the impeccable safety records that autonomous technology is – or will be – capable of. Ethical dilemmas also inevitably play a major role in advancement and acceptance. In this video, he presents a scenario that raises thought-provoking ethical questions: when an intelligent vehicle must either save 10 pedestrians from an out-of-control van by intercepting the van, killing its own driver, or ignore the problem, sparing its driver’s life but letting the 10 pedestrians die, what does it do? This is a question for society as a whole, and one that may hold back, or at least slow, the progression of the technology.
Ethics aside (should anyone ever really say that?), in the wake of the Tesla accident the autonomous vehicle industry has suffered a mere bump in the road. Tesla’s shares did take a minor tumble when news of the investigation broke on June 28, but finished 2% up on the day’s trading. The incident has primarily served as an unexpected glitch in a steep learning curve. An article on CNN suggested that “Had the tractor-trailer also been driven by computer, it could have been on the same network as the Tesla. Like an air traffic control system, the network could have orchestrated the safe passage of both vehicles.” It’s an interesting and valid point, and perhaps an insight into what the future of autonomous driving holds. As we advance further towards such a world, vehicles will be able to communicate on a level playing field, without the hindrance of human error or inefficiency. However, it’s important to remember that humans are the main characters here – all of this effort in artificial intelligence would be pointless, and could even backfire on the human race, without the specific goal of safer, easier, more meaningful human lives at the forefront of everything. The human-AI relationship must be developed very, very carefully, and I think we're only at the beginning of understanding the intricacies involved.
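As a toy illustration of CNN’s air-traffic-control analogy, here is a minimal sketch of a shared coordinator granting vehicles exclusive slots through a conflict zone – the protocol, names and numbers are all invented for illustration and imply nothing about any real vehicle-to-vehicle system:

```python
# Toy sketch of the "air traffic control for cars" idea from the CNN quote:
# a shared coordinator grants vehicles exclusive time slots through a
# conflict zone. Entirely hypothetical; no real V2V protocol is implied.

class IntersectionCoordinator:
    def __init__(self):
        self.next_free_time = 0.0  # when the conflict zone is next clear

    def request_slot(self, vehicle_id: str, arrival: float, crossing: float) -> float:
        """Grant the earliest crossing slot at or after the vehicle's arrival."""
        start = max(arrival, self.next_free_time)
        self.next_free_time = start + crossing
        print(f"{vehicle_id}: cleared to cross at t={start:.1f}s")
        return start


coordinator = IntersectionCoordinator()
coordinator.request_slot("car", arrival=4.0, crossing=2.0)      # crosses 4.0-6.0s
coordinator.request_slot("trailer", arrival=5.0, crossing=3.0)  # held until 6.0s
```

Because both vehicles negotiate with the same coordinator, the collision scenario simply cannot arise – the second vehicle is held until the zone is clear, which is the essence of the networked future the article imagines.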
The biggest game-changer for EHS
As Bill Ford points out, it’s not good enough for autonomous vehicles to be 80% or 90% safe; they have to provide 100% safety 100% of the time. This is one of the big challenges for technologists working on such projects, but the industry’s progress doesn’t seem set to dwindle in the slightest. New technology is making humans safer in innovative ways – something the figures make hard to contest – and this safety performance is what will continue to be the driver (pun intended) behind the success of autonomous cars and AI as a whole. But, much as with new software, user adoption is what holds the whole project in the balance, and it remains to be seen how the human-AI relationship will develop. For me, the most resounding takeaway from the Tesla incident – perhaps the only positive one – is that all Tesla systems are now being updated to recognize and react appropriately to the circumstances that led to this disaster. Unlike humans, autonomous vehicles need only make a mistake once before every car on the global network is corrected – and for that reason, the technology is going to be revolutionary for EHS.
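That “fix once, fix everywhere” property is worth spelling out. A minimal sketch of the idea follows, with every detail assumed rather than taken from Tesla’s actual update mechanism:

```python
# Minimal sketch (all details assumed) of the fleet-wide correction idea:
# once one vehicle's incident produces a software fix, every networked
# car receives it, so the same mistake need never be repeated.

class Car:
    def __init__(self, vin: str, software_version: int):
        self.vin = vin
        self.software_version = software_version

    def apply_update(self, version: int):
        # Only move forward; older or equal versions are ignored.
        if version > self.software_version:
            self.software_version = version
            print(f"{self.vin}: updated to v{version}")


def push_fleet_update(fleet: list[Car], fixed_version: int):
    """One incident, one fix, propagated to the entire fleet."""
    for car in fleet:
        car.apply_update(fixed_version)


fleet = [Car(f"VIN{i:03d}", software_version=7) for i in range(3)]
push_fleet_update(fleet, fixed_version=8)  # the post-incident fix
```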
“As more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing.” – Tesla