The idea of driverless vehicles isn’t a new one – we’ve seen TV and film incarnations of automated transport, from Knight Rider’s KITT to Demolition Man’s “Self Drive” and the “Johnny Cab” in Total Recall. Whilst flying vehicles such as the DeLorean in Back to the Future Part II haven’t exactly made it into the driverless blueprint, their road-bound counterparts have – albeit scaled back somewhat. But how does a driverless vehicle go from prototype testing to the open road? For this, we need to take a look at Tesla. Being at the forefront of IoT-enabled automotive technology certainly puts you in a decent position on the starting grid, but what went into the reported 47m miles of autonomous research and design? A recent post on Peerlyst interested me a great deal, and so I decided to write this article. Along the way, I wanted to see how others reacted to the driverless phenomenon – the results are surprising, with most participants willing to trust a driverless vehicle
From the initial safety standpoint, it would appear that Tesla fell on their own sword when a passenger in a driverless vehicle died in a collision with an oncoming lorry. The driverless vehicle smashed into the oncoming vehicle, which relieved the car of its roof and windscreen (no prizes for guessing what happened to the occupant); after passing under the lorry, it veered off the road, crossed a field, and went through a couple of fences before being stopped by a stationary power pole about 30 metres from the road. Tesla’s response was that the vehicle
“failed to recognise the white side of the tractor trailer against a brightly lit sky”
Call me a cynic, but isn’t recognising exactly this sort of hazard the primary function of the driverless vehicle’s sensors and cameras?
Whilst I understand that nobody can realistically predict or play out every scenario, even with the most advanced artificial intelligence, it does make you wonder why the victim of this accident chose not to remain engaged and take back control (rumour has it that he was watching a Harry Potter DVD at the time). In reality, this isn’t self-drive – it’s more aligned with “assisted drive” or autopilot. Tesla have since been vindicated after six months of US government enquiries into the safety of its driverless vehicle technology, as noted here. After watching this case unfold, other technology vendors have jumped onto the driverless bandwagon. Such names include Amazon, Volkswagen, Ford, and even Airbus with a “pilotless” small aircraft. I’d expect the aircraft is a second-generation drone built to carry passengers instead of cargo or Amazon’s deliveries – but doesn’t this in itself raise an alarm bell? Admittedly, commercial aircraft have had autopilot for a number of years, but a human pilot is still responsible for takeoff and landing. Can you imagine these tasks being automated on a passenger aircraft carrying 300 passengers?
The risk of automated vehicles
As previously noted, all technology comes with an associated risk – in Tesla’s case, a fatality. Based on this, how safe are driverless vehicles? Tesla has been credited with showing that its technology can reduce human-related accidents by 40% by making decisions based on local environmental variables such as pending hazards or unpredictable and careless driving. Despite such impressive statistics, this did not seem to help in the case of Joshua Brown. In addition, the driverless car project from Google seemed to have its own issues after a collision with another vehicle that had jumped a red light in Mountain View. Admittedly, this crash wasn’t actually caused by the driverless vehicle, and even with preemptive intelligence, you’d have to question how feasible it would be to get out of harm’s way if there are obstructions in other lanes – similarly, mounting the kerb or sidewalk wouldn’t exactly be ethical either, owing to the risk to pedestrians. However, Google isn’t entirely exonerated in this instance, as one of their driverless vehicles struck the back of a bus – oddly enough, in the same street. According to Google, the driverless vehicle was travelling at 5mph, so it would seem that this was more a result of a poorly executed manoeuvre than a full-on collision. So are these autonomous vehicles completely safe? Courtesy of Inside Edition, let’s get their view
Inevitably, Tesla’s claim of being able to reduce human-related accidents thanks to its patented safety technology is very likely to increase appeal across a broad spectrum of enthusiasts and sceptics alike. However, it does make you wonder how various scenarios would play out without a human driver involved. For example:
- If two driverless vehicles were involved in a collision, who would be at fault? This is likely to raise a few eyebrows in the legal and insurance professions, and could ultimately end up as the proverbial bowl of legal spaghetti that would need to be unravelled. Perhaps something akin to an aircraft black box should be considered in this case.
- What provisions will be made for employees who drive for a living – taxi drivers, long-distance truck drivers, and so on? Will they be cross-trained and redeployed elsewhere? Admittedly, it would be extremely appealing from a cost perspective for freight companies to be able to make deliveries without drivers running the gauntlet of “pass the baton” in order to work legally around a tachograph and still meet a delivery target, but you need to consider the impact on those who perform this particular function as a way of living. And even in driverless scenarios where goods are being transported from one place to the next, you’d still need human interaction to unload the cargo – at the moment, anyway.
In a similar vein, there are also a large number of positives that the introduction of driverless vehicles could create. One that immediately springs to mind is a reduction in road rage. Take a look at the clip below, courtesy of the Naked Gun trilogy, for an idea of how this could prevent similar situations
Road rage in a driverless vehicle effectively becomes non-existent: two people making hand gestures from within their vehicles, with neither actually responsible for the quality of driving, would be a pointless exercise. However, artificial intelligence making decisions based on driving conditions raises inevitable questions. For example, what happens if a driverless vehicle destined for a foreign market is used in the UK? Given that here in the UK we drive on the left, would a vehicle from, let’s say, France be able to adapt in an automated fashion? Being in a driverless vehicle travelling at 60mph into oncoming traffic would be enough to cause widespread panic amongst the passengers of the vehicle and other road users.
In addition, what happens if a driverless vehicle goes “rogue”? Imagine this scenario:
You’re cruising along the highway without a care in the world when suddenly you look at the management console in the vehicle to find that it’s running Windows – and it’s just blue-screened…
OK, perhaps not. But you get the subtle undertone of “maybe we should all get out and back in again”, courtesy of Microsoft. The point here is that the vehicle in this instance could refuse a manual override. At this stage, you’d need to consider your options – attempt to control the vehicle, or bail out (provided the doors were not locked). Worse still, the driverless vehicle could be subject to sudden intervention in the form of either a remote exploit or a ransomware attack. Here’s another thought. In the right circumstances, a criminal gang could first determine your whereabouts by exploiting the GPS location from your smart device (think Find My iPhone, for example). It’s perfectly feasible that your handset is connected to the vehicle in some fashion – be it Bluetooth or WiFi. Using a rogue access point, the gang could determine the vehicle you are travelling in, gain remote control, and essentially perform a virtual kidnap – a bit like bringing the pigeons to the cat. Too far-fetched? It’s not. Tesla already has the capability of summoning your vehicle to your doorstep via an app on the phone. Very convenient on a cold morning, but could this be an exploit waiting to be leveraged and executed?
The real problem with driverless vehicles is not just safety – it’s also very much security. It’s been proven several times that Tesla technology isn’t as secure as you’d like to think, and whilst these exploits are quickly patched, how many more are there? In addition, over-the-air updates could also be subject to malicious intent – for example, a deliberately modified kernel and firmware could be downloaded autonomously by thousands of vehicles believing it to be a legitimate update when, in fact, the only “enhancement” it provides is to allow a remote attacker complete control of the vehicle (admittedly, this would require exploitation of the Tesla network in order to distribute the modified firmware, but with the dark web looking to recruit people “on the inside”, this is a very realistic threat). Tesla have since enhanced the security of the vehicle’s CAN bus, meaning that a cryptographic key known only to the vendor is required before an update is accepted. Despite the “what if”, this is still a very plausible scenario if you consider the previous point around “recruitment”.
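To illustrate the general principle of vendor-signed updates (not Tesla’s actual mechanism, which isn’t public – the names and key below are purely hypothetical), the vehicle verifies a signature over the firmware image using keying material only the vendor can produce, and rejects anything that doesn’t match. A minimal sketch:

```python
import hashlib
import hmac

# Hypothetical vendor signing key. A real OTA system would use an
# asymmetric key pair, with only the public half stored in the vehicle.
VENDOR_KEY = b"vendor-secret-signing-key"

def sign_firmware(image: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """Vendor side: produce a signature (here an HMAC) over the image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def accept_update(image: bytes, signature: bytes,
                  key: bytes = VENDOR_KEY) -> bool:
    """Vehicle side: install only if the signature verifies.
    compare_digest avoids leaking information via timing."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

firmware = b"\x7fELF...legitimate image"
sig = sign_firmware(firmware)

print(accept_update(firmware, sig))                # genuine update: True
print(accept_update(firmware + b"\x90", sig))      # tampered image: False
```

Note the limitation of this symmetric sketch: if the HMAC key leaks, an attacker can sign anything, which is exactly why production systems prefer asymmetric signatures – the vehicle holds only a public verification key that is useless for forging updates.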
There is a wealth of possibilities that driverless technology could provide, but would you be completely comfortable with both the security and safety aspects?