
This summer, Drive.ai will begin a six-month pilot program in Frisco, Texas, to launch its driverless car service. Drive.ai isn't offering rides to the public quite yet, and at the outset its vehicles will include human safety drivers who can intervene if the vehicle's automated driving system fails. The human role is limited, however, and Drive.ai intends to phase out safety drivers until the vehicles operate fully autonomously. If all goes according to the company's plan, fully driverless cars could be on the road at the conclusion of the pilot program, at the end of 2018.
The benefits of driverless cars are easy to see: less traffic, a diminished need for parking lots, and the ability to work or watch TV while being driven. In time, personal transport may even become less expensive.
But what about safety?
In March of this year, an autonomous car operated by Uber struck and killed a pedestrian in Tempe, Arizona, during approved vehicle testing on public roads. Uber's sensors detected the victim, but the system was unable to determine that she was a human being, classifying her first as an unknown object and then as a vehicle. The vehicle's automatic braking system had been intentionally disabled prior to the incident, and the vehicle failed to give the safety operator on board an adequate alert to step in.
At the 2018 Mid-Year TTLA (Texas Trial Lawyers Association) Conference, speaker Daniel Hinkle, Senior State Affairs Counsel at the American Association for Justice, challenged the industry message that driverless cars are safer. In fact, Hinkle pointed out that when the recent Uber fatality is weighed against the comparatively few miles automated vehicles have driven, they come out far less safe per mile than their non-automated counterparts.
What limits the safety of driverless cars? Automated cars simply do not understand the complexities of human interactions, the law, and the roadway. In the case of the Uber accident this past March, the automated system could not even recognize a pedestrian as a human being. That accident, which notably involved a vehicle with a backup driver on board as an added safety precaution, was the first of its kind and reminded lawmakers and civilians alike of the inescapable perils of automated transport.
Who is liable?
If driverless car accidents are an inevitable part of our future, who is liable to the passengers, other drivers, and pedestrians who may be injured by an automated car? In Texas, the owner of the driverless car is considered the driver or operator of the vehicle and is therefore the one liable, rather than the automated driving system itself. Texas law is among the more permissive in the nation when it comes to rolling out automated driving programs. In fact, Senate Bill 2205 does not even require companies to alert state and local governments in Texas when driverless vehicles are put on the road, nor does it require a human operator to be in the vehicle. The bill also survived efforts to raise the minimum insurance coverage required for such vehicles, instead keeping minimums in line with the coverage traditional cars must carry.
What does this mean for you?
If you or someone you know is injured by a driverless car, seek legal counsel immediately. The law governing such incidents is still taking shape, and you will need an attorney's help to navigate your claim and your interactions with insurance adjusters and the legal system. The personal injury lawyers at the Stilwell Law Firm in Houston, Texas, are working hard to stay on top of the latest legal developments in automated driving and are here to answer any questions you may have regarding your claim.