Going driverless: Legal considerations for the auto industry

By Imran Ahmad and Sarah Nasrullah   


Introducing driverless vehicles into our lives comes with unique legal risks.


When the first Michael Bay Transformer movie was released in 2007 depicting alien robots that could transform into driverless vehicles of various sizes, the idea of such autonomous contraptions seemed far-fetched. Today, autonomous and connected vehicles are much closer to reality than fiction.

Studies predict approximately 21 million autonomous vehicles will be on the road by 2035, depending on how quickly the technology can be developed, tested and brought to market.

Vehicles that drive with minimal human supervision are classified by the Society of Automotive Engineers on a scale from 0 (no automation) to 5 (fully automated). They rely on sensors (such as radar and cameras) and computer analytics to perceive their environment and navigate without human input.
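The SAE scale can be summarized in a small lookup. This is an illustrative sketch only: the level names follow SAE J3016, but the `requires_human_fallback` helper is a hypothetical simplification, not part of the standard.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels: 0 = no automation, 5 = fully automated."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def requires_human_fallback(level: SAELevel) -> bool:
    # At levels 0-3 a human driver must remain ready to take over;
    # at levels 4 and 5 the automated system handles the fallback itself.
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```

The distinction matters legally: the level at which the human stops being the fallback is the point at which liability questions (discussed below) become most acute.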

Vehicles using wireless technology connect to the internet and transportation infrastructure to interact and exchange information with other vehicles to create a safer and more efficient transport network.


Autonomous vehicles offer many potential advantages. By removing human error and emotion, they could reduce road fatalities (by up to 90%, by some estimates), cut costs for transportation and trucking companies, and reduce the time lost to daily congestion in large metropolitan centres by as much as 50 minutes. But there are legal risks to consider:

  1. Cybersecurity. Autonomous vehicles have more than 100 electrical components. Any of these could be hacked and controlled from a remote location. There’s additional risk related to information gathered by vehicles that’s usually stored with a third-party cloud provider. And many devices in these vehicles come from different manufacturers, adding another source of vulnerability.
  2. Data privacy. Vehicle-collected data includes: location; the environment; driver biometrics; climate control; and the passenger communication system. This raises significant privacy concerns. Protection of personal information is governed by the Personal Information Protection and Electronic Documents Act, which applies to all provinces except those with substantially similar legislation.
  3. Accident liability. Who shoulders the liability: the OEM, or the supplier of the sensors or equipment that malfunctioned and caused the accident? This has yet to be clarified. Complicating matters further is whether an accident resulted from a malfunction in the interaction of different sensors, or from a delay in relaying information from the cloud. And how will insurance companies calculate risk as liability shifts from the driver to the autonomous vehicle and its manufacturer?

Regulations and policy

Currently, there are no federal safety standards in Canada for autonomous vehicles, but the 2016 federal budget allocated $7.3 million over two years to support their development.

Ontario is the first province to allow automated vehicle testing on its roads. Its pilot framework sets out how testing will be conducted and will inform the development of a more comprehensive regulatory regime.

Canada and the US generally harmonize safety regulations. The American Association of Motor Vehicle Administrators (AAMVA) convened US and Canadian experts to develop guidelines. In the US, the NHTSA's Federal Automated Vehicles Policy (released in September 2016) was accompanied by its Cybersecurity Best Practices for Modern Vehicles, which offered these recommendations:

• Use the guidance and best practices provided by NIST, NHTSA and Auto ISAC. Identify risks, analyze potential threats and establish rapid detection and remediation capabilities. Document companies’ actions, changes, design choices and analysis.

• Appoint a high-level, senior executive responsible for the cybersecurity of the product. A top-down approach encourages the rest of the organization to make this a priority. Allocate resources focused on researching, investigating, implementing, testing and validating product cybersecurity measures and vulnerabilities.

• Share information. Join the Auto ISAC and share as much information as possible to stay on top of cybersecurity.

• Create a reporting and disclosure policy to provide guidance to external cybersecurity researchers on how to disclose vulnerabilities.

• Develop a documented process for responding to incidents, vulnerabilities and exploits. Outline roles and responsibilities for each group to ensure a rapid response; define metrics to assess the effectiveness of the process; report all incidents, exploits and vulnerabilities to the Auto ISAC; and periodically run response capability exercises to test the effectiveness of the disclosure policy operations and internal response processes.

• Document the details related to the cybersecurity process for both auditing and accountability. Include risk assessments, penetration test results and organizational decisions.

• Implement fundamental vehicle cybersecurity protections. Develop cyber awareness, such as controlling developer access to an ECU for deployed units by eliminating or limiting access to authorized privileged users; limit diagnostic features to a specific mode of vehicle operation; consider encryption to prevent unauthorized recovery and analysis of firmware; limit the use of network servers on vehicle ECUs to essential functionality; and avoid sending safety signals as messages on common data buses.

As technology and capabilities evolve, long-term security is critical. Have a thorough understanding of policy, regulatory framework and accepted best practices.

How driverless works

Autonomous vehicles gather data from the environment using a combination of sensors, such as lidar (light detection and ranging), radar, cameras, and ultrasonic and infrared sensors.

An onboard computer combines these data feeds with GPS coordinates and detailed maps to build a three-dimensional model of the immediate environment (typically up to 61 metres). Artificial intelligence identifies important features such as other vehicles, pedestrians, lane markings and speed signs; then the computer extracts relevant information for each one, such as the speed limit, location, speed and acceleration of other vehicles and pedestrians.

These features update the initial plan so the vehicle reaches its destination safely and legally. For example, if the initial plan was to accelerate to 60 km/h but traffic is heavy and moving slowly, the vehicle must limit its speed to that of the car ahead.

Once an action is taken, the vehicle repeats the cycle of sensing, planning and acting (if needed) many times a second so it can react quickly to new situations.
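The sense-plan-act cycle described above can be sketched in simplified form. This is an illustration only: the `WorldModel` fields and the `sense`/`plan`/`act` interfaces are hypothetical placeholders, not any manufacturer's actual software.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorldModel:
    """A tiny slice of the 3-D environment model the onboard computer builds."""
    speed_limit_kph: float                      # from detected speed signs / map data
    lead_vehicle_speed_kph: Optional[float]     # None if the road ahead is clear

def sense(lidar, radar, cameras, gps_fix, hd_map) -> WorldModel:
    """Fuse sensor feeds with GPS and map data into a model of the surroundings."""
    ...  # perception / AI feature extraction happens here (placeholder)

def plan(world: WorldModel, target_speed_kph: float) -> float:
    """Choose a safe speed: never above the limit, never faster than traffic ahead."""
    speed = min(target_speed_kph, world.speed_limit_kph)
    if world.lead_vehicle_speed_kph is not None:
        speed = min(speed, world.lead_vehicle_speed_kph)
    return speed

def act(commanded_speed_kph: float) -> None:
    """Send throttle/brake commands to the vehicle's actuators."""
    ...  # placeholder
```

Using the article's example: with an intended speed of 60 km/h, a 50 km/h limit, and a lead car crawling at 20 km/h, `plan` returns 20 — the vehicle matches the traffic ahead rather than its original intent.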

Imran Ahmad is a partner specializing in cybersecurity and technology law at Miller Thomson LLP. Sarah Nasrullah practises cybersecurity law.

