What is fuelling the autonomous vehicle race?

Self-driving cars have the potential to revolutionise the automotive industry. They operate without a driver, which opens the door to new possibilities: improved mobility, reduced traffic congestion, and more.

In 2021, the global autonomous vehicle market was valued at almost $106 billion, and it is projected to exceed $2.3 trillion by 2030. Several major companies, including Tesla, Ford, and Uber, are actively developing their own automated vehicles.

The fuel behind these vehicles? Data. 

Data captured from the world around us and fed to machine learning algorithms is how these cars learn the lay of the land. Autonomous vehicles must be able to recognise where they are going, detect and avoid hazards, and safely transport passengers. For these detection capabilities to work, large amounts of visual data are collected, including faces, licence plates and other biometric information.

These vast amounts of data bring privacy risks and create vulnerabilities to data breaches. In response, the EU released guidance in 2022 on these detection capabilities, which include the ability to detect signs and road obstructions. But is it enough?

Yes, self-driving cars may well become the norm in our future. But where is the line between building a technologically advanced car that is safe for people both in and around it, and using every kind of data imaginable to create it?


What are the privacy risks of autonomous vehicle data?

Whilst these vehicles have impressive capabilities, the way their data is collected and used can tread a fine line.

In the automotive industry, data can be used to track individuals’ movements and profile their driving habits. Through video annotation, developers identify and label objects in the footage with bounding boxes, segmentation masks and object classes. This process enables the cars to detect, track, and classify objects in the video data captured by their cameras.
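
To make the annotation process more concrete, here is a minimal sketch in Python of what a single video-annotation record could look like. The field names and label values are illustrative assumptions for this article, not any particular vendor's schema or tooling.

```python
from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    x: float        # left edge of the box, in pixels
    y: float        # top edge of the box, in pixels
    width: float
    height: float

@dataclass
class Annotation:
    frame_index: int        # which video frame the label applies to
    object_class: str       # e.g. "pedestrian", "vehicle", "traffic_sign"
    track_id: int           # stable ID so the same object can be followed across frames
    box: BoundingBox
    segmentation_mask: list = field(default_factory=list)  # optional polygon points

# Example: a pedestrian tracked in frame 120 of a dash-camera clip.
label = Annotation(
    frame_index=120,
    object_class="pedestrian",
    track_id=7,
    box=BoundingBox(x=412.0, y=230.5, width=48.0, height=110.0),
)
```

Records like this, accumulated over millions of frames, are what the training pipeline consumes, and they are also where personal data such as faces and licence plates first becomes structured and searchable.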

Google’s self-driving cars rely on cameras, thermal imaging devices, radio detection and ranging (RADAR), and light detection and ranging (LIDAR) devices to collect data about the environment. The vehicles can then analyse their driving environment (weather conditions and surroundings) and detect signage, lanes, other vehicles and pedestrians.

These cars require the mass collection of sensitive data, including biometric, location and personally identifying information, to optimise performance, learn routes and, ultimately, improve safety. However, collecting it increases the risk of that data falling into the wrong hands.

If this data is not properly secured, the consequences can be extremely detrimental.

Studies have shown that when autonomous cars are connected to IoT systems to collect this data, they become more vulnerable to data exploitation and cyber attacks.

Manufacturers also typically use third parties and subcontractors, increasing the chain of people with access. 

As a result, there is a greater risk of data being leaked, incorrectly published, or accidentally sent to or stored in a non-compliant location. Additionally, this personal information could be used to unfairly target or discriminate against individuals.

This is particularly problematic in the US if data is shared across state lines, as states have different data privacy regulations and standards for compliance.


What are the privacy laws for self-driving vehicles?

There are numerous data privacy laws relating to autonomous vehicles. Several US states, including California, Nevada and Michigan, have passed their own legislation, and the European Data Protection Board (EDPB) has released additional guidance on the topic.

Manufacturing companies must take the necessary steps to protect personal privacy, responsibly manage the vast amounts of video data captured, and maintain cyber security. This includes regular data protection impact assessments, reviews of data protection protocols, and succinct, clear privacy policies that are publicly available.

One company that failed to do so was Volkswagen, which was fined €1.1 million under the GDPR in 2022 after cameras on its test vehicles recorded nearby drivers for error analysis without their knowledge or consent.

Video anonymisation is a practical way of balancing the privacy risks of collecting visual data against the need to develop smarter vehicles.

This means removing any identifiable information that can be linked back to individuals, freeing up video data to be used for training. Under several data protection laws, data is no longer considered “personal data” once it is anonymised. 

Automatically redacting this video footage embeds privacy and compliance by design, since the anonymised output can be shared and reused with far less regulatory exposure.
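
As a rough illustration of what automated redaction involves, the Python sketch below detects faces in a single frame and blurs them using OpenCV. It is a minimal example under simplifying assumptions: a production pipeline would also handle licence plates, use far stronger detectors, and process entire video streams, and the file names here are hypothetical.

```python
import cv2  # OpenCV; assumed available via `pip install opencv-python`

def redact_frame(frame):
    """Return a copy of `frame` with detected faces blurred."""
    # Bundled Haar cascade face detector; illustrative only, not a
    # production-grade anonymisation model.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    redacted = frame.copy()
    for (x, y, w, h) in faces:
        region = redacted[y:y + h, x:x + w]
        # Heavy Gaussian blur makes the face unrecognisable while keeping
        # the rest of the scene intact for model training.
        redacted[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return redacted

if __name__ == "__main__":
    frame = cv2.imread("dashcam_frame.jpg")  # hypothetical input image
    if frame is not None:
        cv2.imwrite("dashcam_frame_redacted.jpg", redact_frame(frame))
```

The design point is that redaction happens before the footage is stored or labelled, so identifiable detail never enters the training pipeline in the first place.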


How can video anonymisation help you in the automotive industry?
