Autonomous vehicles rely on several advanced technologies to self-navigate. We unbundle the AV to see how these technologies work together and which companies are driving them forward.
Autonomous vehicles rely on a set of complementary technologies to understand and respond to their surroundings.
Some AV companies are focusing on specific components, partnering with automakers and Tier-1 suppliers to bring their products to scale, while others, such as Zoox and Nuro, are designing their vehicles from the ground up.
We take a closer look at the technologies that make autonomous driving possible and map out the startups looking to make AVs more advanced, less costly, and easier to scale.
This market map consists of private, active companies only and is not meant to be exhaustive of the space. Categories are not mutually exclusive, and companies are mapped according to primary use case.
Perception
Autonomous vehicles have to be able to recognize traffic signals and signs as well as other cars, bicycles, and pedestrians. They also have to sense an oncoming object’s distance and speed so that they know how to react.
AVs typically rely on cameras and other sensors such as radar and light detection and ranging (lidar), each of which offers its own set of advantages and limitations.
The data collected by these sensors is blended together through a technology called “sensor fusion” to create the most accurate possible representation of the car’s surroundings.
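To make the idea concrete, here is a minimal sketch of the fusion step, assuming a simple inverse-variance weighting of two range estimates (the building block behind Kalman-style fusion). The sensor variances below are illustrative assumptions, not real specifications.

```python
# A minimal sketch of sensor fusion: combine a camera range estimate and
# a radar range estimate, weighting each by the inverse of its variance.
# The variances are illustrative, not real sensor specs.

def fuse_estimates(z_camera, var_camera, z_radar, var_radar):
    """Return the minimum-variance combination of two noisy measurements."""
    w_camera = 1.0 / var_camera
    w_radar = 1.0 / var_radar
    fused = (w_camera * z_camera + w_radar * z_radar) / (w_camera + w_radar)
    fused_var = 1.0 / (w_camera + w_radar)
    return fused, fused_var

# The camera says a pedestrian is 21.0 m away but is noisy at range;
# radar says 20.2 m with a much tighter variance. The fused estimate
# leans toward the more reliable sensor.
distance, variance = fuse_estimates(21.0, 4.0, 20.2, 0.25)
print(f"fused distance: {distance:.2f} m (variance {variance:.3f} m^2)")
```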
Cameras & computer vision
Cameras are universally used in autonomous vehicles and vehicles equipped with advanced driver assistance systems (ADAS). Unlike radar and lidar, cameras can identify colors and fonts, which helps them detect road signs, traffic lights, and street markings.
However, cameras pale in comparison to lidar when it comes to detecting depth and distance.
A number of startups are looking to create cameras for the automotive space that extract the most vivid images possible.
Light, which raised $121M in Series D in July, has developed a camera designed to compete with lidar’s accuracy. The camera can integrate images across all of its 16 lenses to extract a highly-accurate 3D image.
Light’s L16 camera, which features 16 lenses (Source: Light)
To process the data pulled in from the cameras, AV systems use computer vision software trained to detect objects and signals. The software should be able to identify specific details of lane boundaries (e.g., line color and pattern) and apply the appropriate traffic rules.
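As a rough illustration of the task, a classical lane-marking detector can be sketched in a few lines of OpenCV: find edges, then extract line segments. This is not any particular company's approach, and production systems rely on far more sophisticated learned models; the input path is a placeholder.

```python
# A toy sketch of classical lane-boundary detection: edge detection
# followed by a probabilistic Hough transform. "road.jpg" stands in for
# a frame from a forward-facing camera.
import cv2
import numpy as np

frame = cv2.imread("road.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)  # find strong intensity edges

# Extract line segments likely to be lane markings.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=100, maxLineGap=40)

for line in lines if lines is not None else []:
    x1, y1, x2, y2 = line[0]
    cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)  # overlay detection
```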
A number of startups are looking to develop more sophisticated and more efficient computer vision technology.
Companies like DeepScale are deploying deep neural networks to enhance recognition capabilities and drive error rates down over time.
Paris-based Prophesee has developed event-based machine vision that facilitates object recognition and minimizes data overload. The company’s deep learning technology mimics how the human brain processes images from the retina.
Frame-based sensors in a standard camera rely on pixels that capture an image all at the same time and process images frame-by-frame; event-based sensors rely on pixels working independently from each other, allowing them to capture movement as a continuous stream of information.
This technology reduces the data load that traditional cameras experience when processing an image from a series of frames.
Source: Prophesee
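A rough way to see the difference in code: the sketch below simulates event generation from two consecutive frames, firing only at pixels whose log intensity changes beyond a threshold. Real event cameras do this asynchronously in hardware, and the threshold here is purely illustrative.

```python
# Simulated event generation: each pixel reports only when its log
# intensity changes by more than a threshold, instead of the whole
# frame being read out at once.
import numpy as np

THRESHOLD = 0.15  # illustrative contrast threshold

def events_from_frames(prev_frame, next_frame):
    """Return (row, col, polarity) tuples for changed pixels."""
    eps = 1e-6  # avoid log(0)
    delta = np.log(next_frame + eps) - np.log(prev_frame + eps)
    rows, cols = np.where(np.abs(delta) > THRESHOLD)
    polarity = np.sign(delta[rows, cols]).astype(int)
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

# A static scene produces no events; only the changing pixel fires.
prev = np.full((4, 4), 0.5)
nxt = prev.copy()
nxt[1, 2] = 0.9  # one pixel brightens as an object passes
print(events_from_frames(prev, nxt))  # -> [(1, 2, 1)]
```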
Prophesee is looking to deploy its machine vision capabilities across several industries, from autonomous vehicles to industrial automation to healthcare. In February, the startup raised $19M in a Series B follow-on round.
Radar, lidar, & V2X
AV developers are incorporating radar and lidar sensors to complement the camera’s visual capabilities. As with cameras, the data from these sensors feeds into the vehicle’s sensor fusion software, which merges everything into a single coherent view of the car’s surroundings.
Beyond line-of-sight sensors, a number of startups and auto incumbents are working on vehicle-to-everything (V2X) technology, which allows vehicles to wirelessly communicate with other connected devices.
The technology is still in its early days, but it has the potential to provide vehicles with a live feed of nearby vehicles, bicycles, and pedestrians — even when they’re outside the vehicle’s line of sight.
Radar
Cars use radar to detect an oncoming object’s range and velocity by sending out radio waves and measuring the reflections.
Radar technology is viewed as more reliable than lidar because it has a longer detection range and doesn’t rely on spinning parts, which are more prone to error. It’s also substantially less costly. As a result, radar is widely used for autonomous vehicles and ADAS.
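The underlying math is straightforward. As a back-of-the-envelope sketch: the 77 GHz carrier below is typical for automotive radar, but the echo timing and Doppler shift are made up for illustration.

```python
# Range comes from round-trip time; radial velocity comes from the
# Doppler shift of the returned wave.
C = 299_792_458.0   # speed of light, m/s
F_CARRIER = 77e9    # typical automotive radar carrier frequency, Hz

def radar_range(round_trip_s):
    """The wave travels out and back, so halve the total path."""
    return C * round_trip_s / 2

def radial_velocity(doppler_shift_hz):
    """Doppler shift f_d = 2 * v * f_c / c, solved for v."""
    return doppler_shift_hz * C / (2 * F_CARRIER)

print(radar_range(4e-7))        # a 400 ns echo -> an object ~60 m away
print(radial_velocity(10_266))  # a ~10.3 kHz shift -> ~20 m/s closing speed
```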
Lunewave, which raised $5M in seed funding from BMW and Baidu in September, is using 3D printing to create more powerful antennas with greater range and accuracy. The company’s technology is based on the Luneburg lens antenna, a design developed in the 1940s.
Metawave is also working to enhance radar’s capabilities. The company has developed an analog antenna that uses metamaterials for faster speeds and longer detection ranges.
Metawave’s radar technology (Source: Metawave)
Metawave’s $10M follow-on seed round in May included investments from big auto names such as DENSO, Hyundai, and Toyota, as well as smart money VC Khosla Ventures. The firm announced Tier-1 supplier Infineon’s contribution to a follow-on round in August.
Light detection and ranging (lidar)
Lidar is viewed as the most advanced sensor. It is accurate enough to create a 3D rendering of the vehicle’s surroundings, which facilitates object detection.
Lidar technology creates a 3D rendering of the vehicle’s surroundings (Source: Velodyne)
Lidar technology uses infrared sensors to determine an object’s distance. The sensors send out pulses of laser light at a rapid rate and measure the time it takes each pulse to bounce off an object and return to the sensor.
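To sketch how those timed pulses become a 3D rendering: each return’s round-trip time gives a range, and the beam’s pointing angles place that range in space. The timing and angles below are illustrative.

```python
# Converting one lidar return into a 3D point in the sensor frame.
import math

C = 299_792_458.0  # speed of light, m/s

def return_to_point(round_trip_s, azimuth_rad, elevation_rad):
    """Turn a pulse's timing and beam angles into an (x, y, z) point."""
    r = C * round_trip_s / 2  # out-and-back, so halve the path
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A pulse returning after 200 ns from straight ahead, aimed slightly down:
print(return_to_point(2e-7, azimuth_rad=0.0, elevation_rad=-0.05))
# -> roughly (29.9, 0.0, -1.5): ~30 m ahead, 1.5 m below the sensor
```

Repeating this across millions of pulses per second yields the point cloud pictured above.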
Traditional lidar units contain a number of spinning parts that capture a 360° view of the car’s surroundings. These parts are more expensive to develop, and tend to be less reliable than stationary parts. Startups are working to reduce the cost of lidar sensors while maintaining high accuracy.
One solution is solid-state lidar units, which have no moving pieces and are less costly to implement.
Israeli startup Innoviz has developed solid-state lidar technology that will cost “in the hundreds of dollars,” a fraction of the cost of Velodyne’s $75,000 lidar unit, which contains 128 lasers.
In April, Innoviz announced a partnership with automaker BMW and Tier-1 supplier Magna to deploy its lidar laser scanners in BMW’s autonomous vehicles.
Innoviz’s lidar unit, the Innoviz Pro (Source: Innoviz)
Aeva is also developing solid-state lidar. It raised $45M in Series A funding in October. The company claims that its technology has a range of 200 meters and costs just a few hundred dollars. Unlike traditional lidar, Aeva’s technology shoots out a continuous wave of light instead of individual pulses.
China-based Robosense is working on solid-state lidar as well. It raised $43.3M in Series C funding in October, the largest single round of financing for a lidar company in China. Investors in the round included Alibaba’s logistics arm Cainiao Smart Logistics Network and automakers SAIC and BAIC.
Vehicle-to-everything sensors (V2X)
Vehicle-to-everything (V2X) technology enables the wireless exchange of information between vehicles and other connected devices. While still in its very early stages, V2X tech could help address the limitations of line-of-sight sensors such as lidar, radar, and cameras.
V2X sensors can detect road hazards, traffic jams, and oncoming blind spots outside the vehicle’s field of vision.
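As a simplified sketch of the kind of payload involved: real deployments use standardized formats such as SAE J2735 basic safety messages over DSRC or cellular radios, so the fields and identifier below are hypothetical stand-ins.

```python
# A toy "basic safety message" that a vehicle might broadcast to
# everything nearby, line of sight or not.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class BasicSafetyMessage:
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float
    timestamp: float

msg = BasicSafetyMessage(
    vehicle_id="veh-1042",  # hypothetical identifier
    latitude=37.7749,
    longitude=-122.4194,
    speed_mps=13.4,
    heading_deg=92.0,
    timestamp=time.time(),
)

# Once serialized and broadcast over the V2X radio, nearby vehicles can
# decode the sender's position and speed even when it is occluded.
payload = json.dumps(asdict(msg))
print(payload)
```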
Israel-based startup Autotalks is working with Hyundai to scale its V2X sensor technology for the mass market. The startup has received funding from Hyundai as well as Tier-2 supplier Samsung.
Driver data & simulation
Driver data from road testing and simulations is critical for developing self-driving technology, as it trains the algorithms that guide the vehicle.
Autonomous vehicles need to drive hundreds of millions, or even billions, of miles to validate their safety, according to the RAND Corporation. That distance would take AV developers years to accumulate with test fleets alone.
As a result, AV developers are amassing additional miles through simulation.
Simulation startups and AV developers use artificial intelligence to generate or augment training datasets for autonomous vehicles. The technology is especially helpful for training AVs on dangerous, less frequent situations, such as blinding sun or a pedestrian jumping out from behind parked cars.
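One common idea here is scenario randomization: sample thousands of variations of a rare, dangerous event so it appears in training far more often than it does on real roads. A minimal sketch, with purely illustrative parameter ranges:

```python
# Randomize a "pedestrian darts out from behind a parked car" scenario.
import random

def sample_dart_out_scenario():
    """Draw one randomized configuration for the simulator."""
    return {
        "ego_speed_mps": random.uniform(8, 18),        # approach speed
        "pedestrian_speed_mps": random.uniform(1, 3),  # walking to jogging
        "occlusion_gap_m": random.uniform(2, 10),      # gap past the parked car
        "sun_elevation_deg": random.uniform(0, 15),    # low sun -> camera glare
        "time_to_appear_s": random.uniform(0.5, 2.5),  # when the pedestrian emerges
    }

# Thousands of rare-event variations, generated in seconds.
scenarios = [sample_dart_out_scenario() for _ in range(10_000)]
```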
Israel-based startup Cognata has developed a 3D simulation platform that provides customers with a variety of autonomous driving testing scenarios.
Cognata’s 3D simulation platform (Source: Cognata)
The company raised $18.5M in Series B funding in October from investors including Airbus and Maniv Mobility.

NVIDIA is one of the major corporations at the forefront of simulation. In May, it launched a cloud-based simulation platform called DRIVE Constellation. The platform runs on the company’s GPUs and generates a stream of sensor data for the AV system to process, allowing NVIDIA to train its algorithms on billions of miles of custom-built scenarios.
In September, the company opened up its simulation platform to a partner network, including startups Cognata and Parallel Domain, as well as major tech corporate Siemens.
Another challenge associated with collecting driver data is image annotation, or labeling the data so the AV can recognize and classify objects.
Training data startup MightyAI works with companies that build computer vision models, helping them label the data used to train their systems. MightyAI offers tools for data management, annotation, and validation.
One technique the company uses to make sense of collected data is semantic segmentation, which labels video images pixel by pixel to allow for more granular processing.
Source: Medium
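To illustrate what segmentation output looks like: the tiny label map below is hand-built for the example, whereas in practice a trained network produces it for every camera frame.

```python
# Semantic segmentation assigns a class to every pixel, rather than a
# box to every object.
import numpy as np

CLASSES = {0: "road", 1: "car", 2: "pedestrian", 3: "building"}

# A 4x6 label map standing in for a segmented camera frame.
label_map = np.array([
    [3, 3, 3, 3, 3, 3],
    [3, 3, 1, 1, 3, 3],
    [0, 0, 1, 1, 0, 2],
    [0, 0, 0, 0, 0, 0],
])

# Per-pixel labels make derived quantities easy, e.g. the drivable area:
drivable = label_map == 0
print(f"drivable pixels: {drivable.sum()} of {label_map.size}")
```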
Chinese tech giant Baidu has also developed its own semantic segmentation software for ApolloScape, its open-source dataset for autonomous driving.
The technology supports image annotation across 26 classifications, including cars, pedestrians, bicycles, buildings, and street lights, helping self-driving cars recognize drivable areas and oncoming hazards.
Localization
Autonomous vehicles also need to know their precise location, both for decision making and path planning.
Many rely on GPS signals, but these measurements can be off by as much as 1-2 meters, a significant error given that an entire bike lane is only about 1.2 meters wide on average.
As a result, AV developers rely on a set of technologies, including prebuilt maps, that help reduce errors to less than 1 meter.
Prebuilt maps
As vehicles navigate themselves, they compare their surroundings to a digital map stored in their memory.
These maps, known as HD maps, are more precise than the digital maps used for personal navigation software. They contain road-based information such as lane sizes, crosswalks, and road signs, and are enhanced with data collected from exterior vehicle sensors.
Source: Ars Technica
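A simplified sketch of how a prebuilt map tightens localization: snap a noisy GPS fix to the nearest stored lane-centerline point. The coordinates below are illustrative local east/north meters, not a real HD-map format.

```python
# Map-matching: correct a noisy GPS fix against a stored lane centerline.
import math

# A lane centerline sampled every meter (hypothetical map data).
lane_centerline = [(float(x), 0.0) for x in range(100)]

def localize(gps_fix, centerline):
    """Return the centerline point closest to the noisy GPS fix."""
    return min(centerline,
               key=lambda p: math.hypot(p[0] - gps_fix[0], p[1] - gps_fix[1]))

noisy_fix = (42.3, 1.6)  # raw GPS, drifting 1.6 m toward the next lane
print(localize(noisy_fix, lane_centerline))  # -> (42.0, 0.0), back in lane
```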
A number of startups have designed the required hardware (i.e. sensors) and software that can collect data on the road and then turn it into a digital map.
DeepMap has developed map-building software that it plans to license out to automakers and AV-focused tech companies. Tier-1 supplier Robert Bosch invested in the startup in August, joining prior investors Andreessen Horowitz and Accel Partners.
Source: DeepMap
Civil Maps is also developing 3D mapping technology for fully autonomous vehicles. Using AI, the company converts raw sensor data into meaningful map information.
Some companies are building out HD maps themselves, with the intention of licensing the data out to interested parties.
Two major players in the mapping space are HERE Maps and TomTom. HERE Maps was acquired by the German automaker consortium (Audi, BMW, and Daimler) in December 2015. TomTom partnered with Baidu in January to integrate its maps of the US and Western Europe with Baidu’s extensive mapping of China.
Google is also making notable headway in the mapping space. Volvo announced in October that it was switching its map platform to Google from TomTom. Google’s self-driving arm Waymo is also building its own HD maps using data collected by its own vehicles on the roads.
Baidu is building out HD maps for its self-driving car software platform, Apollo. The company sees an opportunity to monetize the maps by selling them to automakers, and either charging service fees or integrating the fees into the cost of the vehicle.
Baidu believes that its HD maps business will eventually be larger than its search business, which is currently the largest in China.
Full systems
A number of companies are working on full autonomous driving systems rather than specific components.
While most of these startups are focusing solely on autonomous driving and partnering with automakers to deploy their technology, a few are building their vehicles from the ground up.
Autonomous driving systems
Most companies building out the full autonomous driving stack offer a package that includes computer vision and sensor fusion software, as well as the required hardware for autonomous driving. These systems are the “brains” of the autonomous vehicle.
Startups in this space typically partner with automakers to deploy their technology. In some cases, they’re making it possible to retrofit existing vehicles with the technology.
Drive.ai, for example, is using its autonomous system to create retrofit kits. After piloting a self-driving car service in Frisco, Texas for several months, the company expanded its service to Arlington, Texas in October.
Drive.ai partnered with Lyft in September 2017 to bring self-driving cars equipped with its system onto Lyft’s open platform.
Source: Drive.ai
China also has several companies working on autonomous driving systems.
Beijing-based Momenta attained unicorn status in October, raising a Series C round with contributions from EV manufacturer NIO and Chinese tech giant Tencent. Momenta has partnered with the government of Suzhou to deploy a large-scale test fleet and build out smart transportation systems in the city.
Pony.ai has also reached unicorn status. The company has partnered with Guangzhou Automobile Group, China’s second-largest automaker, to deploy its full AV stack. It launched an autonomous car fleet in Guangzhou in September, just three months after raising $102M in Series A funding.
Full vehicles
Companies like Zoox and Nuro are building vehicles from the ground up.
Zoox’s prototype vehicles differ substantially from the traditional car — they do not include a steering wheel or dashboard, and the interior contains two bench seats that face each other.
Source: Zoox
Its vehicles are not yet legally allowed to drive on public roads, so Zoox is temporarily testing its technology with Toyota Highlanders.
The company’s unique approach has garnered notable attention from investors, and has captured substantial press attention in recent months following the ousting of its co-founder and CEO.
To date, Zoox has raised $800M, including a $500M Series B round in July at a $3.2B valuation. The company plans to deploy its AVs in a ride-hailing service by 2020.
Nuro’s AV is designed to carry cargo rather than people, catering to the last-mile delivery bottleneck that plagues so many retailers.
Source: Nuro
If you aren’t already a client, sign up for a free trial to learn more about our platform.