Below: Designers often position multiple sensors in hidden spots around the car, then stitch the data together with high-speed processors to generate 360° views.
All photos courtesy of Insight LiDAR

Light detection and ranging (LiDAR) is a crucial sensing technology for autonomous vehicles (AVs). While AVs use radar, cameras, and ultrasound, LiDARs are often viewed as the eyes of autonomous vehicles. Frost & Sullivan notes in a recent report, “The LiDAR sensor is gaining utmost importance in being one of the primary enablers of autonomous driving.”

As autonomous vehicles move out of the lab and into the real world, a new LiDAR technology – frequency-modulated, continuous-wave (FMCW) LiDAR – will play an important role.

LiDAR must-haves

LiDAR provides an AV with critical 3- or even 4-dimensional data on the world around it, enabling quick and accurate decisions to keep passengers, pedestrians, and others safe. LiDAR performance measures that ensure safe AV operation include:

  • Range: AVs must be able to detect and identify objects up to 200m (656ft) away. A vehicle traveling 65mph covers 200m in about 7 seconds. The AV must detect and identify the object, such as a small girl in a dark coat, and have enough time to decide how to react.
  • Sensitivity: Different materials reflect different amounts of light. A white car may have 40% reflectivity while a glossy black car reflects only 4%. LiDARs for AVs must detect a 10% reflectivity target at 200m.
  • Resolution: Cameras specify resolution in megapixels. LiDAR specifies resolution by the angular spacing between pixels. Current generation LiDAR sensors typically have a resolution of 0.2° × 0.1° (vertical × horizontal), with some systems as high as 0.1° × 0.1°. While this is adequate for shorter ranges (<100m), long-range LiDAR (up to 250m) needs better resolution.
  • Frame rate: LiDARs for this application need a frame rate of at least 10 frames per second (fps), with 20fps or more being desirable. A higher frame rate typically means faster decisions by the AV.
  • Field of view: AVs need a full 360° horizontal field of view. AV manufacturers accomplish this by placing a rotating LiDAR on top of the vehicle or by placing multiple LiDARs at different locations on the vehicle. The goal is to minimize the number of LiDARs per vehicle while still achieving 360° coverage. Vertical field of view is typically 30° to 40°.
  • Immunity: LiDAR sensors must operate equally well in bright sunlight or at night. They also must work while other LiDARs operate close by. Traditional LiDAR addresses this issue with optical filters and pulse encoding schemes. The new generation of FMCW LiDAR is naturally immune to such signals due to its detection technique.
  • Velocity measurement: To know which objects to track and avoid, AVs need to determine the velocity of objects in their path. Traditional LiDAR sensors calculate velocity by taking range calculations throughout time, then calculating the object’s speed. This can be error-prone and time consuming. FMCW LiDAR captures the velocity of objects with a single measurement.
  • Cost: Early LiDAR systems cost anywhere from $75,000 to more than $100,000. While prices have dropped, the LiDARs in today’s advanced driver assistance systems (ADAS) and AVs still cost $4,000 to $8,000 per unit. With multiple LiDARs needed per vehicle, this is still too expensive for widespread deployment. While high costs are acceptable for experimental vehicles, widespread adoption will require LiDAR sensors that cost less than $250 per unit in high volume.
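Taken together, the range and frame-rate requirements above define a reaction budget that is easy to sanity-check. The sketch below uses the article's figures (65mph, 200m, 10fps); the helper name is illustrative:

```python
# Reaction-budget check for the range and frame-rate requirements above.
MPH_TO_MPS = 0.44704  # miles per hour to meters per second

def time_to_cover(range_m: float, speed_mph: float) -> float:
    """Seconds until a vehicle at speed_mph reaches an object range_m away."""
    return range_m / (speed_mph * MPH_TO_MPS)

t = time_to_cover(200, 65)
print(f"{t:.1f}s to react")              # about 7 seconds, as noted above
print(f"{int(t * 10)} frames at 10fps")  # frames available to detect and classify
```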

LiDAR's limitations

LiDARs used in AVs and ADAS today rely on a detection technique called time of flight (ToF): imaging a scene by sending out a short light pulse and measuring the time it takes for a reflection to return. Only a small amount of light reaches the receiver, requiring sensitive detectors.

Most current LiDARs use lasers that operate around 905nm. Short-pulse lasers and sensitive detectors are available at this wavelength; however, there is a limit to the amount of laser power that can be delivered while remaining eye-safe, making that 200m range requirement a challenge.

FMCW LiDARs transmit a continuous beam of laser light, sweeping through various wavelengths. Like ToF sensors, FMCW LiDAR looks for the signal reflected from an object; however, it compares the frequency of the reflection to a local copy of the signal that it sent out. The difference in frequency determines the range. When the return signal combines with the local copy, the return signal is amplified by 10x to 1,000x, enabling much more sensitive detection than ToF.
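The range-from-beat-frequency relationship can be sketched in a few lines. The chirp bandwidth and duration below are illustrative assumptions, not figures from any particular sensor:

```python
# How coherent FMCW detection turns a beat frequency into a range.
# Chirp bandwidth and duration are illustrative assumptions.
C = 299_792_458.0  # speed of light, m/s

def beat_to_range(f_beat_hz: float, bandwidth_hz: float, chirp_s: float) -> float:
    """Range implied by the beat between the return and the local copy of the chirp."""
    # The chirp sweeps bandwidth_hz over chirp_s, so a round-trip delay tau
    # produces a beat frequency of tau * (bandwidth_hz / chirp_s).
    tau = f_beat_hz * chirp_s / bandwidth_hz
    return C * tau / 2  # divide by 2: tau covers the round trip

# A 1GHz chirp over 10us: a ~133MHz beat corresponds to roughly 200m
print(beat_to_range(133.3e6, 1e9, 10e-6))
```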

This process, called coherent detection, allows FMCW LiDARs to recognize dim objects at a greater distance. It also brings natural immunity to sunlight and other LiDAR sensors.

Driving directly into bright sunlight can blind drivers and it presents a similar problem for traditional LiDARs. Optical filters can help overcome this, but they come at a cost. More importantly, other LiDAR systems operating in the vicinity can confuse traditional ToF LiDARs, a problem that will worsen as more LiDAR-equipped vehicles enter service. FMCW LiDAR sees only the exact frequency range that it transmitted. Other LiDAR or bright sunlight signals are naturally rejected.

Coherent detection can also improve security. Cheap laser pointers can interfere with some ToF LiDAR systems, causing the AVs to drive off the road, swerve, or stop. FMCW LiDARs ignore such disruption attempts because the sensor rejects all light that is not an exact copy of the transmitted wavelength sweep.

Insight LiDAR’s frequency-modulated, continuous-wave (FMCW) system.
LiDAR systems can detect large objects such as cars and trucks and smaller on-road obstacles such as bicyclists and pedestrians.

Faster identification, classification

The LiDAR sensor detects and identifies objects from 0m to 200m, providing high-resolution data quickly enough for the system to detect, identify, classify, and act. Today’s best ToF sensors, with 0.1° × 0.1° resolution, put only a few pixels on a pedestrian at 200m, not nearly enough to identify and classify.

Perception engineers prefer to have at least 40 pixels on any object, regardless of distance, to quickly and accurately classify it and decide what action to take. Placing 40 pixels on a pedestrian at 200m equates to a resolution of roughly 0.025° × 0.025°, four times finer in each axis (16 times more pixels on target) than the best current systems.
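The pixels-on-target arithmetic can be checked directly. The pedestrian dimensions (0.5m wide, 1.7m tall) are assumed here for illustration:

```python
import math

# Pixels-on-target estimate behind the resolution argument above.
# The pedestrian size (0.5m wide, 1.7m tall) is an assumed figure.

def pixels_on_target(width_m, height_m, range_m, res_h_deg, res_v_deg):
    """Approximate pixel count a LiDAR places on a flat target at a given range."""
    ang_w = math.degrees(math.atan2(width_m, range_m))
    ang_h = math.degrees(math.atan2(height_m, range_m))
    return (ang_w / res_h_deg) * (ang_h / res_v_deg)

print(pixels_on_target(0.5, 1.7, 200, 0.1, 0.1))      # today's best: a handful of pixels
print(pixels_on_target(0.5, 1.7, 200, 0.025, 0.025))  # 0.025 deg: well over 40 pixels
```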

But putting 0.025° × 0.025° resolution over the entire field of view would generate an enormous amount of data, too much for the system to process quickly. Insight LiDAR and other technology companies have developed systems with dynamic resolution control that enable very high resolution (0.025° × 0.025°) in the central area of interest. Because Insight LiDAR uses FMCW detection, every pixel also carries a velocity measurement in addition to range. Perception teams can therefore detect, identify, and classify objects much more quickly and make faster, more accurate reaction decisions.
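A rough point-budget comparison shows why dynamic resolution control matters. The field of view, frame rate, and region-of-interest sizes here are illustrative assumptions, not Insight LiDAR's actual figures:

```python
# Point-budget comparison: uniform high resolution vs. a foveated scan.
# FOV, frame rate, and region-of-interest sizes are illustrative assumptions.

def points_per_second(h_fov_deg, v_fov_deg, res_deg, fps):
    """Points per second for a uniform scan of the given field of view."""
    return (h_fov_deg / res_deg) * (v_fov_deg / res_deg) * fps

uniform = points_per_second(120, 30, 0.025, 10)        # whole FOV at high resolution
foveated = (points_per_second(120, 30, 0.1, 10)        # coarse background scan
            + points_per_second(20, 10, 0.025, 10))    # high-res central region

print(f"{uniform / 1e6:.1f}M pts/s uniform vs {foveated / 1e6:.1f}M pts/s foveated")
```

With these assumed numbers the uniform scan needs roughly eight times the point rate of the foveated one.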

LEFT: FMCW LiDAR sensors generate 360° views around a vehicle.

Chip-scale LiDAR

To meet cost goals, all optical functionality including laser emission, calibration, control, and detection should be integrated on a single photonic-integrated circuit (PIC). All electronic controls and processing should be on an application-specific integrated circuit (ASIC), eliminating fiber amplifiers, fiber routing, and fiber connections, along with the reliability concerns that come with micron-scale alignments.

Long-range FMCW LiDAR, coupled with high resolution and full PIC/ASIC integration, can perhaps meet the aggressive industry cost goals while delivering the critical performance needed for safe AV operation.

Sensors for autonomous vehicles continue to evolve, and to meet the critical long-range, high-resolution requirements, FMCW LiDAR is emerging as a critical enabling technology.

Insight LiDAR

Autonomous Products

Ground-penetrating radar

TerraVision localizing ground penetrating radar (LGPR), developed at MIT Lincoln Laboratory for military applications, sends radio waves into the ground, creating a digital fingerprint of the subsurface.

The underground map of soils and rocks becomes the reference to guide autonomous vehicles and is immune to above-ground conditions such as snow, fog, rain, or dust that present huge challenges to the usual AV sensors.

LGPR uses radar to map underground rocks, soil layers, pipes, and roots. It stitches together each 3m-deep slice image to create a 3D fingerprint that can be used by any LGPR-equipped vehicle to know exactly where it is.

LGPR testing has shown in-lane localization accuracy of about 4cm at highway speeds.
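The core idea of LGPR map matching, sliding a live scan along a stored fingerprint and picking the best-correlating position, can be sketched with synthetic data (a real system matches 2D/3D radargrams, not 1D noise):

```python
import random

# Toy sketch of LGPR-style map matching: slide a live scan along a stored
# subsurface fingerprint and pick the offset that correlates best.
# The signals are synthetic; a real system matches 2D/3D radar images.

random.seed(0)
fingerprint = [random.gauss(0, 1) for _ in range(500)]  # stored subsurface profile
true_offset = 137
live_scan = fingerprint[true_offset:true_offset + 64]   # what the vehicle sees now

def score(i):
    """Correlation of the live scan against the map at candidate offset i."""
    return sum(a * b for a, b in zip(live_scan, fingerprint[i:i + 64]))

best = max(range(len(fingerprint) - 64), key=score)
print(best)  # recovers true_offset, i.e. the vehicle's position along the map
```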