It is not easy for a driverless car to monitor road conditions, read traffic signs, detect and classify objects, and perceive the speed and trajectory of other vehicles. More importantly, the vehicle must be able to localize itself on a map in order to navigate to its destination accurately. To track its surroundings, a driverless car relies on many sensors, including cameras, radar, ultrasonic sensors, GPS antennas, and lidar units that measure distance with pulses of light. Each sensor has its own strengths and weaknesses.

A lidar is characterized by several main parameters: the number of scan lines, point density, horizontal and vertical field of view, detection range, scanning frequency, and accuracy. In addition to position and distance, lidar also returns intensity information for each scanned point, which downstream algorithms can use to judge the reflectivity of the scanned object before further processing (the sketch below illustrates this). By measuring the direction and distance of each target, lidar describes the 3D environment as a point cloud, and the reflected-intensity values attached to the points give a detailed description of a target's shape. Because it is an active sensor, lidar performs well not only under good lighting but also in difficult conditions such as night driving and, to a degree, rain. Generally speaking, lidar scores well on accuracy, resolution, sensitivity, dynamic range, field of view, active detection, false-alarm rate, temperature tolerance, adaptability to darkness and bad weather, signal-processing demands, and other metrics. Companies that have done well in lidar, both in China and abroad, include Velodyne, Ibeo, and RoboSense (Suteng Juchuang).
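The intensity channel mentioned above is easiest to see in code. Below is a minimal sketch (Python with NumPy; the (N, 4) array layout, the 0.8 reflectivity threshold, and the function names are assumptions made for illustration, not taken from any particular lidar driver) of separating highly reflective returns, such as retro-reflective sign surfaces, from the rest of a scan:

```python
import numpy as np

# Assumed layout: one lidar scan as an (N, 4) array of
# [x, y, z, intensity], with intensity normalized to 0..1.
scan = np.random.rand(10_000, 4).astype(np.float32)

def split_by_reflectivity(points: np.ndarray, threshold: float = 0.8):
    """Separate high-intensity returns (e.g. retro-reflective sign
    surfaces) from ordinary returns. The 0.8 threshold is an
    illustrative assumption, not a calibrated value."""
    mask = points[:, 3] >= threshold
    return points[mask], points[~mask]

def ranges(points: np.ndarray) -> np.ndarray:
    """Euclidean distance of each return from the sensor origin,
    i.e. the detection-distance figure quoted in spec sheets."""
    return np.linalg.norm(points[:, :3], axis=1)

sign_candidates, background = split_by_reflectivity(scan)
print(len(sign_candidates), "candidate sign points, farthest at",
      f"{ranges(sign_candidates).max():.1f} m")
```

The same intensity values also feed the shape description mentioned above: clusters of points with consistent reflectivity can hint that they belong to a single surface.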
It is difficult to achieve safe autonomous driving by relying on a single type of sensor and a single technology. This reminds us that we cannot cut key sensors from even the most basic sensing scheme; what is needed instead is redundant configuration of multiple sensor types and fusion of the information they provide, as the sketch below illustrates.
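As one way to picture what redundancy buys, here is a minimal sketch (Python; the Detection class, the two-of-three rule, and the 1.5 m agreement gate are all assumptions made up for this example) of confirming an object only when independent sensor types agree on it:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object hypothesis from one sensor (illustrative)."""
    sensor: str      # "camera", "radar", or "lidar"
    x: float         # longitudinal position, meters
    y: float         # lateral position, meters

def confirmed(detections: list[Detection],
              gate: float = 1.5,
              min_sensors: int = 2) -> bool:
    """Accept an object only if at least `min_sensors` different
    sensor types report it within `gate` meters of each other.
    The 2-of-3 rule and 1.5 m gate are assumed values chosen for
    illustration, not a production criterion."""
    if not detections:
        return False
    ref = detections[0]
    agreeing = {
        d.sensor for d in detections
        if abs(d.x - ref.x) <= gate and abs(d.y - ref.y) <= gate
    }
    return len(agreeing) >= min_sensors

obs = [Detection("camera", 30.2, 1.1),
       Detection("radar", 30.6, 0.9),
       Detection("lidar", 30.4, 1.0)]
print(confirmed(obs))  # True: three sensor types agree
```

The point of the voting rule is that if the camera is blinded by glare or the radar returns a ghost object, the remaining sensor types can still out-vote the faulty one, which is exactly the failure mode a single-sensor system cannot survive.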
First of all, we should figure out how to compensate most effectively for each sensor's inherent weaknesses. The second step may be even more important: devising the best strategy for combining the different data streams so that no critical information is lost. The fact that each sensor delivers data at its own update rate is already a problem, and fusion is further complicated because some sensors provide raw data while others provide their own object-level interpretations; a simple version of this timing problem is sketched at the end of this section.

In 2017 we saw a series of advances in sensing technology. Phil Magney, founder and head of VSI Labs, said: "Perception is a major area of the driverless-car software stack, and there is a great deal of innovation in this area." Technology companies, Tier 1 suppliers, and OEMs have been trying to acquire sensor technologies that they lack or cannot develop on their own. At the same time, many sensor startups have emerged in the past two years, and many of them are focused on the still-nascent driverless-car market.
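Returning to the timing problem above: a standard way to merge streams that arrive at different rates is a predict/update filter, which advances a motion model to each measurement's timestamp before fusing it. Here is a minimal sketch (Python; the 1D constant-velocity model and all noise values, rates, and readings are assumptions for illustration) of a Kalman filter fusing asynchronous range measurements from two sensors:

```python
# Minimal 1D constant-velocity Kalman filter fusing asynchronous
# range measurements. Noise levels, rates, and readings below are
# illustrative assumptions, not calibrated values.

class RangeFuser:
    def __init__(self):
        self.x = [0.0, 0.0]          # state: [distance (m), speed (m/s)]
        self.P = [[100.0, 0.0],
                  [0.0, 100.0]]      # state covariance (large = uncertain)
        self.q = 0.5                 # process noise, assumed

    def predict(self, dt: float) -> None:
        """Advance the motion model by dt seconds (this is what
        absorbs each sensor's own update rate)."""
        d, v = self.x
        self.x = [d + v * dt, v]
        p = self.P
        # P = F P F^T + Q for F = [[1, dt], [0, 1]]
        p00 = p[0][0] + dt * (p[0][1] + p[1][0]) + dt * dt * p[1][1]
        p01 = p[0][1] + dt * p[1][1]
        p10 = p[1][0] + dt * p[1][1]
        self.P = [[p00 + self.q, p01], [p10, p[1][1] + self.q]]

    def update(self, z: float, r: float) -> None:
        """Fuse one range measurement z whose variance r encodes how
        much we trust that particular sensor."""
        y = z - self.x[0]            # innovation
        s = self.P[0][0] + r         # innovation variance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p = self.P
        self.P = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]

fuser = RangeFuser()
# Interleaved readings: radar (variance 1.0) and lidar (variance 0.1),
# each arriving 50 ms after the previous one.
for z, r in [(30.2, 1.0), (30.0, 0.1), (29.9, 1.0), (29.8, 0.1)]:
    fuser.predict(0.05)
    fuser.update(z, r)
print(f"fused distance estimate: {fuser.x[0]:.2f} m")
```

Object-level inputs can enter the same loop: a sensor that reports a finished "object at 30 m" is just another measurement z, with a variance r that reflects the sensor's own processing pipeline rather than raw noise.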