How is driverless technology realized?

How do you operate a car? Any driver can give an accurate answer: watch the road with your eyes, use your hearing to help judge whether there are obstacles, and feed that information back to the brain, which then directs the hands on the steering wheel and gear lever and the feet on the brakes and throttle. Driving really is that simple, and in principle driverless technology is realized the same way.

System structure

1: A powerful computer as a replacement for the brain! Whether information is collected by the five senses or by sensors, it all has to be judged and analyzed by a "brain" before instructions can be sent to the "limbs". A self-driving car therefore needs a very powerful computer to model the road conditions of the real 3D world in real time. At the current technical level, however, fault-free processing of such enormous data streams is still out of reach, and even if it were achievable, the cost could not be brought down to the level of an ordinary commuter car. For that reason there is still no truly driverless car in the strict sense. A skeleton of this sense-analyze-act loop is sketched below.
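As a rough illustration of the loop just described, here is a minimal Python sketch of a sense-analyze-act cycle. The class names (Sensors, Planner, Actuators) and the 50 Hz loop rate are hypothetical placeholders for illustration, not any real vehicle's software.

```python
import time


class Sensors:
    def read(self) -> dict:
        # Placeholder: a real car would return radar, lidar and camera frames here.
        return {"radar": [], "lidar": [], "camera": None}


class Planner:
    def decide(self, observations: dict) -> dict:
        # The "brain": turn observations into throttle, brake and steering targets.
        return {"throttle": 0.0, "brake": 0.0, "steering": 0.0}


class Actuators:
    def apply(self, command: dict) -> None:
        # The "limbs": forward the command to the throttle, brakes and steering.
        print(f"applying command: {command}")


def control_loop(cycles: int = 3, hz: float = 50.0) -> None:
    sensors, planner, actuators = Sensors(), Planner(), Actuators()
    for _ in range(cycles):
        observations = sensors.read()            # the "eyes and ears"
        command = planner.decide(observations)   # the "brain"
        actuators.apply(command)                 # the "hands and feet"
        time.sleep(1.0 / hz)


control_loop()
```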

2: Accurate sensors! Millimeter-wave radar, lidar and cameras replace the driver's "eyes" to a certain extent: by detecting obstacles, road signs and other information, the sensors determine how the vehicle should respond. For example, if the radar detects an obstacle ahead, the vehicle calculates the distance from the round-trip time of the radar wave, adjusts the braking force accordingly, and accelerates again once the obstacle is gone (a distance calculation of this kind is sketched below). When the radar cannot reliably identify the characteristics of an obstacle, video from the camera is analyzed instead. This system, however, has serious loopholes.
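A minimal sketch of that time-of-flight distance estimate, assuming the radar measures the round-trip delay of its own echo; the 40 m safe distance and the braking curve are invented values for illustration only.

```python
# The radar wave travels to the obstacle and back, so the one-way distance
# is (speed of light x round-trip time) / 2.

SPEED_OF_LIGHT_M_S = 299_792_458.0


def distance_from_echo(round_trip_time_s: float) -> float:
    """One-way distance to the obstacle, from the radar echo delay."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0


def braking_request(distance_m: float, safe_distance_m: float = 40.0) -> float:
    """Brake demand in [0, 1]: the closer the obstacle, the harder the braking."""
    if distance_m >= safe_distance_m:
        return 0.0  # obstacle far enough away (or gone): no braking needed
    return min(1.0, (safe_distance_m - distance_m) / safe_distance_m)


# Example: an echo arriving after 0.4 microseconds puts the obstacle about 60 m away.
d = distance_from_echo(0.4e-6)
print(d)                    # ~59.96 m
print(braking_request(d))   # 0.0, still outside the assumed safe distance
```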

Neither of the two radar types can identify obstacles reliably on its own: millimeter-wave radar misjudges very easily, and lidar's detection range is too short. The former tends to trigger unnecessary braking, while the latter cannot guarantee the standard safe following distance. Video acquisition is even harder: how does a zoom lens decide when to zoom, and how can a fixed-focus lens resolve road signs in the distance? Most importantly, in low-visibility weather such as rain, snow and smog, with many particles suspended in the air, the misjudgment rate of all these sensors rises sharply. One way systems cope is to fall back from one sensor to another, as sketched below.
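A minimal sketch of that radar-to-camera fallback, assuming each sensor reports a distance plus a confidence score; the thresholds, the confidence penalty for bad weather, and the Detection type are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Detection:
    distance_m: Optional[float]   # None if the sensor saw nothing usable
    confidence: float             # 0.0 .. 1.0


def fused_obstacle_distance(radar: Detection,
                            camera: Detection,
                            visibility_ok: bool) -> Optional[float]:
    """Pick an obstacle distance, or None if no sensor can be trusted."""
    if not visibility_ok:
        # Rain, snow or smog: misjudgment rates rise, so discount both sensors.
        radar = Detection(radar.distance_m, radar.confidence * 0.5)
        camera = Detection(camera.distance_m, camera.confidence * 0.5)

    if radar.confidence >= 0.7 and radar.distance_m is not None:
        return radar.distance_m      # radar reading is trusted
    if camera.confidence >= 0.7 and camera.distance_m is not None:
        return camera.distance_m     # fall back to camera analysis
    return None                      # neither sensor is reliable enough


print(fused_obstacle_distance(Detection(55.0, 0.9), Detection(50.0, 0.8), True))   # 55.0
print(fused_obstacle_distance(Detection(55.0, 0.9), Detection(50.0, 0.8), False))  # None
```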

3: The automatic driving system! Active braking and adaptive cruise are already enough to accelerate and decelerate the vehicle automatically. Acceleration is nothing more than sending the ECU a set of data to adjust the throttle opening and fuel injection quantity; for deceleration, power output is cut on the principle of "braking priority", and the braking itself is carried out by the hardware attached to the ESP body stability program. These systems are among the most basic equipment of modern vehicles, so most non-"intelligent" cars could in principle be upgraded, though at present there is little need to. A sketch of the braking-priority rule follows.
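A minimal sketch of the "braking priority" rule, assuming a simple adaptive-cruise command sent to the ECU; the EcuCommand fields and the proportional gain are hypothetical, not a real ECU interface.

```python
from dataclasses import dataclass


@dataclass
class EcuCommand:
    throttle: float        # 0.0 .. 1.0 throttle opening
    fuel_injection: float  # 0.0 .. 1.0 relative injection quantity
    brake: float           # 0.0 .. 1.0 brake demand (handed to the ESP hardware)


def cruise_command(current_speed_kmh: float,
                   target_speed_kmh: float,
                   brake_request: float) -> EcuCommand:
    """Adaptive-cruise style command with braking priority."""
    if brake_request > 0.0:
        # Braking priority: cut power output entirely while any braking is requested.
        return EcuCommand(throttle=0.0, fuel_injection=0.0, brake=brake_request)

    # Simple proportional catch-up toward the set speed (illustrative gain).
    error = max(0.0, target_speed_kmh - current_speed_kmh)
    demand = min(1.0, error / 30.0)
    return EcuCommand(throttle=demand, fuel_injection=demand, brake=0.0)


print(cruise_command(90.0, 110.0, brake_request=0.0))  # partial throttle to catch up
print(cruise_command(90.0, 110.0, brake_request=0.3))  # power cut, braking only
```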

Knowledge point: the automatic acceleration and deceleration functions derived from the ESP system are restricted to certain scenarios. On surfaces with a low friction coefficient such as rain, snow or gravel, even ordinary cruise control should not be used, because after the car slows down the program restores the set speed by having the ECU inject a large amount of fuel and accelerate at full power; a sharp acceleration on a slippery road can easily send the vehicle out of control. The range of scenarios in which these autopilot features can be used is therefore not wide (a simple friction-based lockout is sketched below). Truly driverless cars do not exist at present, and that is all there is to it.
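A minimal sketch of such a scene restriction, assuming the car can estimate the road friction coefficient; the 0.4 threshold and the lockout logic are illustrative assumptions, not any manufacturer's calibration.

```python
def allow_cruise_resume(estimated_friction_coefficient: float,
                        threshold: float = 0.4) -> bool:
    """Only allow an aggressive resume-to-set-speed on high-friction surfaces."""
    return estimated_friction_coefficient >= threshold


# Dry asphalt (friction coefficient roughly 0.7 and above): resume is allowed.
print(allow_cruise_resume(0.8))   # True
# Packed snow (roughly 0.2): hard acceleration risks loss of control, so block it.
print(allow_cruise_resume(0.2))   # False
```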