At present, intelligent logistics handling robots, sweeping robots and similar machines have been deployed in some cities and homes, and unmanned aerial vehicles and driverless cars are being promoted rapidly as well. The reason these robots have been able to enter the application stage so quickly is inseparable from the development of autonomous positioning and navigation technology.
Recently, iResearch, a subsidiary of iResearch Consulting, released its summary of the "Top 10 Global AI Breakthrough Technologies of 2018," and robot autonomous navigation based on cross-domain multi-sensor fusion was among them. What is robot autonomous positioning and navigation technology? What technical means are currently available to realize it? And what difficulties and challenges stand in the way of implementing these technologies and applications?
Basic: Vision and radar are the primary sensors
It can be said that autonomous positioning and navigation has become one of the core capabilities and focal points of robot products. Dr. Du Mingfang, an expert member of the Chinese Society of Automation and the Internet Industry Research Institute of Tsinghua University, told Sci-Tech Daily that, broadly speaking, autonomous navigation comprises two parts: local navigation and global navigation. Local navigation refers to acquiring current environmental information in real time through vision, radar, ultrasonic and other sensors, extracting and fusing features from the data, and applying intelligent algorithms to judge the currently passable area and track multiple targets. Global navigation mainly refers to using the positioning data provided by GPS to carry out global path planning and realize route navigation across a full electronic map.
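To give a concrete flavor of the global path planning mentioned above, here is a minimal sketch of A* search on an occupancy-grid map. The grid representation, unit step cost and Manhattan heuristic are simplified assumptions chosen for the example, not any particular vendor's implementation.

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 2D occupancy grid (0 = free, 1 = blocked).

    Returns a list of (row, col) cells from start to goal, or None
    if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):
        # Manhattan-distance heuristic: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Heap entries: (f = g + h, g, cell, parent)
    open_set = [(h(start), 0, start, None)]
    came_from = {}
    g_cost = {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:          # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:               # reconstruct path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None
```

On a real robot the grid would come from the electronic map, and the cost function would account for terrain and robot footprint, but the search structure is the same.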
"At present, vision and radar are the two most important sensors used for local autonomous navigation," Du Mingfang explained. As a passive sensor, the visual sensor has significant advantages: it captures rich information, is well concealed and small in size, causes no "environmental pollution" through emitted interference, and costs less than radar. To realize autonomous navigation, multiple sensors commonly cooperate to identify various kinds of environmental information, such as road boundaries, terrain features, obstacles and guide markers. In this way, the robot can determine the reachable and unreachable areas ahead through environmental perception, confirm its relative position in the environment, predict the movement of dynamic obstacles, and thereby provide a basis for local path planning.
Du Mingfang told reporters that, judging from current developments, multi-sensor information fusion technology has already been applied in autonomous navigation systems, and its role is closely tied to the robot's level of intelligence. "The core of the navigation technology is to effectively process and fuse the information collected by multiple sensors, improve the robot's 'resistance' to uncertain information, ensure that more reliable information is used, and help judge the surrounding environment more intuitively," he said.
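To illustrate how fusing redundant measurements improves reliability, here is a minimal sketch of inverse-variance weighting, one of the simplest probabilistic fusion rules. This is a toy example; real navigation systems typically use Kalman or particle filters over full sensor models.

```python
def fuse(measurements):
    """Fuse independent sensor readings of the same quantity by
    inverse-variance weighting.

    measurements: list of (value, variance) pairs, one per sensor.
    Returns the fused value and its variance; the fused variance is
    always smaller than any individual one, which is the formal sense
    in which fusion 'resists' uncertain information.
    """
    inv_var_sum = sum(1.0 / var for _, var in measurements)
    fused_var = 1.0 / inv_var_sum
    fused_val = fused_var * sum(val / var for val, var in measurements)
    return fused_val, fused_var
```

For example, fusing a lidar range with variance 1.0 and a visual range with variance 1.0 halves the uncertainty of the combined estimate.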
Visual navigation has been successfully applied to low-altitude aircraft navigation, unmanned aerial vehicle navigation and Mars rover landing navigation. However, Du Mingfang also noted that the information provided by visual sensors is not direct: it demands substantial computing and storage resources and places a heavy burden on network transmission. Multi-sensor information fusion can reduce the uncertainty in robot positioning and navigation and improve accuracy, but excessive fusion also multiplies the amount of computation.
How can these problems be solved? Du Mingfang believes that choosing the right fusion algorithm is the key. At present, "more and more basic theories, such as intelligent computing theory and probability theory, are being applied to the field of robot multi-sensor fusion," he said.
Methods: Combining multiple technologies for complementary advantages
What are the ways to realize robot autonomous positioning and navigation? In fact, part of the positioning and navigation technology used by self-driving cars is the same as that used by robots. Chen Jinpei, CEO of Chihiro Position, told reporters that the company uses a combination of lidar positioning and navigation with other sensor technology to achieve positioning accuracy of about one meter and complete initial positioning within three seconds.
So-called lidar navigation works by installing laser reflectors at precisely surveyed positions along the driving path. The robot emits laser beams through a laser scanner and collects the beams bounced back by the reflectors to determine its current position and heading, achieving guidance through continuous triangular geometric calculation. In addition to ranging and positioning, lidar also provides recognition and obstacle-avoidance functions.
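The "triangular geometric calculation" can be illustrated with a toy 2D trilateration: given measured ranges to three reflectors at known positions, subtracting the circle equations pairwise yields a small linear system for the robot's position. This is a simplified sketch under noise-free assumptions; production systems also estimate heading and solve over many redundant, noisy reflector returns.

```python
def trilaterate(p1, p2, p3, d1, d2, d3):
    """2D position from ranges to three reflectors at known positions.

    Each range defines a circle (x - xi)^2 + (y - yi)^2 = di^2.
    Subtracting the first circle equation from the other two cancels
    the quadratic terms, leaving a 2x2 linear system A [x, y]^T = b,
    solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # nonzero when reflectors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

Note that the three reflectors must not be collinear, which is why reflectors are distributed around the driving path rather than along a single line.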
Du Mingfang said that lidar is an active sensor, and the perception data it provides is much simpler and more direct than visual information, requiring less computation to process. Its disadvantages are high cost, poor concealment, "pollution" of the environment by its emitted signals, and information that is not rich enough.
It is understood that the autonomous navigation of Suning's robots and unmanned vehicles adopts another mode: multi-sensor fusion positioning combining multi-line lidar, GPS and inertial navigation. Specifically, lidar is first used to map the environment and obtain a prior point-cloud map, while GPS and inertial navigation provide an initial estimate of the machine's global position. Lidar scan data is then matched against the prior point-cloud map to obtain a more accurate global position, achieving precise positioning and autonomous navigation. At the perception level, lidar is fused with vision to identify surrounding pedestrians, vehicles and obstacles in real time, providing a basis for planning the optimal detour path.
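The map-matching step can be illustrated with a deliberately tiny sketch: a brute-force search over candidate translations for the offset that best aligns the scan points with the prior point cloud. Real systems use far more efficient techniques (ICP variants or grid correlation) and also estimate rotation; this example assumes translation only and a handful of points.

```python
def match_scan(scan, prior_map, search=2.0, step=0.5):
    """Toy 2D scan matching: find the (dx, dy) translation that best
    aligns the scan to the prior point-cloud map.

    Searches a grid of candidate offsets in [-search, search] at the
    given step, scoring each by the sum of squared distances from every
    shifted scan point to its nearest map point (lower is better).
    """
    def cost(dx, dy):
        total = 0.0
        for sx, sy in scan:
            total += min((sx + dx - mx) ** 2 + (sy + dy - my) ** 2
                         for mx, my in prior_map)
        return total

    candidates = [i * step - search for i in range(int(2 * search / step) + 1)]
    return min(((dx, dy) for dx in candidates for dy in candidates),
               key=lambda offset: cost(*offset))
```

In the full pipeline, the GPS/inertial estimate narrows the search window, and the refined offset corrects the global pose.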
In addition, there is inertial navigation, in which a gyroscope is installed on the robot or unmanned vehicle and positioning blocks are laid on the ground in the driving area; the vehicle determines its own position and heading by integrating the gyroscope's deviation signal (angular rate) and reading the ground positioning blocks, thereby achieving guidance. The person in charge at Suning told Science and Technology Daily that inertial navigation offers accurate positioning, a small amount of ground preparation work and strong path flexibility. However, manufacturing costs are high, and the precision and reliability of guidance are closely tied to the manufacturing accuracy of the gyroscope and its subsequent signal processing. In short, no single technical means can solve every problem.
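The gyroscope-based guidance described above boils down to dead reckoning: integrating angular rate and forward speed over time to propagate the pose. A minimal sketch, assuming fixed time steps and noise-free samples:

```python
import math

def dead_reckon(pose, samples, dt):
    """Dead reckoning from gyroscope and speed measurements.

    pose:    initial (x, y, theta) with theta in radians
    samples: list of (speed, angular_rate) readings, one per time step
    dt:      fixed duration of each step in seconds
    Returns the propagated (x, y, theta).
    """
    x, y, theta = pose
    for v, omega in samples:
        theta += omega * dt              # gyro: accumulate heading change
        x += v * math.cos(theta) * dt    # advance along the new heading
        y += v * math.sin(theta) * dt
    return x, y, theta
```

Because the integration accumulates gyroscope bias and noise without bound, practical systems periodically reset the pose against external references, which is exactly the role of the ground positioning blocks, and why the gyroscope's manufacturing accuracy directly limits guidance precision.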
Challenges: Power consumption, cost and industrialization issues to be solved
At present, applications of autonomous positioning and navigation robots fall mainly into two categories. One is household use: sweeping robots and home-care or companion robots. Chen Shikai, CEO of Silan Technology, said that such application scenarios can be summarized as "zero configuration": for consumer use, the product should be as simple as possible, working as soon as it is brought home. The other category is commercial scenarios, which require a pre-configuration process along with high reliability and scalability.
Chen Shikai said that navigation and positioning systems for the personal home scenario must overcome the challenges of power consumption, size and cost. At present, both the simultaneous localization and mapping (SLAM) algorithm and the path planning system are highly complex. "For a floor-sweeping robot, the battery itself may only have a capacity of twenty-odd watt-hours. If you put a laptop on it to run the SLAM algorithm, it might run out of power in less than an hour, which is completely unacceptable."
In addition, when a new robot is switched on for the first time, it does not know the layout of the home and needs to build a map in advance. "This is a contradiction," Chen said. Robots are expected to start working the moment they are placed in an environment, yet mainstream algorithms require a pre-built or pre-explored map, and in this area "there is some work for the industry to do." For example, an initial path can be planned and then gradually refined and improved as the robot works and explores, Chen said.
In commercial or professional scenarios, the difficulty for autonomous navigation systems is that the map area is large, sometimes exceeding tens of thousands of square meters. "Currently, SLAM systems are memory- and computation-intensive. Making them work in such a large scene is a big challenge for navigation and positioning systems." The solution, Chen said, is powerful hardware together with better optimization of software and algorithms. "At present, a qualified navigation and positioning system should have not only lidar but also visual sensors and ultrasonic sensors, with the corresponding fusion carried out in the navigation and positioning algorithm. This integration may not be difficult academically or algorithmically, but once industrialization is considered, challenges arise: many ultrasonic sensors are non-standard products, and depth vision sensors come in different specifications and are installed in different locations, so providing customers with a unified, standardized interface is hard."