mmWave radar and sonar sensors can be combined and used for passive perception. By passive perception, we mean that when obstacles are detected, the raw data are not fed to the planning and control module for decision making. Instead, the raw data are directly sent to the chassis through the CAN bus for quick decision making. In this case, a simple decision module is implemented in the chassis to stop the vehicle when an obstacle is detected within a short range.
The main reason for this design is that when obstacles are detected in close range, we want to stop the vehicle as soon as possible instead of going through the complete decision pipeline. This is the best way to guarantee the safety of passengers as well as pedestrians.
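The chassis-side rule described above can be sketched as follows; the stop threshold, sensor interface, and function names are illustrative assumptions, not the actual firmware logic:

```python
# Sketch of the chassis-side passive-perception rule: stop immediately
# when either sensor reports an obstacle within a short range, without
# going through the full planning and control pipeline.
# STOP_DISTANCE_M is a hypothetical threshold, not a real parameter.

STOP_DISTANCE_M = 0.5

def chassis_decision(radar_range_m: float, sonar_range_m: float) -> str:
    """Return 'STOP' if either sensor sees an obstacle within the
    emergency range, else 'PROCEED'. Runs on the chassis itself."""
    nearest = min(radar_range_m, sonar_range_m)
    return "STOP" if nearest < STOP_DISTANCE_M else "PROCEED"

print(chassis_decision(3.2, 0.4))  # sonar sees a close obstacle -> STOP
print(chassis_decision(3.2, 2.0))  # nothing in range -> PROCEED
```

Because the rule is a single comparison, it can execute in the chassis controller with negligible latency, which is exactly the point of bypassing the planner.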
1.4.4 GNSS for Localization
The GNSS system is a natural choice for vehicle localization; especially with real-time kinematic (RTK) capability, GNSS systems can achieve very high localization accuracy. GNSS provides detailed localization information, such as latitude, longitude, and altitude, as well as vehicle heading. Nonetheless, GNSS accuracy suffers when buildings and trees block the open sky, leading to multipath problems. Hence, we cannot rely solely on GNSS for localization.
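For illustration, position fixes from a GNSS receiver typically arrive as NMEA 0183 sentences. The sketch below extracts latitude, longitude, and altitude from a GGA sentence; the sample sentence is a common textbook example, not real vehicle data:

```python
def parse_gga(sentence: str):
    """Extract latitude and longitude (decimal degrees) and altitude
    (meters) from a NMEA 0183 GGA sentence. NMEA encodes coordinates
    as ddmm.mmmm (degrees and decimal minutes)."""
    f = sentence.split(",")
    lat = float(f[2][:2]) + float(f[2][2:]) / 60.0
    if f[3] == "S":
        lat = -lat
    lon = float(f[4][:3]) + float(f[4][3:]) / 60.0
    if f[5] == "W":
        lon = -lon
    return lat, lon, float(f[9])

gga = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gga(gga))  # roughly (48.117, 11.517, 545.4)
```

A production parser would also validate the checksum and the fix-quality field before trusting the coordinates.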
1.4.5 Computer Vision for Active Perception and Localization
Computer vision can be utilized for both localization and active perception. For localization, we can rely on visual simultaneous localization and mapping (VSLAM) technologies to achieve accurate real-time vehicle locations. However, VSLAM usually suffers from cumulative errors: the farther the vehicle travels, the larger the localization error becomes. Fortunately, by fusing VSLAM and GNSS localization results, we can achieve high accuracy under different conditions, because GNSS can serve as the ground-truth data when it is not blocked, and VSLAM can provide high accuracy when GNSS is blocked.
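A minimal sketch of this fusion idea, reduced to one dimension for brevity: treat GNSS as ground truth whenever a fix is available and use it to re-anchor VSLAM drift; when GNSS is blocked, dead-reckon with the drift-corrected VSLAM pose. The class and variable names are illustrative, not from an actual localization stack:

```python
class FusedLocalizer:
    """Toy 1-D fusion of GNSS and VSLAM positions. A real system would
    fuse full 6-DoF poses, e.g. with a Kalman filter."""

    def __init__(self):
        self.offset = 0.0  # correction from the VSLAM frame to the GNSS frame

    def update(self, vslam_pos: float, gnss_pos=None) -> float:
        if gnss_pos is not None:          # open sky: trust GNSS,
            self.offset = gnss_pos - vslam_pos  # re-anchor VSLAM drift
            return gnss_pos
        return vslam_pos + self.offset    # blocked: drift-corrected VSLAM

loc = FusedLocalizer()
print(loc.update(10.0, gnss_pos=10.3))  # GNSS available: returns 10.3
print(loc.update(11.0))                 # GNSS blocked: VSLAM + offset
```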
In addition, computer vision can be used for active perception as well. Using stereo vision, we can extract spatial or depth information of different objects; using deep learning techniques, we can extract semantic information of different objects. By fusing spatial and semantic information, we can detect objects of interest, such as pedestrians and cars, as well as obtain their distances from the current vehicle.
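For the stereo case, depth follows from triangulation on a rectified image pair: Z = f·B/d, where f is the focal length in pixels, B is the camera baseline, and d is the disparity between the two views. A minimal sketch with made-up camera parameters:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# e.g. 700 px focal length, 12 cm baseline, 20 px measured disparity
print(round(stereo_depth(700.0, 0.12, 20.0), 2))  # 4.2 (meters)
```

Note that depth error grows quadratically with distance, since a fixed disparity error corresponds to a larger depth change for far objects.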
1.4.6 Planning and Control
The planning and control module receives inputs from the perception and localization modules, and generates decisions in real time. Usually, a planning and control module defines a set of behaviors, and one behavior is chosen depending on the current conditions.
A typical planning and control system has the following architecture: first, as the user enters the destination, the routing module checks the map for road network information and generates a route. Then the route is fed to the behavioral planning module, which checks the traffic rules to generate motion specifications. Next, the generated route along with motion specifications are passed down to the motion planner, which combines real-time perception and localization information to generate trajectories. Finally, the generated trajectories are passed down to the control system, which reactively corrects errors in the execution of the planned motions.
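The four-stage pipeline above can be sketched as a chain of function calls; every function name, data type, and return value here is a placeholder for illustration, not a real API:

```python
# Skeleton of the routing -> behavioral planning -> motion planning ->
# control pipeline. Each stage consumes the previous stage's output.

def routing(destination, road_network):
    return ["road_a", "road_b"]                  # route: road segments

def behavioral_planning(route, traffic_rules):
    return [(seg, "keep_lane") for seg in route]  # motion specifications

def motion_planning(specs, perception, localization):
    return [(0.0, 0.0), (1.0, 0.5)]              # trajectory: waypoints

def control(trajectory, vehicle_state):
    return {"steer": 0.1, "throttle": 0.3}       # actuation commands

route = routing("destination", road_network=None)
specs = behavioral_planning(route, traffic_rules=None)
traj = motion_planning(specs, perception=None, localization=None)
cmd = control(traj, vehicle_state=None)
print(cmd)
```

The value of this decomposition is that each stage runs at its own rate: routing only on re-route, behavioral planning at a few hertz, and control at a much higher frequency.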
A mapping module provides essential geographical information, such as lane configurations and static obstacle information, to the planning and control module. In order to generate real-time motion plans, the planning and control module can combine perception inputs, which detect dynamic obstacles in real time, localization inputs, which generate real-time vehicle poses, and mapping inputs, which capture road geometry and static obstacles.
Currently, fully autonomous vehicles use high-definition (HD) 3D maps. Such high-precision maps are extremely complex and can contain trillions of bytes of data, representing not only lanes and roads but also the semantics and locations of 3D landmarks in the real world. With HD maps, autonomous vehicles are able to localize themselves and navigate in the mapped area.
1.5 The Rest of the Book
In the previous sections we have introduced the proposed modular design approach for building autonomous vehicles and robots. In the rest of the book, we delve into these topics and present the details of each module, as well as how to integrate these modules to enable a fully functioning autonomous vehicle or robot.
The first part of the book consists of Chapters 2–8, in which we introduce each module, including communication systems, chassis technologies, passive perception technologies, localization with RTK GNSS, computer vision for perception and localization, planning and control, as well as mapping technologies.
Chapter 2: In-Vehicle Communication Systems
Chapter 3: Chassis Technologies for Autonomous Robots and Vehicles
Chapter 4: Passive Perception with Sonar and mmWave Radar
Chapter 5: Localization with RTK GNSS
Chapter 6: Computer Vision for Perception and Localization
Chapter 7: Planning and Control
Chapter 8: Mapping
The second part of the book consists of Chapters 9 and 10, in which we present two interesting case studies: the first is about applying the modular design approach to build low-speed autonomous vehicles; the second is about how NASA builds its space robotic explorers using a modular design approach.
Chapter 9: Building the DragonFly Pod and Bus
Chapter 10: Enabling Commercial Autonomous Space Robotic Explorers
From our practical experiences, the capabilities of autonomous vehicles and robots are often constrained by limited onboard computing power. Therefore, in the final part of the book, we delve into state-of-the-art approaches in building edge computing systems for autonomous vehicles and robots. We will cover onboard edge computing design, vehicle-to-everything infrastructure, as well as autonomous vehicle security.
Chapter 11: Edge Computing for Autonomous Vehicles
Chapter 12: Innovations on the Vehicle-to-Everything Infrastructure
Chapter 13: Vehicular Edge Security
1.6 Open Source Projects Used in this Book
As you can see, an autonomous driving system is a highly complex system that integrates many technology pieces and modules. Hence, it is infeasible and inefficient to build everything from scratch. Instead, we refer to many open source projects throughout the book to help readers build their own autonomous driving systems. Also, throughout the book we use PerceptIn's autonomous driving software stack to demonstrate the idea of modular design. The open source projects used in this book are listed below:
CANopenNode [14]: This is a free and open source CANopen stack for CAN bus communication.
Open Source Car Control [15]: This is an assemblage of software and hardware designs that enable computer control of modern cars in order to facilitate the development of autonomous vehicle technology. It is a modular and stable way of using software to interface with a vehicle's communications network and control systems.
OpenCaret [16]: This is an open source Level 3 highway autopilot system for the Kia Soul EV.
NtripCaster [17]: A GNSS NTRIP (Networked Transport of RTCM via Internet Protocol) Caster takes GNSS data from one or more data stream sources (Base Stations referred to as NTRIP Servers) and provides these data to one or more end users (often called rovers), the NTRIP Clients. If you need to send data to more than one client at a time, or have more than one data stream, you will need a Caster.
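The caster speaks an HTTP-based protocol: an NTRIP client subscribes to one correction stream by sending a request like the sketch below. The headers follow the NTRIP revision 1 style, and the mount point and credentials are made up for illustration:

```python
import base64

def ntrip_request(mountpoint: str, user: str, password: str) -> bytes:
    """Build the HTTP request an NTRIP client sends to a caster to
    subscribe to one correction stream (NTRIP revision 1 style)."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return (
        f"GET /{mountpoint} HTTP/1.0\r\n"
        f"User-Agent: NTRIP ExampleClient/1.0\r\n"
        f"Authorization: Basic {creds}\r\n"
        "\r\n"
    ).encode()

req = ntrip_request("RTCM3_BASE", "rover1", "secret")
print(req.decode().splitlines()[0])  # GET /RTCM3_BASE HTTP/1.0
```

After the caster accepts the request, it streams RTCM correction data on the same connection, which the rover feeds to its GNSS receiver to obtain an RTK fix.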
GPSD (GPS Daemon) [18]: This is a service daemon that monitors one or more GNSS receivers attached to a host computer through serial or USB ports, making all data on the location/course/velocity of the sensors available to be queried on Transmission Control Protocol port 2947 of the host computer. With GPSD, multiple location-aware client applications can share access to supported sensors without contention or loss of data. Also, GPSD responds to queries with a format that is substantially easier to parse than the NMEA 0183 emitted by most GNSS receivers.
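Clients talk to GPSD over TCP port 2947 using a line-oriented JSON protocol: they send a ?WATCH command and then read JSON reports line by line. The sketch below parses one sample TPV (time-position-velocity) report offline; the coordinates are made up:

```python
import json

# In a live setup a client connects to gpsd on TCP port 2947, sends
# '?WATCH={"enable":true,"json":true}', and reads one JSON report per
# line. Here we parse a single sample TPV report instead.
sample = '{"class":"TPV","mode":3,"lat":37.3318,"lon":-121.8916,"alt":12.1,"speed":0.5}'

report = json.loads(sample)
if report["class"] == "TPV" and report.get("mode", 0) >= 2:  # 2-D or 3-D fix
    print(report["lat"], report["lon"])
```

This JSON format is what makes GPSD "substantially easier to parse" than raw NMEA 0183: fields arrive with names and in standard units, and multiple clients can consume the same stream concurrently.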