PC: Ubuntu 20.04, Intel i7, RTX 4090
This article walks through the model's architecture and the code that implements it.
The model is divided into four stages: in the first stage, LiDAR information is obtained from the environment, and in stages 2 to 4, that data is processed and managed by the model's architecture.
This study assumes a 270-degree field of view rather than detecting every person with a conventional 360-degree LiDAR.
Furthermore, the robot cannot obtain any information about the blind spot.
This can be changed by setting robot.belief on line 122 and robot.FOV on line 129 of crowd_nav/config/config.py.
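As a rough illustration, the two settings might appear in crowd_nav/config/config.py roughly as sketched below; the attribute names come from the article, but the values and their units (degrees versus multiples of pi, boolean versus enum) are assumptions, not the repository's defaults.

```python
# Minimal sketch of the relevant portion of crowd_nav/config/config.py.
# Values and units are assumptions for illustration only.
class Config(object):
    class robot(object):
        FOV = 270.0     # assumed field of view in degrees; 360 would mean full coverage
        belief = True   # assumed flag: keep a belief over humans outside the FOV (POMDP setting)
```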
The core of the POMDP implementation, namely the pose and belief information, is handled in rl/vec_env/vec_pretext_normalize.py.
Lines 180 to 248 deal with information about visibility.
Specifically, the robot's location is received through robot_node, the people's locations through spatial_edge, and each person's visibility through visible_mask.
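For illustration only, the observation pieces described above might be laid out roughly as follows for a single step; the key names robot_node, spatial_edge, and visible_mask come from the article, while the shapes and contents are assumptions.

```python
import numpy as np

# Assumed sizes for illustration.
num_humans = 20
pred_steps = 5   # assumed number of predicted future positions per person

obs = {
    # Robot state, e.g. position, velocity, goal, radius, preferred speed, heading.
    'robot_node': np.zeros(9, dtype=np.float32),
    # Per-person positions; later overwritten with predicted trajectories.
    'spatial_edge': np.zeros((num_humans, 2 * pred_steps), dtype=np.float32),
    # True where the person is inside the 270-degree FOV and currently detected.
    'visible_mask': np.zeros(num_humans, dtype=bool),
}
```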
Because spatial_edge is initialized to the people's current locations, the first step is to predict their future locations through trajectory prediction.
To do this, the locations of detected people are accumulated in traj_buffer and mask_buffer, and the trajectory is predicted from that history.
Buffers are needed because spatial_edge is re-initialized to each person's current location at every time step, which would otherwise discard past information, so the history is stored separately in the buffers.
The buffer stores up to 5 time steps.
In addition, at least 2 time steps must be accumulated before trajectory prediction can run, so if only 1 time step is available, the predicted trajectory simply repeats the current position.
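A minimal sketch of this buffering and fallback behavior is shown below, assuming plain Python lists for the buffers and a constant-velocity extrapolation standing in for the repository's actual trajectory predictor.

```python
import numpy as np

BUFFER_LEN = 5   # the buffer keeps at most 5 time steps, as described above

def update_buffers(traj_buffer, mask_buffer, positions, visible_mask):
    """Append the newest observation and keep only the last BUFFER_LEN steps."""
    traj_buffer.append(positions)       # positions: (num_humans, 2) at this step
    mask_buffer.append(visible_mask)    # visible_mask: (num_humans,) detection flags
    del traj_buffer[:-BUFFER_LEN]
    del mask_buffer[:-BUFFER_LEN]

def predicted_spatial_edge(traj_buffer, pred_steps=5):
    """Build spatial_edge from the buffered history.

    With fewer than 2 buffered steps there is no motion history, so the
    "prediction" simply repeats the current position pred_steps times.
    Otherwise a constant-velocity extrapolation is used here as a stand-in
    for the learned trajectory predictor in the repository.
    """
    current = traj_buffer[-1]
    if len(traj_buffer) < 2:
        return np.tile(current, (1, pred_steps))          # (num_humans, 2 * pred_steps)
    velocity = current - traj_buffer[-2]
    future = [current + (k + 1) * velocity for k in range(pred_steps)]
    return np.concatenate(future, axis=1)                 # (num_humans, 2 * pred_steps)
```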