This article describes the part of the code that applies the reward function.
In crowd_sim/env/crowd_sim_var_num.py, the calc_reward method starts at line 473. This is where you can modify and adjust $r_{dist}$.
First, the method measures the distance between the robot and each human to check for a collision. If a collision occurs, the loop breaks immediately; otherwise, the smallest robot-human distance found so far is stored in dmin, as in the sketch below.
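A minimal sketch of this check, assuming CrowdNav-style agent objects with px, py, and radius attributes (the actual loop in calc_reward operates on the environment's internal state, so the names here are illustrative):

```python
from collections import namedtuple
import math

Agent = namedtuple("Agent", "px py radius")  # illustrative stand-in for the robot/human states

def min_separation(robot, humans):
    """Return (collision, dmin): whether any human overlaps the robot,
    and otherwise the smallest surface-to-surface gap seen."""
    dmin = math.inf
    for human in humans:
        # center-to-center distance minus both radii = gap between the two bodies
        gap = (math.hypot(human.px - robot.px, human.py - robot.py)
               - human.radius - robot.radius)
        if gap < 0:
            return True, gap  # collision: break out immediately
        dmin = min(dmin, gap)
    return False, dmin

robot = Agent(0.0, 0.0, 0.25)
humans = [Agent(1.0, 0.0, 0.25), Agent(3.0, 4.0, 0.25)]
print(min_separation(robot, humans))  # -> (False, 0.5)
```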
If no collision occurs and dmin has been set, the next step determines the distance-based reward feature according to the danger_zone setting. For circle there is no $r_{dist}$ term; for future the penalty is linear in the distance; and gaussian encodes social norms by shaping the penalty with a Gaussian function of the distance. Note that at this stage the code has not imposed any reward yet; it has only calculated the distances.
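The branching might look roughly like the sketch below. The parameter names and defaults (discomfort_dist, penalty_factor, sigma) are hypothetical placeholders, not the repo's actual config keys; the real values come from the config, as noted in the README.

```python
import math

def r_dist(dmin, danger_zone, discomfort_dist=0.25,
           penalty_factor=10.0, sigma=0.3):
    """Illustrative distance-based reward term; names and defaults
    are assumptions, not the repo's exact identifiers."""
    if danger_zone == 'circle':
        return 0.0  # no distance-based term in this mode
    if danger_zone == 'future':
        # linear penalty once the robot enters the discomfort zone
        if dmin < discomfort_dist:
            return (dmin - discomfort_dist) * penalty_factor
        return 0.0
    if danger_zone == 'gaussian':
        # Gaussian-shaped social-norm penalty, strongest as dmin -> 0
        return -math.exp(-dmin ** 2 / (2 * sigma ** 2))
    return 0.0
```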
The final branch covers timeout, collision, and reaching the goal; if none of these applies, the distance-based reward is calculated and applied. This is therefore where $r_{dist}$ can be modified. The $r_{traj}$ term, on the other hand, is modified in rl/vec_env/vec_pretext_normalize.py. For adjusting the reward function values, please refer to the README on GitHub. A sketch of the overall branching is shown below.
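Putting it together, the tail of calc_reward can be pictured like this. The reward magnitudes and info labels are placeholders; the actual constants live in the config.

```python
def calc_reward_sketch(timeout, collision, reaching_goal, dmin,
                       danger_zone='future'):
    """Hypothetical final branching of calc_reward. The first three
    cases end the episode; otherwise the shaping term is applied."""
    if timeout:
        return 0.0, True, 'Timeout'
    if collision:
        return -20.0, True, 'Collision'
    if reaching_goal:
        return 10.0, True, 'ReachGoal'
    # no terminal event: apply the danger-zone shaping term from above
    return r_dist(dmin, danger_zone), False, None
```

In other words, the terminal cases dominate, and $r_{dist}$ only shapes the reward on ordinary steps to push the robot away from humans. The following sections are about rendering and other parts.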