PC: Ubuntu 20.04, Intel i7, RTX 4090
This article explains the model's architecture and the code that implements it. It covers the components that make up the model's neural network, along with phases 3 and 4. The code in question lives in rl/networks/selfAttn_srnn_temp_node.py and can be modified there.
SpatialEdgeSelfAttn is responsible for the very front (input) part of the network.
Its input dimension is 12: Euclidean coordinates (2) × (1 current position + 5 predicted future positions). By contrast, CrowdSimVarNum, the environment used for DSRNN, does not predict future positions, so its input dimension is just 2 (the current position alone).
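As a quick sanity check, the dimensions above work out as follows (the constant names here are illustrative, not from the repo):

```python
COORD_DIM = 2       # Euclidean (x, y)
NUM_CURRENT = 1     # current position
NUM_PREDICTED = 5   # predicted future positions

input_size = COORD_DIM * (NUM_CURRENT + NUM_PREDICTED)  # = 12

# DSRNN (CrowdSimVarNum) has no predicted positions:
dsrnn_input_size = COORD_DIM * NUM_CURRENT              # = 2
```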
__init__ defines the model's initialization and structure. The embedding is a simple MLP, 12 → 128 → 512, with ReLU activations; multi-head attention then lets the model interpret the data from several representation subspaces at once.
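Here is a minimal sketch of that structure in PyTorch. The 12 → 128 → 512 MLP with ReLU and the use of multi-head attention come from the description above; the attribute names, the separate Q/K/V projections, and the head count of 8 are assumptions rather than the repo's exact code:

```python
import torch
import torch.nn as nn

class SpatialEdgeSelfAttn(nn.Module):
    def __init__(self, input_size=12, attn_size=512, num_heads=8):
        super().__init__()
        # Embed each human's 12-D state into a 512-D feature (12 > 128 > 512, ReLU).
        self.embedding_layer = nn.Sequential(
            nn.Linear(input_size, 128), nn.ReLU(),
            nn.Linear(128, attn_size), nn.ReLU(),
        )
        # Separate query/key/value projections (assumed).
        self.q_linear = nn.Linear(attn_size, attn_size)
        self.k_linear = nn.Linear(attn_size, attn_size)
        self.v_linear = nn.Linear(attn_size, attn_size)
        # Multi-head attention to view the data from multiple angles.
        self.multihead_attn = nn.MultiheadAttention(attn_size, num_heads)
```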
The attention itself is the standard equation from the paper covered previously.
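For reference, that is the scaled dot-product attention from "Attention Is All You Need", where $d_k$ is the key dimension:

$$\text{Attention}(Q, K, V) = \text{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$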
create_attn_mask builds an attention mask that excludes junk (padded) data, which is unnecessary for the computation, from the multi-head attention.
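A minimal sketch of such a padding mask, assuming each_seq_len holds the number of real (non-padded) humans per sample; the signature is illustrative, not the repo's exact one:

```python
import torch

def create_attn_mask(each_seq_len, batch_size, max_human_num):
    # Position indices 0..max_human_num-1 for every sample in the batch.
    positions = torch.arange(max_human_num).expand(batch_size, max_human_num)
    # True marks padded slots (index >= real human count), which
    # nn.MultiheadAttention's key_padding_mask will then ignore.
    return positions >= each_seq_len.view(-1, 1)
```

Passing this as key_padding_mask to nn.MultiheadAttention gives the padded humans zero attention weight.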
forward runs actual data through the model defined in __init__.
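Continuing the class sketch above (shapes and argument names are assumptions), the forward pass embeds the input, builds the mask, and applies multi-head attention:

```python
    def forward(self, inp, each_seq_len):
        # inp: (batch, max_human_num, 12)
        batch_size, max_human_num, _ = inp.shape
        mask = create_attn_mask(each_seq_len, batch_size, max_human_num).to(inp.device)

        h = self.embedding_layer(inp)         # (B, N, 512)
        # nn.MultiheadAttention expects (seq, batch, embed) by default.
        q = self.q_linear(h).transpose(0, 1)
        k = self.k_linear(h).transpose(0, 1)
        v = self.v_linear(h).transpose(0, 1)

        out, _ = self.multihead_attn(q, k, v, key_padding_mask=mask)
        return out.transpose(0, 1)            # back to (B, N, 512)
```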