This page gives a brief description of the paper "Transformable Gaussian Reward Function for Socially-Aware Navigation with Deep Reinforcement Learning" (TGRF), posted in February 2024. Please refer to the GitHub repository below for how to use the source code.
Robot navigation has transitioned from prioritizing obstacle avoidance to adopting socially aware navigation strategies that accommodate human presence. As a result, socially aware navigation within dynamic, human-centric environments has gained prominence in the field of robotics. Although reinforcement learning techniques have fostered the advancement of socially aware navigation, defining appropriate reward functions, especially in congested environments, has posed a significant challenge. These rewards, crucial in guiding robot actions, demand intricate human-crafted design because of their complex nature and because they cannot be set automatically. The multitude of manually designed rewards leads to hyperparameter redundancy, imbalance, and inadequate representation of the unique characteristics of objects. To address these challenges, we introduce a transformable Gaussian reward function (TGRF). The TGRF significantly reduces the burden of hyperparameter tuning, adapts to a variety of reward functions, and demonstrates accelerated learning rates, particularly excelling in crowded environments when used with deep reinforcement learning (DRL). We introduce and validate TGRF through sections highlighting its conceptual background, characteristics, experiments, and real-world application, paving the way for a more effective and adaptable approach in robotics. The complete source code is available at https://github.com/JinnnK/TGRF.
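To make the core idea concrete, the sketch below shows a generic Gaussian-shaped reward over the distance to a point of interest (e.g., a goal or a human). This is an illustrative assumption, not the paper's exact formulation: the function names and the two hyperparameters (`height` for the peak reward and `sigma` for the decay rate) are chosen here for illustration, but they convey how a Gaussian form can collapse many hand-tuned reward terms into a small, interpretable set of knobs.

```python
import math

def gaussian_reward(distance, height=1.0, sigma=0.5):
    """Gaussian-shaped reward as a function of distance to a reference point.

    height : peak reward obtained at distance 0.
    sigma  : controls how quickly the reward decays with distance.

    Illustrative sketch only; the paper's TGRF may transform or
    parameterize the Gaussian differently per reward component.
    """
    return height * math.exp(-(distance ** 2) / (2 * sigma ** 2))

# The reward peaks at the reference point and decays smoothly,
# so nearby states still receive a useful learning signal.
print(gaussian_reward(0.0))   # peak value: 1.0
print(gaussian_reward(0.5))   # smaller, but nonzero
print(gaussian_reward(2.0))   # nearly zero far away
```

A smooth, dense reward like this gives the agent gradient-like feedback even far from the goal, which is one intuition for why a Gaussian shape can accelerate learning in crowded scenes compared with sparse or piecewise hand-crafted rewards.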
Keywords: Artificial intelligence, Machine learning, Reinforcement learning, Robot control, Robotic programming, Reward shaping.
Video demonstration: https://www.youtube.com/watch?v=9x24k75Zj5k