Obstacle avoidance using Deep Reinforcement Learning (DRL) is emerging as a viable approach for autonomous mobile robots. In real-world settings with both stationary and moving obstacles, a mobile robot must navigate to a goal while safely avoiding collisions. This work extends our ongoing research on a navigation approach for a mobile robot. We show that, with the proposed DRL method, a goal-oriented collision avoidance model can be trained end-to-end without manual tuning or supervision by a human operator. We evaluate the obstacle avoidance algorithm both in simulated environments and in the continuous action space of the real world. Finally, we measure and evaluate the robot's obstacle avoidance capability by collecting hit-ratio metrics during execution.
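As a rough illustration of the hit-ratio metric mentioned above, the fraction of evaluation runs ending in a collision can be computed as follows. This is a minimal sketch under the assumption that hit ratio means collisions divided by total runs; the function and variable names are illustrative, not from the paper.

```python
def hit_ratio(episode_outcomes):
    """Fraction of evaluation episodes that ended in a collision.

    episode_outcomes: iterable of booleans, True if the robot
    hit an obstacle during that episode (assumed definition).
    """
    outcomes = list(episode_outcomes)
    if not outcomes:
        raise ValueError("no episodes recorded")
    return sum(outcomes) / len(outcomes)

# Example: 2 collisions over 10 evaluation runs.
runs = [True, False, False, True, False, False, False, False, False, False]
print(hit_ratio(runs))  # -> 0.2
```

A lower hit ratio indicates a safer learned policy; reporting it over many runs in both simulation and the real robot makes the evaluation comparable across environments.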