autonomous_mobile_robot

[Paper] [BibTex]

[Results: 01_results]

[Results video: SemanticV4.mp4]

This is the implementation of a navigation system for an autonomous mobile robot that uses only a front-facing RGB camera. The proposed approach uses semantic segmentation to detect the drivable area in the image and object detection (YOLOv5) to emphasize objects of interest such as people and cars. These detections are then transformed into a bird's-eye-view semantic map that also contains spatial information about the distance to the edges of the drivable area and to the objects around the robot. A multi-objective cost function is then computed from the semantic map and used to generate a safe path for the robot to follow.
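
As a rough illustration of this idea, the sketch below (which is not the repository's code; the grid encoding, weights, and function names are assumptions) shows how a bird's-eye-view semantic grid could be turned into a multi-objective cost map and used to score candidate headings:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    # Hypothetical BEV semantic grid codes: 0 = drivable, 1 = obstacle (person/car), 2 = unknown.
    # Class codes, weights and function names are illustrative assumptions only.
    def cost_map(bev, w_obstacle=1.0, w_unknown=0.3, decay=0.1):
        obstacle = (bev == 1).astype(float)
        unknown = (bev == 2).astype(float)
        # Soft clearance term: cells close to an obstacle are penalized more.
        dist_to_obstacle = distance_transform_edt(bev != 1)
        proximity = np.exp(-decay * dist_to_obstacle)
        return w_obstacle * obstacle + w_unknown * unknown + proximity

    def best_heading(cost, candidates=np.linspace(-np.pi / 4, np.pi / 4, 9), horizon=40):
        # Score straight-line candidate paths from the robot (bottom-center of the map)
        # and return the heading that accumulates the least cost.
        h, w = cost.shape
        origin = (h - 1, w // 2)
        scores = []
        for theta in candidates:
            steps = np.arange(1, horizon)
            rows = (origin[0] - steps * np.cos(theta)).astype(int)
            cols = (origin[1] + steps * np.sin(theta)).astype(int)
            valid = (rows >= 0) & (cols >= 0) & (cols < w)
            scores.append(cost[rows[valid], cols[valid]].sum())
        return float(candidates[int(np.argmin(scores))])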

The code was tested both in simulation and on a real robot (Clearpath Robotics' Jackal).

The simulation is implemented in Gazebo and uses Dolly and citysim.

Semantic segmentation is based largely on PSPNet and FCHardNet.
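
For intuition only, the segmentation output can be reduced to a drivable-area mask along these lines (this is a hypothetical sketch, not the repository's code; the class index is an assumption that depends on the training dataset):

    import torch

    # Illustrative only: reduce per-pixel class logits from a segmentation network
    # into a boolean drivable-area mask. The "road" class index is an assumption
    # and depends on the dataset the network was trained on (e.g. Cityscapes).
    def drivable_mask(logits: torch.Tensor, road_class: int = 0) -> torch.Tensor:
        labels = logits.argmax(dim=1)    # (N, H, W): predicted class per pixel
        return labels == road_class      # True where the pixel is considered drivable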

Install

  1. Install ROS 2.

  2. Install Docker following the instructions in the link, as well as nvidia-docker (for GPU support). Semantic segmentation runs inside a Docker container, but it can also run on the host with a proper PyTorch configuration.

  3. Clone this repo and its submodules

    git clone --recursive -j8 https://github.com/jdgalviss/autonomous_mobile_robot.git
    cd autonomous_mobile_robot
  4. (Optional; only needed for PSPNet, since FCHardNet models are already included) Download the pretrained semantic segmentation models for PSPNet from the following link: Google Drive. This is the required folder structure for these models:

    autonomous_mobile_robot
    |   ...
    └───pretrained_models
        |   ...
        └───exp
            └───ade20k
            |   |   ...
            |
            └───cityscapes
            |   |   ...
            |
            └───voc2012
                |   ...
    
  5. Build the Docker image from the provided Dockerfile.

    cd semantic_nav
    docker build . -t amr

Run

  1. Run the Docker container using the provided script:
    source run_docker.sh

Run Simulation

  1. Inside the Docker container, run the ROS 2/Gazebo simulation using the provided script (the first time, it might take a few minutes for Gazebo to load all the models):
    source run.sh

Test Semantic Segmentation and Calculate the Perspective Transformation Matrix

  1. Run the Docker container and JupyterLab:

    source run_docker.sh
    
    jupyter lab --ip=0.0.0.0 --port=8888 --allow-root --no-browser
  2. Follow the instructions in the Jupyter notebook located inside the container at /usr/src/app/dev_ws/src/vision/vision/_calculate_perspective_transform.ipynb. A minimal sketch of the kind of transform it computes is shown below.
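
For reference only, a perspective transformation matrix can be computed from four image-to-ground point correspondences, for example with OpenCV. The coordinates below are placeholders, not the calibrated values from the notebook:

    import cv2
    import numpy as np

    # Placeholder correspondences (assumed values): four pixel coordinates in the
    # camera image and the bird's-eye-view coordinates they should map to. The real
    # values come from the calibration procedure in the notebook.
    src = np.float32([[420, 480], [860, 480], [1100, 700], [180, 700]])
    dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])

    M = cv2.getPerspectiveTransform(src, dst)             # 3x3 homography
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)      # stand-in for a camera frame
    bev = cv2.warpPerspective(frame, M, (400, 600))       # frame warped to bird's-eye view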

Additional notebooks are provided in /usr/src/app/dev_ws/src/vision/vision/ to explain some of the concepts used in this work.

Note: an older implementation using DWA (Dynamic Window Approach) in citysim is available here: dwa

Citation

If you find this project useful for your research, please consider citing:

@inproceedings{galvis2023autonomous,
  title={An Autonomous Navigation Approach based on Bird’s-Eye View Semantic Maps},
  author={Galvis, Juan and Pediaditis, Dimitrios and Almazrouei, Khawla Saif and Aspragathos, Nikos},
  booktitle={2023 27th International Conference on Methods and Models in Automation and Robotics (MMAR)},
  pages={81--86},
  year={2023},
  organization={IEEE}
}
