Abstract
This project was born thanks to a European collaboration between the University of Tartu, Bordeaux INP and ENSAM. As part of the Ros4pro workshop, the University of Tartu (Estonia) has developed an educational robot: the Robotont.
In 2022 it was time to build it in France. This is why two first-year trainees at ENSEIRB-MATMECA, Victor Lohézic and Lilian Bonnet, modified the Robotont. They also provided the following deliverables, which explain how to build and program the robot.
Presentation
Robotont is a mobile robot equipped with three holonomic wheels. Its biggest feature is its RealSense camera: thanks to it, the robot can locate itself in space by recognizing its environment.
It is programmed under ROS, which allows it to be integrated into the Ros4pro workshop. Ros4pro is an educational project introducing ROS, offered in engineering schools and on other occasions. The project is studied from two points of view: two Raspberry Pi boards were used, each with a different operating system (Ubuntu 20.04.4 and Raspberry Pi OS), hence a different Noetic installation.
Noetic Installation & Robotont
Computer Ubuntu Focal (20.04)
Noetic Installation
Follow the instructions from this link.
Creating a catkin workspace
You need to install the catkin tools package with the following command:
sudo apt-get install python3-catkin-tools
Let’s create and build a catkin workspace:
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/
catkin_make
Run this command (you can add it to your .bashrc):
source ~/catkin_ws/devel/setup.bash
Robotont’s Packages
In the src directory of your workspace, run:
git clone https://github.com/robotont/robotont_navigation
git clone https://github.com/robotont/robotont_msgs
git clone https://github.com/robotont/robotont_description
git clone https://github.com/robotont/robotont_nuc_description
git clone https://github.com/robotont/robotont_demos
git clone https://github.com/robotont/robotont_gazebo
Move to your workspace directory and run:
catkin build
Additional packages
The following packages are needed to run the demos:
1. For 2D mapping
sudo apt update
sudo apt install ros-noetic-depthimage-to-laserscan
sudo apt install ros-noetic-move-base
To install Cartographer, run:
sudo apt-get update
sudo apt-get install -y python3-wstool python3-rosdep ninja-build stow
In your workspace:
wstool init src
wstool merge -t src https://raw.githubusercontent.com/cartographer-project/cartographer_ros/master/cartographer_ros.rosinstall
wstool update -t src
Use rosdep to install the packages:
sudo rosdep init
rosdep update
rosdep install --from-paths src --ignore-src --rosdistro=${ROS_DISTRO} -y
Install abseil-cpp:
src/cartographer/scripts/install_abseil.sh
Build and install
catkin_make_isolated --install --use-ninja
2. For 3D mapping
sudo apt install ros-noetic-rtabmap-ros
3. For AR tracking
git clone -b noetic-devel https://github.com/machinekoder/ar_track_alvar
Launch simulation
Update your .bashrc with the following lines:
export ROS_HOSTNAME=$(hostname).local
export ROS_MASTER_URI=http://localhost:11311
Mapping 2D
Launch the simulator
roslaunch robotont_gazebo world_minimaze.launch
Launch teleop keyboard
roslaunch robotont_demos teleop_keyboard.launch
Launch 2d_slam.launch
roslaunch robotont_demos 2d_slam.launch
Display the map on RViz
roslaunch robotont_demos 2d_slam_display.launch
Mapping 3D
Launch the simulator
roslaunch robotont_gazebo world_colors.launch
Launch 3d_mapping.launch
roslaunch robotont_demos 3d_mapping.launch
Launch 3d_mapping_display.launch to visualize the result
roslaunch robotont_demos 3d_mapping_display.launch
To move the robot, open another terminal window and run the teleop twist keyboard:
roslaunch robotont_demos teleop_keyboard.launch
Mapping and navigation
Clone the following package into the src directory and keep only map_server:
git clone -b noetic-devel https://github.com/ros-planning/navigation
From your workspace, run
catkin_make
Build the map, then run the following command to save it:
rosrun map_server map_saver -f mymap
Run the server
rosrun map_server map_server mymap.yaml
Run the commands from Mapping 2D part.
Modify the rviz interface to load the map.
Some errors
Error
ERROR: cannot launch node of type [gmapping/slam_gmapping]: gmapping
Solution
sudo apt install ros-noetic-slam-gmapping
Error
Could not find a package configuration file provided by "joy" with any of the following names: joyConfig.cmake joy-config.cmake
Solution
sudo apt-get install ros-noetic-joy
Error
ERROR: cannot launch node of type [teleop_twist_keyboard/teleop_twist_keyboard.py]: teleop_twist_keyboard
Solution
sudo apt-get install ros-noetic-teleop-twist-keyboard
Depth Camera
The camera is a RealSense Depth Camera D435.
Setup
The camera assembly is described on the following web page.
Register the server’s public key:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE
Add the server to the list of repositories:
sudo add-apt-repository "deb https://librealsense.intel.com/Debian/apt-repo $(lsb_release -cs) main" -u
Install the libraries
sudo apt-get install librealsense2-dkms
sudo apt-get install librealsense2-utils
Optionally, install the following libraries:
sudo apt-get install librealsense2-dev
sudo apt-get install librealsense2-dbg
To finish:
sudo apt-get update
sudo apt-get upgrade
You can reconnect your camera and run:
realsense-viewer
This example demonstrates how to start streaming depth frames from the camera and display the image in the console as ASCII art.
The example comes from this link; I slightly modified the script:
## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.
#####################################################
## librealsense tutorial #1 - Accessing depth data ##
#####################################################

import time
# First import the library
import pyrealsense2 as rs

try:
    # Create a context object. This object owns the handles to all connected realsense devices
    pipeline = rs.pipeline()

    # Configure streams
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

    # Start streaming
    pipeline.start(config)

    while True:
        # This call waits until a new coherent set of frames is available on a device
        # Calls to get_frame_data(...) and get_frame_timestamp(...) on a device will return stable values until wait_for_frames(...) is called
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue

        # Print a simple text-based representation of the image, by breaking it into 10x20 pixel
        # regions and approximating the coverage of pixels within one meter
        coverage = [0] * 64
        for y in range(480):
            for x in range(640):
                dist = depth.get_distance(x, y)
                if 0 < dist and dist < 1:
                    coverage[x // 10] += 1

            if y % 20 == 19:
                line = ""
                for c in coverage:
                    # Map each column's hit count to an ASCII intensity character
                    line += " .:nhBXWW"[c // 25]
                coverage = [0] * 64
                print(line)
        time.sleep(1)

# Method calls against librealsense objects may throw exceptions of type rs.error
except Exception as e:
    print(e)
The result is rather fun:
This example displays the same video streams as the RealSense Viewer (launched with the realsense-viewer command).
The code is available here.
Here is the result:
This example removes the background using the depth data.
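The linked code is not reproduced here, but the idea can be sketched as follows with pyrealsense2, NumPy and OpenCV. This is only a minimal sketch of the technique, not the code behind the link: the 1 m clipping distance and the grey fill value are arbitrary choices.
# A minimal sketch of depth-based background removal (assumptions: pyrealsense2,
# numpy and opencv-python are installed; 1 m clipping distance chosen arbitrarily).
import numpy as np
import cv2
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Depth frames are in device units; convert 1 m into those units
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
clipping_distance = 1.0 / depth_scale

# Align depth frames to color frames so that pixels match between both images
align = rs.align(rs.stream.color)

try:
    while True:
        frames = align.process(pipeline.wait_for_frames())
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        depth = np.asanyarray(depth_frame.get_data())
        color = np.asanyarray(color_frame.get_data())

        # Grey out every pixel whose depth is unknown (0) or farther than the clipping distance
        background = (depth <= 0) | (depth > clipping_distance)
        result = np.where(background[..., None], 153, color).astype(np.uint8)

        cv2.imshow('Background removed', result)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    pipeline.stop()
    cv2.destroyAllWindows()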
Raspberry Pi 4
Ubuntu 20.04.4 installation
Operating System Installation
- Download and install Raspberry Pi Imager on a computer with an SD card reader.
- Choose Ubuntu Server 20.04.4 LTS (RPi 3/4/400) 32-bit. Indeed, Noetic is only available on Ubuntu 20.04.
- Then, choose the SD card and write to it.
- Click on the settings button
- Set hostname
raspberrypi.local
- Click on Enable SSH and Configure Wireless LAN
Open a terminal and run:
ssh pi@raspberrypi.local
The terminal displays:
The authenticity of host 'raspberrypi.local (192.168.0.119)' can't be established. ECDSA key fingerprint is SHA256:13zUWPqvlgOGa3yJ/oHvZIC8bnDb0gw7p86L9PNVEp4. Are you sure you want to continue connecting (yes/no/[fingerprint])?
Answer yes.
pi@raspberrypi.local's password:
Enter the password you set earlier.
Set up your sources.list
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
Set up your keys
sudo apt install curl # if you haven't already installed curl
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
If you have an error, run this command:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F42ED6FBAB17C654
sudo apt update
Then run:
sudo apt upgrade
Install the ROS packages:
sudo apt install ros-noetic-desktop-full
Don't forget to add the following command to your .bashrc:
source /opt/ros/noetic/setup.bash
Install all dependencies
sudo apt-get install -y python3-rosdep python3-rosinstall-generator python3-wstool python3-rosinstall build-essential cmake
Swap
Swap is storage on the hard disk where the operating system temporarily stores data. It is required when there is not enough space in RAM.
The following command checks whether a swap is already available:
sudo swapon --show
Run (to check if a swap is running):
free -h
Create a swap file of 2 GB:
sudo fallocate -l 2G /swapfile
You can check whether the command succeeded with:
ls -lh /swapfile
Make the file accessible only to root:
sudo chmod 600 /swapfile
Mark the file as swap space:
sudo mkswap /swapfile
Enable the swap:
sudo swapon /swapfile
You can check if the swap is working with:
free -h
The following commands make the swap persist across reboots:
sudo cp /etc/fstab /etc/fstab.bak
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
source ~/.bashrc
Create and build a catkin workspace:
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/
catkin_make
If you get the following error: No CMAKE_CXX_COMPILER could be found, run:
sudo apt-get update && sudo apt-get install build-essential
sudo apt-get install python3-catkin-tools
Clone the following packages in the directory src:
git clone -b dev-noetic-devel https://github.com/victorlohezic/robotont_description
git clone -b dev-noetic-devel https://github.com/victorlohezic/robotont_nuc_description
git clone -b dev-noetic-devel https://github.com/victorlohezic/robotont_navigation
git clone -b dev-noetic-devel https://github.com/victorlohezic/robotont_demos
git clone -b dev-noetic-devel https://github.com/victorlohezic/robotont_msgs
git clone https://github.com/ros-teleop/teleop_twist_keyboard
git clone https://github.com/victorlohezic/robotont_laserscan_to_distance
sudo apt-get install ros-noetic-joy
Install the driver package into your src directory:
git clone https://github.com/robotont/robotont_driver
cd robotont_driver
rosinstall . .ci-rosinstall
cd serial
make
make install
cd ..
cd ..
mv robotont_driver/serial .
sudo apt install ros-noetic-tf
Add to your .bashrc:
source ~/catkin_ws/devel/setup.bash
Then
source .bashrc
Then, run from your workspace:
catkin_make
The following packages are needed to run the demos:
1. For 2D mapping:
sudo apt update
sudo apt install ros-noetic-depthimage-to-laserscan
sudo apt install ros-noetic-gmapping
sudo apt install ros-noetic-move-base
2. For 3D mapping:
sudo apt install ros-noetic-libg2o
Install RTAB-Map libraries (not in your workspace):
cd ~
git clone https://github.com/introlab/rtabmap.git rtabmap
cd rtabmap/build
cmake ..   # the two dots are part of the command
make -j6
sudo make install
Install the ROS package
cd ~/catkin_ws
git clone https://github.com/introlab/rtabmap_ros.git src/rtabmap_ros
catkin_make -j4
3. For AR tracking, in the src directory:
git clone -b noetic-devel https://github.com/machinekoder/ar_track_alvar
Raspberry Pi OS installation
Operating System Installation
Follow the same steps as in the Ubuntu 20.04.4 installation part. However, choose Raspberry Pi OS Lite (a port of Debian Buster with security updates and no desktop environment).
Set up your sources.list
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
Set up your keys
sudo apt install curl # if you haven't already installed curl
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
Then run:
sudo apt update
If you have an error, run these commands:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F42ED6FBAB17C654
sudo apt update
Install all dependencies
sudo apt-get install -y python-rosdep python-rosinstall-generator python-wstool python-rosinstall build-essential cmake
Initialize Rosdep
sudo rosdep init
Then, update
rosdep update
Run
mkdir ~/catkin_ws
cd ~/catkin_ws
Use rosinstall_generator to generate a list of Noetic dependencies:
rosinstall_generator ros_comm --rosdistro noetic --deps --wet-only --tar > noetic-ros_comm-wet.rosinstall
Run the following command:
wstool init src noetic-ros_comm-wet.rosinstall
Install all system dependencies
rosdep install -y --from-paths src --ignore-src --rosdistro noetic -r --os=debian:buster
Follow the instructions from the swap part, then run the following command:
sudo src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/noetic -j1 -DPYTHON_EXECUTABLE=/usr/bin/python3
Install the Robotont packages:
git clone https://github.com/robotont/robotont_description
git clone https://github.com/robotont/robotont_nuc_description
git clone https://github.com/robotont/robotont_navigation
git clone https://github.com/robotont/robotont_demos
git clone https://github.com/robotont/robotont_driver
Run
sudo apt-get install libspnav-dev
sudo apt install libx11-dev
From this repository, download the following directories into your src directory:
sensor_msgs geometry_msgs
Clone the following repositories into your src directory:
git clone https://github.com/ros-drivers/joystick_drivers
git clone https://github.com/ros/bond_core
git clone https://github.com/ros/roslint
Add the following lines to your .bashrc:
source /opt/ros/noetic/setup.bash
source ~/catkin_ws/devel/setup.bash
From this repository, download the following directories into your src directory:
nav_msgs actionlib_msgs
In the same way, from this repository:
tf
Install the angles package into your src directory:
git clone https://github.com/ros/angles
From this repository, download the following directories into your src directory:
tf2 tf2_msgs tf2_ros tf2_py
Install the actionlib and serial packages into your src directory:
git clone https://github.com/ros/actionlib
git clone https://github.com/wjwwood/serial
In the serial directory, run the following commands to install it:
make
make test
make install
Then, run from your workspace:
catkin_make
If you have the following error:
-- tf: 1 messages, 1 services CMake Error at catkin/cmake/test/gtest.cmake:180 (add_executable): add_executable cannot create target "test_transform_datatypes" because another target with the same name already exists. The existing target is an executable created in source directory "/home/flynn/dev/ros/noetic/src/geometry2/tf2". See documentation for policy CMP0002 for more details. Call Stack (most recent call first): catkin/cmake/test/gtest.cmake:89 (_catkin_add_executable_with_google_test) catkin/cmake/test/gtest.cmake:37 (_catkin_add_google_test) geometry/tf/CMakeLists.txt:86 (catkin_add_gtest) CMake Error at geometry/tf/CMakeLists.txt:87 (target_link_libraries): Attempt to add link library "tf" to target "test_transform_datatypes" which is not built in this directory. This is allowed only when policy CMP0079 is set to NEW.
You need to change the CMakeLists.txt file in src/geometry2/tf2 as shown here.
Your CMakeLists.txt should look like this:
cmake_minimum_required(VERSION 3.0.2)
project(tf2)
find_package(console_bridge REQUIRED)
find_package(catkin REQUIRED COMPONENTS geometry_msgs rostime tf2_msgs)
find_package(Boost REQUIRED COMPONENTS system thread)
catkin_package(
INCLUDE_DIRS include
LIBRARIES tf2
DEPENDS console_bridge
CATKIN_DEPENDS geometry_msgs tf2_msgs rostime)
include_directories(include ${catkin_INCLUDE_DIRS} ${console_bridge_INCLUDE_DIRS})
# export user definitions
#CPP Libraries
add_library(tf2 src/cache.cpp src/buffer_core.cpp src/static_cache.cpp)
target_link_libraries(tf2 ${Boost_LIBRARIES} ${catkin_LIBRARIES} ${console_bridge_LIBRARIES})
add_dependencies(tf2 ${catkin_EXPORTED_TARGETS})
install(TARGETS tf2
ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_GLOBAL_BIN_DESTINATION}
)
install(DIRECTORY include/${PROJECT_NAME}/
DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
)
# Tests
if(CATKIN_ENABLE_TESTING)
catkin_add_gtest(test_cache_unittest test/cache_unittest.cpp)
target_link_libraries(test_cache_unittest tf2 ${console_bridge_LIBRARIES})
add_dependencies(test_cache_unittest ${catkin_EXPORTED_TARGETS})
catkin_add_gtest(test_static_cache_unittest test/static_cache_test.cpp)
target_link_libraries(test_static_cache_unittest tf2 ${console_bridge_LIBRARIES})
add_dependencies(test_static_cache_unittest ${catkin_EXPORTED_TARGETS})
catkin_add_gtest(test_simple test/simple_tf2_core.cpp)
target_link_libraries(test_simple tf2 ${console_bridge_LIBRARIES})
add_dependencies(test_simple ${catkin_EXPORTED_TARGETS})
add_executable(speed_test EXCLUDE_FROM_ALL test/speed_test.cpp)
target_link_libraries(speed_test tf2 ${console_bridge_LIBRARIES})
add_dependencies(tests speed_test)
add_dependencies(tests ${catkin_EXPORTED_TARGETS})
catkin_add_gtest(test_transform_datatypes2 test/test_transform_datatypes.cpp)
target_link_libraries(test_transform_datatypes2 tf2 ${console_bridge_LIBRARIES})
add_dependencies(test_transform_datatypes2 ${catkin_EXPORTED_TARGETS})
endif()
Run again
catkin_make
The project requires laser_scan_to_distance. Thus, run the following in your src directory:
git clone https://github.com/ros-perception/depthimage_to_laserscan
git clone https://github.com/ros/dynamic_reconfigure
From this repository, download the following directory into your src:
image_geometry
In the same way, from this repository:
image_transport
From this repository:
nodelet
The project also requires some packages for SLAM. These packages allow the robot to perform simultaneous localization and mapping.
Install them in your src directory:
git clone https://github.com/ros-perception/slam_gmapping
git clone https://github.com/ros-perception/openslam_gmapping
The robot needs the move_base package, which is used for navigation.
Install the following in your src directory:
git clone https://github.com/ros-perception/laser_geometry
git clone https://github.com/ros-planning/navigation_msgs
From this repository, download the following directories into your src directory:
move_base base_local_planner costmap_2d voxel_grid nav_core clear_costmap_recovery navfn rotate_recovery
In the same way, from this repository:
tf2_geometry_msgs tf2_sensor_msgs
From this repository:
visualization_msgs
Install the following package to control the robot with your keyboard:
git clone https://github.com/ros-teleop/teleop_twist_keyboard
Run catkin_make one last time:
catkin_make
Set up ROS_MASTER_URI
First, get the IP addresses of your computer and your Raspberry Pi (connect over SSH first) with the following command:
hostname -I
For instance, I have the following IP:
Raspberry Pi 4 | PC |
192.168.2.115 | 192.168.2.103 |
Modify the .bashrc of the Raspberry Pi and of the computer:
#Raspberry Pi
export ROS_MASTER_URI=http://192.168.2.115:11311
export ROS_IP=192.168.2.115
#computer
export ROS_MASTER_URI=http://192.168.2.115:11311
export ROS_IP=192.168.2.103
source .bashrc
Connect to the Raspberry Pi:
ssh pi@raspberrypi.local
then run:
cd catkin_ws
roscore
Run on the computer:
cd catkin_ws
rosnode list
If there are no errors, it works.
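If you prefer to check the connection from Python, here is a small sketch (not part of the original setup) that asks the master defined by ROS_MASTER_URI for its PID through the standard ROS Master XML-RPC API:
# Connectivity check (a sketch): queries the ROS master defined by ROS_MASTER_URI
# for its PID using the documented Master XML-RPC API.
import os
from xmlrpc.client import ServerProxy

master_uri = os.environ.get('ROS_MASTER_URI', 'http://localhost:11311')
try:
    code, status, pid = ServerProxy(master_uri).getPid('/connectivity_check')
    print(f"ROS master reachable at {master_uri} (pid {pid})")
except OSError as err:
    print(f"Cannot reach the ROS master at {master_uri}: {err}")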
Change the Wi-Fi
Modify the file wpa_supplicant.conf. Its path is /etc/wpa_supplicant/wpa_supplicant.conf.
country=FR
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
ap_scan=1
update_config=1

network={
    ssid="your-ssid"
    psk="your-password"
}
Coral USB Accelerator
TPU Installation
Add the packages to your system:
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
Install the Edge TPU runtime:
sudo apt-get install libedgetpu1-std
Connect the USB Accelerator to your computer using the provided USB 3.0 cable.
Install the PyCoral library:
sudo apt-get install python3-pycoral
Download the following code:
mkdir coral && cd coral
git clone https://github.com/google-coral/pycoral.git
cd pycoral
Download the following resources:
bash examples/install_requirements.sh classify_image.py
Run the image classifier with the bird photo:
python3 examples/classify_image.py \
--model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--labels test_data/inat_bird_labels.txt \
--input test_data/parrot.jpg
The result should look like this:
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
11.8ms
3.0ms
2.8ms
2.9ms
2.9ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.75781
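classify_image.py already does everything for you. For reference, a rough sketch of what it does with the PyCoral API looks like the following (the model, label and image paths are the ones used in the command above; this is a simplified approximation, not the actual script):
# Rough sketch of what classify_image.py does with the PyCoral API
# (run from the pycoral directory so the test_data paths resolve).
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite')
interpreter.allocate_tensors()
labels = read_label_file('test_data/inat_bird_labels.txt')

# Resize the image to the model input size and copy it into the input tensor
size = common.input_size(interpreter)
image = Image.open('test_data/parrot.jpg').convert('RGB').resize(size, Image.LANCZOS)
common.set_input(interpreter, image)

interpreter.invoke()
for klass in classify.get_classes(interpreter, top_k=1):
    print(f'{labels.get(klass.id, klass.id)}: {klass.score:.5f}')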
Object Recognition using Coral Edge TPU USB Accelerator
Check the camera
Plug in the camera.
Enable the camera:
sudo raspi-config
Click on Interface Options.
Then, click on P1 Camera:
Click on Yes.
Run the following command:
vcgencmd get_camera
If the output is:
supported=1 detected=1
the camera works!
Install the TensorFlow Lite interpreter
sudo apt-get install python3-tflite-runtime
Download a sample project
mkdir google-coral && cd google-coral
git clone https://github.com/google-coral/examples-camera --depth 1
Download the pre-trained models:
cd examples-camera
sh download_models.sh
Install requirements
cd opencv
bash install_requirements.sh
Modify detect.py as follows:
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A demo that runs object detection on camera frames using OpenCV.
TEST_DATA=../all_models
Run face detection model:
python3 detect.py \
--model ${TEST_DATA}/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite
Run coco model:
python3 detect.py \
--model ${TEST_DATA}/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
--labels ${TEST_DATA}/coco_labels.txt
"""
import argparse
import cv2
import os
from pycoral.adapters.common import input_size
from pycoral.adapters.detect import get_objects
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter
from pycoral.utils.edgetpu import run_inference
def main():
default_model_dir = '../all_models'
default_model = 'mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite'
default_labels = 'coco_labels.txt'
parser = argparse.ArgumentParser()
parser.add_argument('--model', help='.tflite model path',
default=os.path.join(default_model_dir,default_model))
parser.add_argument('--labels', help='label file path',
default=os.path.join(default_model_dir, default_labels))
parser.add_argument('--top_k', type=int, default=3,
help='number of categories with highest score to display')
parser.add_argument('--camera_idx', type=int, help='Index of which video source to use. ', default = 0)
parser.add_argument('--threshold', type=float, default=0.1,
help='classifier score threshold')
args = parser.parse_args()
print('Loading {} with {} labels.'.format(args.model, args.labels))
interpreter = make_interpreter(args.model)
interpreter.allocate_tensors()
labels = read_label_file(args.labels)
inference_size = input_size(interpreter)
cap = cv2.VideoCapture(args.camera_idx)
while cap.isOpened():
ret, frame = cap.read()
if not ret:
break
cv2_im = frame
cv2_im_rgb = cv2.cvtColor(cv2_im, cv2.COLOR_BGR2RGB)
cv2_im_rgb = cv2.resize(cv2_im_rgb, inference_size)
run_inference(interpreter, cv2_im_rgb.tobytes())
objs = get_objects(interpreter, args.threshold)[:args.top_k]
for obj in objs:
if is_detected_object(obj):
print(f"Object detected : {labels.get(obj.id, obj.id)}, confidence rate : {get_score_object(obj)} %")
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
def is_detected_object(obj):
return get_score_object(obj) > 70
def get_score_object(obj):
return int(100*obj.score)
def get_name_object(obj):
return labels.get(obj.id, obj.id)
if __name__ == '__main__':
main()
Run
python3 detect.py
You can point your camera at an object such as a bottle and you should get:
object detected : bottle, confidence rate : 83 %
RealSense Camera Installation
Requirements
Start with the following commands:
sudo apt-get update && sudo apt-get dist-upgrade
sudo apt-get install automake libtool vim cmake libusb-1.0-0-dev libx11-dev xorg-dev libglu1-mesa-dev
Create a new udev rule:
cd ~
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense
sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/
Apply the changes:
sudo su
udevadm control --reload-rules && udevadm trigger
exit
Add to the .bashrc the following line:
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
Apply the changes:
source ~/.bashrc
Install protobuf:
cd ~
git clone --depth=1 -b v3.10.0 https://github.com/google/protobuf.git
cd protobuf
./autogen.sh
./configure
make -j1
sudo make install
cd python
export LD_LIBRARY_PATH=../src/.libs
python3 setup.py build --cpp_implementation
python3 setup.py test --cpp_implementation
sudo python3 setup.py install --cpp_implementation
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION=3
sudo ldconfig
protoc --version
Install libtbb-dev:
cd ~
wget https://github.com/PINTO0309/TBBonARMv7/raw/master/libtbb-dev_2018U2_armhf.deb
sudo dpkg -i ~/libtbb-dev_2018U2_armhf.deb
sudo ldconfig
rm libtbb-dev_2018U2_armhf.deb
Install the RealSense SDK (librealsense):
cd ~/librealsense
mkdir build && cd build
cmake .. -DBUILD_EXAMPLES=true -DCMAKE_BUILD_TYPE=Release -DFORCE_LIBUVC=true
make -j1
sudo make install
Install the RealSense SDK pyrealsense2 (Python bindings for librealsense):
cd ~/librealsense/build
cmake .. -DBUILD_PYTHON_BINDINGS=bool:true -DPYTHON_EXECUTABLE=$(which python3)
make -j1
sudo make install
Add the following line to your .bashrc:
export PYTHONPATH=$PYTHONPATH:/usr/local/lib
Apply the change:
source ~/.bashrc
Install OpenGL:
sudo apt-get install python-opengl
sudo -H pip3 install pyopengl
sudo -H pip3 install pyopengl_accelerate==3.1.3rc1
If your Raspberry Pi has the Raspberry Pi OS, run:
sudo raspi-config
Choose the following options:
Advanced Options > A2 GL Driver > G2 GL (Fake KMS)
Download or create the following Python script in the debug directory on your Raspberry Pi:
## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.
#####################################################
## librealsense tutorial #1 - Accessing depth data ##
#####################################################

# First import the library
import pyrealsense2.pyrealsense2 as rs

try:
    # Create a context object. This object owns the handles to all connected realsense devices
    pipeline = rs.pipeline()

    # Configure streams
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

    # Start streaming
    pipeline.start(config)

    while True:
        # This call waits until a new coherent set of frames is available on a device
        # Calls to get_frame_data(...) and get_frame_timestamp(...) on a device will return stable values until wait_for_frames(...) is called
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue

        # Print a simple text-based representation of the image, by breaking it into 10x20 pixel
        # regions and approximating the coverage of pixels within one meter
        coverage = [0] * 64
        for y in range(480):
            for x in range(640):
                dist = depth.get_distance(x, y)
                if 0 < dist and dist < 1:
                    coverage[x // 10] += 1

            if y % 20 == 19:
                line = ""
                for c in coverage:
                    # Map each column's hit count to an ASCII intensity character
                    line += " .:nhBXWW"[c // 25]
                coverage = [0] * 64
                print(line)

# Method calls against librealsense objects may throw exceptions of type rs.error
except Exception as e:
    print(e)
Then run it; you should get a result like this:
Next, install the ROS packages for the camera:
sudo apt-get install ros-noetic-realsense2-camera
sudo apt-get install ros-noetic-realsense2-description
Run:
roslaunch realsense2_camera rs_camera.launch
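To check that the camera node is publishing, you can run a small Python sketch like the one below. It assumes the default /camera/color/image_raw topic name of realsense2_camera:
# Quick check (a sketch assuming the default realsense2_camera topic names):
# waits for one color image from the camera node and prints its size.
import rospy
from sensor_msgs.msg import Image

rospy.init_node('realsense_check', anonymous=True)
msg = rospy.wait_for_message('/camera/color/image_raw', Image, timeout=10.0)
print(f"Received a {msg.width}x{msg.height} image encoded as {msg.encoding}")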
ROS package for Coral Edge TPU USB Accelerator
Package Installation
The package is available here.
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
# If you do not have USB3, install libedgetpu1-legacy-std
sudo apt-get install libedgetpu1-legacy-max # Choose <YES> when asked
sudo apt-get install python3-edgetpu
Install just the TensorFlow Lite interpreter:
sudo apt-get install python3-tflite-runtime
Add the package to the workspace:
cd ~/catkin_ws/src
git clone https://github.com/jsk-ros-pkg/coral_usb_ros.git
wstool init
wstool merge coral_usb_ros/fc.rosinstall
wstool update
rosdep install --from-paths . --ignore-src -y -r
cd ~/catkin_ws
catkin_make
It may be necessary to comment out the following lines in ~/catkin_ws/src/coral_usb_ros/CMakeLists.txt:
# INPUT_REQUIREMENTS requirements.in
#install(FILES requirements.in requirements.txt
# DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
#)
Model download
source ~/catkin_ws/devel/setup.bash
roscd coral_usb/scripts
rosrun coral_usb download_models.py
You need to install this package on the computer in the same way as before.
If you get this error:
ERROR: launchpadlib 1.10.13 requires testresources, which is not installed.
You need to run the following command:
sudo apt install python3-testresources
If you get this error:
ModuleNotFoundError: No module named 'jsk_topic_tools'
You can run:
sudo apt install ros-noetic-jsk-topic-tools
For this use case, you need a fan, configured to run at 100% power.
Connect in ssh to the Raspberry Pi.
ssh pi@raspberrypi.local
Then run:
curl https://download.argon40.com/argonfanhat.sh | bash
Reboot the Raspberry Pi.
Run:
argonone-config
Choose the third option.
Assign a power of 100% to 30°C. That's it.
Connect in ssh to the Raspberry Pi with three terminals:
ssh pi@raspberrypi.local
Run all of the following commands in different terminals, in parallel:
roscore
roslaunch realsense2_camera rs_camera.launch
roslaunch coral_usb edgetpu_object_detector.launch INPUT_IMAGE:=/camera/color/image_raw
Run the following command in your computer terminal:
rosrun image_view image_view image:=/edgetpu_object_detector/output/image
The camera must be detected by the Raspberry Pi.
Install some Raspberry Pi libraries:
sudo apt install libraspberrypi-bin
Modify some config files:
sudo nano /boot/firmware/config.txt
Run:
vcgencmd get_camera
The output must be:
supported=1 detected=1
To get the video stream, the raspicam_node package is useful.
Install the following libraries:
sudo apt install libraspberrypi0
sudo apt install libraspberrypi-dev
sudo apt install libpigpiod-if-dev
Run the following command in your src directory:
git clone https://github.com/UbiquityRobotics/raspicam_node
Connect in ssh to the Raspberry Pi with three terminals:
ssh pi@raspberrypi.local
Run all of the following commands in different terminals, in parallel:
roscore
roslaunch raspicam_node camerav2_1280x960.launch
roslaunch coral_usb edgetpu_object_detector.launch INPUT_IMAGE:=/raspicam_node/image IMAGE_TRANSPORT:=compressed
Run the following command in your computer terminal:
rosrun image_view image_view image:=/edgetpu_object_detector/output/image
Robot Control Application
Install the following application from the Play Store.
Connect in ssh to the Raspberry Pi:
ssh pi@raspberrypi.local
Run the following command to get your IP address:
hostname -I
your-IP
First, install this package:
sudo apt install ros-noetic-compressed-image-transport
Run:
roslaunch robotont_driver driver_basic.launch
This application has not been updated since 2016, which is an issue. Besides, it does not display the camera topic. Thus, I found another application called ROS-Mobile. This application is developed with Android Studio; it is free and its GitHub repository is available here.
Run the application on your phone, add a robot and write the following configuration:
The camera topic is:
/camera/color/image_raw/compressed
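If you want to inspect this topic outside of the app, a minimal subscriber sketch (assuming the realsense2_camera driver and compressed image transport are running) could look like this:
# Sketch of a subscriber for the compressed camera topic used above.
import cv2
import numpy as np
import rospy
from sensor_msgs.msg import CompressedImage

def on_image(msg):
    # Decode the compressed payload (JPEG/PNG) into an OpenCV BGR image
    frame = cv2.imdecode(np.frombuffer(msg.data, np.uint8), cv2.IMREAD_COLOR)
    if frame is None:
        return
    rospy.loginfo_throttle(5, f"Received a {frame.shape[1]}x{frame.shape[0]} frame ({msg.format})")

rospy.init_node('compressed_camera_listener', anonymous=True)
rospy.Subscriber('/camera/color/image_raw/compressed', CompressedImage, on_image, queue_size=1)
rospy.spin()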
This application has some drawbacks: the user needs to launch the motor driver node from a computer. However, ROS-Mobile allows users to connect over SSH.
If you want to control your robot with directional keys, I created a widget called directionnal:
The Github repository is available here.
Either build it from source, or download the APK here.
For each button, you need to set a unique topic name, for instance:
twist_up twist_right twist_left twist_down twist_angular_right twist_angular_left
You need to add the following node:
#! /usr/bin/python3.8
import rospy
from geometry_msgs.msg import Twist
def msgCallback(inputMsg: Twist):
outputMsg.linear.x = 0
outputMsg.linear.y = 0
outputMsg.linear.z = 0
outputMsg.angular.x = 0
outputMsg.angular.y = 0
outputMsg.angular.z = 0
if inputMsg.linear.x:
outputMsg.linear.x = inputMsg.linear.x
elif inputMsg.linear.y:
outputMsg.linear.y = inputMsg.linear.y
elif inputMsg.linear.z:
outputMsg.linear.z = inputMsg.linear.z
elif inputMsg.angular.x:
outputMsg.angular.x = inputMsg.angular.x
elif inputMsg.angular.y:
outputMsg.angular.y = inputMsg.angular.y
elif inputMsg.angular.z:
outputMsg.angular.z = inputMsg.angular.z
publisher.publish(outputMsg)
if __name__ == "__main__":
outputMsg = Twist()
rospy.init_node('combiner', anonymous=True)
subscriber1 = rospy.Subscriber("/twist_up", Twist, msgCallback)
subscriber2 = rospy.Subscriber("/twist_down", Twist, msgCallback)
subscriber3 = rospy.Subscriber("/twist_right", Twist, msgCallback)
subscriber4 = rospy.Subscriber("/twist_left", Twist, msgCallback)
subscriber5 = rospy.Subscriber("/twist_angular_left", Twist, msgCallback)
subscriber6 = rospy.Subscriber("/twist_angular_right", Twist, msgCallback)
publisher = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
rospy.spin()
Add this launch file:
<?xml version="1.0"?>
<launch>
<node pkg="robotont_demos" type="app_merge.py" name="app_merge"></node>
</launch>
These files are in the robotont_demos repository.
To use the application, connect in ssh to the Raspberry Pi with the app and your computer:
ssh pi@raspberrypi.local
Launch the following node from the app:
roslaunch robotont_driver driver_basic.launch
Launch the new node from the Raspberry Pi:
roslaunch robotont_demos app_launch.launch
Motors debug
Establish an ssh connection:
ssh pi@raspberrypi.local
Start the driver
roslaunch robotont_driver driver_basic.launch
Move the robot using a keyboard
rosrun teleop_twist_keyboard teleop_twist_keyboard.py
Your motors should run.
You can check whether the information from ROS is being sent:
rostopic echo /cmd_vel
You should see the speed of the robot.
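As an alternative to the keyboard teleop, you can publish a velocity yourself with a short Python sketch like the one below (the 0.1 m/s value is only an example; make sure the robot has room to move):
# Minimal test publisher (a sketch, an alternative to teleop_twist_keyboard):
# sends a slow forward velocity on /cmd_vel for three seconds, then stops the robot.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('motor_test', anonymous=True)
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
rospy.sleep(1.0)  # give the publisher time to connect to the driver

cmd = Twist()
cmd.linear.x = 0.1  # m/s, kept low on purpose

rate = rospy.Rate(10)
end_time = rospy.Time.now() + rospy.Duration(3.0)
while not rospy.is_shutdown() and rospy.Time.now() < end_time:
    pub.publish(cmd)
    rate.sleep()

pub.publish(Twist())  # zero velocity so the robot stops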
SLAM
Simultaneous localization and mapping (SLAM) allows the robot to create a map of a place.
You need to get the package map_server:
sudo apt install ros-noetic-map-server
Connect in ssh to the Raspberry Pi:
ssh pi@raspberrypi.local
Then:
roslaunch robotont_demos 2d_slam_only.launch
In another terminal (Raspberry Pi):
roslaunch realsense2_camera rs_camera.launch
On your computer, run the following command to see the map:
roslaunch robotont_demos 2d_slam_display.launch
When you are done, you can save the map with the following command:
rosrun map_server map_saver -f map
Navigation
You need to install the following packages:
sudo apt install ros-noetic-map-server
sudo apt install ros-noetic-amcl
sudo apt install ros-noetic-teb-local-planner
A map of the place is needed to move the robot.
Add the map to the src/maps directory. There are two files: map.pgm and map.yaml.
Warning: in map.yaml, you need to set the correct path to map.pgm, for instance:
image: /home/pi/catkin_ws/src/robotont_navigation/maps/map.pgm
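For reference, a complete map.yaml generated by map_saver typically looks like the lines below. The resolution and origin values are only typical examples and depend on your map; normally only the image path needs to be edited by hand:
image: /home/pi/catkin_ws/src/robotont_navigation/maps/map.pgm
resolution: 0.050000
origin: [-10.000000, -10.000000, 0.000000]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196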
Connect in ssh to the Raspberry Pi:
ssh pi@raspberrypi.local
Run the following command in different terminals:
roslaunch robotont_demos 2d_navigation_teb_amcl.launch
roslaunch realsense2_camera rs_camera.launch
On your computer:
roslaunch robotont_demos 2d_slam_display.launch
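Navigation goals are normally sent from RViz (2D Nav Goal), but they can also be sent from Python. The sketch below assumes that the launch file above starts a standard move_base node and that the fixed frame is called map:
# Sketch of sending one navigation goal to move_base from Python.
import actionlib
import rospy
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal', anonymous=True)
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0      # one metre along the map x axis
goal.target_pose.pose.orientation.w = 1.0   # keep the current heading

client.send_goal(goal)
client.wait_for_result()
print('Goal finished with state', client.get_state())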
Nucleo
Debug
The Nucleo board sends odometry data. The Raspberry Pi can check whether it receives this data. You need to install the following Python package:
pip3 install pyserial
The following command displays the available serial ports:
python3 -m serial.tools.miniterm
Run
ssh pi@raspberrypi.local
Create a debug directory:
mkdir debug
cd debug
Create a Python file for debugging:
nano read_port.py
Write the following lines in it:
import serial

ser = serial.Serial('/dev/ttyACM0', 115200)
text = ser.readline()
print(text)
ser.close()
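If you want to watch the odometry stream continuously rather than reading a single line, a slightly longer sketch (assuming the board still appears as /dev/ttyACM0 at 115200 baud) is:
# Keep printing what the Nucleo sends until Ctrl+C is pressed.
import serial

with serial.Serial('/dev/ttyACM0', 115200, timeout=1.0) as ser:
    try:
        while True:
            line = ser.readline()
            if line:
                print(line.decode(errors='replace').strip())
    except KeyboardInterrupt:
        pass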
Manufacturing of the Robotont
With the brain of the robot now functional, the robot itself must now be built.
Electronics
This section explains the different electronic parts necessary for the operation of the robot. A Raspberry Pi 4 is already used as the brain of the Robotont. A Nucleo-64 (STM32L476) must be added to it for everything related to motor driver control, so it is absolutely essential to have one motor driver (MC33926) per motor. For the power part, there is also an electronic board.
Motor driver
For motor control, an MC33926 is used (datasheet). This version of the project is based on the Pololu MC33926 Motor Driver Carrier (website). To communicate with the Nucleo, the choice was made to create a shield for each motor in order to simplify the wiring. The electronic board is provided as a KiCad project. Manufacturing the board gives this result:
Nucleo
The Nucleo part is composed of a NUCLEO-L476RG (or equivalent) and a shield to simplify wiring.
Shield
The shield is provided as a KiCad project. The connections on this shield follow the pinout below:
After manufacturing, it gives this result:
Programming
In addition, the Nucleo must be programmed; the code can be downloaded from the following Git repository: link. A PlatformIO project is already set up, so just upload the code as explained below:
PlatformIO tutorial
First of all, install Visual Studio Code (VSCode); it can be downloaded here.
Now we will install the PlatformIO extension. To do this, go to the Extensions tab:
It is then possible to search for and install PlatformIO:
After restarting VSCode, PlatformIO appears! You can then open our project, downloadable from this link.
Once the Nucleo is connected to the computer, it is possible to upload the code:
This step may take some time, but once "success" is displayed in the terminal, the Nucleo is ready!
Power management board
Another electronic board was needed to manage the power electronics. It has a battery voltage rail and a 5V rail to power the Raspberry Pi 4 and the Nucleo. It also protects the battery and carries a beeper and a fuse (30 A here). An emergency stop button and a start button (which can be replaced by a jumper) must also be connected. Here is the KiCad project, which gives this after fabrication:
List of materials
Mechanics
The different parts are designed to be cut with a laser cutter. The ideal material is therefore 5 mm PMMA. The following parts are required in the specified quantities (they can be downloaded as a complete plate or one by one; the 3D models are also available here):
To assemble all this you will need the following hardware:
Note:
Either buy three hubs or machine them. Here is a 3D model that can easily be printed.
Wires
Now we need some cables:
Assembly
Now that all the elements of the Robotont are ready, assembly can begin by following these assembly instructions:
This documentation was written thanks to:
- Creating a catkin workspace : wiki ROS
- Robotont Documentation
- Swap working
- Installation ROS on Raspberry Pi OS
- Coral TPU Documentation
- Object Recognition
- Setup camera Realsense on Linux
- Python debug reading usb port
- Setup camera Realsense on Raspberry Pi 4
- Installation Package ROS RealSense
- Installation Package ROS rtabmap
- Coral USB ROS