Map is overlapping when using a 2D lidar and odometry coming from ICP matching and a RealSense camera, how to fix it? #670

astronaut71 commented Feb 14, 2024

Hi

I'm using ROS 2 Humble with a 2D lidar and a RealSense D435if camera with an IMU. I created ICP odometry with the rtabmap package; here is the launch file:

import launch
import launch_ros
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():
    # Include the RealSense camera launch file (aligned depth, accel/gyro united into one IMU topic)
    rs_camera_launch = IncludeLaunchDescription(
        PythonLaunchDescriptionSource('/home/user/ros2_ws/src/realsense-ros/realsense2_camera/examples/align_depth/rs_align_depth_launch.py'),
        launch_arguments={'align_depth': 'true', 'linear_accel_cov': '1.0', 'unite_imu_method': 'linear_interpolation'}.items(),
    )

    # Synchronize RGB, depth and camera_info into a single rgbd_image topic
    rgbd_sync_node = launch_ros.actions.Node(
        package='rtabmap_sync',
        executable='rgbd_sync',
        namespace='/rtabmap',
        output='screen',
        parameters=[{'approx_sync': False}],
        remappings=[('rgb/image', '/camera/camera/color/image_raw'),
                    ('depth/image', '/camera/camera/aligned_depth_to_color/image_raw'),
                    ('rgb/camera_info', '/camera/camera/color/camera_info')],
    )
    
    # Build a colored point cloud from the synchronized RGB-D images
    points_xyzrgb_node = launch_ros.actions.Node(
        package='rtabmap_util',
        executable='point_cloud_xyzrgb',
        namespace='/rtabmap',
        output='screen',
        parameters=[{'decimation': 4, 'voxel_size': 0.05, 'approx_sync': False}],
        remappings=[('cloud', 'depth/color/voxels')],
    )
    
    # Visual (RGB-D) odometry, published on /vo; TF publishing is left to the EKF
    rgbd_odometry_node = launch_ros.actions.Node(
        package='rtabmap_odom',
        executable='rgbd_odometry',
        namespace='/rtabmap',
        output='screen',
        parameters=[{
            'frame_id': 'base_link',
            'publish_tf': False,
            'publish_null_when_lost': True,
            'guess_from_tf': True,
            # RTAB-Map's own parameters (e.g. Odom/..., Vis/...) are passed as strings
            'Odom/FillInfoData': 'true',
            'Odom/ResetCountdown': '1',
            'Vis/FeatureType': '6',
            'OdomF2M/MaxSize': '1000'}],
        remappings=[
            ('rgb/image', '/camera/camera/color/image_raw'),
            ('depth/image', '/camera/camera/aligned_depth_to_color/image_raw'),
            ('rgb/camera_info', '/camera/camera/color/camera_info'),
            ('odom', '/vo')],
    )

    # Lidar (ICP) odometry, published on /scan_odom
    icp_odometry_node = launch_ros.actions.Node(
        package='rtabmap_odom',
        executable='icp_odometry',
        namespace='/rtabmap',
        output='screen',
        parameters=[{
            'frame_id': 'base_link',
            'scan_downsampling_step': 1}],
        remappings=[('scan', '/scan'), ('odom', '/scan_odom')],
    )
    return launch.LaunchDescription([
        rs_camera_launch,
        rgbd_sync_node,
        points_xyzrgb_node,
        rgbd_odometry_node,
        icp_odometry_node,
    ])
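
To sanity-check that both odometry outputs are published in the frames the EKF and slam_toolbox expect, a small subscriber can print them (a debugging sketch only, not part of the setup; the /vo and /scan_odom topic names come from the remappings above):

import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


class OdomFrameChecker(Node):
    """Print frame_id / child_frame_id of the two odometry topics remapped above."""

    def __init__(self):
        super().__init__('odom_frame_checker')
        for topic in ('/vo', '/scan_odom'):
            self.create_subscription(
                Odometry, topic,
                lambda msg, t=topic: self.get_logger().info(
                    f'{t}: frame_id={msg.header.frame_id}, child_frame_id={msg.child_frame_id}'),
                10)


def main():
    rclpy.init()
    rclpy.spin(OdomFrameChecker())


if __name__ == '__main__':
    main()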

Then the localization with the EKF (robot_localization) uses this YAML file:

ekf_filter_node:
    ros__parameters:
        publish_tf: true

        # 1. Set the map_frame, odom_frame, and base_link frames to the appropriate frame names for your system.
        #     1a. If your system does not have a map_frame, just remove it, and make sure "world_frame" is set to the value of odom_frame.
        # 2. If you are fusing continuous position data such as wheel encoder odometry, visual odometry, or IMU data, set "world_frame"
        #    to your odom_frame value. This is the default behavior for robot_localization's state estimation nodes.
        # 3. If you are fusing global absolute position data that is subject to discrete jumps (e.g., GPS or position updates from landmark
        #    observations) then:
        #     3a. Set your "world_frame" to your map_frame value
        #     3b. MAKE SURE something else is generating the odom->base_link transform. Note that this can even be another state estimation node
        #         from robot_localization! However, that instance should *not* fuse the global data.
        map_frame: map              # Defaults to "map" if unspecified
        odom_frame: odom            # Defaults to "odom" if unspecified
        base_link_frame: base_link  # Defaults to "base_link" if unspecified
        world_frame: odom           # Defaults to the value of odom_frame if unspecified

        odom0: scan_odom
        odom0_config: [true,  true,  true,
                       false, false, false,
                       true,  true,  true,
                       true,  true,  true,
                       false, false, false]

        imu0: /camera/camera/accel/sample
        imu0_config: [false, false, false,
                      false, false, false,
                      false, false, false,
                      false, false, false,
                      true,  true,  true]
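
For reference, this YAML would be loaded into robot_localization's ekf_node roughly like this (a minimal sketch; the config path below is an assumption, and the node name has to match the ekf_filter_node key above):

from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    ekf_node = Node(
        package='robot_localization',
        executable='ekf_node',
        name='ekf_filter_node',  # must match the top-level key in the YAML
        output='screen',
        parameters=['/home/user/ros2_ws/src/amr_one/config/ekf.yaml'],  # assumed location of the file above
    )
    return LaunchDescription([ekf_node])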

and slam_toolbox uses this config:

slam_toolbox:
  ros__parameters:

    # Plugin params
    solver_plugin: solver_plugins::CeresSolver
    ceres_linear_solver: SPARSE_NORMAL_CHOLESKY
    ceres_preconditioner: SCHUR_JACOBI
    ceres_trust_strategy: LEVENBERG_MARQUARDT
    ceres_dogleg_type: TRADITIONAL_DOGLEG
    ceres_loss_function: None

    # ROS Parameters
    odom_frame: odom
    map_frame: map
    base_frame: base_link
    scan_topic: /scan
    use_map_saver: true
    mode: mapping #localization

    # if you'd like to immediately start continuing a map at a given pose
    # or at the dock, but they are mutually exclusive, if pose is given
    # will use pose
    # map_file_name: /home/user/serial_map_2_22
    # map_start_pose: [0.0, 0.0, 0.0]
    #map_start_at_dock: true

    debug_logging: false
    throttle_scans: 1
    transform_publish_period: 0.02 #if 0 never publishes odometry
    map_update_interval: 5.0
    resolution: 0.05
    max_laser_range: 50.0 #for rastering images
    minimum_time_interval: 0.5
    transform_timeout: 0.2
    tf_buffer_duration: 30.
    stack_size_to_use: 40000000 #// program needs a larger stack size to serialize large maps
    enable_interactive_mode: true

    # General Parameters
    use_scan_matching: true
    use_scan_barycenter: true
    minimum_travel_distance: 0.1
    minimum_travel_heading: 0.1
    scan_buffer_size: 30
    scan_buffer_maximum_scan_distance: 20.0
    link_match_minimum_response_fine: 0.1  
    link_scan_maximum_distance: 1.5
    loop_search_maximum_distance: 3.0
    do_loop_closing: true 
    loop_match_minimum_chain_size: 10           
    loop_match_maximum_variance_coarse: 3.0  
    loop_match_minimum_response_coarse: 0.35    
    loop_match_minimum_response_fine: 0.45

    # Correlation Parameters - Correlation Parameters
    correlation_search_space_dimension: 0.5
    correlation_search_space_resolution: 0.01
    correlation_search_space_smear_deviation: 0.1 

    # Correlation Parameters - Loop Closure Parameters
    loop_search_space_dimension: 8.0
    loop_search_space_resolution: 0.05
    loop_search_space_smear_deviation: 0.03

    # Scan Matcher Parameters
    distance_variance_penalty: 0.5      
    angle_variance_penalty: 1.0    

    fine_search_angle_offset: 0.00349     
    coarse_search_angle_offset: 0.349   
    coarse_angle_resolution: 0.0349        
    minimum_angle_penalty: 0.9
    minimum_distance_penalty: 0.5
    use_response_expansion: true

I'm publishing these static transforms:

ros2 run tf2_ros static_transform_publisher -0.02 0 -0.02 0 0 0 base_link laser_link
ros2 run tf2_ros static_transform_publisher 0.02 0 -0.02 0 0 0 base_link camera_link

and launching slam_toolbox with:

ros2 launch slam_toolbox online_async_launch.py slam_params_file:=./src/amr_one/config/mapper_params_online_async.yaml use_sim_time:=false
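
The same two transforms could also go into a launch file instead of two separate terminals (a sketch reusing the exact numbers and frame names from the commands above):

from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    base_to_laser = Node(
        package='tf2_ros',
        executable='static_transform_publisher',
        # x y z yaw pitch roll parent child, same values as the command line
        arguments=['-0.02', '0', '-0.02', '0', '0', '0', 'base_link', 'laser_link'],
    )
    base_to_camera = Node(
        package='tf2_ros',
        executable='static_transform_publisher',
        arguments=['0.02', '0', '-0.02', '0', '0', '0', 'base_link', 'camera_link'],
    )
    return LaunchDescription([base_to_laser, base_to_camera])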

The TF tree is attached (frames_2024-02-14_15 19 47_page-0001).

But in RViz the map is overlapping. Any help?
