Programming question
Lesson Plan
1. Vehicle States
2. Vehicle Dynamics – Bicycle Model
3. Pure Pursuit Controller
Vehicle States
System Dynamics
• A dynamic system is a mathematical description of the relation between an input u and an output y signal. This description is usually given in the form of an ODE (Ordinary Differential Equation).
• The system has so-called states x: variables that allow us to formulate the system behavior as a set of first-order ODEs, one for each of these variables.
• The standard system description consists of the system dynamics and the output equation. The former describes the time behavior of the states as a reaction to the inputs and the initial state; the latter describes the relation between the state and the output.
System dynamics: ẋ = A·x + B·u   (A: system matrix, x: state, B: input matrix, u: input)
Output equation: y = C·x   (C: output matrix, y: output)
Vehicle States
● Vehicle states define the vehicle's behavior/position in its environment
● Vehicle states can be derived from different sensors:
○ GPS (Global Positioning System)
○ Odometry: gyrometer, wheel speed sensor
○ Camera
○ Lidar
System Dynamics
Input → System → Output (block diagram)
Vehicle State – Position
● Position defines a coordinate vector relative to a global or local coordinate system (example: x = 2.5 m, y = 3.8 m)
● Given with respect to the vehicle's center of gravity (CoG)
● The origin of the environment needs to be known
● Normally: X- and Y-position in meters
● Only translation
Vehicle State – Heading
● Heading defines the angle of the direction in which the vehicle currently "looks at" (example: Θ = 105°)
● Measured with respect to the x-axis of the coordinate system
● The origin of the environment needs to be known
● Heading angle Θ is given in radians or degrees
● Be careful: heading can be given in ranges from:
○ -π to π = -180° to 180°
○ 0 to 2π = 0° to 360°
● Only rotation
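Because the two conventions differ, here is a minimal, hedged Python helper for converting between them (function names are our own, not from the course code):

```python
import math

def wrap_to_pi(theta):
    """Normalize an angle to the range [-pi, pi)."""
    return (theta + math.pi) % (2.0 * math.pi) - math.pi

def wrap_to_2pi(theta):
    """Normalize an angle to the range [0, 2*pi)."""
    return theta % (2.0 * math.pi)

# Example: 105 degrees expressed in both conventions
theta = math.radians(105.0)
print(wrap_to_pi(theta), wrap_to_2pi(theta))
```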
Vehicle State – Heading
(Figure: the same headings shown in both conventions, e.g. 0°, 45°, 90° = π/2 ≈ 1.57, 135°, 180° = π ≈ 3.14, 225°, 270° = 3/2·π ≈ 4.71, 315°, 360° = 2π ≈ 6.28 on the 0–2π circle, and ±45°, ±90° = ±π/2 ≈ ±1.57, ±135°, ±180° = ±π ≈ ±3.14 on the −π–π circle.)
Vehicle State – Steering Angle
● Steering angle defines the angle of the direction in which the front wheels currently point (example: δ = 11°)
● It is defined with respect to the center line of the agent
● The steering angle is the same for both wheels
● Steering angle δ is given in radians or degrees
Vehicle State – Vehicle Dynamics
Additional interesting vehicle states based on vehicle dynamics:
● Sideslip angle β
● Slip angle α
● Wheel slip s
Vehicle Dynamics Modelling
Vehicle Dynamics
Which trajectory is the car following: A, B, or C? We cannot tell!
We need information about:
● Speed
● Acceleration
● Steering angle
● Vehicle: mass, length, width, …
Vehicle Dynamics – Why Modelling?
Inputs (e.g. steering wheel angle, motor torque) → real vehicle / simplified representation: vehicle dynamics model
• Differential equations
• Characteristic curve correlations
→ Outputs (e.g. speed, accelerations)
Model accuracy: the degree to which the model output matches the real vehicle.
Vehicle Dynamic Model Types
Single track model → double track model → multi-body simulation → finite element simulation
(ordered by increasing number of parameters)
Single Track Model
Simple vehicle model for lateral dynamics.
Simplifications:
• Wheels of one axle are combined
• Center of gravity (CoG) is at road level
• No rolling
• No pitching
• No wheel load differences left/right
• Vehicle longitudinal speed is constant
• No vertical dynamics
• Small angles
Motion of a Rigid Body
A prerequisite for understanding the
bicycle model is the concept of the
instantaneous center of rotation
(ICR)
For a body undergoing planar movement
ICR is a point that has zero velocity at a
particular instant of time.
The ICR is the point in the moving plane
around which all other points are rotating at
a specific instant of time.
Instantaneous Center of Rotation
If we know the velocity vector direction (red arrow)
of a point (blue dot) in the rigid body, we can draw a
dashed line through the point (blue dot), which is
perpendicular to the velocity vector.
The ICR needs to lie on this dashed line,
If we know the velocity vector direction of at least
two points, we can find the ICR as the intersection
of the dashed lines.
Ackermann steering geometry
Definition of wheel steer angle for the
left (𝛿l) and right front wheel (𝛿r).
Note that the steering wheel angle is
different from the wheel steer angle.
The wheel steer angle is the angle of the
wheels, while the steering wheel angle is
the angle of the steering wheel (i.e. what
you command – from the servo)
Tire Slip
We want the wheels to just roll, but for dynamic maneuvers they will also slip.
If the velocity direction of a wheel center
(red arrow), does not coincide with the
orientation of the wheel (gray dashed arrow),
the tire is slipping to the side.
The angle between the velocity (red) and the
wheel orientation (gray dashed) is known as
the side slip angle, or just slip angle.
In general, it is different for each individual
wheel.
Kinematic Four Wheel Model
The slip angles of all four tires are assumed
to be zero
The wheel orientations are equal to the directions of the wheel velocities.
Rear wheels both have the same
orientation. So we only know that the ICR
is on the line moving through both rear
wheels.
ICR
Kinematic Bicycle Model
The two front wheels as well as the two rear wheels are lumped into one wheel each
+Assumption:
All slip angles are zero
Geometry of the Kinematic Bicycle Model
𝛿: Steering Angle
Convince yourself that this
angle is also 𝛿
R: Radius of curvature of the path traced by the rear axle
L: wheelbase
Geometry of the Kinematic Bicycle Model
tan(𝛿) = L/R
𝛿 = arctan(L/R)
State Equations: Putting it all together
States:
• velocity v
• position (x, y) of the rear axle in the map frame
• heading θ in the map frame
State equations:
Bicycle Model – Instantaneous Center of Rotation (ICR)
State: [x, y, θ, δ]
ẋ = v · cos(θ)
ẏ = v · sin(θ)
θ̇ = ω
ω = v / R
R = L / tan(δ)
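As a sketch of how these equations are used in practice, the following minimal Python snippet (function and variable names are our own, not from the course code) integrates the rear-axle kinematic bicycle model forward by one Euler step:

```python
import math

def step_kinematic_bicycle(x, y, theta, v, delta, L, dt):
    """One Euler step of the rear-axle kinematic bicycle model.
    x, y  : rear-axle position in the map frame [m]
    theta : heading [rad], v: speed [m/s], delta: steering angle [rad]
    L     : wheelbase [m], dt: time step [s]
    """
    x_new = x + v * math.cos(theta) * dt
    y_new = y + v * math.sin(theta) * dt
    theta_new = theta + (v / L) * math.tan(delta) * dt  # omega = v/R, R = L/tan(delta)
    return x_new, y_new, theta_new
```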
Bicycle Model – The reference point is your choice
State: [x, y, θ, δ]
θ is the heading
δ is the steering angle
x, and y are the position
Needs a reference point:
– Center of the rear axle
– CoG
– Front axle
Pure Pursuit
Reference point
Waypoints
Assumptions
• Vehicle is given a sequence of 2D positions, i.e. waypoints, to follow
• Vehicle knows where the given waypoints are in the vehicle’s frame of
reference
– Underlying assumptions that the vehicle can localize itself
• Goal is to follow these waypoints
Geometric Interpretation
We want to control the
steering angle such that the
vehicle follows a given path
x
Goal point: (x, y)
L: lookahead distance
A parameter of the Pure Pursuit algorithm
y
Geometric Interpretation
x
Goal point: (x, y)
Follow the arc of a circle
to reach the goal point
Do you see a problem here?
y
L: lookahead distance
Geometric Interpretation
x
Goal point: (x, y)
But the arc is not unique
How do we make it unique?
y
L: lookahead distance
Finding the right arc to the next waypoint
Target point (TP) on the desired path is identified, which is a look-ahead ld distance away from the vehicle
Target Point
Reference Path
Path the car takes due
to the steering angle
Bicycle Model
Lookahead Distance
Finding the right arc to the next waypoint
We want to choose δ such that the orange vehicle trajectory will move to the target point.
δ = 25°
Finding the right arc to the next waypoint
We want to choose δ such that the orange vehicle trajectory will move to the target point.
δ = 20°
Finding the right arc to the next waypoint
We want to choose δ such that the orange vehicle trajectory will move to the target point.
δ = 15°
Finding the right arc to the next waypoint
Success !
δ = 11.3°
But there is a more
elegant solution than just
trying out a bunch of
different values.
Calculating the steering angle
Due to the ICR definition, the magenta triangle (rear axle, target point, ICR) is isosceles: both of its legs have length R.
From the law of sines, the lookahead chord of length l_d subtends the angle 2α at the ICR, so
l_d / sin(2α) = R / sin(π/2 − α)  ⇒  R = l_d / (2·sin α)
Recall: δ = arctan(L/R)
Therefore, the Pure Pursuit steering angle is
δ = arctan(2·L·sin α / l_d)
where
• α: angle between the heading direction and the line to the target point (do we know α?)
• L: wheelbase (we know this)
• l_d: lookahead distance to the target point (TP) (you choose this)
Calculating Alpha
α is the angle between the heading direction (the car's x-axis) and the line to the target point (x_t, y_t), expressed in the frame of reference of the car.
The lateral distance to the target is y_t, so sin α = y_t / l_d, i.e. α = arcsin(y_t / l_d) (equivalently, α = arctan2(y_t, x_t)).
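Putting the last two slides together, a minimal, hedged Python sketch (names are illustrative, not from the course stack) of the Pure Pursuit steering computation from a target point given in the car frame:

```python
import math

def pure_pursuit_steering(xt, yt, wheelbase):
    """Steering angle toward a target point (xt, yt) given in the car frame
    (x forward, y to the left), using the rear axle as the reference point."""
    ld = math.hypot(xt, yt)        # lookahead distance to the target point
    alpha = math.atan2(yt, xt)     # angle between heading and the line to the target
    # R = ld / (2 sin(alpha))  =>  delta = arctan(L / R) = arctan(2 L sin(alpha) / ld)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)
```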
Picking a goal point
Now that we know how to find the arc to a given waypoint, how do we pick the current waypoint from a list of waypoints?
Picking a goal point
A waypoint
L
1.
Pick the waypoint that is closest to
the vehicle
2.
Go up to the waypoint until you get
to one that is one lookahead
distance away from the car
3.
Use that as the current waypoint
Picking a goal point
What if there are no waypoints exactly L away from the car?
Interpolate between the two waypoints that sandwich the distance L.
What should be the value of L in your curvature calculation in this case?
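A minimal sketch of this waypoint-selection logic, assuming the waypoints are an ordered Nx2 NumPy array (names and the linear interpolation shortcut are our own illustration, not the provided course code):

```python
import numpy as np

def find_goal_point(waypoints, car_xy, lookahead):
    """Pick the current goal point: start at the closest waypoint, walk forward
    until a waypoint is at least one lookahead distance away, and interpolate
    between the pair that sandwiches the lookahead distance."""
    dists = np.linalg.norm(waypoints - car_xy, axis=1)
    i = int(np.argmin(dists))                       # closest waypoint
    while i + 1 < len(waypoints) and dists[i + 1] < lookahead:
        i += 1
    if i + 1 == len(waypoints):
        return waypoints[i]                         # end of path: use the last waypoint
    d0, d1 = dists[i], dists[i + 1]
    t = 0.0 if d1 == d0 else np.clip((lookahead - d0) / (d1 - d0), 0.0, 1.0)
    goal = waypoints[i] + t * (waypoints[i + 1] - waypoints[i])
    # A common choice: use the actual distance to this interpolated point
    # (not the nominal lookahead) as L in the curvature calculation.
    return goal
```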
Updating the goal point (one way to do it)
Each time we have a new pose of the car, we could:
1. Find the current waypoint
2. Actuate towards that waypoint with calculated
steering angle
3. Localize to find the new pose, repeat
Updating the goal point
As the car moves along the path, the current goal point is recomputed so that it stays one lookahead distance L ahead.
Finding the goal point – Easy Way
Particle filter pose
Finding the goal point – Easy Way
Find distance to nearby
points
Particle filter pose
Finding the goal point – Easy Way
Find the index of the
closest point on the
reference path:
Base Projection
Base Projection
Particle filter pose
Finding the goal point – Easy Way
Start from the base projection. Calculate the index of the point on the reference path that is approximately one lookahead distance away: this is the goal point.
path_resolution is the approximate distance between two successive points on the reference path.
(Figure labels: particle filter pose, base projection, goal point.)
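Assuming an approximately uniform path_resolution, the goal index can be computed with a sketch like the following (illustrative only; the provided template may handle this differently):

```python
def goal_index(base_idx, lookahead, path_resolution, num_points):
    """Index of the path point roughly one lookahead distance ahead of the
    base projection, assuming roughly uniform spacing between waypoints."""
    steps = int(round(lookahead / path_resolution))
    return (base_idx + steps) % num_points   # wrap around for a closed raceline
```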
Pure-Pursuit
Goal point (x_g, y_g) (in the map frame); y: coordinate of the goal point in the car frame; l_d: lookahead distance from the base projection to the goal point; α: angle between the car's heading and the line to the goal; particle filter pose (x_pf, y_pf, θ_pf) in the map frame; map origin (0, 0).
Effects of Changing the Lookahead Distance
● The lookahead distance L is a parameter of Pure Pursuit.
● A smaller L leads to more aggressive maneuvering to track tighter arcs, and the tighter arcs might exceed the dynamical limits of the car.
● A larger L leads to a smoother trajectory but larger tracking errors, and might lead to close calls with obstacles.
Effects of Changing the Lookahead Distance
Question:
What is the optimal lookahead that will result in the fastest lap?
Short lookahead:
•Best tracking (minimized lateral error)
•Slowest speed (otherwise, oscillations observed)
Long lookahead:
•Bad tracking (might lead to corner cutting at turns)
•Highest speed (long distance to target, smoother
steering)
Use Lookahead tuning at the waypoint level
Adaptive Lookahead Labeling
Lookahead can be a function of speed
Lateral Controller (racecar's perspective)
Lateral Controller (Adaptive Steering Limits – Visual)
θ1: max steering threshold at min speed (v1: ~35 mph)
θ2: min steering threshold at max speed (v2: ~100 mph)
Between the thresholds, interpolate linearly:
θ = θ1 + (v – v1)/(v2 – v1) · (θ2 – θ1)
v: current velocity
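A hedged sketch of this kind of speed-dependent interpolation, usable both for the steering limit above and for an adaptive lookahead (thresholds and names are illustrative assumptions):

```python
def interp_with_speed(v, v1, v2, y1, y2):
    """Linear interpolation between (v1, y1) and (v2, y2), clamped outside the range.
    Can be used for a speed-dependent steering limit or a speed-dependent lookahead."""
    if v <= v1:
        return y1
    if v >= v2:
        return y2
    return y1 + (v - v1) / (v2 - v1) * (y2 - y1)

# e.g. steering limit shrinking from theta1 at ~35 mph to theta2 at ~100 mph:
# steer_limit = interp_with_speed(current_speed, 35.0, 100.0, theta1, theta2)
# lookahead   = interp_with_speed(current_speed, v_min, v_max, L_min, L_max)
```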
Real World: Las Vegas Motor Speedway – Raceline
Dynamics and Control – What we Do
We have a trajectory
How do we follow it?
Dynamics Model
Need smooth steering and
accurate path tracking
Model Predictive
Control
Dynamic Bicycle Model
(Figure: dynamic bicycle model with front slip angle αf, steering angle δ, front lateral tire force Ffy, distance lf from the CoG to the front axle, sideslip angle β, yaw rate ω, distance lr to the rear axle, rear slip angle αr, rear lateral force Fry, rear longitudinal force Frx, yaw angle ψ, and body-fixed x/y axes.)
Vehicle Dynamics Model – Longitudinal Model
Model Predictive Control
MPC Overview – Objective Function
N – Prediction Horizon = Control
Horizon
recast as Quadratic Program using hpipm in the form
A – Dynamics Matrix
B – Dynamics Matrix
b – Dynamics Vector
Q – Quadratic Cost Matrix
S – Linear Cost Matrix
R – Input Penalty Matrix
q – Linear Output Cost Vector
r – Linear Input Cost Vector
s – slack variable
Zl – Lower Bound Cost for Slacks
Zu – Upper Bound Cost for Slacks
Hector
SLAM
Objectives
– What is Simultaneous Localization and Mapping ?
– What are Occupancy Maps ?
– How are SLAM and Scan Matching related ?
– SLAM in ROS – Hector Mapping
– Google Cartographer SLAM Overview
3
Problem Setting
4
A brief history of SLAM
Why do we need a map?
•
In order to support path planning
•
Limiting the error in state estimates, by providing the opportunity to ‘reset’
•
Later… do we really need a map?
Historical Development (1986-2004): Probabilistic Foundations
•
EKF (you will still find this in visual inertial odometry)
•
Particle Filter (very efficient localization)
•
We will cover these methods next class in the context of localization
Modern Era (2004-Now): Algorithmic Improvements
•
Maximum a-posteriori Estimation
•
Other names: factor graph optimization, graph-SLAM, smoothing and mapping (SAM), bundle adjustment
5
Limitations : Basic Path Planning
• High Level Path Assignments
• 2nd right, 2nd right, 1st right, 1st left, 1st right
6
Race Lines
7
Occupancy Grids
Bresenham’s
Rasterization
Simultaneous Localization and Mapping (SLAM)
Registering the first Scan
Scan Matching
Pose of the Car at t = t1
Laser Scans w.r.t car at Time t = t1
Laser Scans w.r.t car at Time t = t2
59
Scan Matching
Pose of the Car at t = t1
Laser Scans w.r.t car at Time t = t1
Pose of the Car at t=t2
Laser Scans w.r.t car at Time t = t2
60
Baseframe Axes
Raw LiDAR Scans
61
Scans after transforming by ∆ξ at each stage
Mapframe Axes
62
Map Update
63
Hector SLAM – ROS
Multi-Resolution Map Representation
20 cm Grid Cell
10 cm Grid Cell
5 cm Grid Cell
65
Saving the map
66
Saving the map
• ROS Package called
MAP Server
• Allows saving a map
currently being published
over /map topic
• The saved map can be
loaded for future tasks.
67
System Tf tree
Map Frame
Tf Provided by
Hector odometry
Odom Frame
Base Frame
Tf required by
Hector package
Laser Frame
69
Parameters for Hector SLAM : ROS
• map resolution
• map_update_distance_thresh
• map_update_angle_thresh
• laser_max_dist
• update_factor_free
• update_factor_occupied
70
Subscribes to the scan topic
Publishes:
• Occupancy Grid Map
• Robot Pose
Tf frame information
Map Geometry
Map update
thresholds
Log odds
LIDAR properties
Hector
SLAM
Google Cartographer Overview
The Problem
Why did Google (not Waymo) make a SLAM package?
77
What’s different about Cartographer
The contribution of this paper is a novel method for reducing the
computational requirements of computing loop closure
constraints from laser range data.
78
Loop-closure
Two regions in the map are found to be the same region in the world
even though their position is incompatible given the uncertainty estimate
in the map — the classic loop closure problem.
The system must then be able to calculate the transformation needed to
align these two regions to ‘close the loop’.
79
Loop-closure
80
System Overview: Sensor Inputs
Up to four types of sensor measurements: (I) Range, (II) Odometry, (III) IMU, (IV) Fixed Frame Pose
The F1/10 vehicle supplies
two sensors:
2D LIDAR
(Range)
VESC
(Odometry)
81
System Overview: Frontend
Cartographer consists of two
separate subsystems: local
SLAM (frontend or trajectory
builder) and global SLAM
(backend).
The job of local SLAM is
to generate good
submaps.
The job of global SLAM is to
tie the submaps consistently
together.
82
System Overview: Backend
Cartographer consists of two
separate subsystems: local
SLAM (frontend or trajectory
builder) and global SLAM
(backend).
The job of local SLAM is to
generate good submaps.
The job of global SLAM
is to tie the submaps
consistently together.
83
Frontend: Local SLAM
What is a submap?
Submaps are small chunks of the world filled with a fixed amount of registered range data. A submap is considered complete when the local SLAM has received a given amount of range data.
Submaps must be small enough that the drift inside them is below the resolution of the occupancy grid, so that they are locally correct. (The addition of odometry helps deal with submaps that would otherwise drift.)
On the other hand, they should be large enough to be distinct for loop closure to work properly; the amount of range data should be large enough that submaps are distinguishable. More on this later.
What is a submap?
Submaps each have their own static
transform and contain a collection of
registered range-measurements.
Localization: the robot’s
trajectory in a submap is
computed via scan matching.
Consecutive measurements are
connected by constraints which are
‘local’.
Here local means derived from odometry
or recent scan overlaps and resultant
scan matches.
86
What is a submap?
Submaps each have their own static
transform and contain a collection of
registered range-measurements.
Once a motion between two scans is found by the scan
matcher, it goes through a motion filter. A scan is inserted
into the current submap only if its motion is above a certain
distance, angle or time threshold.
Consecutive measurements are
connected by constraints which are
‘local’.
Here local means derived from odometry
or recent scan overlaps and resultant
scan matches.
87
Submap Representation
Submaps can store their range data in a probability grid. For 2D a signed
distance field representation is also possible.
Probability grids are a 2D table where each cell has a fixed size and
contains the odds of being obstructed.
Odds are updated according to “hits” (where the range data is measured) and
“misses” (the free space between the sensor and the measured points).
Updating the submap
1. For every hit, we insert the closest grid point into the hit set.
2. For every miss, we insert the grid point associated with each pixel that intersects one of the rays between the scan origin and each scan point, excluding grid points which are already in the hit set.
3. If a grid point has yet to be observed, it is assigned a value p_hit or p_miss depending on which set it is in.
If a grid point has already been observed, its value is updated through the odds function, odds(p) = p / (1 − p): the new occupancy probability is M_new(x) = clamp(odds⁻¹(odds(M_old(x)) · odds(p_hit))) for a hit, and analogously with p_miss for a miss.
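A minimal Python sketch of this odds-based cell update (the clamp bounds and example probabilities below are illustrative assumptions, not Cartographer's actual configuration):

```python
def odds(p):
    return p / (1.0 - p)

def from_odds(o):
    return o / (1.0 + o)

def update_cell(p_old, p_obs, p_min=0.12, p_max=0.97):
    """Multiply the cell's odds by the odds of the observation
    (p_hit for a hit, p_miss for a miss) and clamp the result."""
    p_new = from_odds(odds(p_old) * odds(p_obs))
    return min(max(p_new, p_min), p_max)

# hit:  p = update_cell(p, p_hit)   with e.g. p_hit  = 0.55
# miss: p = update_cell(p, p_miss)  with e.g. p_miss = 0.49
```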
Loop-closure
90
Does Cartographer use ICP?
No. Cartographer needs to support 3D LIDARs and the correspondence problem is
insidious.
●
Scan matching variants:
○ Iterative closest point (ICP)
○ Scan-to-scan
○ Scan-to-map
○ Map-to-map
○ Feature-based
○ RANSAC for outlier rejection
○ Correlative matching
Cartographer uses the Ceres solver to formulate a nonlinear least-squares correlative scan matching problem.
Correlation-based Scan Matching
●
In ICP, correspondences between two scans are explicitly computed, allowing a
rigid-body transformation to be computed.
●
Susceptible to local minima; poor initial estimates lead to incorrect data
associations and divergence.
●
Correlation-based methods search for a rigid-body transformation (without computing correspondences) that projects one set of LIDAR points (the query scan) on top of a reference map.
●
The reference map is generally implemented as a look-up table that assigns a cost
for every projected query point.
Correlation-based Scan Matching
The scan matcher minimizes, over the pose ξ in the local submap reference frame, the cost
Σ_k (1 − M_smooth(T_ξ h_k))²
summing over each scan point, where h_k is the coordinate of the k-th scan return, T_ξ transforms the scan point from the scan frame to the local submap frame, and M_smooth is a bicubic interpolation of the probability grid.
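A simplified sketch of this cost, assuming m_smooth is some interpolated lookup into the probability grid (Cartographer itself does this in C++ with Ceres and bicubic interpolation; this snippet only illustrates the structure):

```python
import numpy as np

def scan_match_cost(pose, scan_points, m_smooth):
    """Cost of a candidate pose (x, y, theta) in the local submap frame:
    sum over scan points of (1 - M_smooth(T_pose * h_k))^2, where
    m_smooth(x, y) returns the interpolated occupancy probability."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    cost = 0.0
    for hx, hy in scan_points:          # h_k in the scan frame
        mx = x + c * hx - s * hy        # T_pose applied to h_k
        my = y + s * hx + c * hy
        cost += (1.0 - m_smooth(mx, my)) ** 2
    return cost
```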
Backend: Global SLAM
Closing Loops
Goal: is our current scan in one of the
submaps we have already seen?
Pose of the submap is now a decision
variable.
Constraints take the form of relative
poses ξij and associated covariance
matrices Σij . Relative poses now
include both submap and scan
poses.
Probably won’t
get a loop
closure here.
For a pair of submap i and scan j, the
pose ξij describes where in the submap
coordinate frame the scan was matched.
This would be a
good candidate
for loop closure.
96
Loop Closure
The optimization sums, over each scan/submap combination, a residual (see paper) built from the submap poses and scan poses, the relative pose ξ_ij between submap i and scan j, and the associated covariance matrix Σ_ij, wrapped in a Huber loss for outlier rejection.
Localization: Scan Matching
Challenge:
Where is the robot with respect to
the previous frame
Learning Outcome:
LIDAR Scan Matching
Iterative closest point algorithm
Simultaneous Localization and Mapping with Cartographer
Challenge:
How to use tools for map
building.
Learning Outcome:
Understanding Occupancy Grid
Mapping
Algorithm: Hector SLAM
Assignment: ROS Cartographer
Localization: Particle Filter
Challenge:
Given a map of the world and multiple
sensor observations, what is the pose
of my robot?
Learning Outcome:
Understanding particle filter, which is a
version of a bayesian filter
Pure Pursuit
Challenge:
How to track a reference trajectory given a map and the ability to localize?
Learning Outcome:
Closed form geometric approach and
alternatives.
Indy Autonomous Challenge
9
Lesson Plan
1. Localization with odometry
2. Localization with scan matching
3. Iterative Closest Point algorithm
4. Iterative Closest Point-to-Line algorithm
5. Lab overview
10
Localization
•
Determine the state (position and orientation) of a robot with
respect to the environment
•
Localization can be within a global frame of a map, or relative
to one’s starting point
•
Mapping → Localization → Precise Planning → Profit!
11
Localization with odometry
Odometry
• Greek words ὁδός [odos] (route) and μέτρον [metron]
(measurement)
• Odometry – measurement of the route
(credit: Luca Carlone, MIT 6.141/16.405)
13
First attempt: Motion Integration
Odometry: Start at a known pose and integrate control and motion measurements to estimate the current pose
• Integrate dynamics using information from VESC, wheel encoders, IMU, etc.
• Odometry = open loop estimation = error increases over time
Let's say we start here → as the robot moves, notice how the uncertainty increases, until we are no longer able to determine the correct position.
(credit: Luca Carlone, MIT 6.141/16.405)
Mapping with odometry… meets reality
Open loop with no feedback
● Motion is noisy, we cannot ignore it
● Assuming known poses fails!
● Often, the sensor is rather precise
Courtesy: D. Hähnel & Cyrill Stachniss
26
Mapping with LiDAR
Localization using sets of range measurements collected across time,
e.g. Lidar scans
Bad!
Better!
27
Pose Correction Using Scan-Matching
Maximize the likelihood of the current pose relative to the previous pose and map:
x̂_t = argmax_{x_t} { p(z_t | x_t, m_{t−1}) · p(x_t | u_t, x_{t−1}) }
where p(z_t | x_t, m_{t−1}) is the sensor observation model, p(x_t | u_t, x_{t−1}) is the motion model, z_t is the current measurement, u_t is the robot motion, and m_{t−1} is the map constructed so far.
Incremental Alignment
Courtesy: E. Olson
Incremental Alignment
→ maximum value
→ best alignment of two scans
Courtesy: E. Olson & Cyrill Stachniss
30
Different ways to do Scan Matching
• Iterative closest point (ICP)
• Scan-to-scan
• Scan-to-map
• Map-to-map
• Feature-based
• RANSAC for outlier rejection
• Correlative matching
31
Mapping with LiDAR
Localization using sets of range measurements collected across time,
e.g. Lidar scans
32
Motion Model for Scan Matching
Raw Odometry
Scan Matching
Courtesy: D. Hähnel & Cyrill Stachniss
Dead Reckoning
Dead Reckoning + Landmarks
Summary of Scan Matching so far…
• Scan-matching improves the pose estimate (and thus mapping) substantially
• Locally consistent estimates
• Often scan-matching is not sufficient to build a (large) consistent map
Localization with Scan Matching
Application to F1TENTH
Example: Aligning Two Scans
Courtesy: P. Pfaff
48
LIDAR for odometry: Scan matching
Left Wall
Right Wall
49
LIDAR for odometry: Scan matching
Left Wall
Right Wall
x
x
y
y
Car
reference
frame
Scan 1
World reference frame
50
LIDAR for odometry: Scan matching
Left Wall
Calculate most likely
position of car given
previous scene and
current scan data.
Right Wall
Scan 2
x
y
World reference frame
Scan 1
LIDAR for odometry: Scan 1
A1
x
x
y
Car reference
frame
y
World reference frame
52
LIDAR for odometry: Scan 1
A1
x
x
y
Car reference
frame
y
World reference frame
53
LIDAR for odometry: Scan 2
A1
x
x
y
World reference frame
y
Car reference
frame
54
LIDAR for odometry: Scan 2
Premise:
most likely car position at
Scan 2 is the position that
gives best overlap between
the two scenes
A1
x
x
y
Car reference
frame
y
World reference frame
55
LIDAR for odometry: Scan 2
Matching
A1
x
x
y
Car reference
frame
y
World reference frame
56
LIDAR for odometry: Scan Matching
x
y
World reference frame
57
Scan matching: optimization
(Figure: impact coordinates of the i-th LiDAR step in the world frame, out of a total of n = 1084 steps, shown for scans S1 and S2; a measure of match scores how well the transformed scans overlap.)
Scan matching: requirements
Left Wall
Right Wall
● Sufficiently fast scan rate
● A slow scan rate can lead to
few matches between scans.
● Not really a risk for today’s
LIDARs
Scan matching: requirements
Left Wall
Right Wall
Non-smooth, or
heterogeneous, surfaces.
?
Best match between scans
is if car did not move.
61
Scan matching: requirements
Left Wall
Right Wall
Non-smooth, or
heterogeneous, surfaces.
Smooth surfaces all look
the same to the matching
algorithm.
!
Failed scan matching
63
Scan matching in a map
64
In the real world
65
In the real world
66
Localization with Scan Matching
Intuition
Scan matching
Localization using sets of range measurements collected across time,
e.g. Lidar scans
68
Scan Matching using Iterative Closest Point
Goal: Estimate the transformation to move one cloud so that it
is aligned with the other
Take two scans of your 3D or 2D point cloud
Courtesy: Igor Bogoslavskyi
https://nbviewer.org/github/niosus/notebooks/blob/master/icp.ipynb
Compute the center of mass and shift the
point clouds on top of each other
69
Data Association step
Minimize the distance between point pairs
Compute correspondences from P to Q,
For every pi we search the closest qj to it.
Courtesy: Igor Bogoslavskyi
https://nbviewer.org/github/niosus/notebooks/blob/master/icp.ipynb
Transformation step
Compute the rotation and translation
Compute rotation using SVD to get t
70
Recompute data association and alignment
Repeat until convergence so that clouds are aligned
Courtesy: Igor Bogoslavskyi
https://nbviewer.org/github/niosus/notebooks/blob/master/icp.ipynb
71
Dealing with outliers for faster convergence
Courtesy: Igor Bogoslavskyi
https://nbviewer.org/github/niosus/notebooks/blob/master/icp.ipynb
72
Localization with
Iterative Closest Point Scan Matching
Problem Setup
Let’s say my robot is in some environment with
landmarks A, B, C
Landmarks
Robot
74
Problem Setup – landmarks
Problem Setup
At t=0, measurements of distances to A,B,C are taken:
GLOBAL FRAME
76
Problem Setup
At t=0, measurements of distances to A,B,C are taken:
GLOBAL FRAME
77
Problem Setup
At t=0, measurements of distances to A,B,C are taken:
GLOBAL FRAME
LOCAL FRAME
78
Problem Setup
At t=0, measurements of distances to A,B,C are taken:
GLOBAL FRAME
LOCAL FRAME
79
Problem Setup
At t=1, moving in some unknown direction, the
distances to the landmarks will have changed:
80
Problem Setup
At t=1, moving in some unknown direction, the
distances to the landmarks will have changed:
81
Problem Setup
We want to find transform R, that transforms the two
set of points to be the closest:
R?
82
Problem Setup
Why? The transform R represents how much the robot
has moved in time:
R?
83
Problem Setup
Why? The transform R represents how much the robot
has moved in time:
R?
84
How do we find R?
Challenge:
• If we knew A, B, C (landmarks) exactly, then this would be
easy
• We don’t know which measurement is for which landmark
85
How do we find R?
Challenge:
• If we knew A, B, C (landmarks) exactly, then this would be
easy
• We don’t know which measurement is for which landmark
Solution:
• Assume closest points correspond to each other →
“correspondence match”
• Iteratively find the best transform R
86
Iterative search for best transform
1. Make some initial “guess” of R (current guess)
2. For each point in new scan (t=k+1), find closest point in
previous set (t=k) (correspondence search)
3. Make another “better guess” of R (next guess)
4. Set next guess to be the current guess, repeat steps 2-4 until
converges
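A compact sketch of this loop for 2D scans (brute-force nearest-neighbor correspondences; align_svd is the closed-form alignment step sketched after the SVD-based alignment slides below; names are our own):

```python
import numpy as np

def icp(prev_scan, new_scan, R0, t0, max_iter=50, tol=1e-6):
    """Iterative closest point between two 2D point sets (Nx2 arrays),
    starting from an initial guess (R0, t0) and alternating correspondence
    search with a closed-form (SVD) alignment step until the update is small."""
    R, t = R0, t0
    for _ in range(max_iter):
        moved = new_scan @ R.T + t                        # apply current guess
        # correspondence search: nearest previous point for every new point
        d = np.linalg.norm(moved[:, None, :] - prev_scan[None, :, :], axis=2)
        matches = prev_scan[np.argmin(d, axis=1)]
        R_new, t_new = align_svd(new_scan, matches)       # closed-form step (see below)
        if np.allclose(R_new, R, atol=tol) and np.allclose(t_new, t, atol=tol):
            break
        R, t = R_new, t_new
    return R, t
```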
What makes a "better guess"? → Need a metric
How do you choose between transform (1) or transform (2)?
(Assume correspondences have been found.)
Naive metric: sum up the squared distances between corresponding points.
With this metric, (2) < (1), so (2) is better.
However, the naive metric is sensitive to noise when correspondences are unknown.
Better Candidate Match Function
Point to projected point (point-to-point) metric:
Closest point to B is 3
97
Better Candidate Match Function
Find projection of point onto reference surface:
Closest point to B is 3
Find projected point on line
segment 2-3, distance is b’
98
Better Candidate Match Function
Squared sum of distances (point-to-point)
Closest point to B is 3
Find projected point on line
segment 2-3, b’
Square the sum of projected
distances
99
How do you choose between I or II
Choose (2)
Score (1) > Score (2)
(2)
(1)
How do you choose between transform 1
or transform 2?
100
Vanilla Iterative Closest Point (ICP)
Uses point-to-point metric
Algorithm makes an initial guess of transform
101
Vanilla Iterative Closest Point (ICP)
Uses point-to-point metric
Iteratively improves the guess to
reduce the value of the metric
102
Vanilla Iterative Closest Point (ICP)
Uses point-to-point metric
Repeats iterations of algorithm
until convergence to solution
103
ICP Example
104
Find Local Transformation to Align Points or Surfaces
This section is courtesy of Cyrill Stachniss
The Basic Alignment Problem with known correspondences
• Given two point sets:
with correspondences
• Wanted: Translation and rotation that minimize
the sum of the squared errors:
Key Idea
If the correct correspondences are known, the correct
relative rotation/translation can be computed directly
Single step
Special Case of the Absolute Orientation Problem
• In the absolute orientation problem, we look for the
similarity transform
• transforming 3D point sets
• we need only the rigid body transform, i.e.,
that
, so
Formal Problem Definition
• Given corresponding points:
• and weights (optional):
• Find the parameters
of the rigid body transform
so that
• so that the squared error is minimized
Solution for Computing the Rigid Body Transform
• Rotation:
• Translation:
• with
110
SVD-Based Alignment (1)
• Compute means of point sets
• Compute mean-reduced coordinates
• Compute cross covariance matrix
SVD-Based Alignment (2)
• Compute SVD
• The rotation matrix is given by
• Translation vector is given by:
• Translate and rotate points:
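A minimal NumPy sketch of these SVD-based alignment steps for 2D point sets with known correspondences (unweighted; names are our own):

```python
import numpy as np

def align_svd(P, Q):
    """Closed-form rigid alignment (R, t) that maps point set P onto point set Q
    (both Nx2, rows are corresponding points): mean-reduce, build the
    cross-covariance matrix, take its SVD, and recover R and t."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - p_mean, Q - q_mean
    H = Pc.T @ Qc                                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```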
Point-to-Line Iterative Closest Point
(PL-ICP algorithm by Andrea Censi, 2009)
Main improvements:
1. Point-to-point metric → point-to-line metric
2. Clever "tricks" in correspondence search
Point-to-point
Given a point in current scan and corresponding line segment
Point from current scan
Points from the previous scans
115
Point-to-point
Point-to-point measures the distance as the distance to the
nearest point on the segment
Projection onto a line segment
116
Point-to-point vs Point-to-line
Point-to-line measures the distance as the distance to the nearest line containing the segment.
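The two metrics can be compared with a small sketch (2D NumPy, illustrative names):

```python
import numpy as np

def point_to_segment(p, a, b):
    """Point-to-point metric: distance from p to the nearest point on segment a-b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def point_to_line(p, a, b):
    """Point-to-line metric: distance from p to the infinite line through a and b."""
    ab = b - a
    n = np.array([-ab[1], ab[0]])          # normal to the segment direction
    n = n / np.linalg.norm(n)
    return abs(np.dot(p - a, n))
```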
Why is it faster?
From perspective of one projected point
With point-to-point metric,
the contours of the level sets
are concentric and centered at
the projected point
(Censi 2009, Figure 1(b))
120
Why is it faster?
Level sets approximate actual surface better
(Censi 2009, Figure 1(c))
With point-to-line metric, the
contours of the level set are
lines.
Thus, the level sets
approximate surface better
This means we have a more
accurate approximation of the
error.
Faster convergence
121
Why is it faster?
• Assume transforms are vertical translations (points move up and down only)
Point-to-point Metric
Point-to-line Metric
All transforms from
q0 until q2 will have
the same score
140
Point-to-line Metric
With point-to-line,
next iteration will go
directly from q0→ q2
141
Why is this better?
Space of transforms is reduced → faster convergence
(Censi 2009, Figure 2(b))
F1tenth Autonomous Racing
CS 4501
ROS Cartographer SLAM, AMCL Particle Filter,
and Pure Pursuit Race Trajectory Following
Assignment Overview:
[Section A] SLAM: You will first create a 2D occupancy grid map for the Rice 242 track using ROS Cartographer SLAM. Section A in this assignment walks you through how to do this and save a map. We have provided everything needed for this.
[Section B] AMCL Localization: Having built the map, you will use the AMCL particle filter to localize the F1tenth racecar on that map. We have provided everything needed for this.
[Section C] Waypoint Logging: Once you are able to obtain the pose of the car from AMCL, you can
teleop the car around the track to roughly drive a good racing line. While this is happening, you will log the
poses of the car on the map into a trajectory of waypoints. We have provided everything needed for this.
[Section D] Pure Pursuit: Now you have a map, a reference trajectory (the saved waypoints), and the ability
to localize the car in that map. You will implement the pure-pursuit trajectory following algorithm to
optimize your lap time.
Deliverables and Grading
>> What do you need to submit ?
Submit the following
1. PDF assignment report – A single PDF document addressing the questions Q1-Q3 below.
2. The base_map.pgm file for the map
3. The f1tenth_purepursuit package you created with the pure_pursuit.py node and any launch files you
created.
4. Link to a YouTube video (add the video link to a text file) showing:
(a) Map creation: The video should show both a screen recording from Rviz (with all the relevant topics added for visualization) as well as a video of what is occurring with the car on the track.
(b) Particle Filter Localization: The video should show both a screen recording from Rviz (with all the relevant topics added for visualization) as well as a video of what is occurring with the car on the track.
(c) Pure Pursuit: The video should show a screen recording from Rviz with the reference path, pose, target and the steering angle visualized, synchronized with the video of what is occurring with the car on the track.
The relevant Rviz visualization topics are described in their respective sections.
>> What do you need to demo ?
During the demo, you will launch your pure_pursuit node and Rviz using a single launch file called
pure_pursuit.launch as part of the f1tenth_purepursuit package.
This means that all the arguments needed for the pure_pursuit node to run properly are passed from the launch file itself, and the same launch file launches Rviz showing the map, the pose of the robot from AMCL, the reference path, the base projection point, the target point, and the steering direction computed by pure_pursuit. The launch files for move_base and localization can be separate.
Let the car run multiple laps as we ask some questions about your implementation.
>> Prepare a PDF report with the following:
Q1: [Mapping]: An image of the map that you created using ROS cartographer. An Rviz screenshot is fine
similar to the example we have provided at the end of Section A. Be sure to add the ‘Trajectory >> Path’ topic
as described in Section [A] later.
Q2: [Localization and Raceline] An image (screenshot) of the car being localized by the particle filter on
the same map as above (Section B) but also showing the reference path you created using trajectory_builder
(Section C).
Q3: [Pure Pursuit] How did you arrive at the value of the lookahead distance for pure pursuit ?
>> Detailed Grading:
⃞ [50 pts] PDF Report (10 (mapping) + 20 (localization + raceline) + 20 (lookahead))
⃞ [20 pts] base_map.pgm file submitted
⃞ [60 pts] Video Submission (20 (mapping) + 20 (localization) + 20 (pure pursuit))
⃞ [20 pts] Launch File for Pure Pursuit
⃞ [130 pts] Pure Pursuit Implementation and Demo – split into
⃞ [10 pts] Base Projection Calculation
⃞ [10 pts] Target/Goal Calculation
⃞ [10 pts] Look Ahead Distance Tuning
⃞ [40 pts] Pure Pursuit Steering Angle Calculation
⃞ [40 pts] Pure Pursuit Demo
⃞ [20 pts] Pure Pursuit Rviz visualization during the demo
⃞ [30 pts] Dynamic Velocity Scaling
Preparing the F1tenth stack for this assignment
First, we need to install and configure a few launch files on the F1tenth vehicle (denoted by SSH notation)
as well as the local Ubuntu machine (denoted by keyword Local Machine)
On your Ubuntu Local Machine:
Follow the instructions below before you connect to the car as these require an Internet connection.
We will prepare the Local Machine to run cartographer.
1. Open a terminal instance and run the following commands: This is installing a few tools to help build
the cartographer package and help us save maps.
a. sudo apt-get update
b. sudo apt-get install -y python-wstool python-rosdep ninja-build stow
c. sudo apt-get install ros-melodic-map-server
2. Now git pull the latest changes from the course labs repository:
https://github.com/linklab-uva/f1tenth-course-labs
3. Three additional packages have been added to this repo:
a. cartographer
b. cartographer_ros.
c. f1tenth_purepursuit
4. Move these 2 Cartographer packages to the catkin_ws/src folder on the local machine. This assumes
that a ROS catkin_ws already exists on the local machine. If not create that first using
catkin_init_workspace.
5. Your ROS workspace on the Local Machine should look like:
catkin_ws >>
devel
build
src >>
cartographer
cartographer_ros
…
6. Now you need to install cartographer_ros dependencies. First, we use rosdep to install the required
packages. The command ‘sudo rosdep init’ will print an error if you have already executed it since
installing ROS. This error can be ignored.
From the catkin_ws root folder run the following commands one at a time:
a. sudo rosdep init
b. rosdep update
c. rosdep install --from-paths src/cartographer* --ignore-src --rosdistro=melodic -y -r
7. Cartographer uses the abseil-cpp library that needs to be manually installed using this script (from the
catkin_ws folder) run :
a. src/cartographer/scripts/install_abseil.sh
b. sudo apt-get remove ros-${ROS_DISTRO}-abseil-cpp
8. Finally Build and install: Regular catkin_make does not work for the ROS cartographer package so we
need to use the following command from the catkin_ws root folder:
a. catkin_make_isolated --install --use-ninja
Note: From this point onwards, if you want to run catkin_make and build any package in the catkin_ws folder, we will need to use the catkin_make_isolated command, since we have copied the cartographer packages into the catkin_ws folder.
This is all we need to do on the Local machine, we can now connect to the F1tenth car and start preparing the
car for this assignment.
On your F1tenth Racecar (SSH) :
Connect to your car.
Ssh into your team’s car and make the following edits:
1. First install the map saver package on the racecar.
a. [SSH Session] sudo apt-get install ros-melodic-map-server
2. Verify/Edit base_mapping.launch:
Be sure to source the workspace first:
$ source ~/depends_ws/devel/setup.bash
a. In a SSH session: $ rosed base_mapping base_mapping.launch
b. Change default value of argument ‘car_name’ to ‘car_X’, where ‘X’ is your car’s number. This might
already be changed correctly on your car, but verify that it is correct.
3. Verify/Edit localization.launch:
a. In a SSH session: $ rosed particle_filter localization.launch
b. Verify/Change ‘car_name’ default value to ‘car_X’
c. Verify/Change ‘scan_topic’ default value to ‘car_X/scan’
d. Verify/Change ‘odom_topic’ default value to ‘car_X/odom’
4. Change the configuration of the VESC board. Open the following file on the car:
~/depend_ws/src/F1tenth_car_workspace/move_base/config/vesc.config, and update the vehicle RPM values. This will lower the limits of vehicle speed, meaning that when you control the throttle from the joycon for teleoperation, the car will be less sensitive to the throttle and will be easier to drive around at low speeds.
● brake_min: -4000.0
● brake_max: 4000.0
● speed_min: -4000
● speed_max: 4000
Triple CAUTION!!!: Be careful while editing the vesc config file. Any unintended edits can break the ability of
the car to drive properly.
5. When you performed the git pull from https://github.com/linklab-uva/f1tenth-course-labs, the third package f1tenth_purepursuit was included.
Copy the f1tenth_purepursuit package from your local machine to catkin_ws/src on the car. You can use sftp or scp for this.
Your ROS workspace on the car (SSH) should look like:
catkin_ws >>
devel
build
src >>
f1tenth_purepursuit
packages you created for other assignments such as ‘race’
…
$ run catkin_make on the car and build f1tenth_purepursuit on the car.
Now you are all set to start the assignment
[A] 2D Occupancy Grid Map – ROS Cartographer SLAM
1. Connect to the race car:
$ ssh nvidia@192.168.1.1
2. Launch the move_base launch file on the car:
$ roslaunch move_base move_base.launch [SSH session] (Remember to source your env)
3. On the Local Machine, run the following command: (so cartographer will run on the Local Machine)
$ roslaunch cartographer_ros f110_cartographer.launch
4. A Rviz window should automatically open on your local machine and you should see something like this:
If you zoom in to the Rviz grid, you can in fact see that a coordinate frame (0,0) is placed at the starting point.
5.1 Add the following additional topic to Rviz:
Add by topic: ‘Trajectory >> Path’
6. On the local machine, in another terminal session launch remote_teleop.launch
$ roslaunch move_base remote_teleop.launch [Local Machine]
Drive around and build the map: Use the joystick controller to drive the car around the track slowly (the VESC config you changed should help with this). You can keep track of the map-building progress via Rviz. Avoid making sharp turns and driving backwards. You can make multiple laps, giving cartographer chances for loop closure.
Fig. Your map will look something like this
6. [Saving the map] We will now use map_saver on the car to save the map. The ROS cartographer session
can continue to run while you save the map. In a new SSH session, we will save the map at a specific location on
the car.
First source the depends_ws ROS workspace[SSH session]
$ source ~/depends_ws/devel/setup.bash
$ roscd base_mapping/map/
(if the map folder is not present, you can create it inside the base_mapping folder first)
$ rosrun map_server map_saver -f base_map
Fig.4 Save the map, and name it as “base_map”
The 2D occupancy grid map built by cartographer has been saved in the files base_map.pgm and base_map.yaml in the
base_mapping/map/ folder on the car
You can stop the cartographer nodes now and close Rviz.
[B] AMCL Particle Filter Localization
Now we have a map, we will localize the car within that map using AMCL particle filter.
1. $ roslaunch move_base move_base.launch [SSH session] (ignore this command if move_base is already running)
2. $ roslaunch particle_filter slam.launch [SSH session]
This starts the AMCL package nodes.
● The map (previously saved) is loaded.
● The Odometry message is recognized
● The LIDAR scans are subscribed to.
You should wait until you see the Received Odometry and LIDAR message confirmations as shown below:
Fig.5 received odom and lidar telemetry slam particle filter
3. Open RVIZ on your [Local Machine], and add following topics:
By topic:
● ‘map’
● ‘laser_scan’, change settings: ‘size’: 0.05; ‘Color Trans…’: FlatColor; ‘Color’: preferably Red
● Particle_filter >> viz:
▪ inferred pose >> Pose (set its color to green)
▪ particles >> posearray
Fig. 6 Settings in RVIZ for AMCL particle filter
3. $ roslaunch move_base remote_teleop.launch [Local computer]
Drive the car around slowly, and you should be able to see the pose estimates generated from AMCL. You
should also be able to see the particle arrays as well (red arrows).
The AMCL particle filter reports the pose of your car (in the map frame) on the following topic:
/car_x/particle_filter/viz/inferred_pose
[C] Waypoint Logging – Racing Trajectory
Now that the map is built and you can localize the car within the map and obtain its pose, we can start logging a racing line trajectory.
The AMCL particle filter reports the pose of your car (in the map frame) on the following topic:
/car_x/particle_filter/viz/inferred_pose
In order to generate a reference trajectory, the idea is to drive the car around the track roughly at the
trajectory you want to take, and simply log the poses reported by the localization node (AMCL).
To do this, we have provided a node called trajectory_builder.py as part of the f1tenth_purepursuit package. If you followed the stack preparation instructions, this node should be located inside the f1tenth_purepursuit/utils folder in the f1tenth_purepursuit package that you git pulled previously and built in your catkin_ws/src/ folder on the car.
You should go through the code for this node, since you need to specify certain paths based on your local
machine. Briefly, this node does the following:
● It subscribes to the /car_x/particle_filter/viz/inferred_pose topic from AMCL
● It reads the reported pose of the car
● It logs a new waypoint if the distance from the previous waypoint is greater than a threshold called path resolution that you can set
● The path that you drive can be visualized in Rviz while AMCL is running
● The logged path is saved in a csv file that is created when you kill this node
1. Before running this node. Ensure that the path where the csv file will be stored is correctly specified in
the node on Line 41. The name of the csv file is provided as an argument when you run the node
(more on this below). The path should be such that you save the csv in the f1tenth_purepursuit/path/
folder.
2. [SSH Session] Start the move_base node on the car (ignore if already running)
$ ssh nvidia@192.168.1.1
$ source depend_ws/devel/setup.bash
$ roslaunch move_base move_base.launch
3. [SSH Session] Start particle filter (AMCL) on the car (ignore if already running)
$ ssh nvidia@192.168.1.1
$ source depend_ws/devel/setup.bash
$ roslaunch particle_filter slam.launch
4. [Local Machine] Start the remote teleop node and verify you can drive the car around before
starting the trajecotry_builder node.
$ source depend_ws/devel/setup.bash
$ roslaunch move_base remote_teleop.launch
5. [SSH Session] Start the trajectory builder. When we run this node we need to provide 3 arguments to the node:
a. Argument 1 is the car name car_X, where X is your team number, e.g. car_9
b. Argument 2 is the csv filename where logged points will be stored – this file will be created by the trajectory_builder itself and stored at the path specified in line 41.
c. Argument 3 is a Boolean flag to indicate whether you would like to visualize the logged path as it is being generated – you can set this to true or false. If set to true, you will see a topic /trajectory_builder/visualize_path created that can be added to Rviz as you drive around.
putting this all together:
$ ssh nvidia@192.168.1.1
$ source depend_ws/devel/setup.bash
$ rosrun f1tenth_purepursuit trajectory_builder.py car_9 raceline true
Now drive a loop around the track using the remote teleop, then kill the trajectory_builder node – the
csv file is created upon node exit.
[D] Implement the Pure-Pursuit Trajectory Following Method
Now you will implement the pure pursuit trajectory following method as discussed in the lectures.
We have provided template code for a node called pure_pursuit.py that can be used as a starting point for this part.
This code is heavily commented and you should carefully review the template code for detailed instructions.
In this document, we only list the high level things that you need to do:
1. First, modify Line 36 in the construct_path() function to point to exactly where the csv file created by trajectory_builder is located.
2. TODO 1 – Using the reference path, find the base_projection of the car on the reference line.
3. TODO 2 – Fine-tune the lookahead distance (default is set to 1 m).
4. TODO 3 – Using the base projection point and the lookahead distance, compute the target/goal point for pure pursuit.
5. TODO 4 – Implement the pure pursuit calculations for computing the steering angle using the pose of the car (x, y, heading) and the coordinates of the target point.
6. TODO 5 – Convert the computed steering angle into a value between [-100, 100] to be sent to the steering_angle command.
7. TODO 6 – Implement dynamic velocity scaling.
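As one hedged illustration of TODO 5 and TODO 6 (the actual command convention, steering limits, and speed values are defined by your car and the provided template, not here):

```python
import math

MAX_STEER_RAD = math.radians(30.0)   # assumed physical steering limit (illustrative)

def steering_to_command(delta):
    """Map a steering angle [rad] to a command in [-100, 100]."""
    cmd = 100.0 * delta / MAX_STEER_RAD
    return max(-100.0, min(100.0, cmd))

def scaled_speed(delta, v_straight, v_turn):
    """Dynamic velocity scaling: drive fast when the commanded steering is small,
    slow down as the steering angle grows."""
    frac = min(abs(delta) / MAX_STEER_RAD, 1.0)
    return v_straight - frac * (v_straight - v_turn)
```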
Once you fill in the missing pieces, this node can be run using the following:
$ ssh nvidia@192.168.1.1
$ source depend_ws/devel/setup.bash
$ rosrun f1tenth_purepursuit pure_pursuit.py car_9 raceline
Note that pure_pursuit.py also takes 2 arguments as inputs:
– car_x: your car's number
– the csv filename of the reference path (e.g. raceline)
Finally, this node will also publish a topic for you to visualize the pose of the car, the base projection, the target point, etc. in Rviz (see the template for details). You should run Rviz and add the following topic to visualize the pure_pursuit points: /car_x/purepursuit_control/visualize.
Note: While it's good to run slowly while debugging, remember we purposely limited the VESC to aid with mapping. You should set the vehicle RPM back to the default values in the vesc.config file to go faster:
● brake_min: -20000.0
● brake_max: 20000.0
● speed_min: -20000
● speed_max: 20000