MOBRO Localisation Slides


Mobile Robots: Localisation

Gaëtan Garcia, École Centrale de Nantes

Objectives

Understand the general concepts of localisation.

Understand the interest of hybrid localisation algorithms.

Grasp the basics of the Kalman filter, which is the most common algorithm in localisation problems.

Be able to use observability criteria to analyze localisation problems.

Course outline

Chapter I – Introductory material: definitions, application examples
Chapter II – Absolute localisation methods
Chapter III – Relative localisation methods: odometry
Chapter IV – Example of sequential use of relative and absolute localisations
Chapter V – Hybrid localisation method using a Kalman filter
Chapter VI – Localisation and observability
Chapter VII – The GPS system

Relations with other modules

The kinematic model is the basic model used in localisation.

The last part of the course (observability) has relations with nonlinear systems.

The Kalman filter has relations with Signal Processing.

But, as a whole, this part of the course is fairly independent from the rest.

Chapter I

Introductory material


Mobile robot - Functional diagram

Figure: functional block diagram. Perception, localisation/mapping, planning and control modules interact with the environment. Inputs: mission, a priori knowledge; exteroceptive data (distances, angles, obstacle positions, ...); proprioceptive data (speed, acceleration, ...). The localisation/mapping module produces the « position » and the map, planning produces the desired trajectories, and control uses position, speed and actuator torques.

Definitions

Pose of a solid: its position and orientation in 3D, characterized by 6 independent parameters.

Localisation: determination of some or all variables of the pose (usually only those necessary to execute a given task).

There are several ways to represent the pose, and not always 6 DOF to determine: for example, localisation in 2D.

The classical frames

Classes of localisation

Static localisation: the vehicle is motionless while the sensor data necessary for localisation are read and processed.

Dynamic localisation: the vehicle moves while the sensor data are read and processed.

Quasi-static localisation: the vehicle does move while the sensor data are read and processed, but the motion is negligible (always check this hypothesis).

Other classes of localisation

Relative localisation: performed wrt the (known) initial location.
– Sensor data are used to calculate elementary displacements of the vehicle, which are integrated.
– Uses proprioceptive sensors (encoders, accelerometers, gyrometers, Doppler radars...).

Absolute localisation: performed wrt a fixed frame in the environment.
– Uses exteroceptive sensors (goniometers, range finders...).

Absolute localisation: characteristics

– Localisation wrt the environment.
– Frequency of sensor readings often low.
– Sensor data availability often asynchronous (sometimes due to vehicle motion).
– Data processing sometimes creates time delays (e.g. vision).
– The number of external measurements can vary (e.g. loss of a GPS satellite).
– Precision varies with the number of beacons and their position.

Relative localisation: characteristics

– No reference to the environment, and hence always available (e.g. even in the fog for the sailor).
– Possibility to use high frequencies.
– Possibility to use a constant calculation period.
– Processing time of data usually short.
– Precision degrades with time or traveled distance.

The (old) sailor method: relative and absolute localisations

Figure: starting from the initial position at time t0, the estimated path (integrated from the measured speeds vi and headings θi) drifts away from the real path. Sensors: log + compass (+ map of currents); bearings λ1, λ2 to known landmarks give an absolute fix.

Hybrid localisation: intermixing absolute and relative

Idea: associate external and internal sensors.
Goal: retain the advantages of absolute and relative localisations, which are complementary.

begin
  Calculate initial location
  loop (fast)
    Read internal sensors (e.g. encoders)
    Calculate relative localisation
    if external sensor data available then
      Update estimated location
    endif
  endloop
end

Chapter II

Absolute localisation methods

The surveyor's problem

Localisation of a point from two known ones.

Figure: known points A and B, unknown point P, measured angles α1 and α2.

α1 (resp. α2) is measured by setting a goniometer above A (resp. B) and measuring the angular separation between B (resp. A) and P.

How to calculate the position of P?

The surveyor's method

Figure: local frame (x, y) aligned with the baseline of known length d between the two known points; measured angles α1 and α2; distances b1 and b2 from the known points to the unknown point P.
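As an illustration, here is a minimal numerical sketch of this triangulation in Python, using the convenient local frame of the figure (first known point at the origin, second at (d, 0)); the function name and the numerical values are ours, not from the slides.

```python
import numpy as np

def triangulate(d, alpha1, alpha2):
    # Known points: A at the origin, B at (d, 0). alpha1 is the angle at A
    # between AB and AP, alpha2 the angle at B between BA and BP, both taken
    # towards the same side of the baseline.
    b1 = d * np.sin(alpha2) / np.sin(alpha1 + alpha2)  # law of sines: distance AP
    return np.array([b1 * np.cos(alpha1), b1 * np.sin(alpha1)])

# Hypothetical example: 100 m baseline, angles of 60 and 45 degrees.
print(triangulate(100.0, np.radians(60.0), np.radians(45.0)))
```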

Precision of the surveyor's method

– M: the measurement vector (inputs)
– the covariance matrix of M
– the position vector of point P

NB: this method is general.

Things to remember

It is often possible to simplify calculations by choosing a convenient local frame.

In the case of goniometric systems, it is often useful to use distances as intermediate variables.

By linearizing an input-output relation, it is possible to calculate the covariance matrix of the errors on output variables as a function of the covariance matrix of input variables.
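A short sketch of that linearisation recipe applied to the surveyor's method; the input-output relation g, the nominal angles and the 0.1° measurement noise are assumptions of this example.

```python
import numpy as np

def g(m, d=100.0):
    """Input-output relation of the surveyor's method (angles -> position of P),
    with the first known point at the origin and the second at (d, 0)."""
    a1, a2 = m
    b1 = d * np.sin(a2) / np.sin(a1 + a2)
    return np.array([b1 * np.cos(a1), b1 * np.sin(a1)])

def numerical_jacobian(f, m, eps=1e-7):
    """Numerical derivative of f with respect to the input vector m."""
    m = np.asarray(m, dtype=float)
    f0 = f(m)
    J = np.zeros((f0.size, m.size))
    for i in range(m.size):
        dm = m.copy()
        dm[i] += eps
        J[:, i] = (f(dm) - f0) / eps
    return J

m0 = np.radians([60.0, 45.0])                 # nominal measurements
cov_m = np.diag([np.radians(0.1)**2] * 2)     # hypothetical 0.1 deg std dev
J = numerical_jacobian(g, m0)
cov_p = J @ cov_m @ J.T                       # linearised covariance of P
print(np.sqrt(np.diag(cov_p)))                # approximate std dev on x and y
```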


Localisation using distances to known points

Figure: two known points and the unknown point; with only two distance measurements there is a second solution, symmetric with respect to the line joining the known points.
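A minimal sketch of the two-distance case, again using a convenient local frame (first known point at the origin, second at (d, 0)); the numerical values are hypothetical.

```python
import numpy as np

def from_two_distances(d, r1, r2):
    """Intersection of two circles: known points A = (0, 0) and B = (d, 0),
    measured distances r1 (to A) and r2 (to B). Returns the two candidate
    positions of the unknown point, hence the need for a third measurement
    or extra knowledge to pick the right one."""
    x = (d**2 + r1**2 - r2**2) / (2.0 * d)
    y2 = r1**2 - x**2
    if y2 < 0:
        raise ValueError("inconsistent measurements: the circles do not intersect")
    y = np.sqrt(y2)
    return np.array([x, y]), np.array([x, -y])

print(from_two_distances(10.0, 8.0, 6.0))  # example with hypothetical values
```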

2D static localisation using azimuth angles

Figure: three beacons at known positions, azimuth angles measured in the robot frame, unknown robot posture.

How to calculate the posture

Beacon coordinates in robot frame Rm: written as functions of the measured azimuths and of the unknown robot-to-beacon distances b1, b2, b3.

Inter-beacon distances are invariant (3 equations).

Note: the use of distances as intermediate variables yields a simpler system (try with (x, y, θ) if you are not convinced).

• Find b1, b2 and b3.
• Deduce beacon coordinates in Rm.
• Calculate mTb.
• Calculate 0Tm.
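A numerical sketch of these steps in Python with scipy; the beacon positions, the "true" posture used to simulate the azimuths and the choice of solver are assumptions of this example.

```python
import numpy as np
from scipy.optimize import fsolve

# Known beacon positions in the world frame R0 (hypothetical values).
B0 = np.array([[0.0, 0.0], [10.0, 0.0], [4.0, 8.0]])

# Simulated measurements: azimuths of the beacons in the robot frame Rm,
# generated from an assumed "true" posture so the example is self-consistent.
x_true, y_true, th_true = 3.0, 2.0, np.radians(30)
Rw = np.array([[np.cos(th_true), -np.sin(th_true)],
               [np.sin(th_true),  np.cos(th_true)]])
Bm_true = (B0 - [x_true, y_true]) @ Rw          # beacons expressed in Rm
lam = np.arctan2(Bm_true[:, 1], Bm_true[:, 0])  # measured azimuths

def residuals(b):
    Bm = np.column_stack([b * np.cos(lam), b * np.sin(lam)])
    # Inter-beacon distances are frame invariant: 3 equations, 3 unknowns.
    return [np.linalg.norm(Bm[i] - Bm[j]) - np.linalg.norm(B0[i] - B0[j])
            for i, j in [(0, 1), (1, 2), (0, 2)]]

b = fsolve(residuals, x0=[5.0, 5.0, 5.0])       # step 1: find b1, b2, b3
Bm = np.column_stack([b * np.cos(lam), b * np.sin(lam)])

# Steps 2-4: the beacons are known in both frames -> rigid transform 0Tm.
v0, vm = B0[1] - B0[0], Bm[1] - Bm[0]
theta = np.arctan2(v0[1], v0[0]) - np.arctan2(vm[1], vm[0])
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = B0[0] - R @ Bm[0]
print(t, np.degrees(theta))   # should recover (3, 2) and 30 degrees
```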

6D static localisation using azimuth and elevation angles

Problem: knowing λ1, σ1, λ2, σ2, λ3, σ3, calculate the position and attitude of the sensor frame RS for known beacon positions B1, B2, B3.

A possible implementation

Figure: a CCD camera rotating about a vertical axis, with an encoder on the rotation axis. For a beacon at a known position, the azimuth λ is given by the rotation encoder and the elevation σ by the position of the beacon image relative to the optic axis.

How to calculate the position and attitude

Beacon coordinates in robot frame Rm:

Inter-beacon distances:

Then b1, b2, b3 yield 0Tm as in the 2D case.

Using the intermediate variables allows solving a nonlinear system with only 3 unknowns instead of 6!

Dynamic localisation by a deterministic method

Goal: localise the robot while it moves, with the beacons detected at different locations.
Requires: hypotheses on the motion.

Figure: along the trajectory, the transforms OTB (known), BTCi (calculated), CiTMi (constant known translation) and OTMi (final result) relate the world frame O, the frame B, the detection frames Ci and the robot frames Mi.

The measurements in the dynamic case

The graph corresponds to a straight line motion between measurement instants t0, ti, tj.

The simplest hypotheses

Locally:
– The rolling surface is approximated by a plane.
– The motion is a straight line at constant speed.
– The speed is parallel to axis xm.

Consequences:
– Seven unknowns: position + attitude + speed.
– The hypotheses must be satisfied during the time needed to detect a sufficient number of beacons to determine the seven unknowns.

How to solve

It is necessary to detect four beacons (or one of three beacons seen twice, at different locations).

Similar to previous cases:
– Write the beacon coordinates in the sensor frame corresponding to time t0 (first beacon detection).
– Write the dij² (there are more than necessary).
– Solve in the least squares sense.
– Determine position and attitude at time t0, plus speed.

What are the problems?

If the model of vehicle motion is very simple, it may not correspond to reality.

If the model is more general, the number of unknowns grows. Hence, the model must be valid for a longer time period (detecting more beacons is necessary to obtain more equations).

In any case, the need for a motion model limits the motion dynamics that can be handled by the system.

Chapter III

A relative localisation method: odometry

Recall: kinematic model of a (2,0) type robot

Under the pure rolling hypothesis (see kinematic modelling course):
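The model itself did not survive the transcript; for reference, the standard (2,0) unicycle form is, in our notation (v = linear speed, ω = rotation speed, r = wheel radius, e = distance between the two drive wheels, q_r and q_l = wheel angles):

$$\dot x = v\cos\theta,\qquad \dot y = v\sin\theta,\qquad \dot\theta = \omega,\qquad
v = \frac{r(\dot q_r + \dot q_l)}{2},\qquad \omega = \frac{r(\dot q_r - \dot q_l)}{e}$$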

Model under discrete form: equations of odometry

Beware: there are several forms for the equations of odometry (first order equivalence).

Alternate forms
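One common first-order form, as a Python sketch. The symbols r, e and the numerical values are assumptions of this example; replacing cos(θ + δθ/2) by cos θ gives another first-order-equivalent form, which illustrates the "several forms" remark above.

```python
import numpy as np

def odometry_step(x, y, theta, dq_r, dq_l, r, e):
    """One odometry update for a (2,0) robot.
    dq_r, dq_l: wheel rotation increments (rad) over one sampling period,
    r: wheel radius, e: distance between the two drive wheels."""
    dD = r * (dq_r + dq_l) / 2.0       # elementary travelled distance
    dTh = r * (dq_r - dq_l) / e        # elementary rotation
    # "Mid-point" form: the displacement is applied along the average heading.
    x += dD * np.cos(theta + dTh / 2.0)
    y += dD * np.sin(theta + dTh / 2.0)
    theta += dTh
    return x, y, theta

# Example with hypothetical values: a straight step, then a small turn.
pose = (0.0, 0.0, 0.0)
pose = odometry_step(*pose, dq_r=0.10, dq_l=0.10, r=0.05, e=0.4)
pose = odometry_step(*pose, dq_r=0.12, dq_l=0.08, r=0.05, e=0.4)
print(pose)
```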

Posture errors in robot frame

Figure: real posture Preal and estimated posture Pest (axes xest, yest), with the error expressed in the estimated robot frame:
– eθ: heading error
– ex: longitudinal error
– ey: lateral error

Interpretation in the case of small eθ.
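One common convention for these quantities, written out explicitly (a reconstruction in our notation, with R(θ) the 2D rotation matrix; the slide's exact form may differ):

$$\begin{pmatrix} e_x \\ e_y \end{pmatrix} = R(\theta_{est})^{T}\begin{pmatrix} x_{real}-x_{est} \\ y_{real}-y_{est} \end{pmatrix},\qquad e_\theta = \theta_{real}-\theta_{est}$$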

Reference trajectory

Figure: reference signals xRef(t) and yRef(t) over 0 to 7 s, and the resulting circular path in the (x, y) plane, with the initial location marked (real robot path, x(t) and y(t)).

• The real trajectory is always the circle: imagine that something forces the robot to follow the circle at a constant speed.
• The various estimated trajectories will be different.

Odometry in a near-perfect world

No model errors. High sampling rate. High encoder resolution.

Figure: estimated path in the (x, y) plane and error curves ex (m), ey (m), eθ (°) versus t (s); the errors remain very small.

Odometry with a wheel radius error

A +1% wheel radius error (right wheel) is already a severe error!

Figure: estimated path in the (x, y) plane and error curves ex (m), ey (m), eθ (°) versus t (s); the errors grow steadily along the path.

Odometry with a wheelbase error

A 5 mm / 400 mm error on the wheelbase. Similar problems.

Figure: estimated path in the (x, y) plane and error curves ex (m), ey (m), eθ (°) versus t (s).

Odometry: low resolution encoders

Low resolution encoders (here 100 pulses per revolution) generate small errors compared to geometrical model errors.

Figure: estimated path in the (x, y) plane and error curves ex (m), ey (m), eθ (°) versus t (s).

Odometry: low sampling rate

Odometry can do with low sampling rates (here 50 Hz), and the calculations are simple.

Figure: estimated path in the (x, y) plane and error curves ex (m), ey (m), eθ (°) versus t (s).

Speed estimation: high resolution decreases noise

A higher resolution (1000 pulses per revolution in green, 100 in red) reduces the noise on the speed estimate obtained by numerical difference. Useful if you use the speed estimate in your control, for example.

Figure: actual speed and estimated speeds (rad/s) versus t (s) for the two resolutions.

Speed estimation: high sampling rate increases noise

A higher sampling rate (1000 Hz in red, 100 Hz in green) increases the noise on the speed estimate obtained by numerical difference.

Figure: actual speed and estimated speeds (rad/s) versus t (s) for the two sampling rates.
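A small simulation of both effects (a sketch: the true speed, the resolutions and the sampling periods are hypothetical). The quantisation step is 2π/N rad, so the finite-difference estimate carries a noise of roughly (2π/N)/T, which shrinks with the resolution N and grows when the sampling period T gets shorter.

```python
import numpy as np

def simulate_speed_estimate(omega_true, N, T, duration=1.0):
    """Finite-difference speed estimate from quantised encoder counts."""
    t = np.arange(0.0, duration, T)
    angle = omega_true * t                      # true shaft angle (rad)
    counts = np.floor(angle / (2 * np.pi / N))  # quantised encoder reading
    measured_angle = counts * (2 * np.pi / N)
    return t[1:], np.diff(measured_angle) / T   # numerical difference

for N, T in [(100, 0.01), (1000, 0.01), (100, 0.001)]:
    t, w = simulate_speed_estimate(omega_true=10.0, N=N, T=T)
    print(f"N={N:5d}, T={T:.3f} s -> std of the estimate: {w.std():.2f} rad/s")
```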

Classes of odometry errors

Systematic errors:
– Wheel radius and wheelbase errors
– Misalignment of wheels
– Finite resolution and sampling rate

Non-systematic errors:
– Uneven floor (including unexpected objects)
– Wheel slippage due to:
  • slippery floor
  • over-acceleration and fast turning (skidding)
  • external forces / internal forces (castor wheels)
  • non-point wheel-to-floor contact

Relative importance of errors

Systematic errors:
– They are very serious because their effects accumulate over time.
– Usually dominant on smooth indoor surfaces.

Non-systematic errors:
– Dominate on rough, irregular terrains.
– Difficult to handle because unexpected.
– Can cause complete failure of error evolution models based on statistical worst-case analysis (for example used to empirically determine inter-landmark distances).

Unidirectional square-path test

• Use distances from three robot points to the wall to measure the actual position.
• Compare the odometry-estimated position to the actual position for several tests: set of return position errors.
• Do not compare the return position to the start position (that would include controller errors).

error = final position – calculated position

Examples from Borenstein and Feng

Unidirectional square-path test

Two different problems yield the same result...

• By modifying the wheelbase only, the user obtains a very small final error.
• But there are intermediate errors. Errors compensate in the final position, due to the path being performed in a single direction.
• The unidirectional square-path test is not a good test.

Bidirectional square-path test

• By performing the test in both directions, the previous problem is overcome.
• The errors which compensated in one direction sum up in the other.
• This test, devised by Borenstein & Feng, is also called the University of Michigan Benchmark or UMBmark.

Practical considerations

Orientation errors grow faster for vehicles with a short wheelbase.

Castor wheels can induce slippage when reversing direction, especially when they bear a large part of the load.

Odometry wheels should be thin and not compressible, which may not be compatible with load bearing. This is usually possible for small, lightweight robots with little power.

Odometry with auxiliary wheels

Principle: add a pair of thin, non load bearing auxiliary wheels, typically made of metal with a thin rubber band.

Figure from Borenstein & Feng

The encoder trailer

In some cases, like tracked vehicles, it is extremely difficult to use odometry. The encoder trailer is a solution... if the ground allows!

The University of Michigan encoder trailer

The robot « Melody »

Results with Melody

The experiment was conducted on a lawn, on uneven, wet terrain. Error: 1.15 m for a 26 m path (approx. 4.4%), with a non-zero initial error.

Figure: exact path and odometry estimate in the (x, y) plane (x from about 75 to 100 m, y from about 90 to 120 m), with beacons B1, B2, B3.

Odometry on non planar surfaces: position and attitude parameters

– dc: declivity (gradient); dv: orthogonal cross-fall.
– Comparison with Y, P, R angles: yaw = ψ, pitch = −dc.
– Interest of dc, dv: they are directly measurable by inclinometers.

Figure: world frame (X0, Y0, Z0) and robot frame (x, y, z), with the horizontal plane and a left-handed orthogonal trihedron showing dc and dv.

State: heading defined w.r.t. x0; dc and dv defined w.r.t. the horizontal plane.

Odometry on non planar surfaces: internal sensors setup

Figure: pendular inclinometer (pendulum, frame, viscous fluid), and the rolling plane between time instants i and i+1, with the robot frames Ri, Ri+1 and the sensor frame S.

Odometry on non planar surfaces: problem

Find the relations between the state, the encoder readings and the inclinometer measurements.

Principle: the encoders give U.
Zi known → 0TSi known → 0Rπ known → rolling plane π → in this plane, odometric displacement using a 2D model → projection onto R0 gives δx, δy, δz, δψ and hence Zi+1.

Elementary translation:

Elementary rotation:

The term is always defined, since:

Chapter IV

Example of sequential use of relative and absolute localisations

System description

• Magnetic blocks are inserted in the ground at known locations.
• The robot is equipped with a linear detector which measures the position of the block wrt the center of the detector, when crossed.

Figure: permanent magnet in the ground and linear detector on the robot (robot axes xm, ym; dimensions dM and e).

Principle of operation

The principle of operation is similar to the (old) sailor method.

Between two passages over magnetic stations, the robot uses its wheel encoders for relative localisation.

When it crosses a magnetic station, it calculates its posture relative to the station (and hence its absolute posture) and forgets its previous relative localisation.

Distance between stations depends on odometry performance.

Absolute localisation equations

Figure: robot frames Rm(t1) and Rm(t2) when crossing the magnets P1(xp1, yp1) and P2(xp2, yp2); odometric displacement ΔD between the two crossings, detector measurements d1 (here < 0) and d2 (here > 0), relative heading θrel, robot point M2(x2, y2).

Hypothesis: the robot crosses the station in a straight-line motion.

Absolute localisation equations

Robot point above magnetic block 2:

Localisation result:
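The equations themselves did not survive the transcript. Here is a hedged sketch of one way to obtain this result under the straight-line hypothesis, assuming that the detector centre lies a distance e ahead of the robot point M along xm and that d1, d2 are the lateral readings (along ym) at the two crossings; these conventions and the numerical values are ours.

```python
import numpy as np

def station_localisation(P1, P2, d1, d2, dD, e):
    """Absolute posture at the second crossing of a two-magnet station.
    P1, P2: known magnet positions; d1, d2: detector readings at the two
    crossings; dD: odometric distance travelled between them; e: assumed
    longitudinal offset of the detector centre from M."""
    P1, P2 = np.asarray(P1, float), np.asarray(P2, float)
    # Under the straight-line hypothesis, the vector P1 -> P2 seen in the
    # (non-rotating) robot frame is (dD, d2 - d1).
    theta = np.arctan2(P2[1] - P1[1], P2[0] - P1[0]) - np.arctan2(d2 - d1, dD)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # At the second crossing, P2 is at (e, d2) in the robot frame.
    M2 = P2 - R @ np.array([e, d2])
    return M2[0], M2[1], theta   # absolute posture at time t2

# Hypothetical example: two magnets 1 m apart along x0, robot nearly aligned.
print(station_localisation(P1=(10.0, 5.0), P2=(11.0, 5.0),
                           d1=-0.02, d2=0.03, dD=1.0, e=0.4))
```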

Localisation results

Results with a right wheel radius error of 10%.

Figure: estimated path in the (x, y) plane and error curves ex (m), ey (m), eθ (°) versus t (s).

Further results

Let's switch to an interactive demo under Matlab...

Chapter V

Hybrid localisation method using a Kalman filter

The robot and its sensors

The sensor is a rotating sensor. It detects only one beacon at a time, and not frequently (it is slow).

The robot's motion between two measurements cannot be neglected.

Figure: robot at point M with frame (xm, ym), heading θ and wheel angles ql, qr, in the world frame (x0, y0); beacons B1, B2, B3; measured azimuth λ2.

What do we want to know?

We want to determine the vehicle position and heading X = [x, y, θ]T.

We shall be satisfied with knowing X at discrete time instants: Xk = [xk, yk, θk]T with tk+1 − tk = T.

We suppose that X0 is known with a certain precision: just start motionless and use the method seen in chapter II.

When no beacon is detected

Use odometry. Under a more synthetic form: Xk+1 = f(Xk, Uk).

These equations are the standard form of an evolution model.

X is called the state vector.

But these equations are not perfect

A standard way to take these imperfections into account is to represent them by an additive noise: Xk+1 = f(Xk, Uk) + αk.

Typically, αk is characterised by its covariance matrix Qα.

But what to do with Qα?

What we want: reflect the fact that our confidence in X decreases with successive applications of the odometry equations without external measurements.

Modelling how precision evolves during odometry

Remember: if X and Y are two random vectors with a linear relation between them, Y = M·X, then:

PY = M·PX·MT

The relation between Xk+1 and Xk is not linear, but it is possible to linearize it. Around which point? Not much choice: around Xk.
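Written out in standard extended Kalman filter notation (a reconstruction consistent with the definitions above; the slide's exact symbols may differ), with $A_k = \partial f/\partial X$ evaluated at $(\hat X_k, U_k)$:

$$P_{k+1} = A_k\,P_k\,A_k^{T} + Q_\alpha$$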

The sensor detects a beacon. Measurement: λj

The measurement depends on the actual position and heading of the vehicle, in a way we can write explicitly.

What to do with λj? What information does it bear?

If the actual and estimated positions were equal, what would we see? We would measure:

Why not correct X proportionally to … ?

What terms should the correction include?

• The innovation (Yj minus its predicted value; size m×1 in the general case).
• The quality of the measurements, characterised by Qβ (m×m).
• The last available estimate (n×1), via its supposed quality, its covariance matrix (n×n).
• How a small vehicle displacement influences the measurement around the current estimated position (a matrix of size m×n).

The update equations

Covariance matrix of the innovation (m×m). The correction gain should be inversely proportional to this term.

Classical Kalman filter equations:
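The equations themselves did not survive the transcript; for reference, the classical (extended) Kalman filter update, in the notation used here (C = linearised observation matrix, Qβ = measurement noise covariance), reads:

$$K = P\,C^{T}\left(C\,P\,C^{T} + Q_\beta\right)^{-1},\qquad
\hat X^{+} = \hat X + K\,\big(Y - g(\hat X)\big),\qquad
P^{+} = (I - K C)\,P$$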

Comment on update gain

Figure: a beacon B and the robot axes x, y in a configuration for which the update calculation will not influence state variable x.

Overall algorithm: what happens at time tk+1

// Prediction phase (always performed):
//   Read sensors and calculate the input
//   Predict the state
//   Calculate the linear approximation of the system
//   Propagate the error

If an external measurement Y has been obtained during the last period then
  // Consider that Y corresponds to time tk+1. Perform the estimation phase:
  //   Calculate the gain
  //   Update the state
endif
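Putting the chapter together, here is a self-contained Python sketch of such a filter for the azimuth/odometry problem. The model, the wheel parameters r and e, the noise levels and the numerical values are assumptions of this sketch, not taken from the slides.

```python
import numpy as np

BEACONS = np.array([[0.0, 10.0], [10.0, 10.0], [5.0, -5.0]])  # hypothetical
r, e = 0.05, 0.40                          # wheel radius and track (assumed)
Q_alpha = np.diag([1e-5, 1e-5, 1e-6])      # evolution noise (assumed)
Q_beta = np.array([[np.radians(0.1)**2]])  # azimuth noise (assumed)

def f(X, dq_r, dq_l):
    """Odometry evolution model X_{k+1} = f(X_k, U_k)."""
    dD, dTh = r * (dq_r + dq_l) / 2.0, r * (dq_r - dq_l) / e
    x, y, th = X
    return np.array([x + dD * np.cos(th + dTh / 2),
                     y + dD * np.sin(th + dTh / 2),
                     th + dTh])

def A_jacobian(X, dq_r, dq_l):
    """Linearisation of f around the current estimate."""
    dD, dTh = r * (dq_r + dq_l) / 2.0, r * (dq_r - dq_l) / e
    th = X[2]
    return np.array([[1, 0, -dD * np.sin(th + dTh / 2)],
                     [0, 1,  dD * np.cos(th + dTh / 2)],
                     [0, 0,  1]])

def g(X, j):
    """Observation model: azimuth of beacon j in the robot frame."""
    x, y, th = X
    bx, by = BEACONS[j]
    return np.array([np.arctan2(by - y, bx - x) - th])

def C_jacobian(X, j):
    x, y, _ = X
    bx, by = BEACONS[j]
    q = (bx - x)**2 + (by - y)**2
    return np.array([[(by - y) / q, -(bx - x) / q, -1.0]])

def step(X, P, dq_r, dq_l, measurement=None, beacon=None):
    # Prediction phase (odometry), always performed.
    A = A_jacobian(X, dq_r, dq_l)
    X = f(X, dq_r, dq_l)
    P = A @ P @ A.T + Q_alpha
    # Estimation phase, only when an azimuth has been obtained.
    if measurement is not None:
        C = C_jacobian(X, beacon)
        innovation = measurement - g(X, beacon)
        innovation = (innovation + np.pi) % (2 * np.pi) - np.pi  # wrap angle
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Q_beta)
        X = X + K @ innovation
        P = (np.eye(3) - K @ C) @ P
    return X, P

# Example: one odometry-only step, then a step with a simulated azimuth
# measurement generated from a slightly different "true" state.
X, P = np.array([1.0, 1.0, 0.3]), np.diag([0.01, 0.01, 0.005])
X, P = step(X, P, dq_r=0.10, dq_l=0.10)
X, P = step(X, P, dq_r=0.10, dq_l=0.10,
            measurement=g(np.array([1.01, 1.0, 0.3]), 1), beacon=1)
print(X)
```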

Coherence test

Consider the sensor which detects light beacons. What happens if it detects the reflection of a beacon on a window?

It is possible to test the coherence of a measurement by calculating the associated Mahalanobis distance.

If d < threshold, the measurement is coherent and can be used in the estimation phase.

Note that d depends on the beacon (via C). Calculating the values of d for each beacon makes it possible to identify which one yielded the measurement: no need for signed (self-identifying) beacons.
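A common form of this test, consistent with the notation above (the slide's exact expression did not survive):

$$d = \big(Y - g(\hat X)\big)^{T}\left(C\,P\,C^{T} + Q_\beta\right)^{-1}\big(Y - g(\hat X)\big)$$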

Experimental setup

Figure: the SIREM sensor mounted at the front (« Avant ») of the vehicle; indicated dimensions: 1.150 m, 0.285 m and 0.1 m.

Because the sensor is not above point M, the equations used are slightly different from those presented before.

Experimental setup

Figure: the vehicle with the sensor S, the point M, a castor wheel, and a linear camera used to track the white rope.

• The robot tracks a white rope tightened between known points.
• The points have been precisely localised by surveyors with sub-centimeter accuracy.

Sensor characteristics

– 0.09° in azimuth.
– 0.01° in elevation (not used in 2D localisation).
– For a beacon at 40 m, this corresponds to a 6.5 cm error in x-y and 8 mm in z.
– Slow sensor: 1 revolution in 5 s.
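As a quick order-of-magnitude check (our arithmetic; the small differences with the quoted figures presumably come from rounding or from the exact sensor geometry):

$$40\ \mathrm{m} \times 0.09^\circ \times \frac{\pi}{180^\circ} \approx 6.3\ \mathrm{cm},\qquad
40\ \mathrm{m} \times 0.01^\circ \times \frac{\pi}{180^\circ} \approx 7\ \mathrm{mm}$$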

Experimental results

Figure: error signal (m) versus t (s), together with the estimated +3σyy and −3σyy curves.

• The (estimated) σ is extracted from the covariance matrix.
• The error signal should always be in the ±3σ interval.
• The increase of σ at the end of the path reflects a less favorable beacon-vehicle configuration.
• B2 is sometimes missed at the end: things get worse...

Figure: exact path, odometry-only estimate and 2D-filter estimate in the (x, y) plane, with beacons B1, B2, B3.

Experimental results

Figure: Mahalanobis distances versus the number of measurements on B1. The distance calculated with B1 when B1 is actually detected stays below about 0.1, whereas the distances calculated with B2 and B3 for the same measurements are of the order of 6000 to 18000.

Experimental results

Figure: heading error (deg) versus t (s), with the corresponding estimated ±3σ curves.

Experimental results

Figure: estimated errors (m) along the path; the Y errors are repeatable over three tests along the same path. 1° of roll angle generates 3 cm of lateral error.

Experimental results

Figure: localisation error (m) versus t (s) over the first and second revolutions, for the Kalman filter and for odometry, when the backward shift of the sensor is not taken into account; and the tracked path in the (x, y) plane with beacons B1, B2, B3.

General equations

Fundamental: the output vector Y must be a function of the state vector only. Any parameter in the expression of g which is not from X must be a known constant.
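For instance, here is a sketch (with sympy) of how an observation equation and the corresponding matrix C can be obtained symbolically for an azimuth measurement; the beacon coordinates (b_x, b_y) play the role of the known constants.

```python
import sympy as sp

x, y, theta, bx, by = sp.symbols('x y theta b_x b_y')
X = sp.Matrix([x, y, theta])
# Observation equation: azimuth of a beacon at the known, constant position
# (b_x, b_y), measured in the robot frame.
g = sp.Matrix([sp.atan2(by - y, bx - x) - theta])
C = sp.simplify(g.jacobian(X))   # matrix C = dg/dX of the Kalman filter
print(C)
# Expected result (up to equivalent rewriting):
# [ (b_y - y)/((b_x - x)**2 + (b_y - y)**2),
#  -(b_x - x)/((b_x - x)**2 + (b_y - y)**2),
#  -1 ]
```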

Exercise

Figure: robot at point M with frame (xm, ym) and heading θ in the world frame (x0, y0); beacons B1(0, 0) and B2(d, 0); measured distances d1 and d2.

State: [x, y, θ]T.

Write the observation equation. Calculate the matrix C of the Kalman filter.

Same exercise

Figure: same configuration (beacons B1(0, 0) and B2(d, 0), measured distances d1 and d2), with an additional offset a shown on the figure.

Chapter VI

Localisation and observability

Notion of observability

Observability: possibility to reconstruct the state x knowing the inputs u and the outputs y of the system.

Questions:
– Is state estimation always possible?
– Are there situations when the estimator diverges?

Observability tests help determine the dangerous situations.

Continuous model of our localisation problem

With three beacons

Generic observability of Diop and Fliess

A system is observable if and only if it is possible to express the state vector as a function of the input vector, the output vector and their time derivatives.

The system is generically observable if the set of points for which this condition is not satisfied is a zero-measure set (or null set).

Lie derivative

Let g be a differentiable real function, and f a real vector field. The Lie derivative of g along f is L_f g = ∇g · f.

The derivatives of the output with respect to time yield:

Observability rank condition

Let the observability matrix O be defined by stacking the gradients of the outputs and of their successive Lie derivatives, where the gradient of a real-valued function h is taken with respect to the state vector X.

If the matrix O has full rank (n), then the system is observable.
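A sketch of this rank test with sympy, for the three-beacon case discussed next. The unicycle model and the numerical beacon positions are assumptions of this example, not taken from the slides.

```python
import sympy as sp

x, y, th, v, w = sp.symbols('x y theta v omega', real=True)
X = sp.Matrix([x, y, th])
f = sp.Matrix([v * sp.cos(th), v * sp.sin(th), w])         # state equation
beacons = [(0, 0), (4, 0), (1, 3)]                         # hypothetical positions
h = [sp.atan2(by - y, bx - x) - th for bx, by in beacons]  # azimuth outputs

def lie(fun):
    """Lie derivative of the scalar function 'fun' along f."""
    return (sp.Matrix([fun]).jacobian(X) * f)[0, 0]

# Observability matrix from the outputs and their first Lie derivatives.
rows = h + [lie(hi) for hi in h]
O = sp.Matrix.vstack(*[sp.Matrix([r]).jacobian(X) for r in rows])

O1 = O[:3, :]                    # gradients of the three azimuths alone
print(sp.simplify(O1.det()))     # vanishes on the circle through the beacons
print(O.subs({x: 2, y: 1, th: 0, v: 1, w: 0}).rank())   # 3 at a generic point
```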

Observability with three beacons

Consider the sub-matrix O1 made of the gradients of the three azimuths. Whenever det(O1) is non-zero, observability is granted.

det(O1) = 0 on the circle (C) which passes through the three (non-aligned) beacons. Elsewhere, observability is granted. On the circle, we do not know yet.

Figure: beacons B1, B2, B3, azimuths λ1, λ2, λ3, robot axis xm and the circle (C).

Rank condition on the circle

If v = 0, the time derivatives of the outputs reduce to −ω and their gradients with respect to X vanish.

Hence, if the translational speed is zero, det(O) = det(O1) and rank(O) < 3.

Still, this does not prove that the system is not observable, because the rank condition is a sufficient condition but not a necessary one.

Simulation: motionless robot on the circle

Figure: heading error (degrees) versus t (s).

The estimator does not converge: it cannot drive the initial error to zero.

Simulation: moving on the circle

After a transient phase, the error curves with and without initial error are « the same ». Precision is not as good as outside the circle. This example illustrates the influence of the input on observability, for a nonlinear system.

Figure: heading errors (degrees) versus t (s), at a constant speed of 0.2 m/s.

System with two beacons

As p < n, observability cannot be obtained from the azimuth angles (outputs) alone. The input (speed and rotation speed) will have to play a part...

If v = 0: the sufficient condition will never be met if the vehicle does not move. Pretty logical, though. But what if it moves?

Two beacons and moving robot

The analysis of the rank of O becomes complex as higher derivatives need to be used. By studying the determinants of the various square sub-matrices of O with a formal calculation software, we found three potentially dangerous vehicle motions.

Figure: the three dangerous motions Δ1, Δ2, Δ3 relative to the beacons B1 and B2.

Simulations on Δ1

Figure: heading errors (degrees) versus t (s) with an initial error, and estimated versus real path near the beacon.

When the vehicle moves along Δ1, the observer does not converge.

Remark: our observer converges for other straight-line paths through B1, other than Δ1. Other researchers tested a least-squares observer which did not converge: observability does not guarantee that a particular observer will converge...

System with one beacon

You didn't expect a miracle, did you?

The system is never observable.

In simulation, the algorithm seems to diverge more slowly than odometry. After all, some information is better than no information...

Some conclusions

Observability analysis makes it possible to find dangerous situations which are sometimes not easy to suspect intuitively.

Observability is not a « binary » phenomenon. When the vehicle comes close to non-observable situations, the precision degrades and numerical problems can appear. So much for zero-measure sets!