Lie Theory for State Estimation in Robotics
- Solve an Optimization Problem on SO(3) with Exponential Map ¶
- More about Lie Theory for State Estimation in Robotics ¶
- Summary ¶
Solve an Optimization Problem on SO(3) with Exponential Map ¶
Let the robot orientation at time $t$ be $R^{nb}_t$. Given $R_t R_t^T = \mathcal{I}$, differentiating both sides yields: $$ \begin{align} \frac{dR_t}{dt}R_t^T + R_t\frac{dR_t^T}{dt} &= 0 \\ \frac{dR_t}{dt}R_t^T + (\frac{dR_t}{dt}R_t^T)^T &= 0 \\ \end{align} $$ Define $S_t = \frac{dR_t}{dt}R_t^T$; then $S_t + S_t^T = 0$, so $S_t$ is a skew-symmetric matrix: $$ S_t= \begin{bmatrix} 0 & -\omega_{t,3} & \omega_{t,2} \\ \omega_{t,3} & 0 & -\omega_{t,1} \\ -\omega_{t,2} & \omega_{t,1} & 0\\ \end{bmatrix} $$
If $S_t$ is constant, we can solve this differential equation. But when can $S_t$ be constant? Suppose a rigid body rotates at a constant speed $\|\phi\|$ around a fixed axis, with $\phi$ normalized to a unit vector; then $\phi$ is an eigenvector of $R_t$ with eigenvalue 1, i.e. $R_t\phi = \phi$: $$ \begin{align} R_t \phi &= \phi \\ \frac{dR_t}{dt}\phi + R_t \frac{d\phi}{dt} &= 0 \\ \frac{dR_t}{dt}\phi &= 0 \\ S_tR_t\phi &= 0 \\ S_t \phi &= 0 \\ \begin{bmatrix} 0 & -\omega_{t,3} & \omega_{t,2} \\ \omega_{t,3} & 0 & -\omega_{t,1} \\ -\omega_{t,2} & \omega_{t,1} & 0\\ \end{bmatrix} \begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3\\ \end{bmatrix} &= 0 \end{align} $$ For $S_t \phi = 0$ to hold for all $t$, $\omega_t$ must be parallel to $\phi$, since the null space of $[\omega]_\times$ is spanned by $\omega$ itself, i.e. $[\omega]_\times \omega = 0$.
Now assume $S_t$ is a constant $S$ while the rigid body rotates at a constant rate around the vector $\omega$; then $$ \begin{align} S &= [\omega]_\times \\ &=\begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0\\ \end{bmatrix} \end{align} $$
By solving $\frac{dR_t}{dt} = SR_t$, we get $$ \begin{align} R_t &= exp([\omega]_\times t)R_0 \\ \end{align} $$ Note that the exponential multiplies on the left because $S_t = \frac{dR_t}{dt}R_t^T$ is expressed in the global frame; when $R_0 = \mathcal{I}$, the order does not matter.
Now we can draw two very interesting conclusions:
- If $R_0 = \mathcal{I}$, then $R_t = exp([\omega]_\times t)$. This means we can map a rotation vector $\omega$ to a rotation matrix with the exponential operation. Writing $\omega = \|\omega\| u$ with a unit axis $u$, this is exactly Rodrigues' rotation formula $exp([\omega]_\times) = R = \mathcal{I} + sin(\|\omega\|)[u]_\times + (1-cos(\|\omega\|))[u]_\times^2$. When $\omega$ is small, we can use the Taylor expansion $exp([\omega]_\times) = \mathcal{I} + [\omega]_\times + \frac{1}{2}[\omega]_\times^2 + \cdots$ or the first-order approximation $R \approx \mathcal{I} + [\omega]_\times$ (see the sketch after this list).
- If $\omega$ is very small, then we can treat $exp([\omega]_\times)$ as a small perturbation of the current state $R_0$. This is extremely useful for solving optimization problems on SO(3) with iterative methods.
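Below is a quick numerical sanity check of both conclusions (my own sketch, not taken from the references). The helper names skew, unskew, Exp, Log and rodrigues are my own; the later sketches in this post reuse them, and the SO(3) library code further below defines similar functions.
# Sketch: map a rotation vector to SO(3) three ways and confirm they agree.
from math import sin, cos
import numpy as np
from scipy.linalg import expm, logm

def skew(w):
    w = np.ravel(w)  # accept (3,) or (3,1) inputs
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def unskew(W):
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def Exp(w):
    # exp map via the matrix exponential
    return expm(skew(w))

def Log(R):
    # log map back to a rotation vector
    return unskew(logm(R))

def rodrigues(w):
    t = np.linalg.norm(w)
    if t < 1e-8:
        return np.eye(3) + skew(w)  # small angle: first-order approximation
    u = np.ravel(w) / t             # unit rotation axis
    return np.eye(3) + sin(t) * skew(u) + (1.0 - cos(t)) * skew(u).dot(skew(u))

w = np.array([0.3, -0.2, 0.5])
print(np.allclose(Exp(w), rodrigues(w)))   # True: exp matches Rodrigues
print(np.allclose(Log(Exp(w)), w))         # True: Log inverts Exp
w_small = 1e-4 * w
print(np.allclose(Exp(w_small), np.eye(3) + skew(w_small), atol=1e-7))  # True: small-angle approximation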
Introduction to $\boxplus$ ¶
Suppose the current state is $\bar{x}$. For approaches like Gauss-Newton, we would normally minimize a cost function $f$ as below, assuming that $\bar{x}$ and its perturbation $\Delta x$ live in the same Euclidean space so that they can be added with $+$.
$$ \begin{align} \Delta \hat{x} &= \underset{\Delta x}{\operatorname{argmin}}{\|f(\bar{x}+\Delta x)\|_2} \end{align} $$
But what is a more general expression for this problem? For example, a rotation matrix $R$ has nine entries while its perturbation $\omega \in \mathbb{R}^3$ has only three, so plain $+$ clearly wouldn't work. Thus we introduce a new plus here, $\boxplus$: $$ \begin{align} \Delta \hat{x} &= \underset{\Delta x}{\operatorname{argmin}}{\|f(\bar{x} \boxplus \Delta x)\|_2} \end{align} $$
For rotation: $$ \begin{align} \bar{R} \boxplus \Delta \omega = \bar{R}exp([\Delta\omega]_\times) \end{align} $$ For translation: $$ \begin{align} \bar{t} \boxplus \Delta t = \bar{t} + \Delta t \end{align} $$
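Concretely, the two $\boxplus$ operators can be written as below (a minimal sketch; the helper names are my own, and Exp comes from the sketch above).
# Sketch of boxplus for a rotation state and a translation state.
def boxplus_rotation(R_bar, delta_omega):
    # Right (local) perturbation: R_bar boxplus dw = R_bar * exp([dw]x).
    return R_bar.dot(Exp(delta_omega))

def boxplus_translation(t_bar, delta_t):
    # Euclidean state: boxplus reduces to ordinary addition.
    return t_bar + delta_t

R_perturbed = boxplus_rotation(np.eye(3), np.array([0.01, 0.0, -0.02]))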
Jacobians ¶
For the special case where the state is Euclidean, we can find the Jacobian matrix $J^{f(x)}_{x}|_{x = \bar{x}}$ to approximate $f$ with a linear function, then solve the minimization problem. $$ \begin{align} \Delta \hat{x} &= \underset{\Delta x}{\operatorname{argmin}}{\|f(\bar{x}+\Delta x)\|_2} \\ &\approx \underset{\Delta x}{\operatorname{argmin}}\|f(\bar{x})+ J^{f(x)}_{x}|_{x = \bar{x}}\Delta x\|_2 \end{align} $$
For the general case, it becomes: $$ \begin{align} \Delta \hat{x} &= \underset{\Delta x}{\operatorname{argmin}}{\|f(\bar{x} \boxplus \Delta x)\|_2} \\ &\approx \underset{\Delta x}{\operatorname{argmin}}\|f(\bar{x})+ J^{f(x)}_{x}|_{x = \bar{x}}\,J^{\bar{x}\boxplus \Delta x}_{\Delta x}|_{\Delta x = 0}\Delta x\|_2 \end{align} $$
We leverage the chain rule to build the proper Jacobian matrix $J^{f(x)}_{x}|_{x = \bar{x}}\,J^{\bar{x}\boxplus \Delta x}_{\Delta x}|_{\Delta x = 0}$.
- Though the notation seems complicated, the first factor $J^{f(x)}_{x}|_{x = \bar{x}}$ is just the ordinary Jacobian of $f$ evaluated at the current state.
- For the second factor, if $\bar{x}$ and the perturbation $\Delta x$ live in the same Euclidean space, then $J^{\bar{x}\boxplus \Delta x}_{\Delta x}|_{\Delta x = 0} = \mathcal{I}$. Otherwise we have to calculate it, e.g. $J^{\bar{R}\boxplus \omega}_{\omega}|_{\omega = 0} = J^{\bar{R}exp([\omega]_\times)}_{\omega}|_{\omega = 0}$, which is a $9 \times 3$ matrix when $\bar{R}$ is flattened into $\mathbb{R}^9$.
XW's notes:
- Calculating Jacobians with the chain rule is not hard, but it is tedious and error-prone, so using auto-differentiation would be a good idea.
- To get Jacobians analytically, we need to know more about Lie theory, which will be introduced thoroughly in the next section.
- Lie theory gives us ways to quickly get the final Jacobian matrices mapping changes between tangent spaces, but it also involves lots of math. So I think using the chain rule is actually easier: calculating the Jacobian with respect to individual elements of the rotation matrix is usually trivial, and then all you need is the $9 \times 3$ matrix $J^{\bar{R}\boxplus \omega}_{\omega}|_{\omega = 0}$.
$$ \begin{align} J^{\bar{R}\boxplus \omega}_{\omega}|_{\omega = 0} &= \frac{\partial \bar{R}Exp(\omega)}{\partial \omega}\Big|_{\omega = 0} \\ &= \frac{\partial \bar{R}(\mathcal{I}+[\omega]_\times)}{\partial \omega}\Big|_{\omega = 0} \\ &= \bar{R}\frac{\partial [\omega]_\times}{\partial \omega} \end{align} $$ The second line uses the first-order expansion of $Exp$, which is exact at $\omega = 0$.
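A finite-difference check of this $9 \times 3$ Jacobian (my own sketch, reusing the helpers from the first sketch): column $k$ should be $\bar{R}\,[e_k]_\times$ flattened into $\mathbb{R}^9$.
# Compare the analytic 9x3 Jacobian of vec(R_bar * Exp(w)) at w = 0 with finite differences.
np.random.seed(0)
R_bar = Exp(np.random.randn(3))   # an arbitrary linearization point
eps = 1e-6
J_num = np.zeros((9, 3))
J_ana = np.zeros((9, 3))
for k in range(3):
    e_k = np.zeros(3); e_k[k] = 1.0
    J_num[:, k] = (R_bar.dot(Exp(eps * e_k)) - R_bar).ravel() / eps
    J_ana[:, k] = R_bar.dot(skew(e_k)).ravel()   # R_bar * d[w]x/dw_k
print(np.allclose(J_num, J_ana, atol=1e-5))      # True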
State Correction ¶
The state correction would be as simple as below: $$ \begin{align} x = \bar{x} \boxplus \Delta \hat{x} \end{align} $$
More about Lie Theory for State Estimation in Robotics ¶
Group Operations ¶
A group $(\mathcal{G}, \circ)$ is a set $\mathcal{G}$ with a composition operation $\circ$ that satisfies four properties: closure, identity, inverse and associativity. For any $\mathcal{X},\mathcal{Y},\mathcal{Z} \in \mathcal{G}$: $$ \begin{align} \mathcal{X} \circ \mathcal{Y} &\in \mathcal{G} \\ \mathcal{\epsilon} \circ \mathcal{X} &= \mathcal{X} \circ \mathcal{\epsilon} = \mathcal{X} \\ \mathcal{X}^{-1} \circ \mathcal{X} &= \mathcal{\epsilon} \\ (\mathcal{X} \circ \mathcal{Y}) \circ \mathcal{Z} &= \mathcal{X} \circ (\mathcal{Y} \circ \mathcal{Z}) \end{align} $$
So it is easy to see that SO(3) is a group with $\mathcal{\epsilon} = \mathcal{I}_{3 \times 3}$.
Lie Group ¶
A Lie group $\mathcal{M}$ is a smooth manifold whose elements satisfy the group axioms. Its action $\cdot$ on another set $\mathcal{V}$ is defined as: $$ \begin{align} \mathcal{X} \cdot v \end{align} $$ where $v \in \mathcal{V},\ \mathcal{X} \in \mathcal{M}$. It satisfies: $$ \begin{align} \mathcal{\epsilon} \cdot v &= v \\ (\mathcal{X} \circ \mathcal{Y}) \cdot v &= \mathcal{X} \cdot (\mathcal{Y} \cdot v) \end{align} $$ Clearly, applying a rotation matrix $R$ to a vector $x \in \mathbb{R}^3$ is a Lie group action.
Tangent Space and Lie Algebra ¶
Given $\mathcal{X}(t)$, a point moving on a Lie group manifold $\mathcal{M}$, its velocity $\dot{\mathcal{X}} = \frac{\partial \mathcal{X}}{\partial t}$ belongs to the space tangent to $\mathcal{M}$ at $\mathcal{X}$, denoted $\mathcal{T}_{\mathcal{X}}\mathcal{M}$. The tangent space at the identity $\mathcal{\epsilon}$ is the Lie algebra, denoted $\mathfrak{m}$. We can use the exponential map to take elements from $\mathfrak{m}$ to $\mathcal{M}$.
Take SO(3) as an example: $R$ lives on the manifold $\mathcal{M}$ while $[\omega]_\times$ lives in the Lie algebra. We define an operation called wedge, $\wedge$, which converts a vector into a Lie algebra element, and an operation called vee, $\vee$, which converts a Lie algebra element back into a vector. Furthermore, we write ${}^{\mathcal{X}}\omega^{\wedge} \in \mathcal{T}_{\mathcal{X}}\mathcal{M}$ for an element of the tangent space at $\mathcal{X}$.
More generally, we define Exp and Log operations to convert between the vector space of $\tau$ and the manifold $\mathcal{M}$, and exp and log operations to convert between the Lie algebra and the manifold. $$ \begin{align} \mathcal{X} &= exp(\tau^{\wedge}) \\ \tau^{\wedge} &= log(\mathcal{X}) \\ \mathcal{X} &= Exp(\tau) \\ \tau &= Log(\mathcal{X}) \end{align} $$ where $exp(\tau^{\wedge}) = \mathcal{I} + \tau^{\wedge} + \frac{1}{2}(\tau^{\wedge})^2 + \cdots$.
Key properties of the exponential map are: $$ \begin{align} exp((t+s)\tau^{\wedge}) &= exp(t\tau^{\wedge})exp(s\tau^{\wedge}) \\ exp(t\tau^{\wedge}) &= [exp(\tau^{\wedge})]^t \\ exp(-\tau^{\wedge}) &= [exp(\tau^{\wedge})]^{-1} \\ exp(\mathcal{X}\tau^{\wedge}\mathcal{X}^{-1}) &= \mathcal{X}exp(\tau^{\wedge})\mathcal{X}^{-1} \end{align} $$
$\boxplus$ and $\boxminus$ ¶
Local operation : right operation ¶
$$ \begin{align} \mathcal{Y} &= \mathcal{X} \boxplus {}^{\mathcal{X}}\tau = \mathcal{X} \circ Exp({}^{\mathcal{X}}\tau) \\ {}^{\mathcal{X}}\tau &= \mathcal{Y} \boxminus \mathcal{X} = Log(\mathcal{X}^{-1} \circ \mathcal{Y}) \end{align} $$
The right operation is a local perturbation in frame $\mathcal{X}$. For example, for rotation we have: $$ \begin{align} \dot{R^{\epsilon}_{\mathcal{X}}} &= R^{\epsilon}_{\mathcal{X}}[\omega_{\mathcal{X}}]_\times \end{align} $$ where $\epsilon$ is the inertial frame.
Global operation: left operation ¶
$$ \begin{align} \mathcal{Y} &= {}^{\mathcal{\epsilon}}\tau \boxplus \mathcal{X} = Exp({}^{\mathcal{\epsilon}}\tau) \circ \mathcal{X} \\ {}^{\mathcal{\epsilon}}\tau &= \mathcal{Y} \boxminus \mathcal{X} = Log(\mathcal{Y} \circ \mathcal{X}^{-1}) \end{align} $$
The left operation is a global perturbation in the inertial frame $\epsilon$. For example, for rotation we have: $$ \begin{align} \dot{R^{\epsilon}_{\mathcal{X}}} &= [\omega_{\mathcal{\epsilon}}]_\times R^{\epsilon}_{\mathcal{X}} \end{align} $$ where $\epsilon$ is the inertial frame.
Adjoint Operation ¶
We can find a local operation and a global operation that map $\mathcal{X}$ to the same $\mathcal{Y}$, i.e. $\mathcal{X} \boxplus {}^{\mathcal{X}}\tau = {}^{\mathcal{\epsilon}}\tau \boxplus \mathcal{X}$. $$ \begin{align} \mathcal{X} \circ Exp({}^{\mathcal{X}}\tau) &= Exp({}^{\mathcal{\epsilon}}\tau) \circ \mathcal{X} \\ Exp({}^{\mathcal{\epsilon}}\tau) &= \mathcal{X} \circ Exp({}^{\mathcal{X}}\tau) \circ \mathcal{X}^{-1}\\ exp({}^{\mathcal{\epsilon}}\tau^{\wedge}) &= \mathcal{X} \circ exp({}^{\mathcal{X}}\tau^{\wedge}) \circ \mathcal{X}^{-1} \\ exp({}^{\mathcal{\epsilon}}\tau^{\wedge}) &= exp(\mathcal{X} \circ {}^{\mathcal{X}}\tau^{\wedge} \circ \mathcal{X}^{-1}) \\ \end{align} $$ Then we get a map between the local Lie algebra and the global Lie algebra:
$${}^{\mathcal{\epsilon}}\tau^{\wedge} = \mathcal{X} \circ {}^{\mathcal{X}}\tau^{\wedge} \circ \mathcal{X}^{-1}$$
We define the adjoint action as $Ad_{\mathcal{X}}(\tau^{\wedge}) = \mathcal{X} \circ \tau^{\wedge} \circ \mathcal{X}^{-1}$ so that ${}^{\mathcal{\epsilon}}\tau^{\wedge} = Ad_{\mathcal{X}}({}^{\mathcal{X}}\tau^{\wedge})$. It is linear:
$$ \begin{align} Ad_{\mathcal{X}}(a\tau^{\wedge} + b\sigma^{\wedge}) &= a\,Ad_{\mathcal{X}}(\tau^{\wedge}) + b\,Ad_{\mathcal{X}}(\sigma^{\wedge}) \\ Ad_{\mathcal{X}}(Ad_{\mathcal{Y}}(\tau^{\wedge})) &= Ad_{\mathcal{XY}}(\tau^{\wedge}) \end{align} $$
Since $Ad_{\mathcal{X}}(\cdot)$ is linear, we can find an equivalent matrix, also written $Ad_{\mathcal{X}}$, that acts on the vector $\tau$: ${}^{\mathcal{\epsilon}}\tau = Ad_{\mathcal{X}}\,{}^{\mathcal{X}}\tau$. $$ \begin{align} \mathcal{X} \boxplus {}^{\mathcal{X}}\tau &= (Ad_{\mathcal{X}}\,{}^{\mathcal{X}}\tau) \boxplus \mathcal{X} \\ Ad_{\mathcal{X}^{-1}} &= Ad_{\mathcal{X}}^{-1}\\ Ad_{\mathcal{XY}} &= Ad_{\mathcal{X}}Ad_{\mathcal{Y}} \end{align} $$
From ${}^{\mathcal{\epsilon}}\tau^{\wedge} = \mathcal{X} \circ {}^{\mathcal{X}}\tau^{\wedge} \circ \mathcal{X}^{-1}$, we can compute the adjoint matrix through $Ad_{\mathcal{X}}\,{}^{\mathcal{X}}\tau = (\mathcal{X} \circ {}^{\mathcal{X}}\tau^{\wedge} \circ \mathcal{X}^{-1})^\vee$.
Once we have the adjoint matrix, we can derive the relations below, which are quite useful when calculating Jacobian matrices: $$ \begin{align} Exp(Ad_{\mathcal{X}}\,{}^{\mathcal{X}}\tau) \circ \mathcal{X} &= \mathcal{X} \circ Exp({}^{\mathcal{X}}\tau) \\ Exp(Ad_{\mathcal{X}^{-1}}\,{}^{\mathcal{X}}\tau)\circ \mathcal{X}^{-1} &= \mathcal{X}^{-1} \circ Exp({}^{\mathcal{X}}\tau) \\ Exp({}^{\mathcal{X}}\tau)\circ \mathcal{X} &= \mathcal{X} \circ Exp(Ad_{\mathcal{X}^{-1}}\,{}^{\mathcal{X}}\tau) \\ \end{align} $$
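On SO(3), where $Ad_R = R$, the first relation reads $Exp(R\tau)\,R = R\,Exp(\tau)$. A quick numerical confirmation (my own sketch, reusing the helpers from the first sketch):
# Check Exp(Ad_R tau) R == R Exp(tau) with Ad_R = R.
np.random.seed(1)
R = Exp(np.random.randn(3))
tau = 0.3 * np.random.randn(3)
print(np.allclose(Exp(R.dot(tau)).dot(R), R.dot(Exp(tau))))   # True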
Lie Group Examples ¶
SO(3) ¶
Manifold ¶
$$ R \in SO(3) \subset \mathbb{R}^{3\times3} $$
Lie Algebra ¶
$$ \begin{align} \theta & \in \mathbb{R}^{3} \\ \theta^{\wedge} &= [\theta]_\times \in \mathfrak{so}(3) \end{align} $$
Inverse and Rotation Action ¶
$$ \begin{align} R^{-1} &= R^T \\ x^{n} &= R^{nb} x^{b} \end{align} $$
Exp and Log ¶
With $u = \frac{\theta}{\|\theta\|}$: $$ \begin{align} R &= Exp(\theta) \\ &= \mathcal{I} + sin(\|\theta\|)[u]_\times + (1-cos(\|\theta\|))[u]_\times^2 \\ \theta &= Log(R) \\ &= \frac{\alpha(R-R^T)^\vee}{2sin(\alpha)} \\ \alpha &= cos^{-1}(\frac{trace(R)-1}{2}) \end{align} $$
Adjoint ¶
$$ Ad_{R} = R $$
SE(3) ¶
Manifold ¶
$$ M = \begin{bmatrix} R & t\\ 0 & 1 \end{bmatrix} \in SE(3) \subset \mathbb{R}^{4\times4} $$
Lie Algebra ¶
$$ \begin{align} \xi &= \begin{bmatrix} \rho \\ \theta \end{bmatrix} \in \mathbb{R}^{6} \\ \xi^{\wedge} &= \begin{bmatrix} [\theta]_\times & \rho\\ 0 & 0 \end{bmatrix} \in \mathfrak{se}(3) \end{align} $$
Inverse and Action ¶
$$ \begin{align} M^{-1} &= \begin{bmatrix} R^T & -R^Tt\\ 0 & 1 \end{bmatrix} \\ M_aM_b &= \begin{bmatrix} R_aR_b & t_a + R_at_b\\ 0 & 1 \end{bmatrix}\\ x' &= M x = Rx + t \end{align} $$
Exp and Log ¶
$$ \begin{align} M &= Exp(\xi) \\ &= \begin{bmatrix} Exp(\theta) & V(\theta)\rho\\ 0 & 1 \end{bmatrix}\\ \xi &= Log(M) \\ &= \begin{bmatrix} V^{-1}(\theta)t\\ Log(R) \end{bmatrix} \end{align} $$
$$ V(\theta) = \mathcal{I} + \frac{1-cos(\|\theta\|)}{\|\theta\|^2}[\theta]_\times + \frac{\|\theta\|-sin(\|\theta\|)}{\|\theta\|^3}[\theta]_\times^2 $$
Adjoint ¶
$$ \begin{align} Ad_{M}= \begin{bmatrix} R & [t]_\times R \\ 0 & R \end{bmatrix} \end{align} $$
pseudo-Exp and pseudo-Log ¶
$$ \begin{align} M_{pseudo} &= Exp_{pseudo}(\xi) \\ &= \begin{bmatrix} Exp(\theta) & t\\ 0 & 1 \end{bmatrix}\\ \xi_{pseudo} &= Log_{pseudo}(M) \\ &= \begin{bmatrix} t\\ Log(R) \end{bmatrix} \end{align} $$
$$ \begin{align} Ad_{M_{pseudo}} &=\begin{bmatrix} R & [t]_\times R \\ 0 & R \end{bmatrix} \end{align} $$
Derivatives on Lie Groups ¶
We have $\mathcal{Y} = f(\mathcal{X})$ with $\mathcal{X} \in \mathcal{M}$, $\mathcal{Y} \in \mathcal{N}$, and tangent spaces $\mathcal{T}\mathcal{M} \cong \mathbb{R}^m$, $\mathcal{T}\mathcal{N} \cong \mathbb{R}^n$. Then we can define a right Jacobian matrix which maps the tangent space $\mathcal{T}_{\mathcal{X}}\mathcal{M}$ to the tangent space $\mathcal{T}_{f(\mathcal{X})}\mathcal{N}$.
$$ \begin{align} {J}_{\mathcal{X}}^{\mathcal{Y}} &= \underset{\tau \rightarrow 0}{\operatorname{lim}}\frac{f(\mathcal{X} \boxplus_{\mathcal{M}} \tau) \boxminus_{\mathcal{N}}f(\mathcal{X})}{\tau} \in \mathbb{R}^{n \times m} \\ &= \underset{\tau \rightarrow 0}{\operatorname{lim}}\frac{Log(f(\mathcal{X})^{-1} \circ f(\mathcal{X} \circ Exp(\tau)))}{\tau} \end{align} $$
Then we have an approximation based on the Taylor expansion: $f(\mathcal{X} \boxplus {}^{\mathcal{X}}\tau) \approx f(\mathcal{X}) \boxplus {J^{\mathcal{Y}}_{\mathcal{X}}}{}^{\mathcal{X}}\tau$. And without proof here, we also claim the relation between the left and right Jacobians: ${}^{\mathcal{\epsilon}}J^{\mathcal{Y}}_{\mathcal{X}}Ad_{\mathcal{X}} = Ad_{f(\mathcal{X})}{}^{\mathcal{X}}J^{\mathcal{Y}}_{\mathcal{X}}$.
Note (XW: My own observation): If I use $\boxplus$ for this approximation, I couldn't figure out how to plug it into the Gauss-Newton optimization framework. So I make sure $f(\mathcal{X})$ lives in a Euclidean space, which means its tangent space is the same as its manifold, so that $\boxplus$ becomes the normal $+$, i.e. $f(\mathcal{X} \boxplus {}^{\mathcal{X}}\tau) \approx f(\mathcal{X}) + {J^{\mathcal{Y}}_{\mathcal{X}}}{}^{\mathcal{X}}\tau$.
Inverse ¶
$$ \begin{align} {J}^{\mathcal{X}^{-1}}_{\mathcal{X}} &= -Ad_{\mathcal{X}} \end{align} $$
Composition ¶
$$ \begin{align} {J}^{\mathcal{X} \circ \mathcal{Y}}_{\mathcal{X}} &= Ad_{\mathcal{Y}}^{-1}\\ {J}^{\mathcal{X} \circ \mathcal{Y}}_{\mathcal{Y}} &= \mathcal{I} \end{align} $$
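These generic inverse and composition rules can be checked numerically on SO(3), where $Ad_{\mathcal{X}} = X$ (my own sketch, reusing the helpers from the first sketch):
# Finite-difference check of the inverse and composition Jacobians on SO(3).
np.random.seed(4)
X = Exp(np.random.randn(3))
Y = Exp(np.random.randn(3))
eps = 1e-6
J_inv = np.zeros((3, 3)); J_X = np.zeros((3, 3)); J_Y = np.zeros((3, 3))
for k in range(3):
    e_k = np.zeros(3); e_k[k] = 1.0
    d = Exp(eps * e_k)                                   # right perturbation
    J_inv[:, k] = Log(X.dot(d.T).dot(X.T)) / eps         # f(X) = X^{-1}
    J_X[:, k] = Log((X.dot(Y)).T.dot(X.dot(d).dot(Y))) / eps
    J_Y[:, k] = Log((X.dot(Y)).T.dot(X.dot(Y).dot(d))) / eps
print(np.allclose(J_inv, -X, atol=1e-5))        # J^{X^-1}_X  = -Ad_X = -X
print(np.allclose(J_X, Y.T, atol=1e-5))         # J^{X o Y}_X = Ad_Y^{-1} = Y^T
print(np.allclose(J_Y, np.eye(3), atol=1e-5))   # J^{X o Y}_Y = I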
Exp ¶
$$ \begin{align} {J}_r(\tau) &= \frac{\partial Exp(\tau)}{\partial \tau} \\ &= \underset{\delta\tau \rightarrow 0}{\operatorname{lim}}\frac{Exp(\tau + \delta \tau) \boxminus Exp(\tau)}{\delta \tau}\\ &= \underset{\delta\tau \rightarrow 0}{\operatorname{lim}}\frac{Log(Exp(\tau)^{-1}Exp(\tau + \delta \tau))}{\delta \tau} \end{align} $$First-order approximation: $$ \begin{align} Exp(\phi + \delta\phi) &\approx Exp(\phi)Exp(J_r(\phi)\delta\phi) \\ Exp(\phi + J_r^{-1}(\phi)\delta\phi) &\approx Exp(\phi)Exp(J_r(\phi)J_r^{-1}(\phi)\delta\phi) \\ Log(Exp(\phi)Exp(\delta\phi)) &\approx \phi + J_r^{-1}(\phi)\delta\phi \end{align} $$ Left and right Jacobians are related by: $$ J_r(-\tau) = J_l(\tau) $$
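The limit definition above can be evaluated column by column with finite differences, which also lets us test the first-order approximation directly (my own sketch on SO(3), reusing the earlier helpers):
# Build J_r(phi) from the limit definition, then test
# Exp(phi + dphi) ~= Exp(phi) Exp(J_r(phi) dphi).
phi = np.array([0.4, -0.3, 0.2])
eps = 1e-6
Jr = np.zeros((3, 3))
for k in range(3):
    e_k = np.zeros(3); e_k[k] = 1.0
    Jr[:, k] = Log(Exp(phi).T.dot(Exp(phi + eps * e_k))) / eps
dphi = np.array([1e-3, -2e-3, 1.5e-3])
err = np.abs(Exp(phi + dphi) - Exp(phi).dot(Exp(Jr.dot(dphi)))).max()
print(err)   # second-order small, on the order of ||dphi||^2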
Log ¶
$$ \begin{align} {J}^{Log(\mathcal{X})}_{\mathcal{X}} &= J_r^{-1}(\tau) \end{align} $$where $\tau = Log(\mathcal{X})$
$\boxplus$ ¶
$$ \begin{align} {J}^{\mathcal{X}\boxplus \tau}_{\mathcal{X}} &= Ad_{Exp(\tau)}^{-1}\\ {J}^{\mathcal{X}\boxplus \tau}_{\tau} &= J_r({\tau}) \end{align} $$
$\boxminus$ ¶
$$ \begin{align} {J}^{\mathcal{Y}\boxminus \mathcal{X}}_{\mathcal{X}} &= -J_l^{-1}(\tau)\\ {J}^{\mathcal{Y}\boxminus \mathcal{X}}_{\mathcal{Y}} &= J_r^{-1}(\tau) \end{align} $$where $\tau = \mathcal{Y}\boxminus \mathcal{X} = Log(\mathcal{X}^{-1}\mathcal{Y})$
Derivatives on Lie Groups Examples: SO(3), SE(3) ¶
SO(3) ¶
Inverse ¶
$$ \begin{align} {J}^{\mathcal{R}^{-1}}_{\mathcal{R}} &= -Ad_{\mathcal{R}} = -R \end{align} $$
Composition ¶
$$ \begin{align} {J}^{\mathcal{Q} \circ \mathcal{R}}_{\mathcal{Q}} &= Ad_{\mathcal{Q}}^{-1} = Q^T\\ {J}^{\mathcal{Q} \circ \mathcal{R}}_{\mathcal{R}} &= \mathcal{I} \end{align} $$
Exp ¶
Let $t = \|\theta\|$ and $u=\frac{\theta}{\|\theta\|}$: $$ \begin{align} {J}_r(\theta) &= \frac{sin(t)}{t}\mathcal{I}_{3\times3} + (1- \frac{sin(t)}{t})uu^T-\frac{1-cos(t)}{t}[u]_\times\\ {J}^{-1}_r(\theta) &= \frac{t}{2}cot(\frac{t}{2})\mathcal{I}_{3\times3} + (1-\frac{t}{2}cot(\frac{t}{2}))uu^T +\frac{t}{2}[u]_\times\\ {J}_l(\theta) &= \frac{sin(t)}{t}\mathcal{I}_{3\times3} + (1- \frac{sin(t)}{t})uu^T + \frac{1-cos(t)}{t}[u]_\times\\ {J}^{-1}_l(\theta) &= \frac{t}{2}cot(\frac{t}{2})\mathcal{I}_{3\times3} + (1-\frac{t}{2}cot(\frac{t}{2}))uu^T -\frac{t}{2}[u]_\times\\ J_l(\theta) &= J_r(-\theta) \end{align} $$
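These closed forms can be checked against the limit definition used earlier, and ${J}^{-1}_r$ should really invert ${J}_r$ (my own sketch; Jr_closed and JrInv_closed are my own helper names):
# Closed-form right Jacobian of SO(3) and its inverse, verified numerically.
def Jr_closed(theta):
    t = np.linalg.norm(theta)
    u = np.ravel(theta) / t
    return (sin(t)/t)*np.eye(3) + (1 - sin(t)/t)*np.outer(u, u) - ((1 - cos(t))/t)*skew(u)

def JrInv_closed(theta):
    t = np.linalg.norm(theta)
    u = np.ravel(theta) / t
    half_cot = (t/2.0) * cos(t/2.0) / sin(t/2.0)
    return half_cot*np.eye(3) + (1 - half_cot)*np.outer(u, u) + (t/2.0)*skew(u)

theta = np.array([0.4, -0.3, 0.2])
eps = 1e-6
Jr_num = np.zeros((3, 3))
for k in range(3):
    e_k = np.zeros(3); e_k[k] = 1.0
    Jr_num[:, k] = Log(Exp(theta).T.dot(Exp(theta + eps * e_k))) / eps
print(np.allclose(Jr_closed(theta), Jr_num, atol=1e-4))                    # True
print(np.allclose(Jr_closed(theta).dot(JrInv_closed(theta)), np.eye(3)))   # True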
Log ¶
$$ \begin{align} {J}^{Log(\mathcal{R})}_{\mathcal{R}} &= J_r^{-1}(\theta) \end{align} $$where $\theta = Log(R)$
$\boxminus$ ¶
$$ \begin{align} {J}^{\mathcal{Q}\boxminus \mathcal{R}}_{\mathcal{R}} &= -J_l^{-1}(\theta)\\ {J}^{\mathcal{Q}\boxminus \mathcal{R}}_{\mathcal{Q}} &= J_r^{-1}(\theta) \end{align} $$where $\theta = \mathcal{Q}\boxminus \mathcal{R} = Log(\mathcal{R}^{-1}\mathcal{Q})$
Rotation Action ¶
$$ \begin{align} {J}^{\mathcal{R} \cdot v}_{\mathcal{R}} &= -R[v]_\times\\ {J}^{\mathcal{R} \cdot v}_{v} &= R \end{align} $$
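A finite-difference confirmation of the rotation-action Jacobians (my own sketch, using the right $\boxplus$ on $R$; the Jacobian with respect to $v$ is simply $R$ because the action is linear in $v$):
# Check J^{R.v}_R = -R [v]x by finite differences.
np.random.seed(2)
R = Exp(np.random.randn(3))
v = np.random.randn(3)
eps = 1e-6
J_R = np.zeros((3, 3))
for k in range(3):
    e_k = np.zeros(3); e_k[k] = 1.0
    J_R[:, k] = (R.dot(Exp(eps * e_k)).dot(v) - R.dot(v)) / eps
print(np.allclose(J_R, -R.dot(skew(v)), atol=1e-5))   # True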
SE(3) ¶
Inverse ¶
$$ \begin{align} {J}^{\mathcal{T}^{-1}}_{\mathcal{T}} &= -Ad_{\mathcal{T}} = -\begin{bmatrix} R & [t]_\times R \\ 0 & R \end{bmatrix} \end{align} $$
Composition ¶
$$ \begin{align} {J}^{\mathcal{S} \circ \mathcal{T}}_{\mathcal{S}} &= Ad_{\mathcal{S}}^{-1} = Ad_{\mathcal{S}^{-1}} =\begin{bmatrix} R^T & -R^T[t]_\times \\ 0 & R^T \end{bmatrix}\\ {J}^{\mathcal{S} \circ \mathcal{T}}_{\mathcal{T}} &= \mathcal{I} \end{align} $$where $(R, t)$ are the rotation and translation of $\mathcal{S}$
Exp ¶
$$ \begin{align} {J}_r(\rho,\theta) &= \begin{bmatrix}J_r & Q_r \\ 0 & J_r \end{bmatrix}\\ {J}^{-1}_r(\rho,\theta) &= \begin{bmatrix}J_r^{-1} & -J_r^{-1}Q_rJ_r^{-1} \\ 0 & J_r^{-1} \end{bmatrix}\\ {J}_l(\rho,\theta) &= J_r(-\rho,-\theta)\\ {J}^{-1}_l(\rho,\theta) &= {J}^{-1}_r(-\rho,-\theta)\\ \end{align} $$With $t = \|\theta\|$: $$ \begin{align} Q_l(\xi) &= Q_r(-\xi) \\ &= \frac{1}{2}[\rho]_\times \\ &+ \frac{t-sin(t)}{t^3}([\theta]_\times[\rho]_\times + [\rho]_\times[\theta]_\times+[\theta]_\times[\rho]_\times[\theta]_\times)\\ &+\frac{t^2+2cos(t)-2}{2t^4}([\theta]_\times[\theta]_\times[\rho]_\times+[\rho]_\times[\theta]_\times[\theta]_\times-3[\theta]_\times[\rho]_\times[\theta]_\times)\\ &+\frac{2t-3sin(t)+tcos(t)}{2t^5}([\theta]_\times[\rho]_\times[\theta]_\times[\theta]_\times+[\theta]_\times[\theta]_\times[\rho]_\times[\theta]_\times) \end{align} $$
Log ¶
$$ \begin{align} {J}^{Log(\mathcal{T})}_{\mathcal{T}} &= J_r^{-1}(\xi) \end{align} $$where $\xi = (\rho,\theta) = Log(T)$
$\boxminus$ ¶
$$ \begin{align} {J}^{\mathcal{S}\boxminus \mathcal{T}}_{\mathcal{T}} &= -J_l^{-1}(\xi)\\ {J}^{\mathcal{S}\boxminus \mathcal{T}}_{\mathcal{S}} &= J_r^{-1}(\xi) \end{align} $$where $\xi = \mathcal{S}\boxminus \mathcal{T} = Log(\mathcal{T}^{-1}\mathcal{S})$
Action ¶
$$ \begin{align} {J}^{\mathcal{T} \cdot v}_{\mathcal{T}} &= [R \quad -R[v]_\times]\\ {J}^{\mathcal{T} \cdot v}_{v} &= R \end{align} $$
Uncertainty Propagation ¶
When we use the right boxplus, we essentially perturb poses in the local frame (see the references at the end of this post for more information). We can define covariances in the local tangent frame or the global tangent frame.
$$ \begin{align} T_{EV} &= \bar{T}_{EV}Exp({}^{\mathcal{V}}\xi) \\ T_{EV} &= Exp({}^{\mathcal{E}}\xi)\bar{T}_{EV} \end{align} $$where $\bar{T}_{EV}$ is noise-free and the noise $\xi \sim \mathcal{N}(0, \Sigma)$.
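As a sanity check of this right-perturbation model, here is a Monte Carlo sketch on SO(3), where $Ad_R = R$ (my own check, reusing the earlier helpers); it previews the Inverse rule derived next:
# Monte Carlo: R = R_bar Exp(xi) with xi ~ N(0, Sigma); the local perturbation
# eta of R^{-1} about R_bar^{-1} should have covariance R_bar Sigma R_bar^T.
np.random.seed(3)
R_bar = Exp(np.array([0.4, -0.1, 0.7]))
Sigma = np.diag([0.02, 0.01, 0.03])
etas = []
for _ in range(5000):
    xi = np.random.multivariate_normal(np.zeros(3), Sigma)
    R = R_bar.dot(Exp(xi))
    etas.append(Log(R_bar.dot(R.T)))   # R^{-1} = R_bar^{-1} Exp(eta)
Sigma_mc = np.cov(np.array(etas).T)
print(np.allclose(Sigma_mc, R_bar.dot(Sigma).dot(R_bar.T), atol=5e-3))   # True, up to sampling noise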
Inverse ¶
$$ \begin{align} T_{EV} &= \bar{T}_{EV}Exp({}^{\mathcal{V}}\xi) \\ T_{EV}^{-1} &= (\bar{T}_{EV}Exp({}^{\mathcal{V}}\xi))^{-1} \\ &= (Exp({}^{\mathcal{V}}\xi))^{-1}\bar{T}_{EV}^{-1}\\ &= Exp(-{}^{\mathcal{V}}\xi)\bar{T}_{EV}^{-1}\\ &= \bar{T}_{EV}^{-1}Exp(-Ad_{\bar{T}_{EV}}{}^{\mathcal{V}}\xi) \\ \Sigma_{VE} & = Ad_{\bar{T}_{EV}}\Sigma_{EV} Ad_{\bar{T}_{EV}}^T \end{align} $$
Composition ¶
$$ \begin{align} T_{EV_2} &= \bar{T}_{EV_1}Exp({}^{\mathcal{V_1}}\xi)\bar{T}_{V_1V_2}Exp({}^{\mathcal{V_{12}}}\xi) \\ &= \bar{T}_{EV_1}\bar{T}_{V_1V_2}Exp(Ad_{\bar{T}_{V_1V_2}^{-1}}{}^{\mathcal{V_1}}\xi)Exp({}^{\mathcal{V_{12}}}\xi) \\ \Sigma_{EV_2} &= Ad_{T_{V_2V_1}}\Sigma_{EV_1} Ad_{T_{V_2V_1}}^T + \Sigma_{V_1V_2} + Ad_{T_{V_2V_1}}\Sigma_{EV_1,V_1V_2} + (Ad_{T_{V_2V_1}}\Sigma_{EV_1,V_1V_2})^T \end{align} $$
Relative transformation ¶
$$ \begin{align} T_{V_1V_2} &= (\bar{T}_{EV_1}Exp({}^{\mathcal{V_1}}\xi))^{-1}\bar{T}_{EV_2}Exp({}^{\mathcal{V_{2}}}\xi)\\ &=Exp(-{}^{\mathcal{V_1}}\xi)\bar{T}_{EV_1}^{-1}\bar{T}_{EV_2}Exp({}^{\mathcal{V_{2}}}\xi)\\ &=\bar{T}_{EV_1}^{-1}\bar{T}_{EV_2}Exp(-Ad_{T_{V_2E}T_{EV_1}}{}^{\mathcal{V_1}}\xi)Exp({}^{\mathcal{V_{2}}}\xi)\\ &=\bar{T}_{V_1V_2}Exp(-Ad_{T_{V_2V_1}}{}^{\mathcal{V_1}}\xi)Exp({}^{\mathcal{V_{2}}}\xi)\\ \Sigma_{V_1V_2} &= Ad_{T_{V_2V_1}}\Sigma_{EV_1}Ad_{T_{V_2V_1}}^T + \Sigma_{EV_2} - Ad_{T_{V_2V_1}}\Sigma_{EV_1, EV_2} - (Ad_{T_{V_2V_1}}\Sigma_{EV_1, EV_2})^T \end{align} $$
A Pose Graph Example ¶
SE(3) = SO(3) + T(3) ¶
In the first pose graph example, we separate the constraints on rotation and translation and leverage Lie theory on SO(3). $$ \begin{align} \Delta \hat{x} &= \underset{\Delta x}{\operatorname{argmin}}{\|f(R_{gi}, t_{gi}, R_{gj}, t_{gj}, \hat{R}_{gi}, \hat{t}_{gi}, \hat{R}_{ij}, \hat{t}_{ij})\|_\Sigma} \\ \end{align} $$
$$ \begin{align} \Delta \hat{x} &= \underset{\Delta x}{\operatorname{argmin}}{ \|(\bar{t}_{gi}+\Delta t_{gi})-\hat{t}_{gi} \| \\ + \|Log(\hat{R}_{gi}^T (\bar{R}_{gi}\boxplus \Delta \theta_{gi}))\| \\ +\|Log(\hat{R}_{ij}^T (\bar{R}_{gi}\boxplus \Delta \theta_{gi})^T(\bar{R}_{gj}\boxplus \Delta \theta_{gj}))\|\\ + \|(\bar{R}_{gi} \boxplus \Delta \theta_{gi})^T((\bar{t}_{gj}+\Delta t_{gj})-(\bar{t}_{gi}+\Delta t_{gi}))-\hat{t}_{ij}\| } \\ &= \underset{\Delta x}{\operatorname{argmin}}{ \|(\bar{t}_{gi} - \hat{t}_{gi}) + \mathcal{I}_{3\times3}\Delta t_{gi} \| \\ + \|Log(\hat{R}_{gi}^T \bar{R}_{gi}) + J^{-1}_r(Log(\hat{R}_{gi}^T \bar{R}_{gi})) \mathcal{I}_{3\times3}\Delta \theta_{gi}\|\\ +\|Log(\hat{R}_{ij}^T \bar{R}_{gi}^T\bar{R}_{gj}) - J^{-1}_r(Log(\hat{R}_{ij}^T \bar{R}_{gi}^T\bar{R}_{gj}))Ad_{\bar{R}^T_{gj}}Ad_{\bar{R}_{gi}} \Delta \theta_{gi} + J^{-1}_r(Log(\hat{R}_{ij}^T \bar{R}_{gi}^T\bar{R}_{gj}))\mathcal{I}_{3\times3}\Delta\theta_{gj}\| \\ + \|\bar{R}^T_{gi}(\bar{t}_{gj} - \bar{t}_{gi})-\hat{t}_{ij} -\bar{R}^T_{gi}\Delta t_{gi} + \bar{R}^T_{gi}\Delta t_{gj} + \bar{R}^T_{gi}[\bar{t}_{gj} - \bar{t}_{gi}]_\times \bar{R}_{gi}\Delta \theta_{gi}\| } \end{align} $$You can find more information about how to solve this factor graph in this post.
# Lie theory SO(3) libraries
from math import sin, cos
import numpy as np
from scipy.linalg import expm, logm
def skew(omega):
return np.array([[0, -omega[2].item(), omega[1].item()],
[omega[2].item(), 0, -omega[0].item()],
[-omega[1].item(), omega[0].item(), 0]])
def unskew(Omega):
return np.array([[Omega[2, 1].item()], [Omega[0, 2].item()], [Omega[1, 0].item()]])
def Exp_SO3(omega):
return expm(skew(omega))
def Log_SO3(R):
return unskew(logm(R))
def Adjoint_SO3(R):
    # For SO(3), the adjoint matrix is the rotation matrix itself.
    return R
def LeftJacobian_SO3(w):
    theta = np.linalg.norm(w)
    if abs(theta) < 1E-8:
        return np.eye(3)
    a = np.ravel(w) / theta   # unit rotation axis as a flat array
    A = skew(a)
    return (sin(theta)/theta)*np.eye(3) + (1 - sin(theta)/theta)*np.outer(a, a) + ((1 - cos(theta))/theta)*A
def LeftJacobianInv_SO3(w):
    theta = np.linalg.norm(w)
    if abs(theta) < 1E-8:
        return np.eye(3)
    a = np.ravel(w) / theta   # unit rotation axis as a flat array
    A = skew(a)
    cot_theta_half = cos(theta/2.0) / sin(theta/2.0)
    return (theta/2.0)*cot_theta_half*np.eye(3) + (1.0 - (theta/2.0)*cot_theta_half)*np.outer(a, a) - (theta/2.0)*A
def RightJacobian_SO3(w):
return LeftJacobian_SO3(-w)
def RightJacobianInv_SO3(w):
return LeftJacobianInv_SO3(-w)
def from_rpy(roll, pitch, yaw):
return rotz(yaw).dot(roty(pitch).dot(rotx(roll)))
def from_rpy_xyz(roll, pitch, yaw, x, y, z):
rot = from_rpy(roll, pitch, yaw)
rot_t = np.concatenate((rot, np.array([[x], [y], [z]])), axis=1)
return np.concatenate((rot_t, np.array([[0, 0, 0, 1]])), axis=0)
def rotx(t):
c = np.cos(t)
s = np.sin(t)
return np.array([[1, 0, 0],
[0, c, -s],
[0, s, c]])
def roty(t):
c = np.cos(t)
s = np.sin(t)
return np.array([[c, 0, s],
[0, 1, 0],
[-s, 0, c]])
def rotz(t):
c = np.cos(t)
s = np.sin(t)
return np.array([[c, -s, 0],
[s, c, 0],
[0, 0, 1]])
# A simple pose graph
np.random.seed(0)
T_gi = from_rpy_xyz(0, 0, 0, 0, 0, 0)
noise = np.random.normal([0, 0, 0, 0, 0, 0],
[1, 0.1, 0.5, 1, 1, 1])
# noise = np.zeros(6)
# NOTE: Exp_SE3 and Log_SE3 are defined in the SE(3) library section below.
T_gi_hat = T_gi.dot(Exp_SE3(noise))
T_gi_gj_hat = from_rpy_xyz(0.1, 0.1, 0.1, 5, 5, 5)
T_gj = T_gi.dot(T_gi_gj_hat.dot(Exp_SE3(noise)))
print('Before optimization, check residuals.')
pose_i_anchor_error = Log_SE3(np.linalg.inv(T_gi_hat).dot(T_gi))
print(f'Pose i anchor observation error: \n {np.linalg.norm(pose_i_anchor_error)}')
transformation_ij_error = Log_SE3(np.linalg.inv(T_gi_gj_hat).dot(np.linalg.inv(T_gi).dot(T_gj)))
print(f'Transformation error: \n {np.linalg.norm(transformation_ij_error)}')
def Optimize(R_gi, t_gi, R_gj, t_gj, R_gi_hat, t_gi_hat, R_gi_gj_hat, t_gi_gj_hat):
    # One Gauss-Newton step for the SO(3)+T(3) pose graph.
    J = np.zeros((3*4, 12))  # Jacobian: 4 residual blocks x 12 state dims
    r = np.zeros((3*4, 1))   # stacked residuals
    ret = np.zeros((12, 1))  # expected state-update shape
theta_1 = Log_SO3(R_gi_hat.T.dot(R_gi))
theta_2 = Log_SO3(R_gi_gj_hat.T.dot(R_gi.T.dot(R_gj)))
r[0:3, :] = t_gi - t_gi_hat
r[3:6, :] = theta_1
r[6:9, :] = theta_2
r[9:12,:] = R_gi.T.dot(t_gj - t_gi) - t_gi_gj_hat
# delta_theta_i, delta_t_i, delta_theta_j, delta_t_j
J[0:3, 3:6] = np.eye(3)
J[3:6, 0:3] = RightJacobianInv_SO3(theta_1)
J[6:9, 0:3] = -RightJacobianInv_SO3(theta_2).dot(R_gj.T.dot(R_gi))
J[6:9, 6:9] = RightJacobianInv_SO3(theta_2)
J[9:12, 0:3] = R_gi.T.dot(skew(t_gj-t_gi)).dot(R_gi)
J[9:12, 3:6] = -R_gi.T
J[9:12, 9:12] = R_gi.T
delta_x = -np.linalg.inv(J.T.dot(J)).dot(J.T.dot(r))
assert(delta_x.shape == ret.shape)
return delta_x
R_gi_hat = T_gi_hat[:3, :3]
t_gi_hat = T_gi_hat[:3, [3]]
R_gi_gj_hat = T_gi_gj_hat[:3, :3]
t_gi_gj_hat = T_gi_gj_hat[:3, [3]]
STEPS = 5
for i in range(STEPS):
R_gi = T_gi[:3, :3]
t_gi = T_gi[:3, [3]]
R_gj = T_gj[:3, :3]
t_gj = T_gj[:3, [3]]
delta = Optimize(R_gi, t_gi, R_gj, t_gj, R_gi_hat, t_gi_hat, R_gi_gj_hat, t_gi_gj_hat)
R_gi = R_gi.dot(Exp_SO3(delta[0:3,:]))
t_gi = t_gi + delta[3:6]
R_gj = R_gj.dot(Exp_SO3(delta[6:9,:]))
t_gj = t_gj + delta[9:12]
T_gi[:3, :3] = R_gi
T_gi[:3, [3]] = t_gi
T_gj[:3, :3] = R_gj
T_gj[:3, [3]] = t_gj
print('\nAfter optimization, check residuals.')
pose_i_anchor_error = Log_SE3(np.linalg.inv(T_gi_hat).dot(T_gi))
print(f'Pose i anchor observation error: \n {np.linalg.norm(pose_i_anchor_error)}')
transformation_ij_error = Log_SE3(np.linalg.inv(T_gi_gj_hat).dot(np.linalg.inv(T_gi).dot(T_gj)))
print(f'Transformation error: \n {np.linalg.norm(transformation_ij_error)}')
SE(3) ¶
In the second pose graph example, we keep rotation and translation together in single SE(3) constraints. In this case, we will leverage Lie theory on SE(3). $$ \begin{align} \Delta \hat{x} &= \underset{\Delta x}{\operatorname{argmin}}{\|f(T_{gi}, T_{gj}, \hat{T}_{gi}, \hat{T}_{ij})\|_\Sigma} \\ \end{align} $$
$$ \begin{align} \Delta \hat{x} &= \underset{\Delta x}{\operatorname{argmin}}{ \|Log(\hat{T}_{gi}^{-1} (\bar{T}_{gi}\boxplus \Delta \xi_{gi}))\| \\ +\|Log(\hat{T}_{ij}^{-1} (\bar{T}_{gi}\boxplus \Delta \xi_{gi})^{-1}(\bar{T}_{gj}\boxplus \Delta \xi_{gj}))\| } \\ &= \underset{\Delta x}{\operatorname{argmin}}{ \|Log(\hat{T}_{gi}^{-1} \bar{T}_{gi}) + J^{-1}_r(Log(\hat{T}_{gi}^{-1} \bar{T}_{gi})) \mathcal{I}_{6\times6}\Delta \xi_{gi}\|\\ +\|Log(\hat{T}_{ij}^{-1} \bar{T}_{gi}^{-1}\bar{T}_{gj}) - J^{-1}_r(Log(\hat{T}_{ij}^{-1} \bar{T}_{gi}^{-1}\bar{T}_{gj}))Ad_{\bar{T}^{-1}_{gj}}Ad_{\bar{T}_{gi}} \Delta \xi_{gi} + J^{-1}_r(Log(\hat{T}_{ij}^{-1} \bar{T}_{gi}^{-1}\bar{T}_{gj}))\mathcal{I}_{6\times6}\Delta\xi_{gj}\| } \end{align} $$
# Lie theory SE(3) libraries
def wedge(x):
    # x = [rho; theta] -> se(3) matrix [[theta]_x, rho; 0, 0];
    # expm of this gives [[Exp(theta), V(theta)*rho], [0, 1]] as in the math above.
    X = np.zeros((4, 4))
    X[:3, :3] = skew(x[3:6])
    X[:3, [3]] = np.reshape(x[0:3], (3, 1))
    return X
def vee(X):
    # Inverse of wedge: recover [rho; theta] from an se(3) matrix.
    x = np.zeros((6, 1))
    x[3:6] = unskew(X[:3, :3])
    x[0:3] = X[:3, [3]]
    return x
def Exp_SE3(x):
return expm(wedge(x))
def Log_SE3(X):
return vee(logm(X))
def Adjoint_SE3(T):
ret = np.zeros((6, 6))
R = T[:3, :3]
t = T[:3, [3]]
ret[:3, :3] = R
ret[:3, 3:6] = skew(t).dot(R)
ret[3:6, 3:6] = R
return ret
def LeftQ(xi):
    w = xi[3:6]
    r = xi[0:3]
    t = np.linalg.norm(w)
    if abs(t) < 1E-8:
        return 0.5*skew(r)   # small-angle limit of Q
    return 0.5*skew(r) \
        + (t-sin(t))/(t**3) * (skew(w).dot(skew(r)) + skew(r).dot(skew(w)) + skew(w).dot(skew(r)).dot(skew(w))) \
        + (t**2+2*cos(t)-2) / (2*t**4) * (skew(w).dot(skew(w)).dot(skew(r)) + skew(r).dot(skew(w)).dot(skew(w)) - 3*skew(w).dot(skew(r)).dot(skew(w))) \
        + (2*t-3*sin(t)+t*cos(t)) / (2*t**5) * (skew(w).dot(skew(r)).dot(skew(w)).dot(skew(w)) + skew(w).dot(skew(w)).dot(skew(r)).dot(skew(w)))
def RightQ(xi):
return LeftQ(-xi)
def RightJacobian_SE3(xi):
w = xi[3:6]
ret = np.zeros((6,6))
ret[0:3, 0:3] = RightJacobian_SO3(w)
ret[0:3, 3:6] = RightQ(xi)
ret[3:6, 3:6] = ret[0:3, 0:3]
return ret
def RightJacobianInv_SE3(xi):
w = xi[3:6]
ret = np.zeros((6,6))
ret[0:3, 0:3] = RightJacobianInv_SO3(w)
ret[0:3, 3:6] = -ret[0:3, 0:3].dot(RightQ(xi)).dot(ret[0:3, 0:3])
ret[3:6, 3:6] = ret[0:3, 0:3]
return ret
def LeftJacobian_SE3(xi):
    return RightJacobian_SE3(-xi)
def LeftJacobianInv_SE3(xi):
return RightJacobianInv_SE3(-xi)
# A simple pose graph
np.random.seed(0)
T_gi = from_rpy_xyz(0, 0, 0, 0, 0, 0)
noise = np.random.normal([0, 0, 0, 0, 0, 0],
[1, 0.1, 0.5, 1, 1, 1])
# noise = np.zeros(6)
T_gi_hat = T_gi.dot(Exp_SE3(noise))
T_gi_gj_hat = from_rpy_xyz(0.1, 0.1, 0.1, 5, 5, 5)
T_gj = T_gi.dot(T_gi_gj_hat.dot(Exp_SE3(noise)))
print('Before optimization, check residuals.')
pose_i_anchor_error = Log_SE3(np.linalg.inv(T_gi_hat).dot(T_gi))
print(f'Pose i anchor observation error: \n {np.linalg.norm(pose_i_anchor_error)}')
transformation_ij_error = Log_SE3(np.linalg.inv(T_gi_gj_hat).dot(np.linalg.inv(T_gi).dot(T_gj)))
print(f'Transformation error: \n {np.linalg.norm(transformation_ij_error)}')
def Optimize(T_gi, T_gj, T_gi_hat, T_gi_gj_hat):
    # One Gauss-Newton step for the SE(3) pose graph.
    J = np.zeros((12, 12))   # Jacobian: 2 residual blocks x 12 state dims
    r = np.zeros((12, 1))    # stacked residuals
    ret = np.zeros((12, 1))  # expected state-update shape
r1 = Log_SE3(np.linalg.inv(T_gi_hat).dot(T_gi))
r2 = Log_SE3(np.linalg.inv(T_gi_gj_hat).dot(np.linalg.inv(T_gi).dot(T_gj)))
r[0:6] = r1
r[6:12] = r2
# delta_i, delta_j
J[0:6, 0:6] = RightJacobianInv_SE3(r1)
J[6:12, 0:6] = -RightJacobianInv_SE3(r2).dot(Adjoint_SE3(np.linalg.inv(T_gj).dot(T_gi)))
J[6:12, 6:12] = RightJacobianInv_SE3(r2)
delta_x = -np.linalg.inv(J.T.dot(J)).dot(J.T.dot(r))
assert(delta_x.shape == ret.shape)
return delta_x
STEPS = 5
for i in range(STEPS):
delta = Optimize(T_gi, T_gj, T_gi_hat, T_gi_gj_hat)
T_gi = T_gi.dot(Exp_SE3(delta[0:6,:]))
T_gj = T_gj.dot(Exp_SE3(delta[6:12,:]))
print('\nAfter optimization, check residuals.')
pose_i_anchor_error = Log_SE3(np.linalg.inv(T_gi_hat).dot(T_gi))
print(f'Pose i anchor observation error: \n {np.linalg.norm(pose_i_anchor_error)}')
transformation_ij_error = Log_SE3(np.linalg.inv(T_gi_gj_hat).dot(np.linalg.inv(T_gi).dot(T_gj)))
print(f'Transformation error: \n {np.linalg.norm(transformation_ij_error)}')
Summary ¶
First-order approximation: $$ \begin{align} Exp(\phi + \delta\phi) &\approx Exp(\phi)Exp(J_r(\phi)\delta\phi) \\ Exp(\phi + J_r^{-1}(\phi)\delta\phi) &\approx Exp(\phi)Exp(J_r(\phi)J_r^{-1}(\phi)\delta\phi) \\ Log(Exp(\phi)Exp(\delta\phi)) &\approx \phi + J_r^{-1}(\phi)\delta\phi \end{align} $$
$$ \begin{align} Exp(Ad_{\mathcal{X}}\,{}^{\mathcal{X}}\tau) \circ \mathcal{X} &= \mathcal{X} \circ Exp({}^{\mathcal{X}}\tau) \\ Exp(R \phi) R &= R Exp(\phi) \\ Exp(Ad_{\mathcal{X}^{-1}}\,{}^{\mathcal{X}}\tau)\circ \mathcal{X}^{-1} &= \mathcal{X}^{-1} \circ Exp({}^{\mathcal{X}}\tau) \\ Exp({}^{\mathcal{X}}\tau)\circ \mathcal{X} &= \mathcal{X} \circ Exp(Ad_{\mathcal{X}^{-1}}\,{}^{\mathcal{X}}\tau) \\ Exp(\phi) R &= R Exp(R^T \phi) \end{align} $$
References:
- Sola et al., A micro Lie theory for state estimation in robotics
- Barfoot, State Estimation for Robotics
- Mangelson et al., Characterizing the Uncertainty of Jointly Distributed Poses in the Lie Algebra
- https://gtsam.org/2021/02/23/uncertainties-part3.html