
Robust Loss Functions for Least-Squares Optimization

In most robotics applications, maximizing the posterior distribution is converted into a least-squares problem under the assumption that residuals follow a Gaussian distribution. In practice, however, errors are not Gaussian distributed, which makes the naive least-squares formulation sensitive to outliers with large residuals. In this post, we will first explore robust kernel approaches, then try to model residuals using Gaussian mixtures.
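As a concrete example of a robust kernel, the Huber loss stays quadratic for small residuals but grows only linearly for large ones. A minimal NumPy sketch, with `delta` as an assumed transition threshold (the names here are illustrative, not from any particular library):

```python
import numpy as np

def huber_loss(r, delta=1.0):
    """Huber kernel: quadratic for |r| <= delta, linear beyond it."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def huber_weight(r, delta=1.0):
    """Weight for iteratively reweighted least squares (IRLS):
    1 for small residuals, delta/|r| for large ones, so outliers
    are down-weighted instead of dominating the sum of squares."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / a)
```

The weight form is what an IRLS-style solver would multiply each residual by, which is how the robust kernel enters an otherwise standard least-squares iteration.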

Learn more about robust kernels for least-squares problems

Implicit Mapping with DNN

A tutorial on representing maps using deep neural networks.

Learn more about implicit mapping

Lie Theory for State Estimation in Robotics

A tutorial on Lie theory as used in state estimation for robotics.

Learn more about Lie theory for state estimation in robotics

Factor Graph

A factor graph is a convenient representation for optimization problems: it allows us to specify a joint density as a product of factors. It can be used to specify any function $\Phi(X)$ over a set of variables $X$, not just probability densities, though in SLAM we normally use Gaussian distributions as the factor functions.
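The product-of-factors idea can be sketched in a few lines. Here `gaussian_factor` and `evaluate` are hypothetical names for this sketch, not an existing library API:

```python
import numpy as np

def gaussian_factor(keys, mean, info):
    """A factor over a subset of the variables: an unnormalized Gaussian
    exp(-0.5 * (x - mean)^T info (x - mean)) on the stacked values."""
    def phi(values):
        x = np.array([values[k] for k in keys])
        e = x - mean
        return np.exp(-0.5 * e @ info @ e)
    return phi

def evaluate(factors, values):
    """Phi(X): the product of all factor values at the assignment X."""
    p = 1.0
    for phi in factors:
        p *= phi(values)
    return p
```

Each factor only touches the variables it is connected to, which is exactly the sparsity that SLAM back-ends exploit when maximizing $\Phi(X)$.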

Learn more about factor graph

Inertial Navigation System

A tutorial on building an INS using an IMU.

Learn more about Inertial Navigation System

Expectation Maximization

The expectation–maximization (EM) algorithm is an iterative method for finding (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that depend on unobserved latent variables.
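As an illustration, a minimal EM loop for a two-component 1D Gaussian mixture, where the latent variables are the unobserved component assignments (the initialization and the fixed component count are assumptions of this sketch):

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1D Gaussian mixture."""
    # Crude initialization: one component at each end of the data.
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility r[i, k] of component k for point i.
        d = x[:, None] - mu[None, :]
        lik = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from r.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        var = (r * d**2).sum(axis=0) / nk + 1e-6
    return w, mu, var
```

Each iteration is guaranteed not to decrease the data likelihood, which is why EM converges to a local maximum.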

Learn more about Expectation Maximization

Machine Learning Notes

This is a collection of machine learning study notes.

Learn more about ML

Robots, Robots, Everywhere!

On the ground, in the air, robots, robots, everywhere! Up in space, beneath the seas, robots make discoveries.

Learn more about robots