This repository contains monocular visual odometry algorithms for rendering the camera trajectory onto a provided video. The main task of this repo is to estimate the ego trajectory with respect to the first frame of the video and then render the future trajectory onto the reference frames, which helps annotate the behavior of self-driving cars.
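Monocular VO typically estimates a relative pose between consecutive frames and chains those poses together to express every camera pose with respect to the first frame. The sketch below illustrates only that chaining convention (T_0k = T_0(k-1) · T_(k-1)k) with 4x4 homogeneous transforms in pure Python; it is an assumption about the standard pipeline, not this repository's actual code:

```python
# Sketch: accumulate per-frame relative poses into poses w.r.t. frame 0.
# Illustrative only -- this is the textbook convention, not code from this repo.

def matmul4(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def identity4():
    return [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

def translation(tx, ty, tz):
    """4x4 transform with identity rotation and the given translation."""
    t = identity4()
    t[0][3], t[1][3], t[2][3] = tx, ty, tz
    return t

def chain_poses(relative_poses):
    """Given T_(k-1)k for k = 1..N, return T_0k for k = 0..N."""
    poses = [identity4()]
    for rel in relative_poses:
        poses.append(matmul4(poses[-1], rel))
    return poses

# Two unit steps forward along z put the camera 2 units from frame 0.
poses = chain_poses([translation(0, 0, 1), translation(0, 0, 1)])
print(poses[-1][2][3])  # → 2.0
```

With the poses expressed in the first frame's coordinate system, the poses after the current frame are exactly the "future trajectory" that gets drawn back onto earlier frames.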
- install miniconda
- create environment
- $conda create --name mvo python=3.8
- install requirements
- $pip install -r requirements.txt
- install ffmpeg on your OS
- Select a video (.mp4) to process
- It should be a video from a front-facing camera mounted in a car
- The camera intrinsics must be known and specified in the configuration yaml
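For reference, a camera-intrinsics section of the yaml could look like the fragment below. The field names and values here are hypothetical placeholders; check config/default.yaml for the actual keys your setup expects:

```yaml
# Hypothetical layout -- see config/default.yaml for the real field names.
video_path: /path/to/video.mp4
camera:
  fx: 1000.0   # focal length in pixels (x)
  fy: 1000.0   # focal length in pixels (y)
  cx: 960.0    # principal point x (image center for a 1920x1080 video)
  cy: 540.0    # principal point y
```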
This pipeline takes a yaml configuration file. An example is provided at config/default.yaml.
- copy config/default.yaml and rename it [Optional]
- change the video path in the yaml file
- change the camera intrinsics if needed
- run the following command:
$python main.py -c [your yaml file]
- for example
$python main.py -c config/default.yaml
- A new folder called odometry_[video name] is created in the same folder as the video. It contains 5 items:
  - `frames`: a folder containing all the original frames from the video
  - `fps.txt`: the fps of the video
  - `mvo.hdf5`: the estimated camera poses along with the camera intrinsics
  - `overlaid-[video name].mp4`: the video (between start_time and end_time) with the estimated trajectory
  - `[video name]-overlaid-frames`: the frames (between start_time and end_time) with the estimated trajectory
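Overlaying the trajectory amounts to projecting each future 3D camera position into the image with the pinhole model defined by the intrinsics above. A minimal sketch of that projection step (the intrinsics values are made up; this is not the repo's rendering code):

```python
# Project 3D points (camera coordinates, z pointing forward) into pixels
# with the pinhole model: u = fx * x / z + cx, v = fy * y / z + cy.
# The intrinsics below are illustrative, not taken from this repository.

def project(points, fx, fy, cx, cy):
    """Return (u, v) pixel coordinates for each (x, y, z) point with z > 0."""
    pixels = []
    for x, y, z in points:
        if z <= 0:          # behind the camera: cannot be drawn, skip
            continue
        pixels.append((fx * x / z + cx, fy * y / z + cy))
    return pixels

# A point straight ahead of the camera lands on the principal point.
print(project([(0.0, 0.0, 5.0)], fx=1000, fy=1000, cx=960, cy=540))
# → [(960.0, 540.0)]
```

Drawing those pixel coordinates onto each reference frame (e.g. as a polyline) yields the overlaid frames and video described above.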
- The examples can be found here
- run the tests: $pytest