This folder contains the Jupyter Notebooks and scripts for the LearnOpenCV article - The Annotated NeRF: Training on Custom Dataset from Scratch in Pytorch.

After downloading the dataset, perform the following steps.

First, convert the video into frames. This command extracts frames at the given fps and stores them inside the output_dir:
$ python video2imgs.py --video_path captain_america_v1.mp4 --output_dir /path/to/dataset --fps 5
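The script video2imgs.py isn't reproduced here, but a minimal sketch of what such frame extraction does, assuming an OpenCV-based implementation, could look like this:

```python
import os
import cv2  # pip install opencv-python

def video_to_frames(video_path, output_dir, fps=5):
    """Sample frames from a video at roughly `fps` frames per second."""
    os.makedirs(output_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or fps  # fall back if fps is unknown
    step = max(int(round(native_fps / fps)), 1)    # keep every `step`-th frame
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(os.path.join(output_dir, f"{saved:04d}.png"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```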
Next, run COLMAP and convert the data into the LLFF format. The factor argument controls image downsampling and can generally be 2-8, depending on the original image size. The same factor value must also be set in the config.txt file.
$ python imgs2poses.py --data_dir "/path/to/dataset" --factor 4
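imgs2poses.py comes from the LLFF tooling and writes a poses_bounds.npy file (plus downsampled image folders such as images_4) into the dataset directory. A quick sanity check of that output, assuming the standard LLFF layout, looks like this:

```python
import numpy as np

# poses_bounds.npy: one row per image, 17 values each --
# a flattened 3x5 matrix (3x4 camera-to-world pose plus a
# [height, width, focal] column) followed by near/far depth bounds.
data = np.load("/path/to/dataset/poses_bounds.npy")
poses = data[:, :15].reshape(-1, 3, 5)
bounds = data[:, 15:]

print("images:", data.shape[0])
print("H, W, focal:", poses[0, :, 4])
print("near/far range:", bounds.min(), bounds.max())
```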
After preparing the dataset, update the config file with the prepared dataset directory, the factor parameter, and so on.
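Since the code is adapted from nerf-pytorch, the config follows its key = value layout. The values below are illustrative only; expname and datadir in particular are placeholders for your own setup, and the exact keys in your configs/<dataset_name>.txt may differ:

```
expname = captain_america
basedir = ./logs
datadir = /path/to/dataset
dataset_type = llff

# must match the --factor passed to imgs2poses.py
factor = 4
llffhold = 8

N_rand = 1024
N_samples = 64
N_importance = 64
use_viewdirs = True
raw_noise_std = 1e0
```

Once the config is ready, start training with: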
$ python run_nerf.py --config configs/<dataset_name>.txt
After training, you can run inference with the command below; it generates both the disparity maps and a 360-degree rendered video.
$ python run_nerf.py --config configs/<dataset_name>.txt --render_only
At the end of training, the model weights are stored in the logs folder. To run inference and extract a mesh from the trained model, use the extract_mesh.ipynb notebook.
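The notebook's exact contents aren't reproduced here, but a common recipe for extracting a mesh from a trained NeRF is to query the density field on a regular 3D grid and run marching cubes. A minimal sketch, assuming a query_density stand-in that you would replace with the trained model:

```python
import numpy as np
import torch
from skimage import measure  # pip install scikit-image
import trimesh               # pip install trimesh

def query_density(points: torch.Tensor) -> torch.Tensor:
    # Stand-in density field (a solid sphere) so this sketch runs end to end.
    # Replace with the trained NeRF MLP loaded from the logs folder, returning
    # raw sigma for each xyz point.
    return (100.0 * (0.8 - points.norm(dim=-1))).clamp(min=0.0)

N = 128                          # grid resolution
t = np.linspace(-1.2, 1.2, N)    # bounds depend on your scene scale
grid = np.stack(np.meshgrid(t, t, t, indexing="ij"), -1).reshape(-1, 3)

sigmas = []
with torch.no_grad():
    for chunk in np.array_split(grid, 64):  # query in chunks to limit memory
        pts = torch.from_numpy(chunk).float()
        sigmas.append(query_density(pts).cpu().numpy())
sigma = np.concatenate(sigmas).reshape(N, N, N)

# Marching cubes on the density field; the iso-level is a tunable threshold.
# Note: vertices come back in grid-index coordinates; rescale by the grid
# spacing if you need world units.
verts, faces, normals, _ = measure.marching_cubes(sigma, level=50.0)
trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals).export("mesh.ply")
```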
- The base code is adapted from the nerf-pytorch repository.