NvEM

Code of the paper: Neighbor-view Enhanced Model for Vision and Language Navigation (ACM MM 2021 Oral)
Dong An, Yuankai Qi, Yan Huang, Qi Wu, Liang Wang, Tieniu Tan

[Paper] [GitHub]

Motivation

Most existing works represent a navigable candidate by the feature of the single view in which the candidate lies. However, an instruction may refer to landmarks outside that single view, which can cause the textual-visual matching of existing methods to fail. In this work, we propose a multi-module Neighbor-View Enhanced Model (NvEM) that adaptively incorporates visual contexts from neighbor views for better textual-visual matching.

Prerequisites

Installation

Install the Matterport3D Simulator. Note that our code is based on Simulator-v2.

The package versions used in our environment are listed in requirements.txt. In particular, we use the following (a minimal setup sketch follows this list):

  • Python 3.6.9
  • NumPy 1.19.1
  • OpenCV 4.1.0.25
  • PyTorch 0.4.0
  • Torchvision 0.1.8
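
For reference, a minimal environment setup could look like the sketch below. The environment name nvem is our own placeholder, and requirements.txt is the file shipped in this repository:

# Create and activate a Python 3.6 environment (the name "nvem" is arbitrary)
conda create -n nvem python=3.6
conda activate nvem
# Install the package versions listed in this repository
pip install -r requirements.txt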

Data Preparation

Please follow the instructions below to prepare the data in the corresponding directories:

Trained Network Weights

R2R Navigation

Please read Peter Anderson's VLN paper for details of the R2R navigation task.

Our code follows the code structure of EnvDrop and Recurrent VLN-BERT.

Reproduce Testing Results

To replicate the performance reported in our paper, load the trained network weights and run validation:

bash run/valid.bash 0

Here is the full log:

Loaded the listener model at iter 119600 from snap/NvEM_bt/state_dict/best_val_unseen
Env name: val_seen, nav_error: 3.4389, oracle_error: 2.1848, steps: 5.5749, lengths: 11.2468, success_rate: 0.6866, oracle_rate: 0.7640, spl: 0.6456
Env name: val_unseen, nav_error: 4.2603, oracle_error: 2.8130, steps: 6.3585, lengths: 12.4147, success_rate: 0.6011, oracle_rate: 0.6790, spl: 0.5497
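
As a rough illustration, the commands below show one way to sanity-check the setup before validation; the checkpoint path is taken from the log above, and the trailing 0 is presumably the GPU index:

# Confirm the downloaded checkpoint sits where the script expects it
ls snap/NvEM_bt/state_dict/best_val_unseen
# Run validation (the argument is assumed to select the GPU)
bash run/valid.bash 0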

Training

Navigator

To train the network from scratch, first train a Navigator on the R2R training split:

bash run/follower.bash 0

The trained Navigator will be saved under snap/.

Speaker

You also need to train a Speaker for augmented training:

bash run/speaker.bash 0

The trained Speaker will be saved under snap/.

Augmented Navigator

Finally, continue training the Navigator on a mixture of the original and augmented data:

bash run/follower_bt.bash 0
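
Putting the three stages together, a from-scratch training run might look like the sketch below; GPU 0 is assumed, and each script saves its checkpoints under snap/:

# 1. Train the Navigator on the original R2R training split
bash run/follower.bash 0
# 2. Train the Speaker used for augmented training
bash run/speaker.bash 0
# 3. Continue training the Navigator on original + augmented data
bash run/follower_bt.bash 0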

Citation

If you find this repository helpful, please consider citing our paper:

@inproceedings{an2021neighbor,
  title={Neighbor-view enhanced model for vision and language navigation},
  author={An, Dong and Qi, Yuankai and Huang, Yan and Wu, Qi and Wang, Liang and Tan, Tieniu},
  booktitle={Proceedings of the 29th ACM International Conference on Multimedia},
  pages={5101--5109},
  year={2021}
}
