Comyco

July 22, 2023: For Comyco with linear-based QoE, please refer to https://github.com/godka/comyco-lin.

This is a simple TensorFlow implementation of

  • Comyco: Quality-Aware Adaptive Video Streaming via Imitation Learning
  • PiTree: Practical Implementation of ABR Algorithms Using Decision Trees

Cite

If you find this work useful, please cite the conference version:

@inproceedings{huang2019comyco,
  title={Comyco: Quality-Aware Adaptive Video Streaming via Imitation Learning},
  author={Huang, Tianchi and Zhou, Chao and Zhang, Rui-Xiao and Wu, Chenglei and Yao, Xin and Sun, Lifeng},
  booktitle={Proceedings of the 27th ACM International Conference on Multimedia},
  pages={429--437},
  year={2019},
  organization={ACM}
}

or the arXiv version:

@article{huang2019comyco,
  title={Comyco: Quality-Aware Adaptive Video Streaming via Imitation Learning},
  author={Huang, Tianchi and Zhou, Chao and Zhang, Rui-Xiao and Wu, Chenglei and Yao, Xin and Sun, Lifeng},
  journal={arXiv preprint arXiv:1908.02270},
  year={2019}
}

What's Comyco

Comyco is a video quality-aware ABR approach that substantially improves on prior learning-based methods by tackling two of their issues: low sample efficiency and the lack of video quality information. Comyco trains its policy by imitating expert trajectories given by an instant solver, which not only avoids redundant exploration but also makes better use of the collected samples. Meanwhile, Comyco picks the chunk with the higher perceptual video quality rather than the higher bitrate. To achieve this, we construct Comyco's neural network architecture, video datasets, and QoE metrics around video quality features.
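In other words, the policy network is trained with supervised updates toward the expert's action rather than with a reinforcement signal. Below is a minimal NumPy sketch of such an objective, assuming a cross-entropy loss toward the expert-chosen action plus an entropy regularizer; the weight beta and the exact form are illustrative, not the repo's actual loss.

import numpy as np

def imitation_loss(policy_probs, expert_action, beta=0.1):
    # Cross-entropy toward the expert's action; the entropy bonus
    # (hypothetical weight beta) keeps the policy exploratory.
    eps = 1e-8
    ce = -np.log(policy_probs[expert_action] + eps)
    entropy = -np.sum(policy_probs * np.log(policy_probs + eps))
    return ce - beta * entropy

# Example: 6 bitrate levels, the instant solver picked level 2.
probs = np.array([0.05, 0.10, 0.60, 0.15, 0.05, 0.05])
print(imitation_loss(probs, expert_action=2))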

Quick Start

Steps to train and test Comyco:

Requirements

This work targets Python 3.6+, though Python 2.7 also works. Run the following command to install the dependencies.

pip install numpy tensorflow tflearn scikit-learn swig

Instant Solver

Please note that the instant solver is written in C++. To build this module, you need to install g++ first. We provide pre-built modules for two platforms: Windows (_envcpp.pyd) and Linux (_envcpp.so). The source code of the instant solver lives in cpp-windows/ and cpp-linux/.

For Windows users, please install SWIG and Visual Studio 2017, then run:

swig -c++ -python abr.i

For Linux users, please install SWIG and run the following (adjust the Python include and library paths to match your Python version):

swig -c++ -python abr.i

g++ abr_wrap.cxx env.cpp -fPIC -shared -I /usr/include/python2.7/ -L /usr/lib/python2.7 -o _envcpp.so -O4
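Once built, a quick smoke test should succeed from the directory containing the generated files. This assumes the SWIG module in abr.i is declared as envcpp, which we infer from the _envcpp.so target:

# Hypothetical smoke test: envcpp is the assumed SWIG module name.
import envcpp
print("instant solver module loaded")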

Note: Windows is the recommended platform.

Pretrained model

If you have trouble building the environment, we also publish pretrained models for Comyco and Comyco-Pitree in results/pretrained/.

Run the following commands to evaluate Comyco:

cd src

python rl_test.py ../results/pretrained/pretrain.ckpt

And for Comyco-Pitree:

cd src

python dt_test.py ../results/pretrained/pretrain-pitree.model

How to train Comyco

We provide a relatively simple Comyco that uses i) the same network traces as Pensieve's public repo; ii) only the Envivio video dataset rather than the 86-video dataset; iii) a single-agent setup.

The detailed implementation is in libcomyco.py. Run the following commands to retrain Comyco:

cd src/

python train.py

During training (10 epochs), the model is automatically validated and the results are logged to results.log.

We also provide several ABR baselines, including Pensieve retrained via QoE_v (excluded from this repo due to its size), Supervised, and RobustMPC (cpp-linux/mpc.cpp). Training the single-agent Comyco takes only about 20 - 40 minutes.
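For reference, QoE_v scores a session by per-chunk perceptual quality rather than bitrate. The sketch below shows a quality-based QoE of this general shape; all coefficients are hypothetical placeholders, not the paper's actual weights.

import numpy as np

def qoe_v(vmaf, rebuf_sec, alpha=1.0, beta=25.0, gamma=0.5):
    # Reward per-chunk VMAF, penalize rebuffering time and
    # chunk-to-chunk quality switches (hypothetical weights).
    vmaf = np.asarray(vmaf, dtype=float)
    quality = alpha * vmaf.sum()
    rebuffer = beta * float(rebuf_sec)
    smoothness = gamma * np.abs(np.diff(vmaf)).sum()
    return quality - rebuffer - smoothness

print(qoe_v(vmaf=[60, 75, 72, 80], rebuf_sec=0.5))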

How to train Comyco-Pitree

We also implement Comyco-Pitree following the original PiTree paper. Specifically, we use only 4 critical features: the last chunk's VMAF score, current buffer occupancy, average throughput, and throughput standard deviation. We train Comyco-Pitree by directly imitating the expert answers generated by the instant solver. Moreover, we leverage a simple yet effective entropy trick for better exploration (entropy < 0.3: pick an action uniformly at random; entropy >= 0.3: roulette-wheel selection), sketched below.
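A minimal sketch of that entropy gate, using the 0.3 threshold from above (the function and variable names are ours, not the repo's):

import numpy as np

def explore(policy_probs, threshold=0.3, rng=None):
    # A low-entropy (near-deterministic) policy is forced to pick a
    # uniformly random action; otherwise we sample proportionally to
    # the probabilities (roulette-wheel selection).
    rng = rng or np.random.default_rng()
    p = np.asarray(policy_probs, dtype=float)
    entropy = -np.sum(p * np.log(p + 1e-8))
    if entropy < threshold:
        return int(rng.integers(len(p)))
    return int(rng.choice(len(p), p=p / p.sum()))

print(explore([0.97, 0.01, 0.01, 0.01]))  # low entropy -> random action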

Surprisingly, Comyco-Pitree achieves results comparable to Pensieve. Run the following commands to retrain Comyco-Pitree; the trained model will be stored in src/pitree/.

cd src/
python train_dt.py

Unlike Comyco, training Comyco-Pitree takes about 10 - 12 hours. A visualization of the resulting tree is shown below.

[Figure: visualization of the trained Comyco-Pitree decision tree]

Comyco-Pitree Code

You can also run the following commands to convert the tree model to Python code:

cd src/

python code.py

The Python code generated from the pretrained Comyco-Pitree begins as follows:

# src/tree/comyco-pitree.py
def predict(last_vmaf, buf, thr_avg, thr_std):
  if thr_avg <= 0.1278446540236473:
    if thr_avg <= 0.07279053330421448:
      if buf <= 5.917577266693115:
        if thr_std <= 0.006877874955534935:
          if thr_avg <= 0.048280853778123856:
            if thr_std <= 0.0019393544062040746:
              return [1723, 32, 5, 0, 0, 0]
...
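Our reading (an assumption, not documented in the repo) is that each leaf returns per-action sample counts accumulated during training, so serving code would pick the argmax:

import numpy as np

# Hypothetical usage of the generated predict(): treat the leaf
# vector as per-bitrate counts and select the most frequent action.
counts = predict(last_vmaf=80.0, buf=4.0, thr_avg=0.05, thr_std=0.001)
action = int(np.argmax(counts))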

Feb. 5th Update

Video Description Dataset

To improve Comyco's generalization ability, we propose a video quality DASH dataset involving movies, sports, TV shows, games, news, and music videos. Specifically, we first collect video clips at the highest available resolution from YouTube, then use FFmpeg to encode each video with the H.264 codec and MP4Box to dashify the videos according to the encoding ladder. The bitrate ladder is {235, 375, 560, 750, 1050, 1750, 2350, 3000, 4300} kbps.

Each chunk is 4 seconds long. During transcoding, for each video we measure the VMAF, VMAF-4K, and VMAF-phone metrics against a 1920 × 1080 reference. In total, the dataset contains 86 complete videos, with 394,551 video chunks and 1,578,204 video quality assessments.
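A minimal sketch of the encode-and-dashify step, assuming stock ffmpeg and MP4Box flags; the dataset's actual encoding parameters (profiles, per-rung resolutions, and so on) are not reproduced here.

import subprocess

LADDER_KBPS = [235, 375, 560, 750, 1050, 1750, 2350, 3000, 4300]

def transcode_and_dash(src, name):
    # Encode one source clip at every ladder rung with H.264, then
    # dashify the renditions into 4-second segments with MP4Box.
    renditions = []
    for kbps in LADDER_KBPS:
        out = "%s_%dk.mp4" % (name, kbps)
        subprocess.run(["ffmpeg", "-y", "-i", src,
                        "-c:v", "libx264", "-b:v", "%dk" % kbps, out],
                       check=True)
        renditions.append(out)
    # MP4Box's -dash argument is the segment duration in milliseconds.
    subprocess.run(["MP4Box", "-dash", "4000", "-out", name + ".mpd"]
                   + renditions, check=True)

transcode_and_dash("clip.mp4", "clip")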

Contact Us

Please feel free to let me know if you have any questions.

Contact: htc17@mails.tsinghua.edu.cn, or create an issue.
