# YOLOX
**Repository Path**: cmfighting/YOLOX
## Basic Information
- **Project Name**: YOLOX
- **Description**: YOLOX, MegEngine version
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-09-26
- **Last Updated**: 2021-09-26
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README

## Introduction
YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities.
For more details, please refer to our [report on arXiv](https://arxiv.org/abs/2107.08430).
This repo is the [MegEngine](https://github.com/MegEngine/MegEngine) implementation of YOLOX; there is also a [PyTorch implementation](https://github.com/Megvii-BaseDetection/YOLOX).
## Updates!!
* 【2021/08/05】 We release MegEngine version YOLOX.
## Coming soon
- [ ] Faster YOLOX training speed.
- [ ] More models for the MegEngine version.
- [ ] AMP training with MegEngine.
## Benchmark
#### Light Models.
| Model | size | mAP<sup>val</sup><br>0.5:0.95 | Params<br>(M) | FLOPs<br>(G) | weights |
| ------------------------------------------ | :--: | :---------------------: | :-----------: | :----------: | :----------------------------------------------------------: |
| [YOLOX-Tiny](./exps/default/yolox_tiny.py) | 416 | 32.2 | 5.06 | 6.45 | [github](https://github.com/MegEngine/YOLOX/releases/download/0.0.1/yolox_tiny.pkl) |
#### Standard Models.
Coming soon!
## Quick Start
#### Installation
Step1. Install YOLOX.
```shell
git clone git@github.com:MegEngine/YOLOX.git
cd YOLOX
pip3 install -U pip && pip3 install -r requirements.txt
pip3 install -v -e . # or python3 setup.py develop
```
Step2. Install [pycocotools](https://github.com/cocodataset/cocoapi).
```shell
pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
```
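Optionally, verify that everything imports cleanly. A minimal sanity check, assuming the package installs under the name `yolox` as the PyTorch repo does:
```python
# Quick check that the installation succeeded.
import megengine
import yolox  # package name assumed to match the PyTorch repo

print("MegEngine", megengine.__version__)
```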
#### Demo
Step1. Download a pretrained model from the benchmark table.
Step2. Use either `-n` or `-f` to specify your detector's config. For example:
```shell
python tools/demo.py image -n yolox-tiny -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]
```
or
```shell
python tools/demo.py image -f exps/default/yolox_tiny.py -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]
```
Demo for video:
```shell
python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pkl --path /path/to/your/video --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]
```
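The CLI above wraps ordinary Python calls. Loading a pretrained model programmatically might look like the following sketch, assuming the MegEngine port keeps the PyTorch version's `get_exp` helper and checkpoint layout (both are assumptions):
```python
import megengine as mge
from yolox.exp import get_exp  # assumed to mirror the PyTorch helper

# Build YOLOX-Tiny from its experiment config and load the released weights.
exp = get_exp(exp_name="yolox-tiny")
model = exp.get_model()
ckpt = mge.load("/path/to/your/yolox_tiny.pkl")
model.load_state_dict(ckpt["model"])  # checkpoint layout is an assumption
model.eval()
```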
#### Reproduce our results on COCO
Step1. Prepare COCO dataset
```shell
cd <YOLOX_HOME>
ln -s /path/to/your/COCO ./datasets/COCO
```
Step2. Reproduce our results on COCO by specifying `-n`:
```shell
python tools/train.py -n yolox-tiny -d 8 -b 128
```
* `-d`: number of GPU devices
* `-b`: total batch size; the recommended value for `-b` is num_gpus * 8
When using `-f`, the above commands are equivalent to:
```shell
python tools/train.py -f exps/default/yolox_tiny.py -d 8 -b 128
```
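For reference, such an experiment file is a small Python class. A minimal sketch, assuming the MegEngine port keeps the PyTorch version's `Exp` base class; the values mirror the YOLOX-Tiny row of the benchmark table and are illustrative:
```python
# Sketch of an experiment file selected via -f.
from yolox.exp import Exp as BaseExp

class Exp(BaseExp):
    def __init__(self):
        super().__init__()
        # Depth/width multipliers shrink the default CSPDarknet backbone.
        self.depth = 0.33
        self.width = 0.375
        # Tiny models train and evaluate at 416x416.
        self.input_size = (416, 416)
        self.test_size = (416, 416)
```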
#### Evaluation
We support batch testing for fast evaluation:
```shell
python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 64 -d 8 --conf 0.001 [--fuse]
```
* `--fuse`: fuse conv and bn layers
* `-d`: number of GPUs used for evaluation. By default, all available GPUs are used.
* `-b`: total batch size across all GPUs
To reproduce the speed test, we use the following command:
```shell
python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 1 -d 1 --conf 0.001 --fuse
```
#### Tutorials
* [Training on custom data](docs/train_custom_data.md).
## MegEngine Deployment
[MegEngine in C++](./demo/MegEngine/cpp)
#### Dump mge file
**NOTE**: the resulting model is dumped with `optimize_for_inference` and `enable_fuse_conv_bias_nonlinearity`.
```shell
python3 tools/export_mge.py -n yolox-tiny -c yolox_tiny.pkl --dump_path yolox_tiny.mge
```
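Under the hood, the export presumably traces the network and dumps it with MegEngine's inference optimizations. A minimal sketch of that flow, assuming the model is built and loaded as in the demo section:
```python
import numpy as np
import megengine as mge
from megengine import jit
from yolox.exp import get_exp  # assumed to mirror the PyTorch helper

# Build and load the model (checkpoint layout is an assumption).
exp = get_exp(exp_name="yolox-tiny")
model = exp.get_model()
model.load_state_dict(mge.load("yolox_tiny.pkl")["model"])
model.eval()

# Trace the forward pass as a static graph, then dump it with the
# optimizations named in the NOTE above (conv + bias + activation fusion).
@jit.trace(symbolic=True, capture_as_const=True)
def infer(data):
    return model(data)

infer(mge.tensor(np.random.random((1, 3, 416, 416)).astype(np.float32)))
infer.dump(
    "yolox_tiny.mge",
    arg_names=["data"],
    optimize_for_inference=True,
    enable_fuse_conv_bias_nonlinearity=True,
)
```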
### Benchmark
* Model Info: yolox-s @ input(1,3,640,640)
* Testing Devices
* `x86_64 -- Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz`
* `AArch64 -- Xiaomi Mi 9 phone`
* `CUDA -- 1080TI @ cuda-10.1-cudnn-v7.6.3-TensorRT-6.0.1.5.sh @ Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz`
| megengine@tag1.5 +fastrun +weight\_preprocess (msec) | 1 thread | 2 threads | 4 threads | 8 threads |
| ---------------------------------------------------- | -------- | -------- | -------- | -------- |
| x86\_64(fp32) | 516.245 | 318.29 | 253.273 | 222.534 |
| x86\_64(fp32+chw88) | 362.020 | NONE | NONE | NONE |
| aarch64(fp32+chw44) | 555.877 | 351.371 | 242.044 | NONE |
| aarch64(fp16+chw) | 439.606 | 327.356 | 255.531 | NONE |

| CUDA @ CUDA (msec) | 1 batch | 2 batch | 4 batch | 8 batch | 16 batch | 32 batch | 64 batch |
| ------------------- | ---------- | --------- | --------- | --------- | --------- | -------- | -------- |
| megengine(fp32+chw) | 8.137 | 13.2893 | 23.6633 | 44.470 | 86.491 | 168.95 | 334.248 |
## Third-party resources
* The ncnn android app with video support: [ncnn-android-yolox](https://github.com/FeiGeChuanShu/ncnn-android-yolox) from [FeiGeChuanShu](https://github.com/FeiGeChuanShu)
* YOLOX with Tengine support: [Tengine](https://github.com/OAID/Tengine/blob/tengine-lite/examples/tm_yolox.cpp) from [BUG1989](https://github.com/BUG1989)
* YOLOX + ROS2 Foxy: [YOLOX-ROS](https://github.com/Ar-Ray-code/YOLOX-ROS) from [Ar-Ray](https://github.com/Ar-Ray-code)
* YOLOX Deploy DeepStream: [YOLOX-deepstream](https://github.com/nanmi/YOLOX-deepstream) from [nanmi](https://github.com/nanmi)
* YOLOX ONNXRuntime C++ Demo: [lite.ai](https://github.com/DefTruth/lite.ai/blob/main/ort/cv/yolox.cpp) from [DefTruth](https://github.com/DefTruth)
* Converting Darknet or YOLOv5 datasets to COCO format for YOLOX: [YOLO2COCO](https://github.com/RapidAI/YOLO2COCO) from [Daniel](https://github.com/znsoftm)
## Cite YOLOX
If you use YOLOX in your research, please cite our work by using the following BibTeX entry:
```latex
@article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}
```