- PyTorch training, evaluation, inference, and benchmark code with SOTA practices (support for wandb.ai logging)
- ONNX conversion, calibration, and inference
- TensorRT conversion and inference
- Example notebook
- C++ Inference (Future release)
- FastAPI (`fastapi` branch) [+ Heroku deployment]
- Triton Inference Server (`triton` branch)
In this project, for a given image classification task, you can run a large number of experiments just by changing the `params.json` file.
The project supports pretraining and finetuning of timm models. The training code leaves room for plenty of customization, for example adding more optimizers in the `_get_optimizers` function or more schedulers in `_get_scheduler`.
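The text above names `_get_optimizers`; the body below is a hypothetical sketch (the function's real signature and the `params` keys such as `optimizer` and `learning_rate` are assumptions) showing how one more optimizer would be added as an extra branch:

```python
import torch


def _get_optimizers(model, params):
    # Hypothetical structure: map a name from params to a torch.optim class.
    name = params.get("optimizer", "adam").lower()
    lr = params.get("learning_rate", 1e-3)
    if name == "adam":
        return torch.optim.Adam(model.parameters(), lr=lr)
    if name == "sgd":
        return torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    # Adding a new optimizer, e.g. AdamW, is just one more branch:
    if name == "adamw":
        return torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=0.01)
    raise ValueError(f"Unsupported optimizer: {name}")
```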
It also includes options to convert the model to ONNX and TensorRT, along with reference inference scripts for each model format.
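For orientation, this is roughly what an ONNX conversion plus reference inference looks like; the model choice, file name, input size, and tensor names here are illustrative assumptions, not the repo's actual values:

```python
import numpy as np
import onnxruntime as ort
import timm
import torch

# Export a (here: pretrained timm) model to ONNX.
model = timm.create_model("resnet18", pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# Run the exported model with ONNX Runtime.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
(scores,) = sess.run(None, {"input": x})
print(scores.argmax(axis=1))
```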
- How to run with a custom dataset?
  - Replace `datasets_to_df` in `utils.py` with a function that returns a dataframe with two columns: image file paths named `file` and labels named `label` (a sketch follows this list).
  - Check if `prepare_df` in `main.py` is compatible.
- Create many different models and experiments just by changing `model_name` in `params.json` (creating an appropriate folder for each model under the `experiments` folder), the `finetune_layer` parameter, or any other hyperparameter in the JSON file.
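The exact signature of `datasets_to_df` depends on the repo, but a hypothetical replacement for a folder-per-class dataset (the `data_dir` argument and directory layout are assumptions) could look like:

```python
from pathlib import Path

import pandas as pd


def datasets_to_df(data_dir: str) -> pd.DataFrame:
    # Hypothetical replacement: scan data_dir/<label>/<image> and return
    # the two columns the pipeline expects, `file` and `label`.
    rows = [
        {"file": str(img), "label": img.parent.name}
        for img in sorted(Path(data_dir).glob("*/*"))
        if img.suffix.lower() in {".jpg", ".jpeg", ".png"}
    ]
    return pd.DataFrame(rows, columns=["file", "label"])
```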
The notebooks folder contains a sample notebook that runs the CIFAR-10 dataset end to end.
Docker container
```bash
sudo docker build -t e2e .
sudo chmod +x run_container.sh
./run_container.sh
python3 main_cifar10.py
```

To run TensorRT inference, build its corresponding Docker image and set `do_trt_inference` to `True` in `main_cifar10.py`.
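For reference, a minimal TensorRT engine inference loop with `pycuda` might look like the sketch below; the engine path, input/output shapes, and TensorRT 8.x-style API calls are assumptions, and the repo's actual reference script may differ:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- initializes a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

# Deserialize a prebuilt engine (path is an assumption).
logger = trt.Logger(trt.Logger.WARNING)
with open("model.trt", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Host buffers; shapes assume one 224x224 image in, 10 CIFAR-10 scores out.
inp = np.random.rand(1, 3, 224, 224).astype(np.float32)
out = np.empty((1, 10), dtype=np.float32)

# Device buffers, copy input in, execute, copy output out.
d_inp = cuda.mem_alloc(inp.nbytes)
d_out = cuda.mem_alloc(out.nbytes)
cuda.memcpy_htod(d_inp, inp)
context.execute_v2([int(d_inp), int(d_out)])
cuda.memcpy_dtoh(out, d_out)
print(out.argmax(axis=1))
```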