ONNX* is an open representation format for deep learning models. ONNX allows AI developers to easily transfer models between different frameworks and choose the combination of tools that works best for them. Today, PyTorch*, Caffe2*, Apache MXNet*, Microsoft Cognitive Toolkit*, and other tools are developing ONNX support.
| Model Name | Path to Public Models master branch |
|---|---|
| bvlc_alexnet | model archive |
| bvlc_googlenet | model archive |
| bvlc_reference_caffenet | model archive |
| bvlc_reference_rcnn_ilsvrc13 | model archive |
| inception_v1 | model archive |
| inception_v2 | model archive |
| resnet50 | model archive |
| squeezenet | model archive |
| densenet121 | model archive |
| emotion_ferplus | model archive |
| mnist | model archive |
| shufflenet | model archive |
| VGG19 | model archive |
| zfnet512 | model archive |
The listed models are built with operation set (opset) version 8. Models upgraded to higher opset versions may not be supported.
Starting from the R4 release, the OpenVINO™ toolkit officially supports public PyTorch* models (from the torchvision 0.2.1 and pretrainedmodels 0.7.4 packages) via ONNX conversion. The list of supported topologies is presented below:
| Package Name | Supported Models |
|---|---|
| Torchvision Models | alexnet, densenet121, densenet161, densenet169, densenet201, resnet101, resnet152, resnet18, resnet34, resnet50, vgg11, vgg13, vgg16, vgg19 |
| Pretrained Models | alexnet, fbresnet152, resnet101, resnet152, resnet18, resnet34, resnet50, resnext101_32x4d, resnext101_64x4d, vgg11 |
| ESPNet Models | |
Starting from the R5 release, the OpenVINO™ toolkit officially supports public PaddlePaddle* models via ONNX conversion. The list of supported topologies is presented below:
| Model Name | Path to Model Code |
|---|---|
| fit_a_line | model code |
| recognize_digits | model code |
| VGG16 | model code |
| ResNet | model code |
| MobileNet | model code |
| SE_ResNeXt | model code |
| Inception-v4 | model code |
The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
To convert an ONNX* model:
1. Go to the `<INSTALL_DIR>/deployment_tools/model_optimizer` directory.
2. Use the `mo.py` script to simply convert a model, passing the path to the input model `.onnx` file.

There are no ONNX*-specific parameters, so only framework-agnostic parameters are available to convert your model.
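The conversion step can be sketched as follows, assuming a Linux shell, that `<INSTALL_DIR>` points to your OpenVINO installation, and that `model.onnx` stands in for your actual model file:

```shell
# Change to the Model Optimizer directory of the OpenVINO installation.
cd <INSTALL_DIR>/deployment_tools/model_optimizer

# Convert the ONNX model; --input_model takes the path to the .onnx file.
python3 mo.py --input_model model.onnx
```

The Model Optimizer produces an Intermediate Representation (`.xml` and `.bin` files) in the current directory unless an output directory is specified.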
Refer to Supported Framework Layers for the list of supported standard layers.