ONNX
import sol
import numpy as np
model = sol.optimize("myModel.onnx") # no input description needed, as it is provided by the model itself!
# or, if you want to override the input shapes of the model:
model = sol.optimize("myModel.onnx", [np.random.rand(1, 3, 224, 224), ...], {'named_tensor': np.random.rand(3, 2, 1)})
input = np.random.rand(1, 3, 224, 224)
output = model(input)
F.A.Q.
How can I execute an ONNX model on an accelerator device?
By default the ONNX frontend returns a NumPy executable model. You can either
pass sol.optimize("model.onnx", framework='pytorch') to target a framework
that supports accelerator devices, or use the sol.device.set('device_type',
device_idx) API for transparent offloading.
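As a sketch of both options (SOL itself is commented out here because the package is not generally installable; 'device_type' is the placeholder from the API description above, not a concrete device string):

```python
import numpy as np

# With SOL installed, transparent offloading would look like:
#
#   import sol
#   sol.device.set('device_type', 0)    # placeholder device type and index
#   model = sol.optimize("model.onnx")  # NumPy-callable, now runs on the device
#
#   # alternatively, load into a framework with accelerator support:
#   model = sol.optimize("model.onnx", framework='pytorch')

# Either way, the host-side input stays an ordinary NumPy array:
x = np.random.rand(1, 3, 224, 224)
```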
How can I train an ONNX model?
The ONNX format does not store information about trainable parameters. However,
you can call sol.optimize("model.onnx", framework="pytorch") to load
the ONNX model into PyTorch. Then iterate over
model.parameters() and set param.requires_grad =
True for all parameters that shall be trained.
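A minimal sketch of that training setup, using a plain torch.nn.Linear as a stand-in for the PyTorch model that sol.optimize(..., framework="pytorch") would return:

```python
import torch

# Stand-in for: model = sol.optimize("model.onnx", framework="pytorch")
model = torch.nn.Linear(4, 2)

# ONNX carries no "trainable" flag, so mark every parameter explicitly.
for param in model.parameters():
    param.requires_grad = True

# Count the parameters that will now receive gradients.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable)  # 4*2 weights + 2 biases = 10
```

From here the model trains like any other PyTorch module (optimizer, loss, backward pass).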
Tested Models
ONNX Hub v1.16.1
Format: ModelName (OpSet(s))
- AlexNet (7, 8, 9, 12)
- CaffeNet (7, 8, 9, 12)
- DenseNet-121-12 (12)
- DenseNet-121 (6, 7, 8, 9)
- EfficientNet-Lite4 (11)
- Emotion FERPlus (7, 8)
- GoogleNet (12)
- GoogleNet (3, 6, 7, 8, 9)
- Inception-2 (7, 8, 9)
- MNIST-12 (12)
- MNIST (7, 8)
- MobileNet v2-7 (7)
- R-CNN ILSVRC13 (7, 8, 9)
- ResNet101-v2 (7)
- ResNet101 (7)
- ResNet152-v2 (7)
- ResNet152 (7)
- ResNet18-v2 (7)
- ResNet18 (7)
- ResNet34-v2 (7)
- ResNet34 (7)
- ResNet50-caffe2 (7, 8, 9)
- ResNet50-fp32 (12)
- ResNet50-v2 (7)
- ResNet50 (7)
- ShuffleNet-v1 (7, 8, 9)
- ShuffleNet-v2-fp32 (12)
- ShuffleNet-v2 (10)
- SqueezeNet 1.0 (6, 7, 8, 9, 12)
- SqueezeNet 1.1 (7)
- Super_Resolution (10)
- Tiny YOLOv2 (7, 8)
- VGG 16-bn (7)
- VGG 16-fp32 (12)
- VGG 16 (7)
- VGG 19-bn (7)
- VGG 19-caffe2 (7, 8, 9)
- VGG 19 (7)
- YOLOv2 (9)
- YOLOv4 (11)
- ZFNet-512 (7, 8, 9, 12)
- version-RFB-320 (9)
- version-RFB-640 (9)
TorchVision v0.18.0
Exported via torch.onnx.export(...)
- alexnet
- convnext_base
- convnext_large
- convnext_small
- convnext_tiny
- deeplabv3_mobilenet_v3_large
- deeplabv3_resnet101
- deeplabv3_resnet50
- densenet121
- densenet161
- densenet169
- densenet201
- efficientnet_b0
- efficientnet_b1
- efficientnet_b2
- efficientnet_b3
- efficientnet_b4
- efficientnet_b5
- efficientnet_b6
- efficientnet_b7
- efficientnet_v2_l
- efficientnet_v2_m
- efficientnet_v2_s
- fcn_resnet101
- fcn_resnet50
- googlenet
- inception_v3
- lraspp_mobilenet_v3_large
- maxvit_t
- mc3_18
- mnasnet0_5
- mnasnet0_75
- mnasnet1_0
- mnasnet1_3
- mobilenet_v2
- mobilenet_v3_large
- mobilenet_v3_small
- mvit_v1_b
- quantized_googlenet
- quantized_inception_v3
- quantized_mobilenet_v2
- quantized_mobilenet_v3_large
- quantized_resnet18
- quantized_resnet50
- quantized_resnext101_32x8d
- quantized_resnext101_64x4d
- quantized_shufflenet_v2_x0_5
- quantized_shufflenet_v2_x1_0
- quantized_shufflenet_v2_x1_5
- quantized_shufflenet_v2_x2_0
- r2plus1d_18
- r3d_18
- regnet_x_16gf
- regnet_x_1_6gf
- regnet_x_32gf
- regnet_x_3_2gf
- regnet_x_400mf
- regnet_x_800mf
- regnet_x_8gf
- regnet_y_128gf
- regnet_y_16gf
- regnet_y_1_6gf
- regnet_y_32gf
- regnet_y_3_2gf
- regnet_y_400mf
- regnet_y_800mf
- regnet_y_8gf
- resnet101
- resnet152
- resnet18
- resnet34
- resnet50
- resnext101_32x8d
- resnext101_64x4d
- resnext50_32x4d
- s3d
- shufflenet_v2_x0_5
- shufflenet_v2_x1_0
- shufflenet_v2_x1_5
- shufflenet_v2_x2_0
- squeezenet1_0
- squeezenet1_1
- vgg11
- vgg11_bn
- vgg13
- vgg13_bn
- vgg16
- vgg16_bn
- vgg19
- vgg19_bn
- wide_resnet101_2
- wide_resnet50_2
Supported Layers
Please refer to https://github.com/onnx/onnx/blob/master/docs/Operators.md for
how these operators are defined. This documentation only lists which layers,
functions and tensor operations are currently implemented within SOL.