ONNX

import sol
import numpy as np

model  = sol.optimize("myModel.onnx") # no input description needed; shapes are provided by the model itself!
# or, to override the model's input shapes:
model  = sol.optimize("myModel.onnx", [np.random.rand(1, 3, 224, 224), ...], {'named_tensor': np.random.rand(3, 2, 1)})

input  = np.random.rand(1, 3, 224, 224)
output = model(input)

F.A.Q.

How can I execute an ONNX model on an accelerator device?
By default, the ONNX frontend returns a NumPy-executable model. You can either pass a framework that supports accelerator devices, e.g. sol.optimize("model.onnx", framework='pytorch'), or use the sol.device.set('device_type', device_idx) API for transparent offloading.
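As a sketch of the second option (the sol calls are commented out since they require an installed SOL, and the device type 've' and index 0 are assumptions):

```python
import numpy as np

# import sol
# sol.device.set('ve', 0)               # assumption: offload to device type 've', index 0
# model = sol.optimize("myModel.onnx")  # model now executes transparently on the device

x = np.random.rand(1, 3, 224, 224).astype(np.float32)
# output = model(x)
```

The input remains an ordinary NumPy array; SOL handles the host/device transfers behind the scenes.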

I get a weird message about size >0
The ONNX format does not store sizes for dynamic dimensions, but SOL requires concrete sizes for its optimizations. If your ONNX model uses dynamic dimensions, please provide an example input when calling sol.optimize(...), e.g.: sol.optimize("model.onnx", [np.random.rand(1, 3, 224, 224)])
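For instance, if the model has a dynamic batch dimension, passing one concrete input pins it down (a sketch; the file name and shape are placeholders, and the sol call is commented out since it requires an installed SOL):

```python
import numpy as np

# A concrete example input fixes the dynamic batch dimension (here: batch size 1)
example_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

# import sol
# model = sol.optimize("model.onnx", [example_input])
```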

How can I train an ONNX model?
The ONNX format does not store information about trainable parameters. However, you can call sol.optimize("model.onnx", framework="pytorch", training=True) to load the ONNX model into PyTorch and let SOL enable training for the parameters. Warning: this enables training for ALL possibly trainable parameters in your model.
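If you only want a subset of parameters trained, you can freeze the rest after loading; a sketch assuming the call above, with the sol/torch lines commented out (they require an installed SOL) and a hypothetical parameter-name prefix:

```python
# import sol, torch
#
# # Load the ONNX model into PyTorch with training enabled for ALL parameters:
# model = sol.optimize("myModel.onnx", framework="pytorch", training=True)
#
# # Then selectively freeze parameters with standard PyTorch:
# for name, param in model.named_parameters():
#     if name.startswith(freeze_prefix):
#         param.requires_grad = False

freeze_prefix = "features."  # hypothetical prefix of layers to freeze
```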

Supported Layers

Please refer to https://github.com/onnx/onnx/blob/master/docs/Operators.md for how these operators are used. This documentation only lists which layers, functions, and tensor operations are currently implemented within SOL.