The SOL v0.3.0 release contains a large number of changes. The highlights are listed below:
- PIP dependency installation: As SOL will support more frameworks starting with v0.3.1, we no longer install all dependencies by default. Instead, you can select which dependencies to install when issuing
pip3 install "sol-image.whl[torch,onnx]"
- PyTorch Training No-Grad: During training, PyTorch only computes the gradients that originate from the
tensor.backward() call; all other outputs simply do not contribute to the gradients. SOL, however, assumes that all outputs contribute to the gradient. To achieve the same behavior, we introduced
sol.no_grad(tensor). You can use it as follows to integrate it into your training setup without changing the model itself. Without this, it is not guaranteed that the gradients are identical!
class TrainingModel(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, *args):
        A, B, C = self.model(*args)
        return A, sol.no_grad(B), sol.no_grad(C)

model = TrainingModel(model)
sol_model = sol.optimize(model, ...)
for batch in ...:
    A, B, C = sol_model(batch)
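To see why this is necessary, here is a minimal, self-contained sketch in plain PyTorch (no SOL) showing that an output which never reaches the backward() call contributes nothing to the gradients:

```python
import torch

# Plain PyTorch: only the output we call backward() on contributes
# to the gradients; the second output is silently ignored.
x = torch.ones(3, requires_grad=True)
a = (x * 2).sum()  # backpropagated output
b = (x * 5).sum()  # never backpropagated

a.backward()
print(x.grad)  # tensor([2., 2., 2.]) -- no contribution from b
```

A framework that instead assumed both a and b contribute would produce a different gradient, which is exactly the mismatch sol.no_grad(tensor) avoids.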
- PyTorch parameter auto-loading: Previously it was always necessary to manually copy the parameters from the PyTorch model to the SOL model.
sol.optimize(...) now does this automatically.
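For reference, the manual copy that is no longer needed looked roughly like a standard PyTorch state_dict transfer. This is an illustrative sketch using two identically shaped torch modules as stand-ins for the PyTorch and SOL models:

```python
import torch

# Illustrative stand-ins: two identically shaped modules instead of a
# real SOL model. The manual step was copying all parameters over.
torch_model = torch.nn.Linear(4, 2)
sol_like_model = torch.nn.Linear(4, 2)

sol_like_model.load_state_dict(torch_model.state_dict())
assert torch.equal(torch_model.weight, sol_like_model.weight)
```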
- Huggingface GPT-2 support:
- Support for Huggingface GPT-2 has been added. However, there is an accuracy problem in the backward pass that we are currently investigating (#80).
- Variable batch size does not work in GPT-2, because GPT-2 places the wildcard in the second rather than the first dimension, which is currently not supported by SOL (#81). Using GPT-2 with a fixed batch size works!
- ONNX Support: Postponed to the v0.3.1 release in order to perform more tests before releasing.
- Internal changes to the graph representation for faster processing.
- Progress bar: Don't be alarmed if it jumps back. This is caused by the fact that SOL does not know how many files need to be compiled prior to generating the computation graph; when additional files are discovered, they are added to the total, moving the bar "backwards".
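The jumping-back behavior can be illustrated with a tiny sketch (the file counts are hypothetical): discovering more files enlarges the total and lowers the completed fraction.

```python
# Hypothetical numbers: 5 of 10 known files are compiled (50%); then 5
# more files are discovered, so the same 5 are now 5 of 15 (~33%).
done, total = 5, 10
before = done / total
total += 5  # newly discovered files enlarge the total
after = done / total
assert after < before  # the bar moves "backwards"
```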