v0.3 Betelgeuse

v0.3.1 (Download)
Released: 10.11.2020
  • #51 DL4J Support
  • #82 [CMake] building with PCH seems to be broken
  • #83 [Python] SOL does not support named model arguments, i.e. sol_model(key=value)
  • #84 [Core] Inputs that are passed directly through as outputs are not handled correctly
  • #87 [Core] Missing DType: Bool
  • #88 [VE] PyTorch always prints in scientific notation
  • #89 [DFP] Possible race condition in Conv2D BWD_DATA
  • #91 [DFP] Wrongly initialized value for MaxPooling
  • #93 [PyTorch] Can't use torch.Tensor as input for sol.optimize
  • #94 [PyTorch] Switching devices might yield gradients of different shapes
  • #96 [Docs] add DL4J docs
  • #97 [PyTorch] LocalResponseNorm missing
  • #100 [PyTorch] can't convert buffers prior to execution
  • #105 [DFP] Error when using variable batch sizes
  • #106 [AVEO] Erroneous transfer of function parameters
v0.3.0 (Download)
Released: 13.10.2020
The SOL v0.3.0 release contains a large number of changes. The highlights are listed below:
  • PIP dependency installation: As SOL will support more frameworks starting with v0.3.1, we no longer install all dependencies by default. Instead, you can select which dependencies to install when issuing pip3 install sol-image.whl[torch, onnx].
  • PyTorch Training No-Grad: During training, PyTorch only computes the gradients starting from the tensor.backward() call, leaving all other gradients at zero. SOL, however, assumes that all outputs contribute to the gradient. To achieve the same behavior, we introduced sol.no_grad(tensor). You can use it as follows to integrate it into your training setup without changing the model itself; a short plain-PyTorch illustration of the underlying behavior follows the example. Without this, it is not guaranteed that the gradients are identical!
    class TrainingModel(torch.nn.Module):
    	def __init__(self, model):
    		super().__init__()
    		self.model = model
    		
    	def forward(self, *args):
    		A, B, C = self.model(*args)
    		return A, sol.no_grad(B), sol.no_grad(C)
    		
    model = TrainingModel(model)
    sol_model = sol.optimize(model, ...)
    
    for batch in ...:
    	A, B, C = sol_model(batch)
    	A.backward()
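    For reference, plain PyTorch only propagates gradients from the tensor on which backward() is called; outputs that never reach a backward() call contribute nothing. A short standalone illustration of this behavior (plain PyTorch, no SOL involved):

    import torch

    x = torch.ones(3, requires_grad=True)
    a = (x * 2).sum()  # passed to backward(), contributes to x.grad
    b = (x * 3).sum()  # never passed to backward(), contributes nothing
    a.backward()
    print(x.grad)      # tensor([2., 2., 2.]) -- b is ignored, as expected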
  • PyTorch parameter auto-loading: Previously it was always necessary to copy the parameters from the PyTorch model to the SOL model manually. We added the parameter copy_parameters=True to sol.optimize(...), which does this automatically, as sketched below.
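    A minimal sketch of the new option (the "..." stands for the usual optimize arguments, elided as in the example above):

    sol_model = sol.optimize(model, ..., copy_parameters=True)
    # the PyTorch parameters of `model` are copied into sol_model
    # automatically; no manual copy step is required anymore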
  • Huggingface GPT-2 support:
    • Support for Huggingface GPT-2 has been added. However, there is an accuracy problem in the backward pass that we are currently investigating (#80).
    • Variable batch sizes do not work with GPT-2, because it places the wildcard in the second rather than the first dimension, which is currently not supported by SOL (#81). Using GPT-2 with a fixed batch size works!
  • ONNX Support: Postponed to the v0.3.1 release in order to perform more tests before releasing.
  • Internal changes to the graph representation for faster processing.
  • Progress bar: Don't be alarmed if it jumps backwards. This is caused by the fact that SOL does not know how many files need to be compiled prior to generating the computation graph; more files may therefore get added along the way, moving the bar "backwards".