v0.1 SOL

v0.1.8.2 (28.04.2020)
  • Minor maintenance release. Fixes a linking problem with libnfort_m.so.2.
v0.1.8.1 (23.04.2020)
  • Supports: PyTorch 1.4.0
  • This is a maintenance release, linked against newer VEOS libraries (2.4.2). You only need to update if you encounter Abort (core dump), Illegal Instruction (core dump) or similar errors when running SOL with newer VEOS versions.
  • We will publish a new release soon, with support for running SOL on multiple VEs in parallel and support for PyTorch v1.5.0.
v0.1.8 (27.01.2020)
  • Supports: PyTorch 1.4.0
  • Fixed the "X86 requires sol.backends.ispc!" error, as reported by @malon.
  • Fixed the "## WARNING ##: This version of SOL has been linked against PyTorch v1.4.0+cpu but you are using v1.4.0. It's not recommended to use varying versions!" warning, as reported by @malon.
  • Fixed the limitation to VE#0 in Native Tensors mode, as proposed by @efocht. Use the VE_NODE_NUMBER environment variable to set the VE you want to run on (see the sketch after this list).
  • Minor performance improvements for Inference mode.
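  A minimal sketch of the VE selection mentioned above, assuming VE_NODE_NUMBER is read when SOL initializes, so it should be set before SOL is imported (this ordering is an assumption, not documented here):

    import os
    os.environ["VE_NODE_NUMBER"] = "1"  # assumption: select VE#1 instead of the default VE#0

    import torch
    import sol.pytorch  # SOL should now target the VE chosen above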
v0.1.7 (24.01.2020)
  • Supports: PyTorch 1.4.0
  • Lots of performance improvements, especially for inference (batch size < 8)
  • Native Tensor Support for PyTorch: This allows you to use Aurora tensors within PyTorch! I haven't had time to update the documentation yet, but here is an example:
    import torch
    import sol.pytorch
    
    input = torch.rand(1, 3, 224, 224)
    py_model = ...                                    # your existing PyTorch model
    sol_model = sol.optimize(py_model, input.size())
    sol_model.load_state_dict(py_model.state_dict())  # copy the trained weights into the optimized model
    
    # sol.device.set(sol.device.ve, 0) # no longer needed
    sol_model.to("hip") # copy model to device
    input = input.to("hip") # copy input to device
    
    output = sol_model(input)  # run the forward pass on the VE
    torch.hip.synchronize()    # wait for the asynchronous execution to finish

    So in principle it works like it does with CUDA, but you need to use "hip" instead of "cuda". The other method, using only sol.device.set(sol.device.ve, 0), still works and will continue to be supported, but it has performance drawbacks for training compared to the native tensor implementation (a minimal sketch of that method follows below).
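
    A minimal sketch of that older method, assuming the same setup as above and that the device is selected before the forward pass (the exact call order is an assumption here, and the model is an illustrative stand-in):

      import torch
      import sol.pytorch

      input = torch.rand(1, 3, 224, 224)
      py_model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())  # illustrative stand-in model
      sol_model = sol.optimize(py_model, input.size())
      sol_model.load_state_dict(py_model.state_dict())

      sol.device.set(sol.device.ve, 0)  # select VE#0; model and input stay on the CPU
      output = sol_model(input)         # SOL presumably moves data to/from the VE internally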

  • Limitations:
    • you can only use VE#0 with this method
    • only L1Loss is implemented so far. Please let me know if you use other loss functions.
    • you can use print() and some other basic functions on the Aurora tensors, but most functionality is not implemented. If you want to do computations on the data outside of the SOL-optimized model, you need to copy the tensor back to the CPU via output = output.cpu() (see the sketch below).
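    For example, a short sketch of that workflow, continuing the native-tensor example above (argmax on the device tensor is assumed not to be supported yet):

      output = sol_model(input)    # result is an Aurora tensor on the "hip" device
      print(output.shape)          # print() and other basic inspection works
      output = output.cpu()        # copy the result back to the host ...
      pred = output.argmax(dim=1)  # ... before doing further PyTorch computations on it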
v0.1.6 (09.12.2019)
  • Supports: PyTorch 1.3.1
  • SOL will warn you if you are trying to use an unsupported framework version (e.g. trying to run PyTorch 1.0)
  • Added a cache so that the network is not recompiled every time SOL is run. If you want to force a recompile, delete the ".sol" folder or call "sol.cache.clear()" before calling "sol.optimize(...)" or "sol.deploy(...)" (see the sketch after this list).
  • Bugfixes for NHWC input data format.
  • Updated docs to explain more details about "sol.deploy(...)"
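  A minimal sketch of forcing a recompile via the cache API described above (assuming sol.cache.clear() takes no arguments, which is not spelled out here; the model is an illustrative stand-in):

    import torch
    import sol.pytorch

    sol.cache.clear()  # drop the cached kernels (the ".sol" folder) to force recompilation
    input = torch.rand(1, 3, 224, 224)
    py_model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())  # illustrative stand-in model
    sol_model = sol.optimize(py_model, input.size())  # recompiles the network from scratch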
v0.1.5 (02.12.2019)
  • Supports: PyTorch 1.3.1
  • Preliminary deployment support; see "sol.deploy(...)" in the documentation