NEC SX-Aurora

The NEC SX-Aurora supports two execution modes: transparent and native offloading. If you only care about inference, the transparent offloading method is the easiest to use. For training, native offloading should be used because of its much better performance, if it is available for your framework.


Native Offloading (PyTorch)

Within PyTorch we support the use of native VE tensors. For this, program PyTorch as if you were using a GPU, but replace all calls to cuda with ve, i.e.:

model.ve()               # copy model to VE#0
input = input.ve()       # copy data to VE#0
model(input)             # gets executed on the device
torch.ve.synchronize()   # wait for execution to complete
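
A more complete, self-contained sketch (assuming the VEDA-PyTorch extension is installed and importable as veda_pytorch; the Linear model and the shapes are placeholders):

import torch
import veda_pytorch                     # assumed module name of the VEDA-PyTorch extension

model = torch.nn.Linear(128, 10).ve()   # copy model to VE#0
x = torch.randn(32, 128).ve()           # copy data to VE#0
with torch.no_grad():
    y = model(x)                        # executed on the device (see the FAQ if an operator is unsupported)
torch.ve.synchronize()                  # wait for execution to complete
print(y.cpu())                          # copy the result back to the host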

Available functions

(see https://pytorch.org/docs/stable/cuda.html for descriptions of the corresponding CUDA functions)

torch.Tensor.ve()
torch.Tensor.to('ve')
torch.Tensor.to('ve:X')
torch.nn.Module.ve()
torch.ve.synchronize(device=0)
torch.ve.is_available()
torch.ve.current_device()
torch.ve.set_device(device)
torch.ve.device_count()
torch.ve.memory_allocated(device=None)
CLASS torch.ve.device(device)
CLASS torch.ve.device_of(device)
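
A short sketch of how the device-management calls fit together (assuming at least one VE is visible and the VEDA-PyTorch extension is importable as veda_pytorch; the tensor shapes are arbitrary):

import torch
import veda_pytorch                        # assumed module name of the VEDA-PyTorch extension

if torch.ve.is_available():
    print("VEs found:", torch.ve.device_count())
    torch.ve.set_device(0)                 # make VE#0 the current device
    x = torch.randn(1024, 1024).to('ve')   # 've' without an index targets the current device
    y = x + x                              # elementwise ops execute eagerly on the VE
    torch.ve.synchronize(device=0)         # wait for the VE to finish
    print("bytes allocated on VE#0:", torch.ve.memory_allocated(device=0))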

Native Offloading (TensorFlow)

Due to an increasing number of unresolved issues in the TensorFlow PluggableDevice API (e.g., #55497, #57095, #60883 or #60895) we decided to no longer maintain our veda-tensorflow extension. Therefore you can no longer use with tf.device("/VE:0"):. Instead, please use Transparent Offloading via sol.device.set("ve", 0). We are sorry for the inconvenience, but we don't see any commitment from the TensorFlow team to accept our bug fixes or to fix the issues themselves.


Transparent Offloading (all frameworks)

To use the NEC SX-Aurora, it is necessary to call sol.device.set("ve", deviceIdx), where deviceIdx is the index of the Aurora to run on, starting from 0. Furthermore, the input data must be located on the host system.
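
A minimal sketch with PyTorch (other frameworks supported by SOL work analogously; sol.optimize as the compilation entry point and the model/shapes are assumptions here):

import torch
import sol

sol.device.set("ve", 0)          # run on the Aurora with index 0
model = torch.nn.Linear(128, 10)
x = torch.randn(32, 128)         # the input stays on the host system
opt = sol.optimize(model, x)     # assumed SOL call that compiles the model for the VE
with torch.no_grad():
    y = opt(x)                   # executes transparently on the VE, the result is returned to the host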


Config Options

Option       Type/Default   Description
ve::trace    bool / false   Enables the use of ftrace.
ve::packed   bool / false   Enables the use of packed vectors for float32.

FAQ

The AI framework reports that an operation is not supported by device type "VE"
This is caused by the fact that only a minimal subset of VE function calls can be executed "eagerly" within the framework, e.g., +, -, *, /, ... If you encounter this problem, please open an issue for VEDA-PyTorch.

SOL reports "not found" for NCC compiler.
Possible Cause 1: SOL is unable to find /opt/nec/ve/bin/nc++. If you don't use a standard installation, please use the NCXX, NAR and NLD env vars to specify the paths to your NCC installation.
Possible Cause 2: If there is a problem with your NCC license, SOL is unable to properly detect the compiler. Please run nc++ --version and check for any error messages.
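
For example, with a non-standard toolchain location, the variables can be set before SOL is loaded (the paths below are placeholders; exporting the variables in the shell before starting Python works as well):

import os

os.environ["NCXX"] = "/custom/nec/ve/bin/nc++"   # placeholder path to your nc++
os.environ["NAR"]  = "/custom/nec/ve/bin/nar"    # placeholder path to your nar
os.environ["NLD"]  = "/custom/nec/ve/bin/nld"    # placeholder path to your nld

import sol                                       # import SOL only after the variables are set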

SOL crashes with nc++: /opt/nec/ve/ncc/3.4.2/libexec/ccom is abnormally terminated by SIGSEGV.
On some systems NCC v3.4.2 crashes when compiling code generated by SOL. If you encounter this problem, please switch to an older version of the compiler using the NCXX env var.