TensorFlow

SOL’s TensorFlow integration supports translating tf.Function, tf.Module, Keras, and tf.saved_model models into SOL models. If your tf.saved_model has multiple signatures, you need to select the preferred one using sol.optimize(my_saved_model.signatures['my_signature']). By default, SOL uses the tf.saved_model.__call__ function.
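
For example, the available signatures of a saved model can be inspected before picking one (a minimal sketch; the path and signature name are placeholders):

import tensorflow as tf
import sol

loaded = tf.saved_model.load("/path/to/saved/model")
print(list(loaded.signatures.keys()))  # e.g. ['serving_default', 'my_signature']
sol_model = sol.optimize(loaded.signatures['my_signature'])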

import tensorflow as tf
import sol
import tensorflow.keras as keras

def AlexNet(input_shape=(224, 224, 3), format="channels_last"):
	inputs = keras.Input(shape=input_shape)
	x = inputs
	x = keras.layers.Conv2D			(filters=64, kernel_size=(11,11), strides=(4,4), padding='same', activation='relu', data_format=format)(x)
	x = keras.layers.MaxPooling2D	(pool_size=3, strides=2, padding='valid', data_format=format)(x)
	x = keras.layers.Conv2D			(filters=192, kernel_size=5, strides=1, padding='same', activation='relu', data_format=format)(x)
	x = keras.layers.MaxPooling2D	(pool_size=3, strides=2, padding="valid", data_format=format)(x)
	x = keras.layers.Conv2D			(filters=384, kernel_size=3, strides=1, padding="same", activation='relu', data_format=format)(x)
	x = keras.layers.Conv2D			(filters=256, kernel_size=3, strides=1, padding="same", activation='relu', data_format=format)(x)
	x = keras.layers.Conv2D			(filters=256, kernel_size=3, strides=1, padding="same", activation='relu', data_format=format)(x)
	x = keras.layers.MaxPooling2D	(pool_size=3, strides=2, padding="valid", data_format=format)(x)
	x = keras.layers.Flatten		(data_format=format)(x)
	x = keras.layers.Dropout		(rate=0.5)(x)
	x = keras.layers.Dense			(4096, activation='relu')(x)
	x = keras.layers.Dropout		(rate=0.5)(x)
	x = keras.layers.Dense			(4096, activation="relu")(x)
	x = keras.layers.Dense			(1000)(x)
	return keras.models.Model		(inputs=inputs, outputs=x)

@tf.function(input_signature=[tf.TensorSpec([None, 224, 224, 3], dtype=tf.float32)])
def tf_function(input):
	return ...

class TFModule(tf.Module):
	def __init__(self):
		super().__init__()
		self.var = tf.Variable(...)
		
	@tf.function(input_signature=[tf.TensorSpec([None, 224, 224, 3], dtype=tf.float32)])
	def __call__(self, input):
		return ...

with tf.device('/CPU:0'):
	sol_model = sol.optimize(AlexNet(), batch_size=1)
	# or
	sol_model = sol.optimize(tf_function)
	# or
	sol_model = sol.optimize(TFModule())
	# or
	sol_model = sol.optimize(tf.saved_model.load("/path/to/saved/model"))

	# Inference
	output = sol_model(inputs)

	# Training for Keras Models
	sol_model.compile(...)
	sol_model.fit(inputs, targets)

	# Training for tf.Function and tf.Module
	# TODO:

Since SOL v0.5.3, SOL is integrated more tightly into Keras’s Model.compile(...) function. This integration is still experimental and might not work in all situations.

import sol.tensorflow # required to enable the modifications to Keras

model = init_your_model()
model.compile(optimizer, loss, sol_compile=True) # use sol_vdims=[...] to pass vdims on to the underlying sol.optimize(..., vdims=sol_vdims) call.
model(input_data) # runs using SOL
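
For example, to forward vdims through compile (a sketch; sol_vdims=[0] is only an illustrative assumption marking the batch dimension as variable):

model.compile(optimizer, loss, sol_compile=True, sol_vdims=[0])  # forwarded as sol.optimize(..., vdims=[0])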

F.A.Q.

What are the best configurations for using SOL with TensorFlow?

If you are using SOL with TensorFlow on x86, you should set the following environment variables:

OMP_NUM_THREADS=$(lscpu -b -p=Core,Socket | grep -v '^#' | sort -u | wc -l)
OMP_PROC_BIND=TRUE
TF_NUM_INTEROP_THREADS=1
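
If you prefer to set these from Python, they need to be set before TensorFlow is imported. A minimal sketch replicating the configuration above:

import os
import subprocess

# number of physical cores, computed by the same lscpu pipeline as above
cores = subprocess.run(
	"lscpu -b -p=Core,Socket | grep -v '^#' | sort -u | wc -l",
	shell=True, capture_output=True, text=True).stdout.strip()
os.environ["OMP_NUM_THREADS"] = cores
os.environ["OMP_PROC_BIND"] = "TRUE"
os.environ["TF_NUM_INTEROP_THREADS"] = "1"

import tensorflow as tf  # import TensorFlow only after the variables are set
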
How can I specify that a model input should return a gradient?

By default, no inputs have gradients assigned. If you want to override this behavior, use

sol_model = sol.optimize(model, requires_grad={"input_1", "whatever"})

All inputs whose names are within the set will return a gradient.
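
The names must match the model’s input names; for a Keras model they can be listed like this (a sketch using the AlexNet example above, whose single input gets the default name 'input_1'):

model = AlexNet()
print(model.input_names)  # e.g. ['input_1']
sol_model = sol.optimize(model, requires_grad={"input_1"})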

How can I override the model input shapes?

By default all inputs use the input shapes defined in the model. If you want to override this behavior, use

sol_model = sol.optimize(model, shapes={"input_1": [1, 2, 3], "whatever": [77, 3, 5]})

Be aware that your overridden shapes need to be valid for the model, otherwise compilation will fail.
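
For instance, to pin the batch dimension of the AlexNet example above to 32 (assuming its input keeps the default Keras name 'input_1'):

sol_model = sol.optimize(AlexNet(), shapes={"input_1": [32, 224, 224, 3]})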

How can I upgrade/downgrade to another TensorFlow version?

Before switching versions, please check the compatibility list to see whether your TensorFlow version is supported by SOL. If it is, you can just run pip3 install tensorflow~={VERSION}.

How do I store/load a TensorFlow Keras model?

SOL models cannot be stored directly. For storing/loading a SOL Keras model, use the model.save_weights(...) and model.load_weights(...) methods.

checkpoint_path = "/path/to/checkpoint"

# Storing
sol_model = sol.optimize(keras_model)
sol_model.save_weights(checkpoint_path)

# Loading
sol_model = sol.optimize(keras_model)
sol_model.load_weights(checkpoint_path)

More information on storing/loading the weights can be found in the Keras documentation.

Which activations/recurrent_activations are supported by RNN layers?

SOL currently supports [None, 'linear', 'tanh', 'sigmoid', 'relu']. If you need another RNN activation function, please get in contact with us.
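
For example, the default Keras LSTM configuration already falls within this set (a minimal sketch; layer sizes are arbitrary):

rnn_model = keras.Sequential([
	keras.Input(shape=(None, 32)),
	keras.layers.LSTM(128, activation='tanh', recurrent_activation='sigmoid'),
])
sol_rnn = sol.optimize(rnn_model, batch_size=1)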

Tested Models

keras.applications (v3.10.0)

  • ConvNeXtBase
  • ConvNeXtLarge
  • ConvNeXtSmall
  • ConvNeXtTiny
  • ConvNeXtXLarge
  • DenseNet121
  • DenseNet169
  • DenseNet201
  • EfficientNetB0
  • EfficientNetB1
  • EfficientNetB2
  • EfficientNetB3
  • EfficientNetB4
  • EfficientNetB5
  • EfficientNetB6
  • EfficientNetB7
  • EfficientNetV2B0
  • EfficientNetV2B1
  • EfficientNetV2B2
  • EfficientNetV2B3
  • EfficientNetV2L
  • EfficientNetV2M
  • EfficientNetV2S
  • InceptionResNetV2
  • InceptionV3
  • MobileNet
  • MobileNetV2
  • MobileNetV3Large
  • MobileNetV3Small
  • NASNetLarge
  • NASNetMobile
  • RegNetX002
  • RegNetX004
  • RegNetX006
  • RegNetX008
  • RegNetX016
  • RegNetX032
  • RegNetX040
  • RegNetX064
  • RegNetX080
  • RegNetX120
  • RegNetX160
  • RegNetX320
  • RegNetY002
  • RegNetY004
  • RegNetY006
  • RegNetY008
  • RegNetY016
  • RegNetY032
  • RegNetY040
  • RegNetY064
  • RegNetY080
  • RegNetY120
  • RegNetY160
  • RegNetY320
  • ResNet101
  • ResNet101V2
  • ResNet152
  • ResNet152V2
  • ResNet50
  • ResNet50V2
  • ResNetRS101
  • ResNetRS152
  • ResNetRS200
  • ResNetRS270
  • ResNetRS350
  • ResNetRS420
  • ResNetRS50
  • VGG16
  • VGG19
  • Xception

Supported Layers

Please refer to https://www.tensorflow.org/api/stable for how these functions are used. This documentation only lists which layers, functions, and tensor functionality are currently implemented within SOL.

Keras Layers

Keras layers not listed are parsed using the TensorFlow parser.

  • Activation
  • AlphaDropout
  • BatchNormalization
  • Bidirectional
  • Concatenate
  • Conv1D
  • Conv1DTranspose
  • Conv2D
  • Conv2DTranspose
  • Conv3D
  • Conv3DTranspose
  • Dense
  • DepthwiseConv1D
  • DepthwiseConv2D
  • Dropout
  • GRU
  • InputLayer
  • LSTM
  • SeparableConv1D
  • SeparableConv2D
  • SimpleRNN
  • Softmax

TensorFlow Layers

  • Abs
  • Acos
  • Acosh
  • AddN
  • AddV2
  • All
  • Any
  • ArgMax
  • ArgMin
  • Asin
  • Asinh
  • AssignSubVariableOp
  • AssignVariableOp
  • Atan2
  • Atan
  • Atanh
  • AvgPool3D
  • AvgPool
  • BatchMatMulV2
  • BiasAdd
  • Cast
  • Ceil
  • ConcatV2
  • Const
  • Conv1D
  • Conv2D
  • Conv2DBackpropInput
  • Conv3D
  • Cos
  • Cosh
  • Cumsum
  • DepthwiseConv2dNative
  • DivNoNan
  • Einsum
  • Elu
  • Equal
  • Erf
  • Erfc
  • Exp
  • ExpandDims
  • Expm1
  • Fill
  • Floor
  • FloorDiv
  • FloorMod
  • FusedBatchNormV3
  • GatherV2
  • Greater
  • GreaterEqual
  • Identity
  • IdentityN
  • IsFinite
  • IsInf
  • IsNan
  • LeakyRelu
  • Less
  • LessEqual
  • Log1p
  • Log
  • LogSoftmax
  • LogicalAnd
  • LogicalNot
  • LogicalOr
  • MatMul
  • Max
  • MaxPool3D
  • MaxPool
  • MaxPoolWithArgmax
  • Maximum
  • Mean
  • Min
  • Minimum
  • Mul
  • Neg
  • NoOp
  • NotEqual
  • Pack
  • Pad
  • PadV2
  • PartitionedCall
  • Placeholder
  • Pow
  • Prod
  • RandomUniform
  • RandomUniformInt
  • Range
  • ReadVariableOp
  • RealDiv
  • Reciprocal
  • Relu6
  • Relu
  • Reshape
  • ResizeArea
  • ResizeBicubic
  • ResizeBilinear
  • ResizeNearestNeighbor
  • ResourceGather
  • ReverseV2
  • Round
  • Rsqrt
  • Select
  • SelectV2
  • Selu
  • Shape
  • Sigmoid
  • Sign
  • Sin
  • Sinh
  • Softmax
  • Softplus
  • Softsign
  • Split
  • SplitV
  • Sqrt
  • Square
  • SquaredDifference
  • Squeeze
  • StatefulPartitionedCall
  • StatelessRandomGetKeyCounter
  • StatelessRandomUniformIntV2
  • StatelessRandomUniformV2
  • StatelessWhile
  • StopGradient
  • StridedSlice
  • Sub
  • Sum
  • Tan
  • Tanh
  • TensorListFromTensor
  • TensorListGetItem
  • TensorListReserve
  • TensorListSetItem
  • TensorListStack
  • Tile
  • Transpose
  • Unpack
  • Where
  • While
  • Xdivy
  • Xlog1py
  • Xlogy
  • ZerosLike