Supported Neural Network Operators

Neural network operators can be supported directly by the hardware, decomposed into hardware-compatible components, or approximated. While the set of natively supported operators changes only with new chip generations, coverage of decomposed and approximated operators grows with each software release. This page lists the current operator support for each framework: Keras, ONNX, TensorFlow, and TensorFlow Lite.

Keras

keras.activations.elu
keras.activations.exponential
keras.activations.gelu
keras.activations.linear
keras.activations.relu
keras.activations.selu
keras.activations.sigmoid
keras.activations.softmax
keras.activations.softplus
keras.activations.softsign
keras.activations.swish
keras.activations.tanh
keras.layers.Activation
keras.layers.Add

Reference: keras.layers.Add

keras.layers.AlphaDropout
keras.layers.Attention
keras.layers.AveragePooling1D
keras.layers.AveragePooling2D
keras.layers.AveragePooling3D
keras.layers.Average
keras.layers.BatchNormalization
keras.layers.Concatenate
keras.layers.Conv1DTranspose
keras.layers.Conv1D

Reference: keras.layers.Conv1D

keras.layers.Conv2DTranspose
keras.layers.Conv2D

Reference: keras.layers.Conv2D

Notes:

  • kernel_size <= input_size
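
For example, the following minimal sketch (the 8x8 input and 3x3 kernel are arbitrary sizes chosen for illustration) satisfies this constraint:

    import numpy as np
    from tensorflow import keras

    # A 3x3 kernel applied to an 8x8 input, so kernel_size <= input_size holds.
    layer = keras.layers.Conv2D(filters=16, kernel_size=3, padding="same")
    x = np.random.rand(1, 8, 8, 3).astype("float32")
    y = layer(x)  # output shape: (1, 8, 8, 16)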

keras.layers.Conv3DTranspose
keras.layers.Conv3D

Reference: keras.layers.Conv3D

keras.layers.Cropping1D
keras.layers.Cropping2D
keras.layers.Cropping3D
keras.layers.Dense

Reference: keras.layers.Dense

keras.layers.DepthwiseConv1D
keras.layers.DepthwiseConv2D
keras.layers.Dot

Reference: keras.layers.Dot

keras.layers.Dropout
keras.layers.ELU

Reference: keras.layers.ELU

keras.layers.Flatten
keras.layers.GaussianDropout
keras.layers.GaussianNoise
keras.layers.GlobalAveragePooling1D
keras.layers.GlobalAveragePooling2D
keras.layers.GlobalAveragePooling3D
keras.layers.GlobalMaxPooling1D
keras.layers.GlobalMaxPooling2D
keras.layers.GlobalMaxPooling3D
keras.layers.InputLayer
keras.layers.LayerNormalization
keras.layers.LeakyReLU
keras.layers.Linear

Reference: keras.layers.Linear

keras.layers.MaxPooling1D
keras.layers.MaxPooling2D
keras.layers.MaxPooling3D
keras.layers.Maximum
keras.layers.Minimum
keras.layers.MultiHeadAttention
keras.layers.Multiply
keras.layers.Normalization
keras.layers.PReLU

Reference: keras.layers.PReLU

keras.layers.Permute
keras.layers.ReLU

Reference: keras.layers.ReLU

keras.layers.RepeatVector
keras.layers.Rescaling
keras.layers.Reshape
keras.layers.SeparableConv1D
keras.layers.SeparableConv2D
keras.layers.Softmax
keras.layers.SpatialDropout1D
keras.layers.SpatialDropout2D
keras.layers.SpatialDropout3D
keras.layers.Subtract
keras.layers.TensorFlowOpLayer
keras.layers.ThresholdedReLU
keras.layers.UpSampling1D
keras.layers.UpSampling2D
keras.layers.UpSampling3D
keras.layers.ZeroPadding1D
keras.layers.ZeroPadding2D
keras.layers.ZeroPadding3D
ONNX

Abs

Reference: onnx__Abs

Acos

Reference: onnx__Acos

Acosh

Reference: onnx__Acosh

Add

Reference: onnx__Add

And

Reference: onnx__And

Asin

Reference: onnx__Asin

Asinh

Reference: onnx__Asinh

Atan

Reference: onnx__Atan

Atanh

Reference: onnx__Atanh

AveragePool

Reference: onnx__AveragePool

BatchNormalization
Cast

Reference: onnx__Cast

Notes:

  • The compiler ignores the Cast operator, since all internal feature maps (FMaps) are BFloat16.

Ceil

Reference: onnx__Ceil

Celu

Reference: onnx__Celu

Clip

Reference: onnx__Clip

Concat

Reference: onnx__Concat

ConvTranspose

Reference: onnx__ConvTranspose

Conv

Reference: onnx__Conv

Cos

Reference: onnx__Cos

Cosh

Reference: onnx__Cosh

DepthToSpace

Reference: onnx__DepthToSpace

Div

Reference: onnx__Div

Dropout

Reference: onnx__Dropout

Elu

Reference: onnx__Elu

Exp

Reference: onnx__Exp

Expand

Reference: onnx__Expand

FastGelu

Reference: onnx__FastGelu

Flatten

Reference: onnx__Flatten

Floor

Reference: onnx__Floor

Gelu

Reference: onnx__Gelu

Gemm

Reference: onnx__Gemm

GlobalAveragePool
GlobalMaxPool

Reference: onnx__GlobalMaxPool

GreaterOrEqual
Greater

Reference: onnx__Greater

HardSigmoid

Reference: onnx__HardSigmoid

HardSwish

Reference: onnx__HardSwish

Identity

Reference: onnx__Identity

InstanceNormalization
LRN

Reference: onnx__LRN

Notes:

  • Only beta=0.75 (the default) is currently supported.
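
As a hedged illustration (tensor names are placeholders), an ONNX LRN node built with the default attributes stays within this restriction:

    from onnx import helper

    # LRN node using beta=0.75, the only beta value currently supported.
    lrn_node = helper.make_node(
        "LRN",
        inputs=["x"],
        outputs=["y"],
        size=5,
        alpha=0.0001,
        beta=0.75,
        bias=1.0,
    )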

LayerNormalization
LeakyRelu

Reference: onnx__LeakyRelu

LessOrEqual

Reference: onnx__LessOrEqual

Less

Reference: onnx__Less

Log

Reference: onnx__Log

MatMul

Reference: onnx__MatMul

MaxPool

Reference: onnx__MaxPool

Max

Reference: onnx__Max

Mean

Reference: onnx__Mean

Min

Reference: onnx__Min

Mod

Reference: onnx__Mod

Mul

Reference: onnx__Mul

Neg

Reference: onnx__Neg

Not

Reference: onnx__Not

Or

Reference: onnx__Or

PRelu

Reference: onnx__PRelu

Pad

Reference: onnx__Pad

Pow

Reference: onnx__Pow

QLinearAdd

Reference: onnx__QLinearAdd

QLinearConv

Reference: onnx__QLinearConv

QuantizeLinear
Reciprocal

Reference: onnx__Reciprocal

ReduceL1

Reference: onnx__ReduceL1

ReduceL2

Reference: onnx__ReduceL2

ReduceMax

Reference: onnx__ReduceMax

ReduceMean

Reference: onnx__ReduceMean

ReduceMin

Reference: onnx__ReduceMin

ReduceSum

Reference: onnx__ReduceSum

Relu

Reference: onnx__Relu

Reshape

Reference: onnx__Reshape

Resize

Reference: onnx__Resize

Round

Reference: onnx__Round

Selu

Reference: onnx__Selu

Shape

Reference: onnx__Shape

Notes:

  • The compiler will statically evaluate and fold all shapes during compilation. We do not support variable shape inference.
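
Because shapes are folded at compile time, model inputs should carry concrete dimensions. A minimal sketch, assuming an exported model with a symbolic batch dimension (the file names and the batch size of 1 are placeholders):

    import onnx

    model = onnx.load("model.onnx")
    # Replace a symbolic batch dimension (e.g. "N") with a fixed value so
    # every downstream Shape node can be evaluated statically.
    dim = model.graph.input[0].type.tensor_type.shape.dim[0]
    if dim.dim_param:       # symbolic (variable) dimension
        dim.dim_value = 1   # assigning the concrete value clears dim_param
    onnx.save(model, "model_static.onnx")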

Sigmoid

Reference: onnx__Sigmoid

Sign

Reference: onnx__Sign

Sin

Reference: onnx__Sin

Sinh

Reference: onnx__Sinh

Slice

Reference: onnx__Slice

Softmax

Reference: onnx__Softmax

Softplus

Reference: onnx__Softplus

Softsign

Reference: onnx__Softsign

SpaceToDepth

Reference: onnx__SpaceToDepth

Split

Reference: onnx__Split

Sqrt

Reference: onnx__Sqrt

Squeeze

Reference: onnx__Squeeze

Sub

Reference: onnx__Sub

Sum

Reference: onnx__Sum

Tan

Reference: onnx__Tan

Tanh

Reference: onnx__Tanh

ThresholdedRelu
Tile

Reference: onnx__Tile

Transpose

Reference: onnx__Transpose

Unsqueeze

Reference: onnx__Unsqueeze

Upsample

Reference: onnx__Upsample

Xor

Reference: onnx__Xor

TensorFlow

Abs

Reference: Abs

AddN

Reference: AddN

AddV2

Reference: AddV2

Add

Reference: Add

AvgPool

Reference: AvgPool

BatchToSpaceND

Reference: BatchToSpaceND

BiasAdd

Reference: BiasAdd

Cast

Reference: Cast

ConcatV2

Reference: ConcatV2

Concat

Reference: Concat

Const

Reference: Const

Conv2DBackpropInput

Reference: Conv2DBackpropInput

Conv2D

Reference: Conv2D

Conv3D

Reference: Conv3D

DepthToSpace

Reference: DepthToSpace

DepthwiseConv2dNative
Exp

Reference: Exp

ExpandDims

Reference: ExpandDims

Fill

Reference: Fill

FusedBatchNorm

Reference: FusedBatchNorm

GatherV2

Reference: GatherV2

IdentityN

Reference: IdentityN

Identity

Reference: Identity

LeakyRelu

Reference: LeakyRelu

MatMul

Reference: MatMul

MaxPool

Reference: MaxPool

Max

Reference: Max

Maximum

Reference: Maximum

Mean

Reference: Mean

Minimum

Reference: Minimum

MirrorPad

Reference: MirrorPad

Mul

Reference: Mul

Neg

Reference: Neg

NoOp

Reference: NoOp

Pack

Reference: Pack

Pad

Reference: Pad

Placeholder

Reference: Placeholder

Pow

Reference: Pow

Prod

Reference: Prod

RealDiv

Reference: RealDiv

Relu6

Reference: Relu6

Relu

Reference: Relu

Reshape

Reference: Reshape

ResizeBilinear

Reference: ResizeBilinear

ResizeNearestNeighbor
Rsqrt

Reference: Rsqrt

Selu

Reference: Selu

Shape

Reference: Shape

Sigmoid

Reference: Sigmoid

Softmax

Reference: Softmax

Softsign

Reference: Softsign

SpaceToBatchND

Reference: SpaceToBatchND

Split

Reference: Split

Sqrt

Reference: Sqrt

Square

Reference: Square

SquaredDifference

Reference: SquaredDifference

Squeeze

Reference: Squeeze

StopGradient

Reference: StopGradient

StridedSlice

Reference: StridedSlice

Sub

Reference: Sub

Tanh

Reference: Tanh

Tile

Reference: Tile

Transpose

Reference: Transpose

TensorFlow Lite (TFLite)

abs

Reference: tfl.abs

add_n

Reference: tfl.add_n

add

Reference: tfl.add

average_pool_2d

Reference: tfl.average_pool_2d

batch_to_space_nd

Reference: tfl.batch_to_space_nd

Notes:

  • This is only supported as a node within the subgraph of a decomposed dilated convolution (see the sketch below).
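
For context, a hedged sketch of how such a subgraph can arise: some TFLite converter versions lower a dilated convolution into a space_to_batch_nd / conv_2d / batch_to_space_nd sequence (the layer and input sizes below are arbitrary):

    import tensorflow as tf

    # A dilated Conv2D; certain converter versions decompose it into
    # SpaceToBatchND -> Conv2D -> BatchToSpaceND in the resulting TFLite graph.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, dilation_rate=2, padding="same",
                               input_shape=(224, 224, 3)),
    ])
    tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()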

broadcast_args

Reference: tfl.broadcast_args

broadcast_to

Reference: tfl.broadcast_to

cast

Reference: tfl.cast

ceil

Reference: tfl.ceil

concatenation

Reference: tfl.concatenation

conv_2d

Reference: tfl.conv_2d

conv_3d

Reference: tfl.conv_3d

conv_3d_transpose
densify

Reference: tfl.densify

depth_to_space_nd
depthwise_conv_2d
dequantize

Reference: tfl.dequantize

div

Reference: tfl.div

elu

Reference: tfl.elu

equal

Reference: tfl.equal

exp

Reference: tfl.exp

expand_dims

Reference: tfl.expand_dims

fill

Reference: tfl.fill

floor_div

Reference: tfl.floor_div

floor_mod

Reference: tfl.floor_mod

floor

Reference: tfl.floor

fully_connected

Reference: tfl.fully_connected

gelu

Reference: tfl.gelu

greater_equal

Reference: tfl.greater_equal

greater

Reference: tfl.greater

hard_swish

Reference: tfl.hard_swish

l2_normalization
leaky_relu

Reference: tfl.leaky_relu

less_equal

Reference: tfl.less_equal

less

Reference: tfl.less

local_response_normalization
logical_and

Reference: tfl.logical_and

logical_not

Reference: tfl.logical_not

logical_or

Reference: tfl.logical_or

logistic

Reference: tfl.logistic

max_pool_2d

Reference: tfl.max_pool_2d

maximum

Reference: tfl.maximum

mean

Reference: tfl.mean

minimum

Reference: tfl.minimum

mirror_pad

Reference: tfl.mirror_pad

mul

Reference: tfl.mul

neg

Reference: tfl.neg

not_equal

Reference: tfl.not_equal

pack

Reference: tfl.pack

pad

Reference: tfl.pad

padv2

Reference: tfl.padv2

pow

Reference: tfl.pow

prelu

Reference: tfl.prelu

quantize

Reference: tfl.quantize

reduce_any

Reference: tfl.reduce_any

reduce

Reference: tfl.reduce

reduce_min

Reference: tfl.reduce_min

relu6

Reference: tfl.relu6

relu_n1_to_1

Reference: tfl.relu_n1_to_1

relu

Reference: tfl.relu

reshape

Reference: tfl.reshape

resize_bilinear

Reference: tfl.resize_bilinear

resize_nearest_neighbor
round

Reference: tfl.round

rsqrt

Reference: tfl.rsqrt

shape

Reference: tfl.shape

sin

Reference: tfl.sin

slice

Reference: tfl.slice

softmax

Reference: tfl.softmax

space_to_batch_nd
space_to_depth

Reference: tfl.space_to_depth

split

Reference: tfl.split

sqrt

Reference: tfl.sqrt

square

Reference: tfl.square

squared_difference
squeeze

Reference: tfl.squeeze

strided_slice

Reference: tfl.strided_slice

sub

Reference: tfl.sub

sum

Reference: tfl.sum

tanh

Reference: tfl.tanh

tile

Reference: tfl.tile

transpose_conv

Reference: tfl.transpose_conv

transpose

Reference: tfl.transpose

unpack

Reference: tfl.unpack

Note

PyTorch models can be supported by exporting them to ONNX (for more information, see the tutorial on exporting to ONNX). Direct support for PyTorch 2 will be available in a future release.
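
A minimal export sketch, assuming a torchvision model and a fixed input shape (both are placeholders; see the export tutorial for the recommended flow):

    import torch
    import torchvision

    model = torchvision.models.resnet18(weights=None).eval()
    dummy_input = torch.randn(1, 3, 224, 224)  # fixed shape, no dynamic axes
    torch.onnx.export(model, dummy_input, "resnet18.onnx", opset_version=17)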