qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNet module
Contains the template for a quantized residual spiking neural network.
- class qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNet.CreateNeuron[source]
Bases: Protocol
Signature for create_neuron from qualia_plugin_snn.learningmodel.pytorch.SNN.create_neuron(). Used to pass the neuron builder to BasicBlock.
- __call__(quant_params: QuantizationConfigDict) → Module [source]
Instantiate a spiking neuron.
- Parameters:
quant_params (QuantizationConfigDict) – Optional quantization configuration dict in case of a quantized network, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
- Returns:
A spiking neuron instance
- Return type:
Module
- __init__(*args, **kwargs)
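For illustration, any callable with this signature satisfies the protocol. The sketch below is hypothetical: it returns a plain SpikingJelly IFNode as a stand-in for whatever quantized neuron the network actually builds, and the quant_params value shown is a placeholder, not a documented schema.

    # Hypothetical CreateNeuron-compatible builder (illustration only).
    from spikingjelly.activation_based.neuron import IFNode
    from torch import nn

    def create_neuron(quant_params) -> nn.Module:
        # A real builder would configure a quantized neuron from quant_params;
        # here the argument is ignored and a plain IFNode is returned.
        return IFNode()

    neuron = create_neuron({'bits': 8})  # placeholder quantization config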
- class qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNet.QuantizedBasicBlockBuilder[source]
Bases: Protocol
Signature for basicblockbuilder.
Used to bind the hyperparameters that are constant across all the ResNet blocks.
- __call__(in_planes: int, planes: int, kernel_size: int, stride: int, padding: int) → QuantizedBasicBlock [source]
Build a QuantizedBasicBlock.
- Parameters:
in_planes (int) – Number of input channels
planes (int) – Number of filters (i.e., output channels) in the main branch Conv layers
kernel_size (int) – kernel_size for the main branch Conv layers
stride (int) – kernel_size for the MaxPool layers, no MaxPool layer added if 1
padding (int) – Padding for the main branch Conv layers
- Returns:
The constructed QuantizedBasicBlock
- Return type:
QuantizedBasicBlock
- __init__(*args, **kwargs)
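Since the protocol only fixes the per-block arguments, a conforming builder can be obtained by pre-binding the remaining QuantizedBasicBlock constructor arguments, for example with functools.partial. The sketch below is an assumption of how such a builder might be created; sjlayers_t, create_neuron and quant_params stand for values normally prepared by QuantizedSResNet and are hypothetical here.

    # Hypothetical basicblockbuilder obtained by binding the block-independent
    # arguments of QuantizedBasicBlock (illustration only).
    import functools

    from qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNet import QuantizedBasicBlock

    basicblockbuilder = functools.partial(
        QuantizedBasicBlock,
        sjlayers_t=sjlayers_t,            # 1D or 2D quantized layer module (assumed available)
        batch_norm=True,
        bn_momentum=0.1,
        force_projection_with_stride=True,
        create_neuron=create_neuron,      # CreateNeuron callable (assumed available)
        step_mode='m',
        quant_params=quant_params,        # QuantizationConfigDict (assumed available)
    )

    # Only the per-block hyperparameters remain to be supplied:
    block = basicblockbuilder(in_planes=64, planes=64, kernel_size=3, stride=1, padding=1)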
- class qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNet.QuantizedBasicBlock[source]
Bases: Module
A single quantized ResNetv1 block.
Should have topology identical to qualia_plugin_snn.learningmodel.pytorch.SResNet.BasicBlock but with layers replaced with their quantized equivalent.
Structure is:

               |
              / \
             |   |
    QuantizedConv            |
           |                 |
    QuantizedBatchNorm       |
           |                 |
    QuantizedMaxPool    QuantizedConv
           |                 |
    QuantizedIF         QuantizedBatchNorm
           |                 |
    QuantizedConv       QuantizedMaxPool
           |                 |
    QuantizedBatchNorm  QuantizedIF
           |                 |
    QuantizedIF              |
           |                 |
            \               /
                    |
              QuantizedAdd
Main (left) branch QuantizedConv layers use kernel_size=kernel_size, while the residual (right) branch QuantizedConv uses kernel_size=1.
QuantizedBatchNorm layers will be absent if batch_norm == False.
The QuantizedMaxPool layer will be absent if stride == 1.
The residual (right) branch QuantizedConv layer will be absent if in_planes == planes, except if force_projection_with_stride == True and stride != 1.
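The presence rules above can be restated as simple predicates; the functions below are an illustration of those rules, not code from the module.

    # Illustration only: restates the layer-presence rules described above.
    def has_maxpool(stride: int) -> bool:
        # A QuantizedMaxPool layer is only inserted when a stride is applied.
        return stride != 1

    def has_residual_projection(in_planes: int, planes: int, stride: int,
                                force_projection_with_stride: bool) -> bool:
        # The residual-branch QuantizedConv is kept when channel counts differ,
        # or when a stride is applied and projection is forced.
        return in_planes != planes or (force_projection_with_stride and stride != 1)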
- __init__(sjlayers_t: ModuleType, in_planes: int, planes: int, kernel_size: int, stride: int, padding: int, batch_norm: bool, bn_momentum: float, force_projection_with_stride: bool, create_neuron: CreateNeuron, step_mode: str, quant_params: QuantizationConfigDict) → None [source]
Construct QuantizedBasicBlock.
- Parameters:
sjlayers_t (ModuleType) – Module containing the aliased quantized layers to use (1D or 2D)
in_planes (int) – Number of input channels
planes (int) – Number of filters (i.e., output channels) in the main branch QuantizedConv layers
kernel_size (int) – kernel_size for the main branch QuantizedConv layers
stride (int) – kernel_size for the QuantizedMaxPool layers, no QuantizedMaxPool layer added if 1
padding (int) – Padding for the main branch QuantizedConv layers
batch_norm (bool) – If True, add a BatchNorm layer after each QuantizedConv layer
bn_momentum (float) – QuantizedBatchNorm layer momentum
force_projection_with_stride (bool) – If True, the residual QuantizedConv layer is kept when stride != 1 even if in_planes == planes
create_neuron (CreateNeuron) – qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.create_neuron() method to instantiate a spiking neuron
step_mode (str) – SpikingJelly step_mode from qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.step_mode
quant_params (QuantizationConfigDict) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
- Return type:
None
- class qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNet.QuantizedSResNet[source]
Bases: SNN
Quantized residual spiking neural network template.
Should have topology identical to qualia_plugin_snn.learningmodel.pytorch.SResNet.SResNet but with layers replaced with their quantized equivalent.
- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], num_blocks: list[int], strides: list[int], paddings: list[int], quant_params: QuantizationConfig, prepool: int = 1, postpool: str = 'max', batch_norm: bool = False, bn_momentum: float = 0.1, force_projection_with_stride: bool = True, neuron: RecursiveConfigDict | None = None, timesteps: int = 2, dims: int = 1, basicblockbuilder: QuantizedBasicBlockBuilder | None = None) → None [source]
Construct QuantizedSResNet.
Structure is:

    QuantizedInput
          |
    QuantizedAvgPool
          |
    QuantizedConv
          |
    QuantizedBatchNorm
          |
    QuantizedIF
          |
    QuantizedBasicBlock
          |
          …
          |
    QuantizedBasicBlock
          |
    QuantizedGlobalPool
          |
    Flatten
          |
    QuantizedLinear
- Parameters:
filters (list[int]) – List of out_channels for QuantizedConv layers inside each QuantizedBasicBlock group, must be of the same size as num_blocks, first element is for the first QuantizedConv layer at the beginning of the network
kernel_sizes (list[int]) – List of kernel_size for QuantizedConv layers inside each QuantizedBasicBlock group, must be of the same size as num_blocks, first element is for the first QuantizedConv layer at the beginning of the network
num_blocks (list[int]) – List of the number of QuantizedBasicBlock in each group, also defines the number of QuantizedBasicBlock groups inside the network
strides (list[int]) – List of kernel_size for QuantizedMaxPool layers inside each QuantizedBasicBlock group, must be of the same size as num_blocks, stride is applied only to the first QuantizedBasicBlock of the group, the next QuantizedBasicBlock in the group use a stride of 1, first element is the stride of the first QuantizedConv layer at the beginning of the network
paddings (list[int]) – List of padding for QuantizedConv layers inside each QuantizedBasicBlock group, must be of the same size as num_blocks, first element is for the first QuantizedConv layer at the beginning of the network
prepool (int) – QuantizedAvgPool layer kernel_size to add at the beginning of the network, no layer added if 0
postpool (str) – Quantized global pooling layer type after all QuantizedBasicBlock, either max for QuantizedMaxPool or avg for QuantizedAvgPool
batch_norm (bool) – If True, add a QuantizedBatchNorm layer after each QuantizedConv layer, otherwise no layer added
bn_momentum (float) – QuantizedBatchNorm momentum
force_projection_with_stride (bool) – If True, the residual QuantizedConv layer is kept when stride != 1 even if in_planes == planes inside a QuantizedBasicBlock
neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()
timesteps (int) – Number of timesteps
dims (int) – Either 1 or 2 for a 1D or 2D convolutional network
basicblockbuilder (QuantizedBasicBlockBuilder | None) – Optional function with the QuantizedBasicBlockBuilder.__call__() signature to build a basic block after binding constants common across all basic blocks
quant_params (QuantizationConfig) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
- Return type:
None
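A minimal instantiation sketch is shown below. The input/output shapes, list values and quant_params dict are placeholders chosen for illustration; they do not come from this documentation, and the actual QuantizationConfig schema is defined by qualia_core.learningmodel.pytorch.Quantizer.Quantizer.

    # Hypothetical 1D QuantizedSResNet with two groups of two blocks each
    # (illustration only; parameter values are placeholders).
    from qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNet import QuantizedSResNet

    model = QuantizedSResNet(
        input_shape=(128, 9),        # assumed (samples, channels) layout for dims=1
        output_shape=(6,),           # number of output classes
        filters=[64, 128],
        kernel_sizes=[3, 3],
        num_blocks=[2, 2],
        strides=[1, 2],
        paddings=[1, 1],
        quant_params={'bits': 8},    # placeholder quantization configuration
        batch_norm=True,
        timesteps=4,
        dims=1,
    )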