qualia_plugin_snn.learningmodel.pytorch package
Subpackages
Submodules
- qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN module
- qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNet module
- qualia_plugin_snn.learningmodel.pytorch.SCNN module
- qualia_plugin_snn.learningmodel.pytorch.SNN module
- qualia_plugin_snn.learningmodel.pytorch.SResNet module
Module contents
PyTorch learningmodel templates for Spiking Neural Networks based on SpikingJelly.
- class qualia_plugin_snn.learningmodel.pytorch.SCNN[source]
Bases: SNN
Convolutional spiking neural network template.
Similar to qualia_core.learningmodel.pytorch.CNN.CNN but with spiking neuron activation layers (e.g., IF) instead of torch.nn.ReLU.
Example TOML configuration for a 2D SVGG16 over 4 timesteps with soft-reset multi-step IF based on the SCNN template:
[[model]]
name                = "VGG16"
params.filters      = [  64,  64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, 512]
params.kernel_sizes = [   3,   3,   3,   3,   3,   3,   3,   3,   3,   3,   3,   3,   3]
params.paddings     = [   1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1]
params.strides      = [   1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1]
params.pool_sizes   = [   0,   2,   0,   2,   0,   0,   2,   0,   0,   2,   0,   0,   2]
params.dropouts     = [   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0]
params.fc_units     = [4096, 4096]
params.batch_norm   = true
params.timesteps    = 4
params.dims         = 2
params.neuron.kind                = 'IFNode'
params.neuron.params.v_reset      = false # Soft reset
params.neuron.params.v_threshold  = 1.0
params.neuron.params.detach_reset = true
params.neuron.params.step_mode    = 'm'   # Multi-step mode, make sure to use SpikingJellyMultiStep learningframework
params.neuron.params.backend      = 'cupy'
- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], paddings: list[int], strides: list[int], dropouts: float | list[float], pool_sizes: list[int], fc_units: list[int], batch_norm: bool = False, prepool: int | list[int] = 1, postpool: int | list[int] = 1, neuron: RecursiveConfigDict | None = None, timesteps: int = 4, gsp: bool = False, dims: int = 1) → None [source]
Construct SCNN.
- Parameters:
  - filters (list[int]) – List of out_channels for each Conv layer; also defines the number of Conv layers
  - kernel_sizes (list[int]) – List of kernel_size for each Conv layer, must be of the same size as filters
  - paddings (list[int]) – List of padding for each Conv layer, must be of the same size as filters
  - strides (list[int]) – List of stride for each Conv layer, must be of the same size as filters
  - dropouts (float | list[float]) – List of Dropout layer p to apply after each Conv or Linear layer, must be of the same size as filters + fc_units; no layer added if element is 0
  - pool_sizes (list[int]) – List of MaxPool layer kernel_size to apply after each Conv layer, must be of the same size as filters; no layer added if element is 0
  - fc_units (list[int]) – List of torch.nn.Linear layer out_features to add at the end of the network; no layer added if empty
  - batch_norm (bool) – If True, add a BatchNorm layer after each Conv layer, otherwise no layer is added
  - prepool (int | list[int]) – AvgPool layer kernel_size to add at the beginning of the network; no layer added if 0
  - postpool (int | list[int]) – AvgPool layer kernel_size to add after all Conv layers; no layer added if 0
  - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()
  - timesteps (int) – Number of timesteps
  - gsp (bool) – If True, a single GlobalSumPool layer is added instead of Linear layers
  - dims (int) – Either 1 or 2 for a 1D or 2D convolutional network
- Return type:
None
- layers: nn.ModuleDict
List of sequential layers of the SCNN model
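For illustration, the same template can also be instantiated directly from Python using the constructor documented above. The sketch below is an assumption-laden example rather than code shipped with the package: the import path follows the module layout listed at the top of this page, while the input_shape/output_shape values and the {'kind': ..., 'params': ...} layout of the neuron dict (mirroring the params.neuron TOML section) are assumptions.

```python
# Illustrative sketch only: a small 1D SCNN built directly in Python.
# Assumptions: the (samples, channels) input_shape convention and the
# {'kind': ..., 'params': {...}} neuron dict layout mirroring the TOML
# params.neuron section above.
from qualia_plugin_snn.learningmodel.pytorch.SCNN import SCNN

model = SCNN(
    input_shape=(128, 1),   # assumed (samples, channels) layout for dims=1
    output_shape=(10,),     # 10 output classes
    filters=[8, 16],        # two Conv layers
    kernel_sizes=[3, 3],
    paddings=[1, 1],
    strides=[1, 1],
    dropouts=0.0,           # no Dropout layers
    pool_sizes=[2, 2],      # MaxPool after each Conv layer
    fc_units=[32],          # one extra Linear layer with 32 units
    batch_norm=True,
    neuron={'kind': 'IFNode',
            'params': {'v_reset': False,   # soft reset, mirroring the TOML example
                       'v_threshold': 1.0,
                       'detach_reset': True,
                       'step_mode': 'm',   # multi-step mode, use the SpikingJellyMultiStep learningframework
                       'backend': 'torch'}},
    timesteps=4,
    dims=1,
)
```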
- class qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN[source]
Bases: SNN
Quantized convolutional spiking neural network template.
Should have topology identical to qualia_plugin_snn.learningmodel.pytorch.SCNN.SCNN but with layers replaced with their quantized equivalents.
- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], paddings: list[int], strides: list[int], dropouts: float | list[float], pool_sizes: list[int], fc_units: list[int], quant_params: QuantizationConfig, batch_norm: bool = False, prepool: int | list[int] = 1, postpool: int | list[int] = 1, neuron: RecursiveConfigDict | None = None, timesteps: int = 4, gsp: bool = False, dims: int = 1) → None [source]
Construct QuantizedSCNN.
- Parameters:
  - filters (list[int]) – List of out_channels for each QuantizedConv layer; also defines the number of QuantizedConv layers
  - kernel_sizes (list[int]) – List of kernel_size for each QuantizedConv layer, must be of the same size as filters
  - paddings (list[int]) – List of padding for each QuantizedConv layer, must be of the same size as filters
  - strides (list[int]) – List of stride for each QuantizedConv layer, must be of the same size as filters
  - dropouts (float | list[float]) – List of Dropout layer p to apply after each QuantizedConv or QuantizedLinear layer, must be of the same size as filters + fc_units; no layer added if element is 0
  - pool_sizes (list[int]) – List of QuantizedMaxPool layer kernel_size to apply after each QuantizedConv layer, must be of the same size as filters; no layer added if element is 0
  - fc_units (list[int]) – List of QuantizedLinear layer out_features to add at the end of the network; no layer added if empty
  - batch_norm (bool) – If True, add a QuantizedBatchNorm layer after each QuantizedConv layer, otherwise no layer is added
  - prepool (int | list[int]) – QuantizedAvgPool layer kernel_size to add at the beginning of the network; no layer added if 0
  - postpool (int | list[int]) – QuantizedAvgPool layer kernel_size to add after all QuantizedConv layers; no layer added if 0
  - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__(); 'Quantized' is automatically prepended to neuron.kind
  - timesteps (int) – Number of timesteps
  - gsp (bool) – If True, a single QuantizedGlobalSumPool layer is added instead of QuantizedLinear layers
  - dims (int) – Either 1 or 2 for a 1D or 2D convolutional network
  - quant_params (QuantizationConfig) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
- Return type:
None
- layers: nn.ModuleDict
List of sequential layers of the QuantizedSCNN model
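A construction sketch for the quantized variant follows, again as an assumption rather than documented usage: it mirrors the SCNN example above with the extra quant_params argument. The quant_params keys shown ('bits', 'quantype') are purely illustrative assumptions; the accepted schema is defined by qualia_core.learningmodel.pytorch.Quantizer.Quantizer. Per the documentation above, 'Quantized' is prepended to neuron.kind, so 'IFNode' becomes 'QuantizedIFNode'.

```python
# Illustrative sketch only, not taken from the documentation: QuantizedSCNN is
# built like SCNN plus a quant_params argument.
from qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN import QuantizedSCNN

quant_params = {        # hypothetical content, see Quantizer for the real schema
    'bits': 8,          # assumed: quantization bit width
    'quantype': 'fxp',  # assumed: fixed-point quantization
}

model = QuantizedSCNN(
    input_shape=(128, 1),
    output_shape=(10,),
    filters=[8, 16],
    kernel_sizes=[3, 3],
    paddings=[1, 1],
    strides=[1, 1],
    dropouts=0.0,
    pool_sizes=[2, 2],
    fc_units=[32],
    quant_params=quant_params,
    batch_norm=True,
    # 'Quantized' is prepended automatically: 'IFNode' -> 'QuantizedIFNode'
    neuron={'kind': 'IFNode',
            'params': {'v_reset': False, 'v_threshold': 1.0,
                       'detach_reset': True, 'step_mode': 'm',
                       'backend': 'torch'}},
    timesteps=4,
    dims=1,
)
```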
- class qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNet[source]
Bases: SNN
Quantized residual spiking neural network template.
Should have topology identical to qualia_plugin_snn.learningmodel.pytorch.SResNet.SResNet but with layers replaced with their quantized equivalents.
- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], num_blocks: list[int], strides: list[int], paddings: list[int], quant_params: QuantizationConfig, prepool: int = 1, postpool: str = 'max', batch_norm: bool = False, bn_momentum: float = 0.1, force_projection_with_stride: bool = True, neuron: RecursiveConfigDict | None = None, timesteps: int = 2, dims: int = 1, basicblockbuilder: QuantizedBasicBlockBuilder | None = None) → None [source]
Construct QuantizedSResNet.
Structure is:
QuantizedInput | QuantizedAvgPool | QuantizedConv | QuantizedBatchNorm | QuantizedIF | QuantizedBasicBlock | … | QuantizedBasicBlock | QuantizedGlobalPool | Flatten | QuantizedLinear
- Parameters:
  - filters (list[int]) – List of out_channels for QuantizedConv layers inside each QuantizedBasicBlock group, must be of the same size as num_blocks; the first element is for the first QuantizedConv layer at the beginning of the network
  - kernel_sizes (list[int]) – List of kernel_size for QuantizedConv layers inside each QuantizedBasicBlock group, must be of the same size as num_blocks; the first element is for the first QuantizedConv layer at the beginning of the network
  - num_blocks (list[int]) – List of the number of QuantizedBasicBlock in each group; also defines the number of QuantizedBasicBlock groups inside the network
  - strides (list[int]) – List of kernel_size for QuantizedMaxPool layers inside each QuantizedBasicBlock group, must be of the same size as num_blocks; stride is applied only to the first QuantizedBasicBlock of the group, the next QuantizedBasicBlock in the group use a stride of 1; the first element is the stride of the first QuantizedConv layer at the beginning of the network
  - paddings (list[int]) – List of padding for the QuantizedConv layer inside each QuantizedBasicBlock group, must be of the same size as num_blocks; the first element is for the first QuantizedConv layer at the beginning of the network
  - prepool (int) – QuantizedAvgPool layer kernel_size to add at the beginning of the network; no layer added if 0
  - postpool (str) – Quantized global pooling layer type after all QuantizedBasicBlock, either 'max' for QuantizedMaxPool or 'avg' for QuantizedAvgPool
  - batch_norm (bool) – If True, add a QuantizedBatchNorm layer after each QuantizedConv layer, otherwise no layer is added
  - bn_momentum (float) – QuantizedBatchNorm momentum
  - force_projection_with_stride (bool) – If True, the residual QuantizedConv layer is kept when stride != 1 even if in_planes == planes inside a QuantizedBasicBlock
  - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()
  - timesteps (int) – Number of timesteps
  - dims (int) – Either 1 or 2 for a 1D or 2D convolutional network
  - basicblockbuilder (QuantizedBasicBlockBuilder | None) – Optional function with the QuantizedBasicBlockBuilder.__call__() signature to build a basic block after binding constants common across all basic blocks
  - quant_params (QuantizationConfig) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
- Return type:
None
- class qualia_plugin_snn.learningmodel.pytorch.SResNet[source]
Bases: SNN
Residual spiking neural network template.
Similar to qualia_core.learningmodel.pytorch.ResNet.ResNet but with spiking neuron activation layers (e.g., IF) instead of torch.nn.ReLU.
Example TOML configuration for a 2D ResNetv1-18 over 4 timesteps with soft-reset multi-step IF based on the SResNet template:
[[model]]
name                = "SResNetv1-18"
params.filters      = [64, 64, 128, 256, 512]
params.kernel_sizes = [ 7,  3,   3,   3,   3]
params.paddings     = [ 3,  1,   1,   1,   1]
params.strides      = [ 2,  1,   1,   1,   1]
params.num_blocks   = [     2,   2,   2,   2]
params.prepool      = 1
params.postpool     = 'max'
params.batch_norm   = true
params.dims         = 2
params.timesteps    = 4
params.neuron.kind                = 'IFNode'
params.neuron.params.v_reset      = false # Soft reset
params.neuron.params.v_threshold  = 1.0
params.neuron.params.detach_reset = true
params.neuron.params.step_mode    = 'm'   # Multi-step mode, make sure to use SpikingJellyMultiStep learningframework
params.neuron.params.backend      = 'torch'
- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], num_blocks: list[int], strides: list[int], paddings: list[int], prepool: int = 1, postpool: str = 'max', batch_norm: bool = False, bn_momentum: float = 0.1, force_projection_with_stride: bool = True, neuron: RecursiveConfigDict | None = None, timesteps: int = 2, dims: int = 1, basicblockbuilder: BasicBlockBuilder | None = None) → None [source]
Construct SResNet.
Structure is:
Input | AvgPool | Conv | BatchNorm | IF | BasicBlock | … | BasicBlock | GlobalPool | Flatten | Linear
- Parameters:
  - filters (list[int]) – List of out_channels for Conv layers inside each BasicBlock group, must be of the same size as num_blocks; the first element is for the first Conv layer at the beginning of the network
  - kernel_sizes (list[int]) – List of kernel_size for Conv layers inside each BasicBlock group, must be of the same size as num_blocks; the first element is for the first Conv layer at the beginning of the network
  - num_blocks (list[int]) – List of the number of BasicBlock in each group; also defines the number of BasicBlock groups inside the network
  - strides (list[int]) – List of kernel_size for MaxPool layers inside each BasicBlock group, must be of the same size as num_blocks; stride is applied only to the first BasicBlock of the group, the next BasicBlock in the group use a stride of 1; the first element is the stride of the first Conv layer at the beginning of the network
  - paddings (list[int]) – List of padding for the Conv layer inside each BasicBlock group, must be of the same size as num_blocks; the first element is for the first Conv layer at the beginning of the network
  - prepool (int) – AvgPool layer kernel_size to add at the beginning of the network; no layer added if 0
  - postpool (str) – Global pooling layer type after all BasicBlock, either 'max' for MaxPool or 'avg' for AvgPool
  - batch_norm (bool) – If True, add a BatchNorm layer after each Conv layer, otherwise no layer is added
  - bn_momentum (float) – BatchNorm momentum
  - force_projection_with_stride (bool) – If True, the residual Conv layer is kept when stride != 1 even if in_planes == planes inside a BasicBlock
  - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()
  - timesteps (int) – Number of timesteps
  - dims (int) – Either 1 or 2 for a 1D or 2D convolutional network
  - basicblockbuilder (BasicBlockBuilder | None) – Optional function with the BasicBlockBuilder.__call__() signature to build a basic block after binding constants common across all basic blocks
- Return type:
None
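As an illustration, the SResNetv1-18 TOML example above can also be expressed as a direct Python construction. This is a sketch under assumptions, not code from the package: the import path follows the module layout listed at the top of this page, and the CIFAR-10-like channels-last input_shape/output_shape and the neuron dict layout (mirroring params.neuron) are assumptions.

```python
# Illustrative sketch only: the SResNetv1-18 TOML configuration expressed as a
# direct Python construction of the SResNet template.
from qualia_plugin_snn.learningmodel.pytorch.SResNet import SResNet

model = SResNet(
    input_shape=(32, 32, 3),          # assumed channels-last 2D input (CIFAR-10-like)
    output_shape=(10,),               # 10 output classes
    filters=[64, 64, 128, 256, 512],  # first entry: stem Conv, then one per BasicBlock group
    kernel_sizes=[7, 3, 3, 3, 3],
    paddings=[3, 1, 1, 1, 1],
    strides=[2, 1, 1, 1, 1],
    num_blocks=[2, 2, 2, 2],          # 4 groups of 2 BasicBlock, i.e. a ResNetv1-18 layout
    prepool=1,
    postpool='max',
    batch_norm=True,
    neuron={'kind': 'IFNode',
            'params': {'v_reset': False,   # soft reset, mirroring the TOML example
                       'v_threshold': 1.0,
                       'detach_reset': True,
                       'step_mode': 'm',   # multi-step mode, use the SpikingJellyMultiStep learningframework
                       'backend': 'torch'}},
    timesteps=4,
    dims=2,
)
```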