qualia_plugin_snn.learningmodel.pytorch package
Subpackages
Submodules
- qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN module
- qualia_plugin_snn.learningmodel.pytorch.QuantizedSMLP module
- qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNet module
- qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNetStride module
- qualia_plugin_snn.learningmodel.pytorch.SCNN module
- qualia_plugin_snn.learningmodel.pytorch.SMLP module
- qualia_plugin_snn.learningmodel.pytorch.SNN module
- qualia_plugin_snn.learningmodel.pytorch.SResNet module
- qualia_plugin_snn.learningmodel.pytorch.SResNetStride module
Module contents
PyTorch learningmodel templates for Spiking Neural Networks based on SpikingJelly.
- class qualia_plugin_snn.learningmodel.pytorch.SCNN[source]
Bases: SNN

Convolutional spiking neural network template.

Similar to qualia_core.learningmodel.pytorch.CNN.CNN but with spiking neuron activation layers (e.g., IF) instead of torch.nn.ReLU.

Example TOML configuration for a 2D SVGG16 over 4 timesteps with soft-reset multi-step IF based on the SCNN template:
```toml
[[model]]
name = "VGG16"
params.filters      = [64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, 512]
params.kernel_sizes = [ 3,  3,   3,   3,   3,   3,   3,   3,   3,   3,   3,   3,   3]
params.paddings     = [ 1,  1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1]
params.strides      = [ 1,  1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1]
params.pool_sizes   = [ 0,  2,   0,   2,   0,   0,   2,   0,   0,   2,   0,   0,   2]
params.dropouts     = [ 0,  0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0, 0, 0]
params.separables   = [false, false, false, false, false, false, false, false, false, false, false, false, false]
params.fc_units     = [4096, 4096]
params.batch_norm = true
params.timesteps  = 4
params.dims       = 2
params.neuron.kind                = 'IFNode'
params.neuron.params.v_reset      = false # Soft reset
params.neuron.params.v_threshold  = 1.0
params.neuron.params.detach_reset = true
params.neuron.params.step_mode    = 'm'   # Multi-step mode, make sure to use the SpikingJellyMultiStep learningframework
params.neuron.params.backend      = 'cupy'
```
- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], paddings: list[int], strides: list[int], dropouts: float | list[float], pool_sizes: list[int], fc_units: list[int], separables: list[bool] | None = None, batch_norm: bool = False, prepool: int | list[int] = 1, postpool: int | list[int] = 1, neuron: RecursiveConfigDict | None = None, timesteps: int = 4, gsp: bool = False, dims: int = 1) None[source]
Construct SCNN.

- Parameters:
  - filters (list[int]) – List of out_channels for each Conv layer; also defines the number of Conv layers
  - kernel_sizes (list[int]) – List of kernel_size for each Conv layer, must be of the same size as filters
  - paddings (list[int]) – List of padding for each Conv layer, must be of the same size as filters
  - strides (list[int]) – List of stride for each Conv layer, must be of the same size as filters
  - dropouts (float | list[float]) – List of Dropout layer p to apply after each Conv or Linear layer, must be of the same size as filters + fc_units; no layer added if element is 0
  - pool_sizes (list[int]) – List of MaxPool layer kernel_size to apply after each Conv layer, must be of the same size as filters; no layer added if element is 0
  - fc_units (list[int]) – List of torch.nn.Linear layer out_features to add at the end of the network; no layer added if empty
  - separables (list[bool] | None) – Whether a given Conv layer is implemented as a depthwise-pointwise pair (true) or as a standard Conv (false), must be of the same size as filters
  - batch_norm (bool) – If True, add a BatchNorm layer after each Conv layer; otherwise no layer added
  - prepool (int | list[int]) – AvgPool layer kernel_size to add at the beginning of the network; no layer added if 0
  - postpool (int | list[int]) – AvgPool layer kernel_size to add after all Conv layers; no layer added if 0
  - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()
  - timesteps (int) – Number of timesteps
  - gsp (bool) – If True, a single GlobalSumPool layer is added instead of Linear layers
  - dims (int) – Either 1 or 2 for a 1D or 2D convolutional network
- Return type: None
- layers: ModuleDict
List of sequential layers of the SCNN model
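The per-layer lists of the SCNN template line up index by index: element i of filters, kernel_sizes, paddings, strides, pool_sizes and dropouts together describe the i-th Conv stage. The following sketch illustrates that convention only; describe_scnn is a hypothetical helper, not part of qualia_plugin_snn, and the real template's exact layer ordering (BatchNorm, prepool/postpool, flatten, classifier) is determined by the actual implementation.

```python
# Illustrative sketch of how SCNN's parallel per-layer lists are consumed.
# describe_scnn is hypothetical and not part of qualia_plugin_snn.
def describe_scnn(filters, kernel_sizes, paddings, strides,
                  pool_sizes, dropouts, fc_units):
    if not (len(filters) == len(kernel_sizes) == len(paddings)
            == len(strides) == len(pool_sizes)):
        raise ValueError('per-Conv lists must all have the same length as filters')
    if len(dropouts) != len(filters) + len(fc_units):
        raise ValueError('dropouts must cover every Conv and Linear layer')

    layers = []
    for i, f in enumerate(filters):
        layers.append(f'Conv(out_channels={f}, kernel_size={kernel_sizes[i]}, '
                      f'padding={paddings[i]}, stride={strides[i]})')
        layers.append('IF')  # spiking activation instead of ReLU (placement may differ)
        if pool_sizes[i]:    # 0 means no pooling layer at this stage
            layers.append(f'MaxPool(kernel_size={pool_sizes[i]})')
        if dropouts[i]:      # 0 means no dropout layer at this stage
            layers.append(f'Dropout(p={dropouts[i]})')
    for j, u in enumerate(fc_units):
        layers.append(f'Linear(out_features={u})')
        layers.append('IF')
        if dropouts[len(filters) + j]:
            layers.append(f'Dropout(p={dropouts[len(filters) + j]})')
    return layers
```

For the SVGG16 example above, the 13-element lists would produce 13 Conv stages followed by the two 4096-unit Linear layers.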
- class qualia_plugin_snn.learningmodel.pytorch.SMLP[source]
Bases: SNN

Spiking multi-layer perceptron template.

Similar to qualia_core.learningmodel.pytorch.MLP.MLP but with spiking neuron activation layers (e.g., IF) instead of torch.nn.ReLU.

A last torch.nn.Linear layer matching the number of output classes is implicitly added.

Example TOML configuration for a 3-layer spiking MLP over 4 timesteps with soft-reset multi-step IF based on the SMLP template:
```toml
[[model]]
kind = "SMLP"
name = "smlp_128-128-10"
params.units     = [128, 128]
params.timesteps = 4
params.neuron.kind                = 'IFNode'
params.neuron.params.v_reset      = false # Soft reset
params.neuron.params.v_threshold  = 1.0
params.neuron.params.detach_reset = true
params.neuron.params.step_mode    = 'm'   # Multi-step mode, make sure to use the SpikingJellyMultiStep learningframework
params.neuron.params.backend      = 'torch'
```
- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], units: list[int], timesteps: int, neuron: RecursiveConfigDict | None = None) None[source]
Construct SMLP.

- Parameters:
  - units (list[int]) – List of torch.nn.Linear layer out_features to add in the network
  - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()
  - timesteps (int) – Number of timesteps
- Return type: None
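Because the final classifier Linear is implicit, the out_features of the Linear layers actually built come from units plus the number of output classes. A minimal sketch, assuming output_shape carries the class count in its last dimension (linear_out_features is a hypothetical helper, not package code):

```python
# Illustrative sketch only: the SMLP template implicitly appends a final
# torch.nn.Linear matching the number of output classes.
# linear_out_features is hypothetical, not part of qualia_plugin_snn.
def linear_out_features(units, output_shape):
    n_classes = output_shape[-1]   # assumption: last dim of output_shape is the class count
    return [*units, n_classes]     # implicit classifier appended to the configured units
```

This matches the "smlp_128-128-10" example above: units = [128, 128] with 10 output classes yields three Linear layers of 128, 128 and 10 units.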
- class qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN[source]
Bases: SNN

Quantized convolutional spiking neural network template.

Should have a topology identical to qualia_plugin_snn.learningmodel.pytorch.SCNN.SCNN but with layers replaced by their quantized equivalents.

- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], paddings: list[int], strides: list[int], dropouts: float | list[float], pool_sizes: list[int], fc_units: list[int], quant_params: QuantizationConfig, batch_norm: bool = False, prepool: int | list[int] = 1, postpool: int | list[int] = 1, neuron: RecursiveConfigDict | None = None, timesteps: int = 4, gsp: bool = False, dims: int = 1) None[source]
Construct QuantizedSCNN.

- Parameters:
  - filters (list[int]) – List of out_channels for each QuantizedConv layer; also defines the number of QuantizedConv layers
  - kernel_sizes (list[int]) – List of kernel_size for each QuantizedConv layer, must be of the same size as filters
  - paddings (list[int]) – List of padding for each QuantizedConv layer, must be of the same size as filters
  - strides (list[int]) – List of stride for each QuantizedConv layer, must be of the same size as filters
  - dropouts (float | list[float]) – List of Dropout layer p to apply after each QuantizedConv or QuantizedLinear layer, must be of the same size as filters + fc_units; no layer added if element is 0
  - pool_sizes (list[int]) – List of QuantizedMaxPool layer kernel_size to apply after each QuantizedConv layer, must be of the same size as filters; no layer added if element is 0
  - fc_units (list[int]) – List of QuantizedLinear layer out_features to add at the end of the network; no layer added if empty
  - batch_norm (bool) – If True, add a QuantizedBatchNorm layer after each QuantizedConv layer; otherwise no layer added
  - prepool (int | list[int]) – QuantizedAvgPool layer kernel_size to add at the beginning of the network; no layer added if 0
  - postpool (int | list[int]) – QuantizedAvgPool layer kernel_size to add after all QuantizedConv layers; no layer added if 0
  - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__(); 'Quantized' is automatically prepended to neuron.kind
  - timesteps (int) – Number of timesteps
  - gsp (bool) – If True, a single QuantizedGlobalSumPool layer is added instead of QuantizedLinear layers
  - dims (int) – Either 1 or 2 for a 1D or 2D convolutional network
  - quant_params (QuantizationConfig) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.layers.Quantizer.Quantizer
- Return type: None
- layers: ModuleDict
List of sequential layers of the QuantizedSCNN model
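The neuron parameter above documents that 'Quantized' is automatically prepended to neuron.kind, so a neuron configuration written for the float SCNN template can be reused unchanged. A sketch of that resolution step (quantized_neuron_kind is a hypothetical helper, not the actual qualia_plugin_snn implementation):

```python
# Illustrative sketch only: shows the documented behaviour of prepending
# 'Quantized' to neuron.kind. Hypothetical helper, not package code.
def quantized_neuron_kind(neuron):
    if neuron is None:
        return None
    resolved = dict(neuron)  # copy so the caller's config is not mutated
    resolved['kind'] = 'Quantized' + resolved['kind']
    return resolved
```

For example, a config with kind 'IFNode' would resolve to a 'QuantizedIFNode' spiking neuron layer.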
- class qualia_plugin_snn.learningmodel.pytorch.QuantizedSMLP[source]
Bases: SNN

Quantized spiking multi-layer perceptron template.

Should have a topology identical to qualia_plugin_snn.learningmodel.pytorch.SMLP.SMLP but with layers replaced by their quantized equivalents.

- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], units: list[int], quant_params: QuantizationConfig, timesteps: int, neuron: RecursiveConfigDict | None = None) None[source]
Construct QuantizedSMLP.

- Parameters:
  - units (list[int]) – List of torch.nn.Linear layer out_features to add in the network
  - quant_params (QuantizationConfig) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.layers.Quantizer.Quantizer
  - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()
  - timesteps (int) – Number of timesteps
- Return type: None
- class qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNet[source]
Bases: SNN

Quantized residual spiking neural network template.

Should have a topology identical to qualia_plugin_snn.learningmodel.pytorch.SResNet.SResNet but with layers replaced by their quantized equivalents.

- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], num_blocks: list[int], strides: list[int], paddings: list[int], quant_params: QuantizationConfig, prepool: int = 1, postpool: str = 'max', batch_norm: bool = False, bn_momentum: float = 0.1, force_projection_with_stride: bool = True, neuron: RecursiveConfigDict | None = None, timesteps: int = 2, dims: int = 1, basicblockbuilder: QuantizedBasicBlockBuilder | None = None) None[source]
Construct QuantizedSResNet.

Structure is: QuantizedInput | QuantizedAvgPool | QuantizedConv | QuantizedBatchNorm | QuantizedIF | QuantizedBasicBlock | … | QuantizedBasicBlock | QuantizedGlobalPool | Flatten | QuantizedLinear

- Parameters:
  - filters (list[int]) – List of out_channels for QuantizedConv layers inside each QuantizedBasicBlock group; one element per group plus a leading element for the first QuantizedConv layer at the beginning of the network
  - kernel_sizes (list[int]) – List of kernel_size for QuantizedConv layers inside each QuantizedBasicBlock group, must be of the same size as filters; the leading element is for the first QuantizedConv layer at the beginning of the network
  - num_blocks (list[int]) – Number of QuantizedBasicBlock in each group; also defines the number of QuantizedBasicBlock groups inside the network
  - strides (list[int]) – List of stride for QuantizedConv layers inside each QuantizedBasicBlock group, must be of the same size as filters; stride is applied only to the first QuantizedBasicBlock of the group, subsequent QuantizedBasicBlocks in the group use a stride of 1; the leading element is the stride of the first QuantizedConv layer at the beginning of the network
  - paddings (list[int]) – List of padding for QuantizedConv layers inside each QuantizedBasicBlock group, must be of the same size as filters; the leading element is for the first QuantizedConv layer at the beginning of the network
  - prepool (int) – QuantizedAvgPool layer kernel_size to add at the beginning of the network; no layer added if 0
  - postpool (str) – Quantized global pooling layer type after all QuantizedBasicBlock, either 'max' for QuantizedMaxPool or 'avg' for QuantizedAvgPool
  - batch_norm (bool) – If True, add a QuantizedBatchNorm layer after each QuantizedConv layer; otherwise no layer added
  - bn_momentum (float) – QuantizedBatchNorm momentum
  - force_projection_with_stride (bool) – If True, the residual QuantizedConv projection layer is kept when stride != 1 even if in_planes == planes inside a QuantizedBasicBlock
  - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()
  - timesteps (int) – Number of timesteps
  - dims (int) – Either 1 or 2 for a 1D or 2D convolutional network
  - basicblockbuilder (QuantizedBasicBlockBuilder | None) – Optional function with the QuantizedBasicBlockBuilder.__call__() signature to build a basic block after binding constants common across all basic blocks
  - quant_params (QuantizationConfig) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.layers.Quantizer.Quantizer
- Return type: None
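The basicblockbuilder parameter describes a callable obtained "after binding constants common across all basic blocks". One idiomatic way to build such a callable is functools.partial: bind the shared settings once, leave the per-block arguments free. The sketch below only illustrates that pattern; make_block and its argument names are hypothetical and do not reflect the actual QuantizedBasicBlockBuilder interface.

```python
# Illustrative sketch of the builder pattern behind basicblockbuilder:
# bind block-wide constants once with functools.partial, then call the
# resulting builder with only per-block arguments. make_block is hypothetical.
from functools import partial

def make_block(in_planes, planes, stride, *, quant_params, neuron, batch_norm):
    # Stand-in for constructing a basic block; returns a plain description dict.
    return {'in_planes': in_planes, 'planes': planes, 'stride': stride,
            'quant_params': quant_params, 'neuron': neuron, 'batch_norm': batch_norm}

# Bind the constants shared by every block in the network once.
builder = partial(make_block,
                  quant_params={'bits': 8},        # hypothetical quantization config
                  neuron={'kind': 'IFNode'},
                  batch_norm=True)

# The network can then build each block with only the per-block arguments.
block = builder(64, 128, 2)
```

This keeps the per-block call sites short while guaranteeing every block shares the same quantization and neuron settings.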
- class qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNetStride[source]
Bases: SNN

Quantized residual spiking neural network template.

Should have a topology identical to qualia_plugin_snn.learningmodel.pytorch.SResNetStride.SResNetStride but with layers replaced by their quantized equivalents.

- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], pool_sizes: list[int], num_blocks: list[int], strides: list[int], paddings: list[int], quant_params: QuantizationConfig, prepool: int = 1, postpool: str = 'max', batch_norm: bool = False, bn_momentum: float = 0.1, force_projection_with_pooling: bool = True, neuron: RecursiveConfigDict | None = None, timesteps: int = 2, dims: int = 1, basicblockbuilder: QuantizedBasicBlockBuilder | None = None) None[source]
Construct QuantizedSResNetStride.

Structure is: QuantizedInput | QuantizedAvgPool | QuantizedConv | QuantizedBatchNorm | QuantizedIF | QuantizedBasicBlock | … | QuantizedBasicBlock | QuantizedGlobalPool | Flatten | QuantizedLinear

- Parameters:
  - filters (list[int]) – List of out_channels for QuantizedConv layers inside each QuantizedBasicBlock group; one element per group plus a leading element for the first QuantizedConv layer at the beginning of the network
  - kernel_sizes (list[int]) – List of kernel_size for QuantizedConv layers inside each QuantizedBasicBlock group, must be of the same size as filters; the leading element is for the first QuantizedConv layer at the beginning of the network
  - pool_sizes (list[int]) – List of kernel_size for QuantizedMaxPool layers inside each QuantizedBasicBlock group, must be of the same size as filters; pooling is applied only to the first QuantizedBasicBlock of the group; the leading element is for the first QuantizedConv layer at the beginning of the network
  - num_blocks (list[int]) – Number of QuantizedBasicBlock in each group; also defines the number of QuantizedBasicBlock groups inside the network
  - strides (list[int]) – List of stride for the first QuantizedConv layer inside each QuantizedBasicBlock group, must be of the same size as filters; the leading element is for the first QuantizedConv layer at the beginning of the network
  - paddings (list[int]) – List of padding for QuantizedConv layers inside each QuantizedBasicBlock group, must be of the same size as filters; the leading element is for the first QuantizedConv layer at the beginning of the network
  - prepool (int) – QuantizedAvgPool layer kernel_size to add at the beginning of the network; no layer added if 0
  - postpool (str) – Quantized global pooling layer type after all QuantizedBasicBlock, either 'max' for QuantizedMaxPool or 'avg' for QuantizedAvgPool
  - batch_norm (bool) – If True, add a QuantizedBatchNorm layer after each QuantizedConv layer; otherwise no layer added
  - bn_momentum (float) – QuantizedBatchNorm momentum
  - force_projection_with_pooling (bool) – If True, the residual QuantizedConv projection layer is kept when stride != 1 even if in_planes == planes inside a QuantizedBasicBlock
  - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()
  - timesteps (int) – Number of timesteps
  - dims (int) – Either 1 or 2 for a 1D or 2D convolutional network
  - basicblockbuilder (QuantizedBasicBlockBuilder | None) – Optional function with the QuantizedBasicBlockBuilder.__call__() signature to build a basic block after binding constants common across all basic blocks
  - quant_params (QuantizationConfig) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.layers.Quantizer.Quantizer
- Return type: None
- class qualia_plugin_snn.learningmodel.pytorch.SResNet[source]
Bases: SNN

Residual spiking neural network template.

Similar to qualia_core.learningmodel.pytorch.ResNet.ResNet but with spiking neuron activation layers (e.g., IF) instead of torch.nn.ReLU.

Example TOML configuration for a 2D ResNetv1-18 over 4 timesteps with soft-reset multi-step IF based on the SResNet template:
```toml
[[model]]
name = "SResNetv1-18"
params.filters      = [64, 64, 128, 256, 512]
params.kernel_sizes = [ 7,  3,   3,   3,   3]
params.paddings     = [ 3,  1,   1,   1,   1]
params.strides      = [ 2,  1,   1,   1,   1]
params.num_blocks   = [2, 2, 2, 2]
params.prepool    = 1
params.postpool   = 'max'
params.batch_norm = true
params.dims       = 2
params.timesteps  = 4
params.neuron.kind                = 'IFNode'
params.neuron.params.v_reset      = false # Soft reset
params.neuron.params.v_threshold  = 1.0
params.neuron.params.detach_reset = true
params.neuron.params.step_mode    = 'm'   # Multi-step mode, make sure to use the SpikingJellyMultiStep learningframework
params.neuron.params.backend      = 'torch'
```
- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], num_blocks: list[int], strides: list[int], paddings: list[int], prepool: int = 1, postpool: str = 'max', batch_norm: bool = False, bn_momentum: float = 0.1, force_projection_with_stride: bool = True, neuron: RecursiveConfigDict | None = None, timesteps: int = 2, dims: int = 1, basicblockbuilder: BasicBlockBuilder | None = None) None[source]
Construct SResNet.

Structure is: Input | AvgPool | Conv | BatchNorm | IF | BasicBlock | … | BasicBlock | GlobalPool | Flatten | Linear

- Parameters:
  - filters (list[int]) – List of out_channels for Conv layers inside each BasicBlock group; one element per group plus a leading element for the first Conv layer at the beginning of the network
  - kernel_sizes (list[int]) – List of kernel_size for Conv layers inside each BasicBlock group, must be of the same size as filters; the leading element is for the first Conv layer at the beginning of the network
  - num_blocks (list[int]) – Number of BasicBlock in each group; also defines the number of BasicBlock groups inside the network
  - strides (list[int]) – List of stride for Conv layers inside each BasicBlock group, must be of the same size as filters; stride is applied only to the first BasicBlock of the group, subsequent BasicBlocks in the group use a stride of 1; the leading element is the stride of the first Conv layer at the beginning of the network
  - paddings (list[int]) – List of padding for Conv layers inside each BasicBlock group, must be of the same size as filters; the leading element is for the first Conv layer at the beginning of the network
  - prepool (int) – AvgPool layer kernel_size to add at the beginning of the network; no layer added if 0
  - postpool (str) – Global pooling layer type after all BasicBlock, either 'max' for MaxPool or 'avg' for AvgPool
  - batch_norm (bool) – If True, add a BatchNorm layer after each Conv layer; otherwise no layer added
  - bn_momentum (float) – BatchNorm momentum
  - force_projection_with_stride (bool) – If True, the residual Conv projection layer is kept when stride != 1 even if in_planes == planes inside a BasicBlock
  - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()
  - timesteps (int) – Number of timesteps
  - dims (int) – Either 1 or 2 for a 1D or 2D convolutional network
  - basicblockbuilder (BasicBlockBuilder | None) – Optional function with the BasicBlockBuilder.__call__() signature to build a basic block after binding constants common across all basic blocks
- Return type: None
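In the SResNetv1-18 example above, filters, kernel_sizes, paddings and strides each carry one leading element for the initial Conv layer plus one element per num_blocks group (5 elements for 4 groups). A small sanity check in that spirit (check_sresnet_config is a hypothetical helper, not package code):

```python
# Illustrative sketch only: mirrors the list shapes of the SResNetv1-18 TOML
# example, where each per-layer list has one leading element for the initial
# Conv layer plus one element per BasicBlock group. Hypothetical helper.
def check_sresnet_config(filters, kernel_sizes, paddings, strides, num_blocks):
    expected = len(num_blocks) + 1   # initial Conv + one entry per group
    for name, lst in [('filters', filters), ('kernel_sizes', kernel_sizes),
                      ('paddings', paddings), ('strides', strides)]:
        if len(lst) != expected:
            raise ValueError(f'{name} has {len(lst)} elements, expected {expected}')
    return sum(num_blocks)           # total number of BasicBlocks in the network

# Values from the SResNetv1-18 example: 4 groups of 2 blocks each.
total = check_sresnet_config([64, 64, 128, 256, 512], [7, 3, 3, 3, 3],
                             [3, 1, 1, 1, 1], [2, 1, 1, 1, 1], [2, 2, 2, 2])
```

For ResNetv1-18 this gives 8 BasicBlocks, i.e. 16 Conv layers in the blocks plus the initial Conv and the final Linear.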
- class qualia_plugin_snn.learningmodel.pytorch.SResNetStride[source]
Bases: SNN

Residual spiking neural network template.

Similar to qualia_core.learningmodel.pytorch.ResNetStride.ResNetStride but with spiking neuron activation layers (e.g., IF) instead of torch.nn.ReLU.

Example TOML configuration for a 2D ResNetv1-18 over 4 timesteps with soft-reset multi-step IF based on the SResNetStride template:
```toml
[[model]]
name = "SResNetv1-18"
params.filters      = [64, 64, 128, 256, 512]
params.kernel_sizes = [ 7,  3,   3,   3,   3]
params.paddings     = [ 3,  1,   1,   1,   1]
params.strides      = [ 2,  1,   1,   1,   1]
params.pool_sizes   = [ 0,  0,   0,   0,   0]
params.num_blocks   = [2, 2, 2, 2]
params.prepool    = 1
params.postpool   = 'max'
params.batch_norm = true
params.dims       = 2
params.timesteps  = 4
params.neuron.kind                = 'IFNode'
params.neuron.params.v_reset      = false # Soft reset
params.neuron.params.v_threshold  = 1.0
params.neuron.params.detach_reset = true
params.neuron.params.step_mode    = 'm'   # Multi-step mode, make sure to use the SpikingJellyMultiStep learningframework
params.neuron.params.backend      = 'torch'
```
- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], pool_sizes: list[int], num_blocks: list[int], strides: list[int], paddings: list[int], prepool: int = 1, postpool: str = 'max', batch_norm: bool = False, bn_momentum: float = 0.1, force_projection_with_pooling: bool = True, neuron: RecursiveConfigDict | None = None, timesteps: int = 2, dims: int = 1, basicblockbuilder: BasicBlockBuilder | None = None) None[source]
Construct SResNetStride.

Structure is: Input | AvgPool | Conv | BatchNorm | IF | BasicBlock | … | BasicBlock | GlobalPool | Flatten | Linear

- Parameters:
  - filters (list[int]) – List of out_channels for Conv layers inside each BasicBlock group; one element per group plus a leading element for the first Conv layer at the beginning of the network
  - kernel_sizes (list[int]) – List of kernel_size for Conv layers inside each BasicBlock group, must be of the same size as filters; the leading element is for the first Conv layer at the beginning of the network
  - pool_sizes (list[int]) – List of kernel_size for MaxPool layers inside each BasicBlock group, must be of the same size as filters; pooling is applied only to the first BasicBlock of the group; the leading element is for the first Conv layer at the beginning of the network
  - num_blocks (list[int]) – Number of BasicBlock in each group; also defines the number of BasicBlock groups inside the network
  - strides (list[int]) – List of stride for the first Conv layer inside each BasicBlock group, must be of the same size as filters; the leading element is for the first Conv layer at the beginning of the network
  - paddings (list[int]) – List of padding for Conv layers inside each BasicBlock group, must be of the same size as filters; the leading element is for the first Conv layer at the beginning of the network
  - prepool (int) – AvgPool layer kernel_size to add at the beginning of the network; no layer added if 0
  - postpool (str) – Global pooling layer type after all BasicBlock, either 'max' for MaxPool or 'avg' for AvgPool
  - batch_norm (bool) – If True, add a BatchNorm layer after each Conv layer; otherwise no layer added
  - bn_momentum (float) – BatchNorm momentum
  - force_projection_with_pooling (bool) – If True, the residual Conv projection layer is kept when stride != 1 even if in_planes == planes inside a BasicBlock
  - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()
  - timesteps (int) – Number of timesteps
  - dims (int) – Either 1 or 2 for a 1D or 2D convolutional network
  - basicblockbuilder (BasicBlockBuilder | None) – Optional function with the BasicBlockBuilder.__call__() signature to build a basic block after binding constants common across all basic blocks
- Return type: None
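As the parameter lists suggest, SResNetStride can downsample with explicit MaxPool layers (pool_sizes) where SResNet relies on strided convolutions. Either way the spatial size shrinks by the same standard arithmetic, out = floor((in + 2*padding - kernel) / stride) + 1. A quick sketch of that formula (standard convolution arithmetic, not package code):

```python
# Standard convolution/pooling output-size formula; not part of qualia_plugin_snn.
def out_size(in_size, kernel, stride=1, padding=0):
    return (in_size + 2 * padding - kernel) // stride + 1

# 32-wide input, 3-wide kernel with padding 1:
strided = out_size(32, 3, stride=2, padding=1)          # strided conv halves: 16
pooled = out_size(out_size(32, 3, 1, 1), 2, stride=2)   # conv then 2-wide MaxPool: 16
```

Both routes halve the 32-wide input to 16, which is why the two templates can describe equivalent topologies with different downsampling mechanisms.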