qualia_plugin_snn.learningmodel.pytorch package

Module contents

PyTorch learningmodel templates for spiking neural networks, based on SpikingJelly.

class qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN[source]

Bases: SNN

Quantized convolutional spiking neural network template.

Should have a topology identical to qualia_plugin_snn.learningmodel.pytorch.SCNN.SCNN, but with each layer replaced by its quantized equivalent.

__init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], paddings: list[int], strides: list[int], dropouts: float | list[float], pool_sizes: list[int], fc_units: list[int], quant_params: QuantizationConfig, batch_norm: bool = False, prepool: int | list[int] = 1, postpool: int | list[int] = 1, neuron: RecursiveConfigDict | None = None, timesteps: int = 4, gsp: bool = False, dims: int = 1) None[source]

Construct QuantizedSCNN; see the construction sketch after the parameter list.

Parameters:
  • input_shape (tuple[int, ...]) – Input shape

  • output_shape (tuple[int, ...]) – Output shape

  • filters (list[int]) – List of out_channels for each QuantizedConv layer, also defines the number of QuantizedConv layers

  • kernel_sizes (list[int]) – List of kernel_size for each QuantizedConv layer, must be of the same size as filters

  • paddings (list[int]) – List of padding for each QuantizedConv layer, must be of the same size as filters

  • strides (list[int]) – List of stride for each QuantizedConv layer, must be of the same size as filters

  • dropouts (float | list[float]) – List of Dropout probabilities p to apply after each QuantizedConv or QuantizedLinear layer, must be of the same size as filters + fc_units, no layer added if element is 0

  • pool_sizes (list[int]) – List of QuantizedMaxPool layer kernel_size to apply after each QuantizedConv layer, must be of the same size as filters, no layer added if element is 0

  • fc_units (list[int]) – List of QuantizedLinear layer out_features to add at the end of the network, no layer added if empty

  • batch_norm (bool) – If True, add a QuantizedBatchNorm layer after each QuantizedConv layer, otherwise no layer added

  • prepool (int | list[int]) – QuantizedAvgPool layer kernel_size to add at the beginning of the network, no layer added if 0

  • postpool (int | list[int]) – QuantizedAvgPool layer kernel_size to add after all QuantizedConv layers, no layer added if 0

  • neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__(), 'Quantized' is automatically prepended to neuron.kind

  • timesteps (int) – Number of timesteps

  • gsp (bool) – If True, a single QuantizedGlobalSumPool layer is added instead of QuantizedLinear layers

  • dims (int) – Either 1 or 2 for 1D or 2D convolutional network.

  • quant_params (QuantizationConfig) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer

Return type:

None
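
For orientation, a minimal construction sketch follows. It is not taken from the package's documentation: the input and output shapes are assumptions, and quant_params is left as a placeholder since the actual QuantizationConfig schema is defined by qualia_core.learningmodel.pytorch.Quantizer.Quantizer.

from qualia_plugin_snn.learningmodel.pytorch import QuantizedSCNN

# Placeholder only: fill in according to the qualia_core Quantizer schema.
quant_params = {}

model = QuantizedSCNN(
    input_shape=(128, 9),      # assumed 1D input: 128 samples, 9 channels
    output_shape=(6,),         # assumed 6 output classes
    filters=[32, 64],          # two QuantizedConv layers
    kernel_sizes=[3, 3],
    paddings=[1, 1],
    strides=[1, 1],
    dropouts=0.0,              # 0 means no Dropout layers are added
    pool_sizes=[2, 2],         # QuantizedMaxPool after each QuantizedConv
    fc_units=[32],             # one QuantizedLinear at the end of the network
    quant_params=quant_params,
    batch_norm=True,
    neuron={'kind': 'IFNode',  # 'Quantized' is prepended automatically
            'params': {'v_threshold': 1.0}},
    timesteps=4,
    dims=1,
)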

layers: nn.ModuleDict

Sequential layers of the model, stored in an nn.ModuleDict.

forward(input: Tensor) Tensor[source]

Forward calls each of the SCNN layers sequentially.

Parameters:

input (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor

class qualia_plugin_snn.learningmodel.pytorch.QuantizedSResNet[source]

Bases: SNN

Quantized residual spiking neural network template.

Should have a topology identical to qualia_plugin_snn.learningmodel.pytorch.SResNet.SResNet, but with each layer replaced by its quantized equivalent.

__init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], num_blocks: list[int], strides: list[int], paddings: list[int], quant_params: QuantizationConfig, prepool: int = 1, postpool: str = 'max', batch_norm: bool = False, bn_momentum: float = 0.1, force_projection_with_stride: bool = True, neuron: RecursiveConfigDict | None = None, timesteps: int = 2, dims: int = 1, basicblockbuilder: QuantizedBasicBlockBuilder | None = None) None[source]

Construct QuantizedSResNet; see the construction sketch after the parameter list.

Structure is:

  QuantizedInput
         |
 QuantizedAvgPool
         |
   QuantizedConv
         |
QuantizedBatchNorm
         |
    QuantizedIF
         |
QuantizedBasicBlock
         |
         …
         |
QuantizedBasicBlock
         |
QuantizedGlobalPool
         |
      Flatten
         |
  QuantizedLinear

Parameters:
  • input_shape (tuple[int, ...]) – Input shape

  • output_shape (tuple[int, ...]) – Output shape

  • filters (list[int]) – List of out_channels for QuantizedConv layers inside each QuantizedBasicBlock group; must be one element longer than num_blocks, the first element being for the first QuantizedConv layer at the beginning of the network

  • kernel_sizes (list[int]) – List of kernel_size for QuantizedConv layers inside each QuantizedBasicBlock group; must be one element longer than num_blocks, the first element being for the first QuantizedConv layer at the beginning of the network

  • num_blocks (list[int]) – List of the number of QuantizedBasicBlocks in each group; also defines the number of QuantizedBasicBlock groups inside the network

  • strides (list[int]) – List of kernel_size for QuantizedMaxPool layers inside each QuantizedBasicBlock group; must be one element longer than num_blocks, the first element being the stride of the first QuantizedConv layer at the beginning of the network. The stride is applied only to the first QuantizedBasicBlock of a group; subsequent QuantizedBasicBlocks in the group use a stride of 1

  • paddings (list[int]) – List of padding for QuantizedConv layers inside each QuantizedBasicBlock group; must be one element longer than num_blocks, the first element being for the first QuantizedConv layer at the beginning of the network

  • prepool (int) – QuantizedAvgPool layer kernel_size to add at the beginning of the network, no layer added if 0

  • postpool (str) – Quantized global pooling layer type after all QuantizedBasicBlocks, either 'max' for QuantizedMaxPool or 'avg' for QuantizedAvgPool

  • batch_norm (bool) – If True, add a QuantizedBatchNorm layer after each QuantizedConv layer, otherwise no layer added

  • bn_momentum (float) – QuantizedBatchNorm momentum

  • force_projection_with_stride (bool) – If True, residual QuantizedConv layer is kept when stride != 1 even if in_planes == planes inside a QuantizedBasicBlock

  • neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()

  • timesteps (int) – Number of timesteps

  • dims (int) – Either 1 or 2 for 1D or 2D convolutional network.

  • basicblockbuilder (QuantizedBasicBlockBuilder | None) – Optional function with QuantizedBasicBlockBuilder.__call__() signature to build a basic block after binding constants common across all basic blocks

  • quant_params (QuantizationConfig) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer

Return type:

None
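
As with QuantizedSCNN, a hedged construction sketch is given below; the shapes are assumptions and quant_params is a placeholder (see qualia_core.learningmodel.pytorch.Quantizer.Quantizer for the actual schema).

from qualia_plugin_snn.learningmodel.pytorch import QuantizedSResNet

# Placeholder only: fill in according to the qualia_core Quantizer schema.
quant_params = {}

model = QuantizedSResNet(
    input_shape=(32, 32, 3),   # assumed 2D input shape
    output_shape=(10,),        # assumed 10 output classes
    filters=[64, 64, 128],     # stem QuantizedConv + one entry per group
    kernel_sizes=[7, 3, 3],
    num_blocks=[2, 2],         # two QuantizedBasicBlock groups
    strides=[2, 1, 2],         # stem stride, then one stride per group
    paddings=[3, 1, 1],
    quant_params=quant_params,
    postpool='max',
    batch_norm=True,
    neuron={'kind': 'IFNode', 'params': {'v_threshold': 1.0}},
    timesteps=2,
    dims=2,
)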

forward(input: Tensor) Tensor[source]

Forward of the quantized residual spiking neural network.

Parameters:

input (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor

class qualia_plugin_snn.learningmodel.pytorch.SCNN[source]

Bases: SNN

Convolutional spiking neural network template.

Similar to qualia_core.learningmodel.pytorch.CNN.CNN but with spiking neuron activation layers (e.g., IF) instead of torch.nn.ReLU.

Example TOML configuration for a 2D SVGG16 over 4 timesteps with soft-reset multi-step IF neurons, based on the SCNN template:

[[model]]
name = "VGG16"
params.filters      = [ 64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, 512]
params.kernel_sizes = [  3,  3,   3,   3,   3,   3,   3,   3,   3,   3,   3,   3,   3]
params.paddings     = [  1,  1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1]
params.strides      = [  1,  1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1]
params.pool_sizes   = [  0,  2,   0,   2,   0,   0,   2,   0,   0,   2,   0,   0,   2]
params.dropouts     = [  0,  0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0, 0, 0]
params.fc_units     = [4096, 4096]
params.batch_norm   = true
params.timesteps    = 4
params.dims         = 2
params.neuron.kind                = 'IFNode'
params.neuron.params.v_reset      = false # Soft reset
params.neuron.params.v_threshold  = 1.0
params.neuron.params.detach_reset = true
params.neuron.params.step_mode    = 'm' # Multi-step mode, make sure to use SpikingJellyMultiStep learningframework
params.neuron.params.backend      = 'cupy'

__init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], paddings: list[int], strides: list[int], dropouts: float | list[float], pool_sizes: list[int], fc_units: list[int], batch_norm: bool = False, prepool: int | list[int] = 1, postpool: int | list[int] = 1, neuron: RecursiveConfigDict | None = None, timesteps: int = 4, gsp: bool = False, dims: int = 1) None[source]

Construct SCNN; see the Python sketch after the parameter list.

Parameters:
  • input_shape (tuple[int, ...]) – Input shape

  • output_shape (tuple[int, ...]) – Output shape

  • filters (list[int]) – List of out_channels for each Conv layer, also defines the number of Conv layers

  • kernel_sizes (list[int]) – List of kernel_size for each Conv layer, must be of the same size as filters

  • paddings (list[int]) – List of padding for each Conv layer, must be of the same size as filters

  • strides (list[int]) – List of stride for each Conv layer, must be of the same size as filters

  • dropouts (float | list[float]) – List of Dropout probabilities p to apply after each Conv or Linear layer, must be of the same size as filters + fc_units, no layer added if element is 0

  • pool_sizes (list[int]) – List of MaxPool layer kernel_size to apply after each Conv layer, must be of the same size as filters, no layer added if element is 0

  • fc_units (list[int]) – List of torch.nn.Linear layer out_features to add at the end of the network, no layer added if empty

  • batch_norm (bool) – If True, add a BatchNorm layer after each Conv layer, otherwise no layer added

  • prepool (int | list[int]) – AvgPool layer kernel_size to add at the beginning of the network, no layer added if 0

  • postpool (int | list[int]) – AvgPool layer kernel_size to add after all Conv layers, no layer added if 0

  • neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()

  • timesteps (int) – Number of timesteps

  • gsp (bool) – If True, a single GlobalSumPool layer is added instead of Linear layers

  • dims (int) – Either 1 or 2 for 1D or 2D convolutional network.

Return type:

None
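
The TOML keys above map directly onto these constructor parameters. As a hedged illustration, a Python equivalent shortened to three Conv layers is sketched below; the input and output shapes are assumptions (in a Qualia run they are derived from the dataset), and v_reset = false in TOML is taken to correspond to SpikingJelly's v_reset=None soft reset.

from qualia_plugin_snn.learningmodel.pytorch import SCNN

model = SCNN(
    input_shape=(32, 32, 3),   # assumed dataset input shape
    output_shape=(10,),        # assumed 10 classes
    filters=[64, 64, 128],     # shortened from the 13-layer VGG16 example
    kernel_sizes=[3, 3, 3],
    paddings=[1, 1, 1],
    strides=[1, 1, 1],
    dropouts=0.0,
    pool_sizes=[0, 2, 2],
    fc_units=[256],
    batch_norm=True,
    neuron={'kind': 'IFNode',
            'params': {'v_reset': None,     # soft reset (TOML: v_reset = false)
                       'v_threshold': 1.0,
                       'detach_reset': True,
                       'step_mode': 'm'}},  # multi-step mode
    timesteps=4,
    dims=2,
)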

layers: nn.ModuleDict

Sequential layers of the SCNN model, stored in an nn.ModuleDict.

forward(input: Tensor) Tensor[source]

Forward calls each of the SCNN layers sequentially.

Parameters:

input (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor

class qualia_plugin_snn.learningmodel.pytorch.SResNet[source]

Bases: SNN

Residual spiking neural network template.

Similar to qualia_core.learningmodel.pytorch.ResNet.ResNet but with spiking neuron activation layers (e.g., IF) instead of torch.nn.ReLU.

Example TOML configuration for a 2D SResNetv1-18 over 4 timesteps with soft-reset multi-step IF neurons, based on the SResNet template:

[[model]]
name = "SResNetv1-18"
params.filters      = [64, 64, 128, 256, 512]
params.kernel_sizes = [ 7,  3,   3,   3,   3]
params.paddings     = [ 3,  1,   1,   1,   1]
params.strides      = [ 2,  1,   1,   1,   1]
params.num_blocks   = [     2,   2,   2,   2]
params.prepool      = 1
params.postpool     = 'max'
params.batch_norm   = true
params.dims         = 2
params.timesteps    = 4
params.neuron.kind  = 'IFNode'
params.neuron.params.v_reset = false # Soft reset
params.neuron.params.v_threshold = 1.0
params.neuron.params.detach_reset = true
params.neuron.params.step_mode = 'm' # Multi-step mode, make sure to use SpikingJellyMultiStep learningframework
params.neuron.params.backend = 'torch'

__init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], num_blocks: list[int], strides: list[int], paddings: list[int], prepool: int = 1, postpool: str = 'max', batch_norm: bool = False, bn_momentum: float = 0.1, force_projection_with_stride: bool = True, neuron: RecursiveConfigDict | None = None, timesteps: int = 2, dims: int = 1, basicblockbuilder: BasicBlockBuilder | None = None) None[source]

Construct SResNet; see the Python sketch after the parameter list.

Structure is:

  Input
    |
 AvgPool
    |
   Conv
    |
BatchNorm
    |
    IF
    |
BasicBlock
    |
    …
    |
BasicBlock
    |
GlobalPool
    |
 Flatten
    |
  Linear

Parameters:
  • input_shape (tuple[int, ...]) – Input shape

  • output_shape (tuple[int, ...]) – Output shape

  • filters (list[int]) – List of out_channels for Conv layers inside each BasicBlock group; must be one element longer than num_blocks, the first element being for the first Conv layer at the beginning of the network

  • kernel_sizes (list[int]) – List of kernel_size for Conv layers inside each BasicBlock group; must be one element longer than num_blocks, the first element being for the first Conv layer at the beginning of the network

  • num_blocks (list[int]) – List of the number of BasicBlocks in each group; also defines the number of BasicBlock groups inside the network

  • strides (list[int]) – List of kernel_size for MaxPool layers inside each BasicBlock group; must be one element longer than num_blocks, the first element being the stride of the first Conv layer at the beginning of the network. The stride is applied only to the first BasicBlock of a group; subsequent BasicBlocks in the group use a stride of 1

  • paddings (list[int]) – List of padding for Conv layers inside each BasicBlock group; must be one element longer than num_blocks, the first element being for the first Conv layer at the beginning of the network

  • prepool (int) – AvgPool layer kernel_size to add at the beginning of the network, no layer added if 0

  • postpool (str) – Global pooling layer type after all BasicBlocks, either 'max' for MaxPool or 'avg' for AvgPool

  • batch_norm (bool) – If True, add a BatchNorm layer after each Conv layer, otherwise no layer added

  • bn_momentum (float) – BatchNorm momentum

  • force_projection_with_stride (bool) – If True, residual Conv layer is kept when stride != 1 even if in_planes == planes inside a BasicBlock

  • neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__()

  • timesteps (int) – Number of timesteps

  • dims (int) – Either 1 or 2 for 1D or 2D convolutional network.

  • basicblockbuilder (BasicBlockBuilder | None) – Optional function with BasicBlockBuilder.__call__() signature to build a basic block after binding constants common across all basic blocks

Return type:

None
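
A hedged Python equivalent of the SResNetv1-18 TOML example above is sketched below; the input and output shapes are assumptions (derived from the dataset in a Qualia run), and v_reset = false is again taken to correspond to v_reset=None.

from qualia_plugin_snn.learningmodel.pytorch import SResNet

model = SResNet(
    input_shape=(32, 32, 3),          # assumed dataset input shape
    output_shape=(10,),               # assumed 10 classes
    filters=[64, 64, 128, 256, 512],  # stem Conv + one entry per BasicBlock group
    kernel_sizes=[7, 3, 3, 3, 3],
    num_blocks=[2, 2, 2, 2],          # four groups of two BasicBlocks
    strides=[2, 1, 1, 1, 1],
    paddings=[3, 1, 1, 1, 1],
    prepool=1,
    postpool='max',
    batch_norm=True,
    neuron={'kind': 'IFNode',
            'params': {'v_reset': None,     # soft reset (TOML: v_reset = false)
                       'v_threshold': 1.0,
                       'detach_reset': True,
                       'step_mode': 'm'}},
    timesteps=4,
    dims=2,
)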

forward(input: Tensor) Tensor[source]

Forward pass of the residual spiking neural network.

Parameters:

input (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor