qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN module

Contains the template for a quantized convolutional spiking neural network.

class qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN.QuantizedSCNN[source]

Bases: SNN

Quantized convolutional spiking neural network template.

Should have a topology identical to qualia_plugin_snn.learningmodel.pytorch.SCNN.SCNN, but with each layer replaced by its quantized equivalent.

__init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], paddings: list[int], strides: list[int], dropouts: float | list[float], pool_sizes: list[int], fc_units: list[int], quant_params: QuantizationConfig, batch_norm: bool = False, prepool: int | list[int] = 1, postpool: int | list[int] = 1, neuron: RecursiveConfigDict | None = None, timesteps: int = 4, gsp: bool = False, dims: int = 1) → None[source]

Construct QuantizedSCNN.

Parameters:
  • input_shape (tuple[int, ...]) – Input shape

  • output_shape (tuple[int, ...]) – Output shape

  • filters (list[int]) – List of out_channels for each QuantizedConv layer; also defines the number of QuantizedConv layers

  • kernel_sizes (list[int]) – List of kernel_size for each QuantizedConv layer, must be of the same size as filters

  • paddings (list[int]) – List of padding for each QuantizedConv layer, must be of the same size as filters

  • strides (list[int]) – List of stride for each QuantizedConv layer, must be of the same size as filters

  • dropouts (float | list[float]) – List of Dropout probabilities p to apply after each QuantizedConv or QuantizedLinear layer, must be of the same size as filters + fc_units; no layer is added if the element is 0

  • pool_sizes (list[int]) – List of QuantizedMaxPool kernel_size to apply after each QuantizedConv layer, must be of the same size as filters; no layer is added if the element is 0

  • fc_units (list[int]) – List of QuantizedLinear out_features to add at the end of the network; no layer is added if empty

  • quant_params (QuantizationConfig) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer

  • batch_norm (bool) – If True, add a QuantizedBatchNorm layer after each QuantizedConv layer; otherwise no layer is added

  • prepool (int | list[int]) – QuantizedAvgPool kernel_size to add at the beginning of the network; no layer is added if 0

  • postpool (int | list[int]) – QuantizedAvgPool kernel_size to add after all QuantizedConv layers; no layer is added if 0

  • neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__(); 'Quantized' is automatically prepended to neuron.kind

  • timesteps (int) – Number of timesteps

  • gsp (bool) – If True, a single QuantizedGlobalSumPool layer is added instead of QuantizedLinear layers

  • dims (int) – Either 1 or 2, for a 1D or 2D convolutional network

Return type:

None
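
Example construction, as a minimal sketch: the quant_params key ('bits' below) and the neuron kind are illustrative assumptions, not the library's documented defaults; the actual options are defined by qualia_core.learningmodel.pytorch.Quantizer.Quantizer and qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__() respectively.

    from qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN import QuantizedSCNN

    # Illustrative quantization configuration; the actual keys are defined by
    # qualia_core.learningmodel.pytorch.Quantizer.Quantizer.
    quant_params = {'bits': 8}

    # Illustrative neuron configuration; 'Quantized' is prepended to 'kind'
    # automatically, so 'IFNode' is resolved as 'QuantizedIFNode'.
    neuron = {'kind': 'IFNode'}

    model = QuantizedSCNN(
        input_shape=(128, 1),   # assumed channels-last 1D input of length 128
        output_shape=(10,),     # 10 output classes
        filters=[8, 16],        # two QuantizedConv layers
        kernel_sizes=[3, 3],
        paddings=[1, 1],
        strides=[1, 1],
        dropouts=0.0,           # 0 disables Dropout layers
        pool_sizes=[2, 2],      # QuantizedMaxPool after each QuantizedConv
        fc_units=[32],          # one hidden QuantizedLinear before the output
        quant_params=quant_params,
        batch_norm=True,        # QuantizedBatchNorm after each QuantizedConv
        neuron=neuron,
        timesteps=4,
        dims=1,                 # 1D convolutional network
    )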

layers: nn.ModuleDict

Sequential layers of the SCNN model, stored as an nn.ModuleDict
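
Continuing the construction sketch above, the layer sequence can be inspected through the standard nn.ModuleDict interface:

    # Print each layer in execution order, keyed by its ModuleDict name
    for name, layer in model.layers.items():
        print(name, layer)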

timesteps: int

Number of timesteps

input_shape: tuple[int, ...]

Input shape of the model

output_shape: tuple[int, ...]

Output shape of the model

training: bool

Training mode flag, inherited from torch.nn.Module

forward(input: Tensor) → Tensor[source]

Forward calls each of the SCNN layers sequentially.

Parameters:

input (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor
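
Continuing the construction sketch above, a minimal inference call; the (batch, length, channels) input layout for dims=1 is an assumption to verify against the data pipeline, and time-step replication is assumed to be handled internally according to the timesteps constructor argument:

    import torch

    x = torch.randn(2, 128, 1)  # assumed (batch, length, channels) layout
    y = model(x)                # runs each SCNN layer sequentially
    print(y.shape)              # expected torch.Size([2, 10]) per output_shape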