qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN module
Contains the template for a quantized convolutional spiking neural network.
- class qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN.QuantizedSCNN[source]
Bases:
SNN
Quantized convolutional spiking neural network template.
Should have a topology identical to
qualia_plugin_snn.learningmodel.pytorch.SCNN.SCNN
but with layers replaced by their quantized equivalents.
- __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], paddings: list[int], strides: list[int], dropouts: float | list[float], pool_sizes: list[int], fc_units: list[int], quant_params: QuantizationConfig, batch_norm: bool = False, prepool: int | list[int] = 1, postpool: int | list[int] = 1, neuron: RecursiveConfigDict | None = None, timesteps: int = 4, gsp: bool = False, dims: int = 1) → None [source]
Construct QuantizedSCNN.
- Parameters:
filters (list[int]) – List of out_channels for each QuantizedConv layer; also defines the number of QuantizedConv layers
kernel_sizes (list[int]) – List of kernel_size for each QuantizedConv layer, must be of the same size as filters
paddings (list[int]) – List of padding for each QuantizedConv layer, must be of the same size as filters
strides (list[int]) – List of stride for each QuantizedConv layer, must be of the same size as filters
dropouts (float | list[float]) – List of Dropout layer p to apply after each QuantizedConv or QuantizedLinear layer, must be of the same size as filters + fc_units; no layer added if an element is 0
pool_sizes (list[int]) – List of QuantizedMaxPool layer kernel_size to apply after each QuantizedConv layer, must be of the same size as filters; no layer added if an element is 0
fc_units (list[int]) – List of QuantizedLinear layer out_features to add at the end of the network; no layer added if empty
batch_norm (bool) – If True, add a QuantizedBatchNorm layer after each QuantizedConv layer; otherwise no layer is added
prepool (int | list[int]) – QuantizedAvgPool layer kernel_size to add at the beginning of the network; no layer added if 0
postpool (int | list[int]) – QuantizedAvgPool layer kernel_size to add after all QuantizedConv layers; no layer added if 0
neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__(); 'Quantized' is automatically prepended to neuron.kind
timesteps (int) – Number of timesteps
gsp (bool) – If True, a single QuantizedGlobalSumPool layer is added instead of QuantizedLinear layers
dims (int) – Either 1 or 2 for a 1D or 2D convolutional network
quant_params (QuantizationConfig) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
- Return type:
None
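The length constraints documented above (kernel_sizes, paddings, strides, and pool_sizes must match filters; a dropouts list must match filters + fc_units) can be sketched as a small standalone helper. This is an illustrative check only, not part of the qualia API, and the concrete hyperparameter values are made up for the example:

```python
# Example hyperparameters mirroring the __init__ parameters above
# (values are illustrative only, not a recommended configuration).
filters      = [32, 64]         # two QuantizedConv layers
kernel_sizes = [3, 3]           # one kernel_size per conv layer
paddings     = [1, 1]
strides      = [1, 1]
pool_sizes   = [2, 0]           # 0 -> no QuantizedMaxPool after the second conv
fc_units     = [10]             # one QuantizedLinear layer at the end
dropouts     = [0.0, 0.0, 0.5]  # len(filters) + len(fc_units) entries

def check_scnn_hyperparams(filters, kernel_sizes, paddings, strides,
                           pool_sizes, dropouts, fc_units):
    """Validate the length constraints documented for QuantizedSCNN.__init__."""
    n_conv = len(filters)
    assert len(kernel_sizes) == n_conv, 'kernel_sizes must match filters'
    assert len(paddings) == n_conv, 'paddings must match filters'
    assert len(strides) == n_conv, 'strides must match filters'
    assert len(pool_sizes) == n_conv, 'pool_sizes must match filters'
    if isinstance(dropouts, list):  # a single float applies everywhere
        assert len(dropouts) == n_conv + len(fc_units), \
            'dropouts must match filters + fc_units'
    return True

check_scnn_hyperparams(filters, kernel_sizes, paddings, strides,
                       pool_sizes, dropouts, fc_units)
```

Arguments validated this way would then be passed directly to QuantizedSCNN alongside input_shape, output_shape, and quant_params.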
- layers: nn.ModuleDict
List of sequential layers of the SCNN model