qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN module
Contains the template for a quantized convolutional spiking neural network.
- class qualia_plugin_snn.learningmodel.pytorch.QuantizedSCNN.QuantizedSCNN[source]
  Bases: SNN

  Quantized convolutional spiking neural network template.

  Should have a topology identical to qualia_plugin_snn.learningmodel.pytorch.SCNN.SCNN, but with each layer replaced by its quantized equivalent.

  - __init__(input_shape: tuple[int, ...], output_shape: tuple[int, ...], filters: list[int], kernel_sizes: list[int], paddings: list[int], strides: list[int], dropouts: float | list[float], pool_sizes: list[int], fc_units: list[int], quant_params: QuantizationConfig, batch_norm: bool = False, prepool: int | list[int] = 1, postpool: int | list[int] = 1, neuron: RecursiveConfigDict | None = None, timesteps: int = 4, gsp: bool = False, dims: int = 1) → None[source]
    Construct QuantizedSCNN.

    - Parameters:
      - filters (list[int]) – List of out_channels for each QuantizedConv layer; also defines the number of QuantizedConv layers
      - kernel_sizes (list[int]) – List of kernel_size for each QuantizedConv layer, must be of the same size as filters
      - paddings (list[int]) – List of padding for each QuantizedConv layer, must be of the same size as filters
      - strides (list[int]) – List of stride for each QuantizedConv layer, must be of the same size as filters
      - dropouts (float | list[float]) – List of Dropout layer p to apply after each QuantizedConv or QuantizedLinear layer, must be of the same size as filters + fc_units; no layer added if the element is 0
      - pool_sizes (list[int]) – List of QuantizedMaxPool layer kernel_size to apply after each QuantizedConv layer, must be of the same size as filters; no layer added if the element is 0
      - fc_units (list[int]) – List of QuantizedLinear layer out_features to add at the end of the network; no layer added if empty
      - batch_norm (bool) – If True, add a QuantizedBatchNorm layer after each QuantizedConv layer; otherwise no layer added
      - prepool (int | list[int]) – QuantizedAvgPool layer kernel_size to add at the beginning of the network; no layer added if 0
      - postpool (int | list[int]) – QuantizedAvgPool layer kernel_size to add after all QuantizedConv layers; no layer added if 0
      - neuron (RecursiveConfigDict | None) – Spiking neuron configuration, see qualia_plugin_snn.learningmodel.pytorch.SNN.SNN.__init__(); 'Quantized' is automatically prepended to neuron.kind
      - timesteps (int) – Number of timesteps
      - gsp (bool) – If True, a single QuantizedGlobalSumPool layer is added instead of QuantizedLinear layers
      - dims (int) – Either 1 or 2, for a 1D or 2D convolutional network
      - quant_params (QuantizationConfig) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.layers.Quantizer.Quantizer
- Return type:
None
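The list-length constraints documented above can be expressed as a small standalone validator. This is a hypothetical helper written only to illustrate the rules (`check_scnn_config` is not part of the library, and the constructor performs its own handling):

```python
def check_scnn_config(filters, kernel_sizes, paddings, strides,
                      dropouts, pool_sizes, fc_units):
    """Check the list-length rules documented for QuantizedSCNN.__init__.

    Hypothetical helper mirroring the documented constraints; not library code.
    """
    n_conv = len(filters)  # filters defines the number of QuantizedConv layers
    for name, lst in (("kernel_sizes", kernel_sizes),
                      ("paddings", paddings),
                      ("strides", strides),
                      ("pool_sizes", pool_sizes)):
        if len(lst) != n_conv:
            raise ValueError(f"{name} must be of the same size as filters")
    # dropouts may be a single float, or one value per QuantizedConv
    # and QuantizedLinear layer (filters + fc_units entries)
    if isinstance(dropouts, list) and len(dropouts) != n_conv + len(fc_units):
        raise ValueError("dropouts must be of the same size as filters + fc_units")

# Example: 2 conv layers (16 then 32 channels) followed by one linear layer
check_scnn_config(filters=[16, 32], kernel_sizes=[3, 3], paddings=[1, 1],
                  strides=[1, 1], dropouts=[0.0, 0.0, 0.5],
                  pool_sizes=[2, 2], fc_units=[10])
```

A zero in `pool_sizes` or `dropouts` is still a valid list entry: per the documentation it simply means no pooling or dropout layer is inserted at that position.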
- layers: ModuleDict
List of sequential layers of the SCNN model
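As noted for the neuron parameter, 'Quantized' is automatically prepended to neuron.kind so that the quantized neuron implementation is selected. The behaviour can be sketched in isolation (hypothetical function name and neuron kind, not library code):

```python
def prepend_quantized_kind(neuron: dict) -> dict:
    """Sketch of the documented behaviour: 'Quantized' is prepended to
    neuron.kind. Hypothetical helper, not part of qualia_plugin_snn."""
    patched = dict(neuron)  # avoid mutating the caller's configuration
    patched["kind"] = "Quantized" + patched["kind"]
    return patched

# A neuron config with kind "IF" (assumed kind for illustration)
# resolves to its quantized counterpart, kind "QuantizedIF"
prepend_quantized_kind({"kind": "IF", "v_threshold": 1.0})
```

This means the neuron configuration passed to QuantizedSCNN can use the same kind names as for the non-quantized SCNN template.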