qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers module

Contains the implementation of a QuantizedLinear layer with support for SpikingJelly's step_mode.

class qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers.QuantizedLinear[source]

Bases: QuantizedLinear, StepModule

Add SpikingJelly’s step_mode support to Qualia’s quantized Linear layer.

__init__(in_features: int, out_features: int, quant_params: QuantizationConfigDict, bias: bool = True, activation: Module | None = None, step_mode: str = 's') → None[source]

Construct QuantizedLinear.

Parameters:
  • in_features (int) – Dimension of input vector.

  • out_features (int) – Dimension of output vector and number of neurons.

  • quant_params (QuantizationConfigDict) – Dict containing quantization parameters, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer.

  • bias (bool) – If True, adds a learnable bias to the output.

  • activation (Module | None) – Activation layer to fuse for quantization purposes, None if unfused or no activation.

  • step_mode (str) – SpikingJelly’s step_mode, either 's' or 'm', see spikingjelly.activation_based.layer.Linear.

Return type:

None
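
A minimal construction sketch is shown below. The contents of quant_params are hypothetical placeholders; the actual keys are defined by QuantizationConfigDict, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer:

   # Minimal construction sketch. quant_params contents below are hypothetical
   # placeholders; supply a valid QuantizationConfigDict in real use.
   from qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers import (
       QuantizedLinear,
   )

   quant_params = {'bits': 8}  # hypothetical, not the real QuantizationConfigDict keys

   layer = QuantizedLinear(
       in_features=64,
       out_features=10,
       quant_params=quant_params,
       bias=True,
       activation=None,   # no fused activation
       step_mode='m',     # multi-step mode: inputs carry a leading time dimension
   )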

extra_repr() → str[source]

Add step_mode to the string returned by the __repr__ method.

Returns:

String representation of torch.nn.Linear with step_mode.

Return type:

str
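
A short sketch of the resulting representation; quant_params contents are hypothetical and the exact output format is assumed to follow torch.nn.Linear with step_mode appended:

   # Sketch only: quant_params and the printed format are assumptions.
   from qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers import (
       QuantizedLinear,
   )

   layer = QuantizedLinear(64, 10, quant_params={'bits': 8})  # hypothetical quant_params
   print(layer.extra_repr())
   # Assumed output along the lines of:
   #   in_features=64, out_features=10, bias=True, step_mode=s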

forward(input: Tensor) → Tensor[source]

Forward pass of qualia_core.learningmodel.pytorch.layers.quantized_layers.QuantizedLinear with step_mode support.

Parameters:

input (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor
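
A sketch of calling forward() under both step modes, following SpikingJelly's shape convention: 's' (single-step) takes [batch, in_features], 'm' (multi-step) takes [time_steps, batch, in_features]. quant_params contents are hypothetical:

   # Sketch of single-step vs. multi-step forward; quant_params is a placeholder.
   import torch

   from qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers import (
       QuantizedLinear,
   )

   layer = QuantizedLinear(64, 10, quant_params={'bits': 8})  # hypothetical quant_params

   layer.step_mode = 's'                 # single-step: one time step per call
   y = layer(torch.rand(32, 64))         # -> shape [32, 10]

   layer.step_mode = 'm'                 # multi-step: leading time dimension
   y_seq = layer(torch.rand(4, 32, 64))  # -> shape [4, 32, 10]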