qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers1d module

Contains implementations of quantized 1D layers with support for SpikingJelly's step_mode.

class qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers1d.QuantizedConv1d[source]

Bases: QuantizedConv1d, StepModule

Add SpikingJelly’s step_mode support to Qualia’s quantized Conv1d layer.

__init__(in_channels: int, out_channels: int, quant_params: QuantizationConfigDict, kernel_size: int = 3, stride: int = 1, padding: int = 0, dilation: int = 1, groups: int = 1, bias: bool = True, activation: Module | None = None, step_mode: str = 's') None[source]

Construct QuantizedConv1d.

Parameters:
  • in_channels (int) – Number of input channels

  • out_channels (int) – Number of output channels, i.e., number of filters

  • kernel_size (int) – Size of the kernel

  • stride (int) – Stride of the convolution

  • padding (int) – Padding added to both sides of the input

  • dilation (int) – Spacing between kernel elements

  • groups (int) – Number of blocked connections from input channels to output channels

  • bias (bool) – If True, adds a learnable bias to the output

  • quant_params (QuantizationConfigDict) – Dict containing quantization parameters, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer

  • activation (Module | None) – Activation layer to fuse for quantization purposes, None if unfused or no activation

  • step_mode (str) – SpikingJelly’s step_mode, either 's' or 'm', see spikingjelly.activation_based.layer.Conv1d

Return type:

None
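The kernel_size, stride, padding, and dilation parameters determine the output length via the standard Conv1d formula. A quick pure-Python sketch for illustration (independent of Qualia and SpikingJelly):

```python
import math

def conv1d_out_len(l_in: int, kernel_size: int = 3, stride: int = 1,
                   padding: int = 0, dilation: int = 1) -> int:
    """Output length of a 1D convolution, matching torch.nn.Conv1d's formula."""
    return math.floor(
        (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)

# With the constructor defaults (kernel_size=3, stride=1, padding=0,
# dilation=1), an input of length 128 shrinks by kernel_size - 1 = 2:
print(conv1d_out_len(128))  # 126
```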

extra_repr() str[source]

Add step_mode to the __repr__ method.

Returns:

String representation of torch.nn.Conv1d with step_mode.

Return type:

str

forward(input: Tensor) Tensor[source]

Forward qualia_core.learningmodel.pytorch.quantized_layers1d.QuantizedConv1d with step_mode support.

Parameters:

input (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor
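Conceptually, when step_mode is 'm' the layer receives an extra leading time dimension [T, N, C, L], flattens T and N into the batch dimension, applies the ordinary single-step forward, and restores the time dimension, following SpikingJelly's seq_to_ann_forward pattern. A sketch of the shape handling only, not the actual implementation:

```python
def multi_step_shape(shape, single_step_op):
    """Shape of a multi-step forward: merge [T, N, ...] into [T*N, ...],
    apply the single-step op, then split the batch back into [T, N, ...]."""
    t, n = shape[0], shape[1]
    merged_out = single_step_op((t * n,) + tuple(shape[2:]))
    return (t, n) + tuple(merged_out[1:])

# Example: a conv with kernel_size=3 shortens L by 2 in single-step mode;
# in multi-step mode the time dimension T passes through unchanged.
conv = lambda s: (s[0], 16, s[2] - 2)          # [N, C_in, L] -> [N, 16, L-2]
print(multi_step_shape((4, 8, 3, 128), conv))  # (4, 8, 16, 126)
```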

class qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers1d.QuantizedMaxPool1d[source]

Bases: QuantizedMaxPool1d, StepModule

Add SpikingJelly’s step_mode support to Qualia’s quantized MaxPool1d layer.

__init__(kernel_size: int, quant_params: QuantizationConfigDict, stride: int | None = None, padding: int = 0, activation: Module | None = None, step_mode: str = 's') None[source]

Construct QuantizedMaxPool1d.

Parameters:
  • kernel_size (int) – Size of the sliding window

  • stride (int | None) – Stride of the sliding window

  • padding (int) – Implicit negative infinity padding to be added on both sides

  • quant_params (QuantizationConfigDict) – Dict containing quantization parameters, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer

  • activation (Module | None) – Activation layer to fuse for quantization purposes, None if unfused or no activation

  • step_mode (str) – SpikingJelly’s step_mode, either 's' or 'm', see spikingjelly.activation_based.layer.MaxPool1d

Return type:

None

extra_repr() str[source]

Add step_mode to the __repr__ method.

Returns:

String representation of torch.nn.MaxPool1d with step_mode.

Return type:

str

forward(input: Tensor) Tensor[source]

Forward qualia_core.learningmodel.pytorch.quantized_layers1d.QuantizedMaxPool1d with step_mode support.

Parameters:

input (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor

class qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers1d.QuantizedBatchNorm1d[source]

Bases: QuantizedBatchNorm1d, StepModule

Add SpikingJelly’s step_mode support to Qualia’s quantized BatchNorm1d layer.

__init__(num_features: int, quant_params: QuantizationConfigDict, eps: float = 1e-05, momentum: float = 0.1, affine: bool = True, track_running_stats: bool = True, device: device | None = None, dtype: dtype | None = None, activation: Module | None = None, step_mode: str = 's') None[source]

Construct QuantizedBatchNorm1d.

Parameters:
  • num_features (int) – Number of features or channels of the input

  • eps (float) – Value added to the denominator for numerical stability

  • momentum (float) – Value used for the running_mean and running_var computation

  • affine (bool) – If True, add learnable affine parameters

  • track_running_stats (bool) – If True, track the running mean and variance, otherwise always use batch statistics

  • device (device | None) – Optional device to perform the computation on

  • dtype (dtype | None) – Optional data type

  • quant_params (QuantizationConfigDict) – Dict containing quantization parameters, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer

  • activation (Module | None) – Activation layer to fuse for quantization purposes, None if unfused or no activation

  • step_mode (str) – SpikingJelly’s step_mode, either 's' or 'm', see spikingjelly.activation_based.layer.BatchNorm1d

Return type:

None
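When track_running_stats is True, momentum controls the exponential moving average of the running statistics; PyTorch's BatchNorm convention is running = (1 - momentum) * running + momentum * batch_stat. A minimal numeric sketch:

```python
def update_running_stat(running: float, batch_stat: float,
                        momentum: float = 0.1) -> float:
    """One running-mean/var update step using PyTorch's BatchNorm convention
    (note: this differs from the optimizer-style definition of momentum)."""
    return (1.0 - momentum) * running + momentum * batch_stat

# Starting from running_mean = 0.0 and observing a batch mean of 1.0,
# the default momentum of 0.1 moves the running mean 10% of the way:
print(update_running_stat(0.0, 1.0))  # 0.1
```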

extra_repr() str[source]

Add step_mode to the __repr__ method.

Returns:

String representation of torch.nn.BatchNorm1d with step_mode.

Return type:

str

forward(input: Tensor) Tensor[source]

Forward qualia_core.learningmodel.pytorch.quantized_layers1d.QuantizedBatchNorm1d with step_mode support.

Parameters:

input (Tensor) – Input tensor

Returns:

Output tensor

Return type:

Tensor