qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers2d module
Contains the implementation of quantized 2D layers with support for SpikingJelly step_mode.
- class qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers2d.QuantizedConv2d[source]
Bases: QuantizedConv2d, StepModule
Add SpikingJelly’s step_mode support to Qualia’s quantized Conv2d layer.
- __init__(in_channels: int, out_channels: int, quant_params: QuantizationConfigDict, kernel_size: int = 3, stride: int = 1, padding: int = 0, dilation: int = 1, groups: int = 1, bias: bool = True, activation: Module | None = None, step_mode: str = 's') → None [source]
Construct QuantizedConv2d.
- Parameters:
in_channels (int) – Number of input channels
out_channels (int) – Number of output channels, i.e., number of filters
kernel_size (int) – Size of the kernel
stride (int) – Stride of the convolution
padding (int) – Padding added to both sides of the input
dilation (int) – Spacing between kernel elements
groups (int) – Number of blocked connections from input channels to output channels
bias (bool) – If True, adds a learnable bias to the output
quant_params (QuantizationConfigDict) – Dict containing quantization parameters, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
activation (Module | None) – Activation layer to fuse for quantization purposes, None if unfused or no activation
step_mode (str) – SpikingJelly’s step_mode, either 's' or 'm', see spikingjelly.activation_based.layer.Conv2d
- Return type:
None
- extra_repr() → str [source]
Add step_mode to the __repr__ method.
- Returns:
String representation of torch.nn.Conv2d with step_mode.
- Return type:
str
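As a reference for the constructor defaults above (kernel_size=3, stride=1, padding=0, dilation=1), the output spatial size of the underlying 2D convolution follows the standard formula. A minimal sketch in plain Python (the conv2d_out_size helper is hypothetical, not part of this module):

```python
import math

def conv2d_out_size(size: int, kernel_size: int = 3, stride: int = 1,
                    padding: int = 0, dilation: int = 1) -> int:
    """Output spatial size of a 2D convolution, per the standard formula:
    floor((size + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1).
    """
    return math.floor(
        (size + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)

# With the defaults (kernel_size=3, stride=1, padding=0), a 32x32 input
# shrinks to 30x30; padding=1 preserves the spatial size.
print(conv2d_out_size(32))             # 30
print(conv2d_out_size(32, padding=1))  # 32
```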
- class qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers2d.QuantizedMaxPool2d[source]
Bases: QuantizedMaxPool2d, StepModule
Add SpikingJelly’s step_mode support to Qualia’s quantized MaxPool2d layer.
- __init__(kernel_size: int, quant_params: QuantizationConfigDict, stride: int | None = None, padding: int = 0, activation: Module | None = None, step_mode: str = 's') → None [source]
Construct QuantizedMaxPool2d.
- Parameters:
kernel_size (int) – Size of the sliding window
stride (int | None) – Stride of the sliding window, defaults to kernel_size if None
padding (int) – Implicit negative infinity padding to be added on both sides
quant_params (QuantizationConfigDict) – Dict containing quantization parameters, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
activation (Module | None) – Activation layer to fuse for quantization purposes, None if unfused or no activation
step_mode (str) – SpikingJelly’s step_mode, either 's' or 'm', see spikingjelly.activation_based.layer.MaxPool2d
- Return type:
None
- extra_repr() → str [source]
Add step_mode to the __repr__ method.
- Returns:
String representation of torch.nn.MaxPool2d with step_mode.
- Return type:
str
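The step_mode parameter shared by these layers distinguishes single-step inputs of shape [N, C, H, W] from multi-step inputs of shape [T, N, C, H, W]; in 'm' mode SpikingJelly merges the time and batch dimensions so the stateless 2D op runs once over all steps. A minimal shape-level sketch (the flatten_steps helper is hypothetical, not the actual library code):

```python
def flatten_steps(shape: tuple, step_mode: str = 's') -> tuple:
    """Shape transform applied before a stateless 2D op under SpikingJelly
    step modes.

    's' (single-step): input is already [N, C, H, W]; unchanged.
    'm' (multi-step): input is [T, N, C, H, W]; the time and batch
    dimensions are merged so the op sees [T * N, C, H, W].
    """
    if step_mode == 's':
        return tuple(shape)
    if step_mode == 'm':
        t, n, *rest = shape
        return (t * n, *rest)
    raise ValueError(f"unknown step_mode: {step_mode!r}")

# 4 time steps of a batch of 8 RGB-like 32x32 tensors:
print(flatten_steps((4, 8, 16, 32, 32), 'm'))  # (32, 16, 32, 32)
```

After the op, the multi-step output is reshaped back to [T, N, ...], which is why only the step_mode flag (and no extra state) is needed for these stateless layers.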
- class qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers2d.QuantizedBatchNorm2d[source]
Bases: QuantizedBatchNorm2d, StepModule
Add SpikingJelly’s step_mode support to Qualia’s quantized BatchNorm2d layer.
- __init__(num_features: int, quant_params: QuantizationConfigDict, eps: float = 1e-05, momentum: float = 0.1, affine: bool = True, track_running_stats: bool = True, device: device | None = None, dtype: dtype | None = None, activation: Module | None = None, step_mode: str = 's') → None [source]
Construct QuantizedBatchNorm2d.
- Parameters:
num_features (int) – Number of features or channels of the input
eps (float) – Value added to the denominator for numerical stability
momentum (float) – Value used for the running_mean and running_var computation
affine (bool) – If True, add learnable affine parameters
track_running_stats (bool) – If True, track the running mean and variance, otherwise always use batch statistics
device (device | None) – Optional device to perform the computation on
dtype (dtype | None) – Optional data type
quant_params (QuantizationConfigDict) – Dict containing quantization parameters, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
activation (Module | None) – Activation layer to fuse for quantization purposes, None if unfused or no activation
step_mode (str) – SpikingJelly’s step_mode, either 's' or 'm', see spikingjelly.activation_based.layer.BatchNorm2d
- Return type:
None
- extra_repr() → str [source]
Add step_mode to the __repr__ method.
- Returns:
String representation of torch.nn.BatchNorm2d with step_mode.
- Return type:
str
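The extra_repr() methods documented above all follow the same pattern: append step_mode to the base layer's representation string so it shows up when the model is printed. A minimal sketch, assuming a hypothetical mixin with a hard-coded stand-in for the parent layer's extra_repr (not the actual Qualia/SpikingJelly code):

```python
class StepModeReprMixin:
    """Sketch of appending step_mode to a base layer's extra_repr()."""

    step_mode = 's'

    def base_extra_repr(self) -> str:
        # Stand-in for the parent layer's extra_repr(), e.g. what
        # torch.nn.BatchNorm2d would emit for its own parameters.
        return '16, eps=1e-05, momentum=0.1'

    def extra_repr(self) -> str:
        # Same content as the parent, with step_mode appended at the end.
        return f'{self.base_extra_repr()}, step_mode={self.step_mode}'

print(StepModeReprMixin().extra_repr())
# 16, eps=1e-05, momentum=0.1, step_mode=s
```

In the real layers the parent class's extra_repr() is called via super() rather than hard-coded, which is why each class's docstring describes the result as the torch.nn base layer's representation "with step_mode".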