qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers module

Contains the implementation of a QuantizedLinear layer with support for SpikingJelly's step_mode.
- class qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers.QuantizedLinear[source]

  Bases: QuantizedLinear, StepModule

  Add SpikingJelly's step_mode support to Qualia's quantized Linear layer.

  - __init__(in_features: int, out_features: int, quant_params: QuantizationConfigDict, bias: bool = True, activation: Module | None = None, step_mode: str = 's') → None [source]

    Construct QuantizedLinear.

    - Parameters:
      - in_features (int) – Dimension of the input vector
      - out_features (int) – Dimension of the output vector and number of neurons
      - quant_params (QuantizationConfigDict) – Dict containing quantization parameters, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
      - bias (bool) – If True, adds a learnable bias to the output
      - activation (Module | None) – Activation layer to fuse for quantization purposes, None if unfused or no activation
      - step_mode (str) – SpikingJelly's step_mode, either 's' or 'm', see spikingjelly.activation_based.layer.Linear
    - Return type: None
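To illustrate the two step_mode values, here is a minimal plain-Python sketch of SpikingJelly's semantics as assumed by this layer: 's' (single-step) applies the layer to one timestep of shape [N, F], while 'm' (multi-step) iterates over a leading time dimension of shape [T, N, F]. The helper name `apply_step` is hypothetical and does not exist in qualia_plugin_snn or SpikingJelly.

```python
def apply_step(fn, x, step_mode='s'):
    """Sketch of step_mode dispatch (hypothetical helper, not library code).

    's': x is a single timestep (e.g. a batch [N, F]) -> apply fn once.
    'm': x is a sequence over T timesteps -> apply fn per timestep.
    """
    if step_mode == 's':
        return fn(x)
    if step_mode == 'm':
        return [fn(x_t) for x_t in x]
    raise ValueError(f'unknown step_mode: {step_mode!r}')


# Toy "layer": double every feature value.
double = lambda v: [2 * e for e in v]

single = apply_step(double, [1, 2], step_mode='s')        # one timestep
multi = apply_step(double, [[1], [2]], step_mode='m')     # two timesteps
```

In the real layer, the multi-step case is handled by spikingjelly.activation_based.layer.Linear, which typically flattens the time and batch dimensions before the matrix multiply rather than looping in Python.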
  - extra_repr() → str [source]

    Add step_mode to the __repr__ method.

    - Returns: String representation of torch.nn.Linear with step_mode.
    - Return type: str
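The extra_repr override follows a common PyTorch mixin pattern: call the parent's extra_repr and append the extra attribute. The sketch below shows that pattern with plain classes; the class names `BaseLinear` and `StepModeLinear` are hypothetical stand-ins, not the actual qualia or torch implementations.

```python
class BaseLinear:
    """Stand-in for a Linear layer exposing extra_repr (hypothetical)."""

    def __init__(self, in_features, out_features):
        self.in_features = in_features
        self.out_features = out_features

    def extra_repr(self):
        return f'in_features={self.in_features}, out_features={self.out_features}'


class StepModeLinear(BaseLinear):
    """Stand-in for the quantized layer adding step_mode to the repr."""

    def __init__(self, in_features, out_features, step_mode='s'):
        super().__init__(in_features, out_features)
        self.step_mode = step_mode

    def extra_repr(self):
        # Reuse the parent's representation and append step_mode.
        return super().extra_repr() + f', step_mode={self.step_mode}'


layer = StepModeLinear(4, 2, step_mode='m')
# layer.extra_repr() -> 'in_features=4, out_features=2, step_mode=m'
```

In torch.nn.Module, the string returned by extra_repr is what __repr__ places inside the parentheses of the module's printed form, so appending to it is enough to surface step_mode when the model is printed.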