qualia_plugin_snn.learningmodel.pytorch.layers.quantized_SNN_layers module
Contains quantized spiking neuron implementations.
- class qualia_plugin_snn.learningmodel.pytorch.layers.quantized_SNN_layers.QuantizedLIFNode[source]
Bases: LIFNode, QuantizerInputProtocol, QuantizerActProtocol, QuantizedLayer
Quantized variant of SpikingJelly’s spikingjelly.activation_based.neuron.LIFNode.
Hyperparameters v_threshold, v_reset and tau are quantized, as well as the membrane potential v.
- __init__(quant_params: QuantizationConfigDict, tau: float = 2.0, decay_input: bool = True, v_threshold: float = 1.0, v_reset: float = 0.0, detach_reset: bool = False, step_mode: str = 's', backend: str = 'torch') None [source]
Construct QuantizedLIFNode.
For more information about spiking neuron parameters, see: spikingjelly.activation_based.neuron.LIFNode.__init__()
- Parameters:
tau (float) – Membrane time constant
decay_input (bool) – Whether the input will decay
v_threshold (float) – Threshold of this neuron layer
v_reset (float) – Reset voltage of this neuron layer. If not None, the neuron’s voltage will be set to v_reset after firing a spike. If None, v_threshold will be subtracted from the neuron’s voltage after firing a spike
detach_reset (bool) – Whether to detach the computation graph of the reset in the backward pass
step_mode (str) – The step mode, which can be 's' (single-step) or 'm' (multi-step)
backend (str) – Backend for this neuron layer; only 'torch' is supported
quant_params (QuantizationConfigDict) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
- Return type:
None
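A minimal construction sketch follows; the keys inside quant_params are assumptions for illustration, since the actual QuantizationConfigDict schema is defined by qualia_core’s Quantizer:

```python
# Hypothetical usage sketch: the quant_params keys below are assumptions,
# not the authoritative QuantizationConfigDict schema from qualia_core.
from qualia_plugin_snn.learningmodel.pytorch.layers.quantized_SNN_layers import (
    QuantizedLIFNode,
)

quant_params = {'bits': 16, 'quantype': 'fxp'}  # assumed fixed-point config

lif = QuantizedLIFNode(
    quant_params=quant_params,  # quantization configuration (first argument)
    tau=2.0,                    # membrane time constant
    decay_input=True,           # input decays along with the membrane potential
    v_threshold=1.0,            # firing threshold
    v_reset=0.0,                # hard reset to 0.0 after a spike; None = soft reset
    step_mode='s',              # single-step mode ('m' loops over single-step)
    backend='torch',            # only 'torch' is supported
)
```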
- property supported_backends: tuple[Literal['torch']]
Supported step_mode and backend combinations.
Only the torch backend is supported.
- Returns:
Tuple containing 'torch'
- Raises:
ValueError – When step_mode is not 's' or 'm'
- neuronal_charge(x: Tensor) None [source]
Quantized spikingjelly.activation_based.neuron.LIFNode.neuronal_charge().
Membrane potential and hyperparameters are quantized before and after the computation using quantize_v_and_hyperparams().
- Parameters:
x (Tensor) – Input tensor
- Return type:
None
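As an illustration of what the quantized charge step computes, here is SpikingJelly’s standard LIF update for decay_input=True and a hard reset, written with the reciprocal of tau as used by this module; this is a sketch, not the verbatim implementation:

```python
import torch

def lif_charge_sketch(v: torch.Tensor, x: torch.Tensor,
                      v_reset: float, reciprocal_tau: float) -> torch.Tensor:
    # SpikingJelly LIF charge (decay_input=True): v <- v + (x - (v - v_reset)) / tau.
    # The quantized variant multiplies by the quantized reciprocal_tau (1 / tau)
    # so that inference avoids a division; v and the hyperparameters are
    # quantized before and after this update.
    return v + (x - (v - v_reset)) * reciprocal_tau
```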
- single_step_forward(x: Tensor) Tensor [source]
Quantized spikingjelly.activation_based.neuron.LIFNode.single_step_forward().
Input is (optionally) quantized. Membrane potential and hyperparameters are quantized before and after the computation using quantize_v_and_hyperparams().
- multi_step_forward(x_seq: Tensor) Tensor [source]
Implement multi-step as a loop over single-step for quantized neurons; inefficient, but functional.
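The behaviour described above amounts to the following sketch, assuming the leading axis of x_seq is time, as in SpikingJelly’s multi-step convention:

```python
import torch

def multi_step_forward_sketch(node, x_seq: torch.Tensor) -> torch.Tensor:
    # Call the quantized single-step forward once per timestep,
    # then re-stack the outputs along the time axis.
    return torch.stack([node.single_step_forward(x_seq[t])
                        for t in range(x_seq.shape[0])])
```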
- quantize_v_and_hyperparams() None [source]
Quantize the membrane potential and hyperparameters (v_threshold, tau, v_reset) in-place with the same quantizer at the same time.
tau is not quantized directly; instead, reciprocal_tau (1 / tau) is quantized, because this is what is used during inference to avoid a division.
- Return type:
None
- get_hyperparams_tensor(device: device, dtype: dtype) Tensor [source]
Pack v_threshold, reciprocal_tau and optionally v_reset into the same Tensor.
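A minimal sketch of the packing described above; the ordering of the values is an assumption:

```python
import torch

def get_hyperparams_tensor_sketch(v_threshold: float,
                                  reciprocal_tau: float,
                                  v_reset: float | None,
                                  device: torch.device,
                                  dtype: torch.dtype) -> torch.Tensor:
    # Pack the scalar hyperparameters into a single tensor so they can be
    # quantized together with one shared scale; v_reset is only included
    # for hard-reset neurons (v_reset is not None). Ordering is assumed.
    values = [v_threshold, reciprocal_tau]
    if v_reset is not None:
        values.append(v_reset)
    return torch.tensor(values, device=device, dtype=dtype)
```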
- property weights_q: int | None
Number of fractional part bits for the membrane potential and hyperparameters in case of fixed-point quantization.
See qualia_core.learningmodel.pytorch.Quantizer.Quantizer.fractional_bits().
- Returns:
Fractional part bits for the membrane potential and hyperparameters, or None if not applicable.
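To make the fractional-bit count concrete, here is a generic fixed-point illustration (not qualia_core’s implementation):

```python
def to_fixed_point(x: float, q: int) -> int:
    # With q fractional bits, a real value is stored as round(x * 2**q);
    # e.g. q = 8 represents 1.5 as 384, with a resolution of 2**-8.
    return round(x * (1 << q))

def from_fixed_point(n: int, q: int) -> float:
    return n / (1 << q)
```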
- class qualia_plugin_snn.learningmodel.pytorch.layers.quantized_SNN_layers.QuantizedIFNode[source]
Bases: IFNode, QuantizerInputProtocol, QuantizerActProtocol, QuantizedLayer
Quantized variant of SpikingJelly’s spikingjelly.activation_based.neuron.IFNode.
Hyperparameters v_threshold and v_reset are quantized, as well as the membrane potential v.
.- __init__(quant_params: QuantizationConfigDict, v_threshold: float = 1.0, v_reset: float = 0.0, detach_reset: bool = False, step_mode: str = 's', backend: str = 'torch') None [source]
Construct QuantizedIFNode.
For more information about spiking neuron parameters, see: spikingjelly.activation_based.neuron.IFNode.__init__()
- Parameters:
v_threshold (float) – Threshold of this neuron layer
v_reset (float) – Reset voltage of this neuron layer. If not None, the neuron’s voltage will be set to v_reset after firing a spike. If None, v_threshold will be subtracted from the neuron’s voltage after firing a spike
detach_reset (bool) – Whether to detach the computation graph of the reset in the backward pass
step_mode (str) – The step mode, which can be 's' (single-step) or 'm' (multi-step)
backend (str) – Backend for this neuron layer; only 'torch' is supported
quant_params (QuantizationConfigDict) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
- Return type:
None
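A short usage sketch, again with an assumed quant_params schema; SpikingJelly’s functional.reset_net() clears the membrane state between samples:

```python
import torch
from spikingjelly.activation_based import functional
from qualia_plugin_snn.learningmodel.pytorch.layers.quantized_SNN_layers import (
    QuantizedIFNode,
)

quant_params = {'bits': 16, 'quantype': 'fxp'}  # assumed schema, see qualia_core
node = QuantizedIFNode(quant_params=quant_params, v_threshold=1.0, v_reset=0.0)

x = torch.rand(4, 8)        # one timestep of input for a batch of 4
for _ in range(10):         # drive the neuron for 10 timesteps in 's' step mode
    spikes = node(x)        # binary spike tensor with the same shape as x

functional.reset_net(node)  # clear membrane state before the next sample
```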
- property supported_backends: tuple[Literal['torch']]
Supported step_mode and backend combinations.
Only the torch backend is supported.
- Returns:
Tuple containing 'torch'
- Raises:
ValueError – When step_mode is not 's' or 'm'
- neuronal_charge(x: Tensor) None [source]
Quantized spikingjelly.activation_based.neuron.IFNode.neuronal_charge().
Membrane potential and hyperparameters are quantized before and after the computation using quantize_v_and_hyperparams().
- Parameters:
x (Tensor) – Input tensor
- Return type:
None
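For reference, SpikingJelly’s IF charge step is a plain accumulation; the sketch below illustrates the update that the quantization wraps:

```python
import torch

def if_charge_sketch(v: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # IF neurons integrate without leak: v <- v + x.
    # In the quantized variant, v and the hyperparameters are quantized
    # before and after this update.
    return v + x
```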
- single_step_forward(x: Tensor) Tensor [source]
Quantized spikingjelly.activation_based.neuron.IFNode.single_step_forward().
Input is (optionally) quantized. Membrane potential and hyperparameters are quantized before and after the computation using quantize_v_and_hyperparams().
- multi_step_forward(x_seq: Tensor) Tensor [source]
Implement multi-step as a loop over single-step for quantized neurons; inefficient, but functional.
- quantize_v_and_hyperparams() None [source]
Quantize the membrane potential and hyperparameters (v_threshold, v_reset) in-place with the same quantizer at the same time.
- Return type:
None
- get_hyperparams_tensor(device: device, dtype: dtype) Tensor [source]
Pack v_threshold and optionally v_reset into the same Tensor.
- property weights_q: int | None
Number of fractional part bits for the membrane potential and hyperparameters in case of fixed-point quantization.
See qualia_core.learningmodel.pytorch.Quantizer.Quantizer.fractional_bits().
- Returns:
Fractional part bits for the membrane potential and hyperparameters, or None if not applicable.
- class qualia_plugin_snn.learningmodel.pytorch.layers.quantized_SNN_layers.QuantizedATIF[source]
Bases: ATIF, QuantizerInputProtocol, QuantizerActProtocol, QuantizedLayer
Quantized Integrate-and-Fire neuron with soft reset, learnable Vth and activation scaling, based on SpikingJelly.
- __init__(quant_params: QuantizationConfigDict, v_threshold: float = 1.0, vth_init_l: float = 0.8, vth_init_h: float = 1.0, alpha: float = 1.0, device: str = 'cpu') None [source]
Construct QuantizedATIF.
- Parameters:
v_threshold (float) – Factor to apply to the uniform initialization bounds
vth_init_l (float) – Lower bound for the uniform initialization of the threshold Tensor
vth_init_h (float) – Upper bound for the uniform initialization of the threshold Tensor
alpha (float) – Sigmoid surrogate scale factor
device (str) – Device to run the computation on
quant_params (QuantizationConfigDict) – Quantization configuration dict, see qualia_core.learningmodel.pytorch.Quantizer.Quantizer
- Return type:
None
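A construction sketch with an assumed quant_params schema; the comment on the initialization bounds follows the parameter descriptions above:

```python
from qualia_plugin_snn.learningmodel.pytorch.layers.quantized_SNN_layers import (
    QuantizedATIF,
)

quant_params = {'bits': 16, 'quantype': 'fxp'}  # assumed schema, see qualia_core

atif = QuantizedATIF(
    quant_params=quant_params,
    v_threshold=1.0,  # factor applied to the uniform initialization bounds
    vth_init_l=0.8,   # with v_threshold=1.0, thresholds start in U(0.8, 1.0)
    vth_init_h=1.0,
    alpha=1.0,        # sigmoid surrogate scale factor
    device='cpu',
)
```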
- ifsrl_fn(x: Tensor) Tensor [source]
Quantized qualia_plugin_snn.learningmodel.pytorch.layers.CustomNode.ATIF.ifsrl_fn().
Input is quantized. Membrane potential is quantized before and after the computation. The threshold is quantized by get_coeffs().
- get_coeffs() Tensor [source]
Return the quantized Tensor of the threshold v_threshold.
- Returns:
Quantized Tensor of the threshold v_threshold
- Return type:
Tensor
- set_coeffs(v_threshold: Tensor) None [source]
Quantize and replace the Tensor of the threshold v_threshold.
- Parameters:
v_threshold (Tensor) – New threshold Tensor to quantize and store as v_threshold
- Return type:
None
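A small round-trip sketch using the atif instance from the construction example above:

```python
# Read the quantized threshold tensor, scale it, and write it back;
# set_coeffs quantizes the new values before storing them.
vth = atif.get_coeffs()
atif.set_coeffs(vth * 0.9)
```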
- property weights_q: int | None
Number of fractional part bits for the membrane potential and hyperparameters in case of fixed-point quantization.
See qualia_core.learningmodel.pytorch.Quantizer.Quantizer.fractional_bits().
- Returns:
Fractional part bits for the membrane potential and hyperparameters, or None if not applicable.