qualia_plugin_snn.learningmodel.pytorch.layers package
Subpackages
- qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly package
- Submodules
- qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.Add module
- qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.GlobalSumPool1d module
- qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.GlobalSumPool2d module
- qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.QuantizedAdd module
- qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.QuantizedGlobalSumPool1d module
- qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.QuantizedGlobalSumPool2d module
- qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.layers1d module
- qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.layers2d module
- qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers module
- qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers1d module
- qualia_plugin_snn.learningmodel.pytorch.layers.spikingjelly.quantized_layers2d module
- Module contents
Submodules
Module contents
Contains implementation for custom or quantized layers for spiking neural networks.
- class qualia_plugin_snn.learningmodel.pytorch.layers.ATIF[source]
Bases: BaseNode
IFSRLSJ: Integrate and Fire soft-reset with learnable Vth and activation scaling, based on spikingjelly.
- __init__(v_threshold: float = 1.0, vth_init_l: float = 0.8, vth_init_h: float = 1.0, alpha: float = 1.0, device: str = 'cpu') None [source]
Construct ATIF.
- Parameters:
v_threshold (float) – Factor to apply to the uniform initialization bounds
vth_init_l (float) – Lower bound for uniform initialization of threshold Tensor
vth_init_h (float) – Higher bound for uniform initialization of threshold Tensor
alpha (float) – Sigmoid surrogate scale factor
device (str) – Device to run the computation on
- Return type:
None
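The soft-reset integrate-and-fire dynamics that ATIF implements can be sketched in plain PyTorch. This is an illustrative stand-in only, not the library's `ifsrl_fn()` implementation; `if_soft_reset_step` is a hypothetical name:

```python
import torch

def if_soft_reset_step(v: torch.Tensor, x: torch.Tensor,
                       v_threshold: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """One integrate-and-fire step with soft reset: integrate the input,
    fire where the membrane potential crosses the threshold, then subtract
    the threshold instead of zeroing the potential (soft reset)."""
    v = v + x                           # integrate
    spike = (v >= v_threshold).float()  # fire
    v = v - spike * v_threshold         # soft reset: subtract Vth where a spike occurred
    return spike, v

v = torch.zeros(3)
x = torch.tensor([0.6, 1.2, 0.1])
vth = torch.full((3,), 1.0)
spike, v = if_soft_reset_step(v, x, vth)
# the second neuron crosses the 1.0 threshold and fires; its residual potential is ~0.2
```

The soft reset preserves the residual potential above threshold, which reduces the information lost per timestep compared with a hard reset to zero.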
- get_coeffs() Tensor [source]
Return the Tensor of threshold v_threshold.
- Returns:
Tensor of threshold v_threshold
- Return type:
Tensor
- set_coeffs(v_threshold: Tensor) None [source]
Replace the Tensor of threshold v_threshold.
- Parameters:
v_threshold (Tensor) – New Tensor of threshold to replace v_threshold
- Return type:
None
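The `get_coeffs()`/`set_coeffs()` pair follows a simple getter/setter contract over the threshold Tensor. Since this sketch cannot assume the package is installed, it illustrates the pattern with a minimal hypothetical stand-in module (`ThresholdHolder` is not part of the library):

```python
import torch
import torch.nn as nn

class ThresholdHolder(nn.Module):
    """Hypothetical stand-in illustrating the get_coeffs()/set_coeffs() contract."""

    def __init__(self) -> None:
        super().__init__()
        # learnable per-channel thresholds, uniformly initialized as in ATIF
        self.v_threshold = nn.Parameter(torch.empty(4).uniform_(0.8, 1.0))

    def get_coeffs(self) -> torch.Tensor:
        return self.v_threshold.detach()

    def set_coeffs(self, v_threshold: torch.Tensor) -> None:
        # replace the threshold values in place, outside the autograd graph
        with torch.no_grad():
            self.v_threshold.copy_(v_threshold)

layer = ThresholdHolder()
layer.set_coeffs(torch.full((4,), 0.9))  # replace the thresholds
coeffs = layer.get_coeffs()
```

This pattern is useful when exporting or restoring learned thresholds, e.g. for quantization or deployment.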
- single_step_forward(x: Tensor) Tensor [source]
Single-step mode forward of ATIF.
Calls ifsrl_fn().
- property supported_backends: tuple[Literal['torch']]
Supported step_mode and backend.
Only single-step mode with torch backend is supported.
- Returns:
Tuple of 'torch' if step_mode is 's'
- Raises:
ValueError – When step_mode is not 's'
- class qualia_plugin_snn.learningmodel.pytorch.layers.IFSRL[source]
Bases: Module
IFSRL: Integrate and Fire soft-reset with learnable Vth and activation scaling.
- __init__(v_threshold: float = 1.0, vth_init_l: float = 0.8, vth_init_h: float = 1.0, alpha: float = 1.0, device: str = 'cpu') None [source]
Construct IFSRL.
- Parameters:
v_threshold (float) – Factor to apply to the uniform initialization bounds
vth_init_l (float) – Lower bound for uniform initialization of threshold Tensor
vth_init_h (float) – Higher bound for uniform initialization of threshold Tensor
alpha (float) – Sigmoid surrogate scale factor
device (str) – Device to run the computation on
- Return type:
None
- forward(input: Tensor) Tensor [source]
Forward of IFSRL. Calls ifsrl_fn().
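The `alpha` parameter scales a sigmoid surrogate gradient: the forward pass keeps the hard spiking nonlinearity, while the backward pass substitutes the derivative of a scaled sigmoid so that the threshold remains learnable. A minimal sketch of this mechanism, not the library's `ifsrl_fn()` implementation (`SigmoidSurrogateSpike` is a hypothetical name):

```python
import torch

class SigmoidSurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v_minus_vth: torch.Tensor, alpha: float) -> torch.Tensor:
        ctx.save_for_backward(v_minus_vth)
        ctx.alpha = alpha
        # hard Heaviside step in the forward pass: spike where v >= Vth
        return (v_minus_vth >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_vth,) = ctx.saved_tensors
        # replace the Heaviside's zero-almost-everywhere gradient with the
        # derivative of a scaled sigmoid: alpha * s * (1 - s), s = sigmoid(alpha * x)
        s = torch.sigmoid(ctx.alpha * v_minus_vth)
        return grad_output * ctx.alpha * s * (1 - s), None

v = torch.tensor([0.5, -0.5], requires_grad=True)
spike = SigmoidSurrogateSpike.apply(v, 4.0)
spike.sum().backward()
# v.grad is nonzero on both sides of the threshold, enabling gradient descent on Vth
```

A larger `alpha` makes the surrogate gradient sharper around the threshold, approaching the true step function at the cost of vanishing gradients far from it.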
- get_coeffs() Tensor [source]
Return the Tensor of threshold vp_th.
- Returns:
Tensor of threshold vp_th
- Return type:
Tensor