# Synapses
The synapse is a key building block for connecting/wiring together the various
component cells that one would use for characterizing a biomimetic neural system.
These particular objects are meant to perform, per simulated time step, a
specific type of transformation -- such as a linear transform or a convolution -- of the signals that they receive as input. The synaptic efficacies underlying these transformations may remain fixed, or may evolve either incrementally across simulated time steps or by integrating a differential equation, e.g., via eligibility traces.

### Static (Dense) Synapse
This synapse performs a linear transform of its input signals. Note that this synaptic cable does not evolve and is meant to be used for fixed-value (dense) synaptic connections. Note also that all synapse components that inherit from the `DenseSynapse` class support sparsification (via the `p_conn` probability-of-existence argument) in order to produce sparsely connected synaptic connectivity patterns.
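As a rough sketch of the behavior described above (plain NumPy rather than ngc-learn's actual component API; all names, shapes, and values below are illustrative), a static dense synapse amounts to a fixed weight matrix, and `p_conn`-style sparsification amounts to a Bernoulli mask applied once at initialization:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 3   # illustrative layer sizes
p_conn = 0.5         # probability that any given synapse exists

# Fixed (non-evolving) weights, sparsified once by a Bernoulli mask --
# a rough analogue of the `p_conn` argument on DenseSynapse-derived components.
W = rng.standard_normal((n_in, n_out))
mask = (rng.random((n_in, n_out)) < p_conn).astype(W.dtype)
W = W * mask  # pruned synapses are zero and stay zero

x = rng.standard_normal((1, n_in))  # a single batch of input signals
out = x @ W                         # the static synapse's linear transform
```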
```{eval-rst}
.. autoclass:: ngclearn.components.StaticSynapse
```

### Static Deconvolutional Synapse
This synapse performs a deconvolutional transform of its input signals. Note that this synaptic cable does not evolve and is meant to be used for fixed-value deconvolution/transposed-convolution synaptic filters.
```{eval-rst}
.. autoclass:: ngclearn.components.DeconvSynapse
```

## Dynamic Synapse Types

### Short-Term Plasticity (Dense) Synapse
This synapse performs a linear transform of its input signals. Note that this synapse is "dynamic" in the sense that it engages in short-term plasticity (STP): its efficacy values change over time as a function of its inputs (and of simulated consumed resources), but it does not undergo any long-term form of plasticity/adjustment.
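The flavor of these dynamics can be sketched with the classic Tsodyks-Markram model of short-term plasticity (a standard formulation, not necessarily the exact equations ngc-learn implements; all parameter values below are illustrative):

```python
# Discrete-time sketch of short-term plasticity in the style of the
# Tsodyks-Markram model (illustrative constants; not necessarily the
# exact equations used by ngc-learn's STP component).
dt = 1.0                      # integration step (ms)
tau_f, tau_d = 750.0, 50.0    # facilitation / depression time constants
U = 0.2                       # facilitation increment per spike
u, x = 0.0, 1.0               # facilitation variable, available resources
w = 1.0                       # fixed baseline efficacy (no long-term change)

spikes = [0, 1, 1, 0, 1]      # hypothetical pre-synaptic spike train
psc = []
for s in spikes:
    u += -(u / tau_f) * dt + U * (1.0 - u) * s  # facilitation rises on spikes
    r = u * x * s                               # resources consumed this step
    x += ((1.0 - x) / tau_d) * dt - r           # resources recover, then deplete
    psc.append(w * r)                           # effective transmitted signal
```

Repeated spikes first facilitate (raise `u`) and then depress (deplete `x`) transmission, so the effective efficacy `u * x * w` changes with input history even though `w` itself never learns.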
## Multi-Factor Learning Synapse Types
Hebbian rules operate in a local manner -- they generally use information more immediately available to synapses in both space and time -- and can come in a wide variety of flavors. One general way to categorize variants of Hebbian learning is to clarify what (neural) statistics/values they operate on, e.g., whether they work with real-valued information or discrete spikes, and how many factors (or distinct terms) are involved in calculating the update to synaptic values under the relevant learning rule. <!--(Note that, in principle, all forms of plasticity in ngc-learn technically work like local, factor-based rules.)-->
### (Two-Factor) Hebbian Synapse
This synapse performs a linear transform of its input signals and evolves according to a strictly two-factor/term update rule. In other words, the underlying synaptic efficacy matrix is changed according to a product between pre-synaptic compartment values (`pre`) and post-synaptic compartment values (`post`), which can contain any type of vector/matrix statistics. This particular synapse further features some tools for advanced forms of Hebbian descent/ascent, such as applying the Hebbian update with adaptive learning rates (e.g., adaptive moment estimation, i.e., Adam).
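A minimal NumPy sketch of such a two-factor update (illustrative only -- the actual component additionally handles batching, masking, and optimizer integration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, lr = 4, 3, 0.01  # illustrative sizes and learning rate

W = 0.1 * rng.standard_normal((n_in, n_out))
pre = rng.standard_normal((1, n_in))  # pre-synaptic compartment values
post = pre @ W                        # post-synaptic compartment values

# Two-factor Hebbian update: the outer product of pre and post statistics.
dW = pre.T @ post
W += lr * dW  # a plain step; an Adam-style adaptive rate could rescale dW
```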
```{eval-rst}
.. autoclass:: ngclearn.components.HebbianSynapse
```

### (Two-Factor) BCM Synapse
This synapse performs a linear transform of its input signals and evolves according to a multi-factor, Bienenstock-Cooper-Munro (BCM) update rule. The underlying synaptic efficacy matrix is changed according to an evolved synaptic threshold parameter `theta` and a product between pre-synaptic compartment values (`pre`) and a nonlinear function of post-synaptic compartment (`post`) values, which can contain any type of vector/matrix statistics.
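In sketch form (plain NumPy, with an illustrative exponential moving average standing in for the evolution of `theta`; the component's actual threshold dynamics may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, lr = 4, 3, 0.01   # illustrative sizes and learning rate

W = 0.1 * rng.standard_normal((n_in, n_out))
theta = np.zeros((1, n_out))   # sliding modification threshold

for _ in range(10):
    pre = rng.standard_normal((1, n_in))
    post = pre @ W
    # BCM: the post-synaptic factor is nonlinear, post * (post - theta),
    # so activity below theta depresses and activity above it potentiates
    W += lr * (pre.T @ (post * (post - theta)))
    # theta evolves too, here tracking an average of squared activity
    theta += 0.1 * (post ** 2 - theta)
```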
```{eval-rst}
.. autoclass:: ngclearn.components.BCMSynapse
```

### (Two-Factor) Hebbian Convolutional Synapse
This synapse performs a convolutional transform of its input signals and evolves according to a two-factor update rule. The underlying synaptic filters are changed according to products between pre-synaptic compartment values (`pre`) and post-synaptic compartment (`post`) feature map values.
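The idea can be sketched in plain NumPy for a single-channel case (illustrative only; real components handle channels, strides, and padding): the filter update is the valid cross-correlation of the pre-synaptic map with the post-synaptic feature map.

```python
import numpy as np

def corr2d_valid(x, k):
    """Valid-mode 2D cross-correlation of map `x` with kernel `k`."""
    kh, kw = k.shape
    win = np.lib.stride_tricks.sliding_window_view(x, (kh, kw))
    return np.einsum('ijab,ab->ij', win, k)

rng = np.random.default_rng(0)
pre = rng.standard_normal((8, 8))       # pre-synaptic input map (illustrative)
K = 0.1 * rng.standard_normal((3, 3))   # synaptic filter

post = corr2d_valid(pre, K)             # post-synaptic feature map, shape (6, 6)

# Two-factor update: the filter change is the valid cross-correlation of
# the pre-synaptic map with the post-synaptic feature map.
dK = corr2d_valid(pre, post)            # shape (3, 3), matching K
K += 0.01 * dK
```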

### (Two-Factor) Hebbian Deconvolutional Synapse

This synapse performs a deconvolutional (transposed convolutional) transform of its input signals and evolves according to a two-factor update rule. The underlying synaptic filters are changed according to products between pre-synaptic compartment values (`pre`) and post-synaptic compartment (`post`) feature map values.

## Spike-Timing-Dependent Plasticity (STDP) Synapse Types

Synapses that evolve according to a spike-timing-dependent plasticity (STDP) process operate, at a high level, much like multi-factor Hebbian rules (given that STDP is a generalization of Hebbian adjustment to spike trains) and share many of their properties. Nevertheless, a distinguishing factor for STDP-based synapses is that they must involve action potential pulses (spikes) in their calculations, and they typically compute synaptic change according to the relative timing of spikes. In principle, any of the synapses in this grouping of components adapt their efficacies according to rules that involve at least four terms, i.e., a pre-synaptic spike (an "event"), a pre-synaptic delta timing (which can come in the form of a trace), a post-synaptic spike (or event), and a post-synaptic delta timing (which can also be a trace). In addition, STDP rules in ngc-learn typically enforce soft/hard synaptic strength bounding, i.e., there is a maximum magnitude allowed for any single synaptic efficacy, and, by default, an STDP synapse enforces that its synaptic strengths are non-negative.
Note: these rules are technically considered to be "two-factor" rules since they only operate on pre- and post-synaptic activity (despite each factor being represented by two or more terms).
### Trace-based STDP
This is a four-term STDP rule that adjusts the underlying synaptic strength matrix via a weighted combination of long-term depression (LTD) and long-term potentiation (LTP). For the LTP portion of the update, a pre-synaptic trace and a post-synaptic event/spike-trigger are used, and for the LTD portion of the update, a pre-synaptic event/spike-trigger and a post-synaptic trace are utilized. Note that this specific rule can be configured to use different forms of soft threshold bounding including a scheme that recovers a power-scaling form of STDP (via the hyper-parameter `mu`).
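A simplified NumPy sketch of trace-based STDP with hard bounding (illustrative constants; the component itself also exposes soft bounding and the `mu` power-scaling scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
A_plus, A_minus = 0.01, 0.012   # LTP / LTD amplitudes (illustrative)
tau_tr, dt = 20.0, 1.0          # trace time constant, step size

W = 0.3 * rng.random((n_in, n_out))
x_pre = np.zeros((1, n_in))     # pre-synaptic traces
x_post = np.zeros((1, n_out))   # post-synaptic traces

for _ in range(50):
    s_pre = (rng.random((1, n_in)) < 0.1).astype(float)    # pre spikes
    s_post = (rng.random((1, n_out)) < 0.1).astype(float)  # post spikes
    # traces decay exponentially and jump whenever a spike occurs
    x_pre += -(x_pre / tau_tr) * dt + s_pre
    x_post += -(x_post / tau_tr) * dt + s_post
    # LTP: pre trace x post spike; LTD: pre spike x post trace
    dW = A_plus * (x_pre.T @ s_post) - A_minus * (s_pre.T @ x_post)
    W = np.clip(W + dW, 0.0, 1.0)  # hard bounding; efficacies stay non-negative
```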
### Exponential STDP
This is a four-term STDP rule that directly incorporates a controllable exponential synaptic strength dependency into its dynamics. This synapse's LTP and LTD use traces and spike events in a manner similar to the trace-based STDP described above.
```{eval-rst}
.. autoclass:: ngclearn.components.ExpSTDPSynapse
```

### Event-Driven Post-Synaptic STDP Synapse
This is a synapse evolved under a two-term STDP rule that is driven only by spike events and operates within a defined pre-synaptic "window" of time.

### Trace-based STDP Convolutional Synapse

This is a four-term STDP rule for convolutional synapses/kernels that adjusts the underlying filters via a weighted combination of long-term depression (LTD) and long-term potentiation (LTP). For the LTP portion of the update, a pre-synaptic trace and a post-synaptic event/spike-trigger are used, and for the LTD portion of the update, a pre-synaptic event/spike-trigger and a post-synaptic trace are utilized.

### Trace-based STDP Deconvolutional Synapse

This is a four-term STDP rule for deconvolutional (transposed convolutional) synapses that adjusts the underlying filters via a weighted combination of long-term depression (LTD) and long-term potentiation (LTP). For the LTP portion of the update, a pre-synaptic trace and a post-synaptic event/spike-trigger are used, and for the LTD portion of the update, a pre-synaptic event/spike-trigger and a post-synaptic trace are utilized.
## Modulated Forms of Plasticity
This family of synapses implemented within ngc-learn supports modulated, often
at least three-factor, forms of synaptic adjustment. Modulators could include reward/dopamine values or scalar error signals, and are generally assumed to be administered to the synapse(s) externally (i.e., it is treated as another input provided by some other entity, e.g., another neural circuit).
### Reward-Modulated Trace-based STDP (MSTDP-ET)
This is a three-factor learning rule -- i.e., one that combines pre-synaptic activity, post-synaptic activity, and a modulatory signal -- known as modulated STDP (MSTDP). MSTDP adjusts the underlying synaptic strength matrix via a weighted combination of long-term depression (LTD) and long-term potentiation (LTP), scaled by an external signal such as a reward/dopamine value. The STDP element of this form of plasticity inherits from trace-based STDP (i.e., `TraceSTDPSynapse`). Note that this synapse component further supports a configuration for MSTDP-ET, or MSTDP with eligibility traces; under MSTDP-ET, the component treats each synapse as two elements -- a synaptic efficacy and a coupled synaptic trace that maintains the dynamics of STDP updates encountered over time.
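A toy NumPy sketch of the MSTDP-ET idea (illustrative, not the component's actual equations): trace-STDP updates accumulate into a decaying eligibility trace, and the weights only change when a modulatory reward signal arrives.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
lr, tau_e, tau_tr, dt = 0.05, 100.0, 20.0, 1.0  # illustrative constants

W = 0.3 * rng.random((n_in, n_out))
E = np.zeros_like(W)            # eligibility trace coupled to each synapse
x_pre = np.zeros((1, n_in))
x_post = np.zeros((1, n_out))

for t in range(50):
    s_pre = (rng.random((1, n_in)) < 0.1).astype(float)
    s_post = (rng.random((1, n_out)) < 0.1).astype(float)
    x_pre += -(x_pre / tau_tr) * dt + s_pre
    x_post += -(x_post / tau_tr) * dt + s_post
    stdp = x_pre.T @ s_post - s_pre.T @ x_post  # raw trace-STDP update
    # the eligibility trace decays while integrating STDP updates
    E += -(E / tau_e) * dt + stdp
    reward = 1.0 if t == 40 else 0.0            # hypothetical external modulator
    W += lr * reward * E                        # weights move only when rewarded
```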