Directories
Path | Synopsis
---|---
attn_trn | attn_trn: test of trn-based attention in basic V1, V2, LIP localist network with gabor inputs.
bench | bench runs a benchmark model with 5 layers (3 hidden, Input, Output), all of the same size, for benchmarking networks of different sizes.
deep_fsa | deep_fsa runs a DeepAxon network on the classic Reber grammar finite state automaton problem.
sim | sim is a simple simulation to run the env example.
eqplot | eqplot plots an equation updating over time in an etable.Table and Plot2D. This is a good starting point for any plotting to explore specific equations (see the sketch after this table).
hip | hip runs a hippocampus model on the AB-AC paired associate learning task.
hip_bench | hip_bench runs a hippocampus model for testing parameters and new learning ideas.
inhib | inhib: This simulation explores how inhibitory interneurons can dynamically control overall activity levels within the network by providing both feedforward and feedback inhibition to excitatory pyramidal neurons.
kinaseq | kinaseq plots a kinase learning simulation over time.
neuron | neuron: This simulation illustrates the basic properties of neural spiking and rate-code activation, reflecting a balance of excitatory and inhibitory influences (including leak and synaptic inhibition).
pcore | pcore: This project simulates the inhibitory dynamics in the STN and GPe leading to integration of Go vs. NoGo signals in the basal ganglia.
ra25 | ra25 runs a simple random-associator four-layer axon network that uses the standard supervised learning paradigm to learn mappings between 25 random input/output patterns defined over 5x5 input/output layers (i.e., 25 units).
ra25x | ra25x runs a simple random-associator four-layer axon network that uses the standard supervised learning paradigm to learn mappings between 25 random input/output patterns defined over 5x5 input/output layers (i.e., 25 units).
rl_cond | rl_cond explores the temporal difference (TD) reinforcement learning algorithm under some basic Pavlovian conditioning environments.
sir | sir illustrates the dynamic gating of information into PFC active maintenance by the basal ganglia (BG).
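The eqplot directory above is the suggested template for equation plotting. As a minimal, headless sketch of that pattern, the following assumes the etable v1 API (github.com/emer/etable); the column names and the equation here are illustrative, and the actual eqplot example additionally attaches the table to an eplot.Plot2D for interactive viewing.

```go
package main

import (
	"fmt"
	"math"

	"github.com/emer/etable/etable"
	"github.com/emer/etable/etensor"
)

func main() {
	nsteps := 100

	// Two-column table: time step and the equation's value at that step.
	sch := etable.Schema{
		{"Time", etensor.FLOAT64, nil, nil},
		{"Y", etensor.FLOAT64, nil, nil},
	}
	dt := &etable.Table{}
	dt.SetFromSchema(sch, nsteps)

	// Update the equation over time -- here an illustrative exponential
	// approach to 1 with rate 0.1, i.e., y(t) = 1 - exp(-0.1 t).
	for t := 0; t < nsteps; t++ {
		y := 1.0 - math.Exp(-0.1*float64(t))
		dt.SetCellFloat("Time", t, float64(t))
		dt.SetCellFloat("Y", t, y)
	}

	// eqplot would hand dt to an eplot.Plot2D; here we just sample a few rows.
	for t := 0; t < nsteps; t += 20 {
		fmt.Printf("t=%3d  y=%.4f\n", t, dt.CellFloat("Y", t))
	}
}
```

Swapping in a different equation only changes the inner loop; the table scaffolding stays the same, which is what makes eqplot a convenient starting point.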