examples/ directory
v1.1.49
Published: Apr 3, 2022 License: BSD-3-Clause

Directories

Path       Synopsis
bench      bench runs a benchmark model with 5 layers (3 hidden, Input, Output), all of the same size, for benchmarking different size networks.
deep_fsa   deep_fsa runs a DeepLeabra network on the classic Reber grammar finite state automaton problem.
sim        sim is a simple simulation to run the env example.
eqplot     eqplot plots an equation updating over time in an etable.Table and Plot2D. This is a good starting point for any plotting to explore specific equations.
hip        hip runs a hippocampus model on the AB-AC paired associate learning task.
hip_bench  hip_bench runs a hippocampus model for testing parameters and new learning ideas.
ra25       ra25 runs a simple random-associator four-layer leabra network that uses the standard supervised learning paradigm to learn mappings between 25 random input / output patterns defined over 5x5 input / output layers (i.e., 25 units); see the configuration sketch below.
sir        sir illustrates the dynamic gating of information into PFC active maintenance by the basal ganglia (BG).
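
For orientation, below is a minimal sketch of how an example like ra25 assembles its network with the emergent/leabra Go API as of v1.1.x (AddLayer2D, ConnectLayers, BidirConnectLayers, prjn.NewFull). The 7x7 hidden-layer sizes are illustrative assumptions, and the sketch omits the GUI, environments, parameter sets, and training loop; the canonical setup is in the ra25 example's own source in this directory.

```go
package main

import (
	"fmt"
	"log"

	"github.com/emer/emergent/emer"
	"github.com/emer/emergent/prjn"
	"github.com/emer/leabra/leabra"
)

// Minimal sketch (assumed v1.1.x API): build a four-layer leabra network
// with 5x5 input/output layers, roughly as in the ra25 example, without
// the GUI, environments, or training loop that the real example provides.
func main() {
	net := &leabra.Network{}
	net.InitName(net, "RA25")

	// 5x5 input and output layers; hidden sizes here are illustrative.
	inp := net.AddLayer2D("Input", 5, 5, emer.Input)
	hid1 := net.AddLayer2D("Hidden1", 7, 7, emer.Hidden)
	hid2 := net.AddLayer2D("Hidden2", 7, 7, emer.Hidden)
	out := net.AddLayer2D("Output", 5, 5, emer.Target)

	// Full (all-to-all) projections; hidden and output layers are
	// connected bidirectionally for error-driven leabra learning.
	full := prjn.NewFull()
	net.ConnectLayers(inp, hid1, full, emer.Forward)
	net.BidirConnectLayers(hid1, hid2, full)
	net.BidirConnectLayers(hid2, out, full)

	net.Defaults()
	if err := net.Build(); err != nil {
		log.Fatal(err)
	}
	net.InitWts()

	fmt.Println(net.Name(), "built with", net.NLayers(), "layers")
}
```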
