examples/

directory
v1.8.10
Published: Jul 29, 2023 | License: BSD-3-Clause

Directories

Path          Synopsis
bench         Runs a benchmark model with 5 layers (3 hidden, plus Input and Output), all of the same size, for benchmarking networks of different sizes.
objrec        Explores how a hierarchy of areas in the ventral stream of visual processing (up to inferotemporal (IT) cortex) can produce robust object recognition that is invariant to changes in position, size, etc. of retinal input images.
boa           Tests BG, OFC, and ACC learning in a CS-driven approach task.
deep_fsa      Runs a DeepAxon network on the classic Reber grammar finite state automaton problem.
deep_move     Runs a DeepAxon network predicting the effects of movement on visual inputs.
deep_music    Runs a DeepAxon network predicting the next note in a musical sequence of notes.
hip           Runs a hippocampus model for testing parameters and new learning ideas.
inhib         Explores how inhibitory interneurons can dynamically control overall activity levels within the network by providing both feedforward and feedback inhibition to excitatory pyramidal neurons.
mpi           A version of ra25 that runs under MPI to learn in parallel across multiple nodes, sharing DWt changes via MPI.
neuron        Illustrates the basic properties of neural spiking and rate-code activation, reflecting a balance of excitatory and inhibitory influences (including leak and synaptic inhibition).
pcore         Simulates the inhibitory dynamics in the STN and GPe leading to integration of Go vs. NoGo signals.
effort_plot   Plots the PVLV effort cost equations.
ra25          Runs a simple random-associator four-layer axon network that uses the standard supervised learning paradigm to learn mappings between 25 random input/output patterns defined over 5x5 input/output layers (i.e., 25 units).
ra25x         Runs a simple random-associator four-layer axon network that uses the standard supervised learning paradigm to learn mappings between 25 random input/output patterns defined over 5x5 input/output layers (i.e., 25 units).
rl_cond       Explores the temporal differences (TD) and Rescorla-Wagner reinforcement learning algorithms under some basic Pavlovian conditioning environments.
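The ra25 and ra25x entries above train on 25 random input/output pattern pairs presented over 5x5 Input and Output layers. As a rough, self-contained illustration of what such pattern pairs look like, here is a minimal sketch in Go. This is not the example's own generator (the example uses pattern-generation utilities from the emergent library), and the choice of 6 active units per pattern is an assumption made only for illustration.

    // Minimal sketch (not part of the axon codebase): generate 25 random 5x5
    // binary input/output pattern pairs, the kind of data ra25 trains on.
    // The choice of 6 active units per pattern is an illustrative assumption.
    package main

    import (
        "fmt"
        "math/rand"
    )

    // randPattern returns a flattened 5x5 pattern with nOn randomly chosen units set to 1.
    func randPattern(rng *rand.Rand, nOn int) [25]float32 {
        var pat [25]float32
        for _, idx := range rng.Perm(25)[:nOn] {
            pat[idx] = 1
        }
        return pat
    }

    func main() {
        rng := rand.New(rand.NewSource(1)) // fixed seed so the patterns are reproducible
        for i := 0; i < 25; i++ {
            in := randPattern(rng, 6)  // input pattern for this pair
            out := randPattern(rng, 6) // associated output (target) pattern
            fmt.Printf("pair %2d\n  input:  %v\n  output: %v\n", i, in, out)
        }
    }

In the actual example, each random input pattern is paired with a fixed random output pattern, and the network is trained to produce the associated output on its 5x5 Output layer when given the corresponding input.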
