Documentation ¶
Overview ¶
The pole balancing experiment is a classic reinforcement learning task proposed by Richard Sutton and Charles Anderson. In this experiment we try to teach an RL model to balance a pole placed on a moving cart.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type CartDoublePoleGenerationEvaluator ¶
type CartDoublePoleGenerationEvaluator struct {
	// The output path to store execution results
	OutputPath string
	// The flag to indicate whether to apply the Markov evaluation variant
	Markov bool
	// The flag to indicate whether to use continuous or discrete activation
	ActionType experiments.ActionType
}
The double pole-balancing experiment, in both Markov and non-Markov versions.
func (CartDoublePoleGenerationEvaluator) GenerationEvaluate ¶
func (ex CartDoublePoleGenerationEvaluator) GenerationEvaluate(pop *genetics.Population, epoch *experiments.Generation, context *neat.NeatContext) (err error)
GenerationEvaluate performs evaluation of one epoch of the double pole-balancing experiment.
type CartPole ¶
type CartPole struct {
// contains filtered or unexported fields
}
The structure to describe cart pole emulation
type CartPoleGenerationEvaluator ¶
type CartPoleGenerationEvaluator struct {
	// The output path to store execution results
	OutputPath string
	// The flag to indicate whether the cart emulator should start from a random position
	RandomStart bool
	// The number of emulation steps the pole must be kept balanced to win
	WinBalancingSteps int
}
The single pole-balancing experiment entry point. This experiment performs evolution on the single pole-balancing task in order to produce an appropriate genome.
func (CartPoleGenerationEvaluator) GenerationEvaluate ¶
func (ex CartPoleGenerationEvaluator) GenerationEvaluate(pop *genetics.Population, epoch *experiments.Generation, context *neat.NeatContext) (err error)
GenerationEvaluate evaluates one epoch for the given population and, if an output directory is configured, prints the results there.