evolution

command module
v0.0.0-...-a129996 Latest
Published: Apr 16, 2017 License: MIT Imports: 18 Imported by: 0

README

evolution

This is a local (not distributed), Go (not Python) implementation of Evolution Strategies as a Scalable Alternative to Reinforcement Learning (Salimans et al.). The original starter code from the paper can be found at openai/evolution-strategies-starter. Under the covers it uses the openai/gym-http-api, more specifically binding-go, and uses unixpickle/anynet and unixpickle/anyvec for efficient high-level vector computation. Enjoy!

instructions

The goal is to solve CartPole-v0, which requires an average reward of 195 over 100 episodes. Install openai/gym; openai/gym-http-api is a dependency required by the Go source.
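That solved criterion is easy to state as a tiny Go helper (illustrative only; this helper is not part of the package):

```go
package main

import "fmt"

// solved reports whether the most recent 100 episode rewards average
// 195 or more, the standard "solved" criterion for CartPole-v0.
func solved(rewards []float64) bool {
	if len(rewards) < 100 {
		return false
	}
	sum := 0.0
	for _, r := range rewards[len(rewards)-100:] {
		sum += r
	}
	return sum/100 >= 195
}

func main() {
	rewards := make([]float64, 100)
	for i := range rewards {
		rewards[i] = 198.5
	}
	fmt.Println(solved(rewards)) // true: 198.5 average clears the 195 bar
}
```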

Get the binary. Clone, download, or whatever you want, or just

$ go get github.com/wenkesj/evolution

In a separate terminal, start the gym server from wherever github.com/openai/gym-http-api is located on your filesystem.

$ python gym_http_server.py

Run the trainer and evaluator with whatever concoction you choose.

$ # 200 episodes of "training" by 2 agents, and 100
$ # final episodes of evaluation with a single agent,
$ # saving results to the directory "~/agents2eps200"
$ evolution --outmonitor ~/agents2eps200 \
  --finalepisodes 100 \
  --episodes 200 \
  --agents 2

example results

(figure: CartPole average training reward)

So, after 42 episodes, the 2 agents evolve enough to simply destroy the game on their own. In this simple case, we apply a cutoff average reward of 195 or above for both agents, signifying that the parameters should, on average, be able to solve the game with a single offspring. So we test that fact:

(figure: CartPole average evaluation reward)

And it works! We get an average reward of 198.5 over 100 episodes!

roadmap

  • Parallelize where needed to avoid embarrassment 😏
  • 32/64 bit support?
  • Support multiple environments gym-http-api#47
  • Serialize/deserialize networks the anynet way
  • Goal: $ evolution -net net.proto -env Pong-v0 ...
    • Input network for specific environment (i.e. Pong-v0)
    • Save/load
  • Optimizations
  • Plotting, statistics, performance profiling, uploading

disclaimer

This is a project for my Complex Systems and Networks class. This isn't meant to be comparable to the original work; I'm not a master coder/statistical god/Andrej Karpathy, I just thought this was a cool idea. This is an implementation with results and interpretation.

Documentation


There is no documentation for this package.

Directories

Path Synopsis
Package agent implements a neural network worker.
Package env implements the gym-http-api utilities.
Package noise implements noise sharing and Gaussian binning.
Package opt implements Adam optimization.
Package policy implements a generic policy for evaluation of an environment.
Package util implements utility functions and data structures. For argsort adaptation: Copyright 2013 The Gonum Authors.
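Noise sharing, the noise package's job, is the trick that makes ES cheap to distribute: workers never ship perturbation vectors around, only offsets into one large Gaussian table built from a shared seed. A minimal sketch of the idea, assuming hypothetical names (`noiseTable`, `newNoiseTable`, `slice` are not the package's actual API):

```go
package main

import (
	"fmt"
	"math/rand"
)

// noiseTable holds one large block of pre-sampled Gaussian noise.
// Every worker seeded identically builds a byte-for-byte identical table,
// so a perturbation can be communicated as a single integer offset.
type noiseTable struct {
	data []float64
}

func newNoiseTable(size int, seed int64) *noiseTable {
	rng := rand.New(rand.NewSource(seed))
	data := make([]float64, size)
	for i := range data {
		data[i] = rng.NormFloat64()
	}
	return &noiseTable{data: data}
}

// slice returns dim consecutive noise values starting at offset.
func (t *noiseTable) slice(offset, dim int) []float64 {
	return t.data[offset : offset+dim]
}

func main() {
	a := newNoiseTable(1000, 42)
	b := newNoiseTable(1000, 42)
	// Same seed, same offset: both workers recover the same perturbation.
	fmt.Println(a.slice(10, 3)[0] == b.slice(10, 3)[0]) // true
}
```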
