Gravitywell
This project is deprecated in favour of ClusterAPI and other excellent provisioning tools.
Update: now supports the Digital Ocean API for cluster creation.
Update: the AWS API is now in alpha for AWS EKS - a CloudFormation template for automatic node pool creation is still being built. For decent results on AWS, use a more mature tool such as eksctl.
Gravitywell is designed to create Kubernetes clusters and deploy your applications.
It uses YAML to store deployments and supports multiple versions of Kubernetes resource definitions.
It lets you store your entire container infrastructure as code.
Supported providers:
- Google Cloud Platform
- Digital Ocean
- Minikube
- Amazon Web Services (partial support at this time)
How is gravitywell different and why is it useful?
- Can deploy across multiple cloud providers at the same time.
- All the other tools we found would either handle the multi-cloud deployment without the applications, or vice versa.
- Built on each cloud provider's own container API.
  - Ain't nobody got time to be deploying custom networking or policies when it's done for you for free.
- Uses dynamic interpolation with vortex so you aren't writing template files for days.
- Allows you to do more than just deploy clusters; it lets you bootstrap them with fully working dependent services.
  - Think getting MongoDB, ZooKeeper, Consul, NiFi, NGINX, APIs and a bunch of other stuff going straight away.
Getting Started
To get started you'll need Go installed, or fetch the binary from Homebrew or the releases page (macOS).
- Get with Go:
go get github.com/AlexsJones/gravitywell
- Download with Homebrew (from the Tap):
brew tap AlexsJones/homebrew-gravitywell && brew install gravitywell
- Download a cross-platform binary from the latest release page.
- Run the image from Docker Hub:
docker run tibbar/gravitywell:latest /gravitywell
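If you installed with go get, the binary lands in your Go bin directory, which may not be on your PATH yet. A quick check - this is ordinary Go/shell housekeeping, nothing gravitywell-specific:
# Add the Go bin directory to PATH if it isn't already there
export PATH="$PATH:$(go env GOPATH)/bin"
# Confirm the binary resolves
command -v gravitywell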
Prerequisites
The current implementation works with Google Cloud Platform, Digital Ocean, Minikube and (partially) Amazon Web Services.
For Google Cloud Platform, please set your service account for the right project:
export GOOGLE_APPLICATION_CREDENTIALS=~/Downloads/alex-example-e28058e8985b.json
For Digital Ocean, please set the token:
export DIGITAL_OCEAN_TOKEN=30109aoimvaoim42oi2mg2
Digital Ocean also requires additional tools for authentication, found here.
For Amazon Web Services, please set the AWS profile name and region:
export AWS_DEFAULT_PROFILE=alexprod
export AWS_DEFAULT_REGION=us-west-2
AWS also requires additional tools for authentication, found here.
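Before running gravitywell it can save time to confirm the credentials actually work, using each provider's own CLI. These are ordinary gcloud/doctl/aws commands, not part of gravitywell itself:
# Google Cloud Platform: the service account file should yield an access token
gcloud auth application-default print-access-token
# Digital Ocean: note that doctl authenticates separately from the DIGITAL_OCEAN_TOKEN variable above
doctl account get
# Amazon Web Services: verify the named profile resolves to a real identity
aws sts get-caller-identity --profile alexprod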
Running on GCP
At this point you are ready to run gravitywell.
To work with the templates used in the examples you'll also need vortex:
go get github.com/AlexsJones/vortex
It can be installed either via Go or as a binary.
Let's take it for a spin using the GCP example.
If you've looked at the templates you'll see a Helm-esque style of interpolation, e.g. "gke_{{.projectname}}_{{.projectregion}}_{{.clustername}}", whose values we're going to override.
Build the cluster template
vortex --output deployment/cluster --template examples/gcp/templates/cluster --set "projectname=alex-example" --set "projectregion=us-east4" --set "clustername=gke_alex-example_us-east4_testcluster"
Build the example application templates
vortex --output deployment/applications --template examples/common/templates/application --set "projectname=alex-example" --set "projectregion=us-east4" --set "clustername=gke_alex-example_us-east4_testcluster"
The deployment directory should look like this
deployment
├── applications
│ ├── apache_tika.yaml
│ ├── mongodb.yaml
│ └── zookeeper.yaml
└── cluster
└── small.yaml
gravitywell create -f deployment/
This will now start to provision any clusters that are required and then deploy the applications.
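If you'd like to see what gravitywell intends to do before it touches any cloud account, the -d/--dryrun and -v/--verbose flags documented under Flags below can be added; the exact output isn't reproduced here:
# Preview the run without provisioning anything, with verbose logging
gravitywell create -d -v -f deployment/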
Running on Minikube
Adjust the example-minikube/cluster/small.yaml to suit your VMDriver
gravitywell create -f example-minikube/
Yes that's all!
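As a purely hypothetical illustration of the VMDriver tweak mentioned above (check the shipped example-minikube/cluster/small.yaml for the real field layout), the driver value is just the usual minikube driver name:
# Hypothetical excerpt - compare against the shipped small.yaml
VMDriver: "virtualbox"   # or hyperkit, kvm2, ... whichever driver your machine supports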
Example files
This is what an example cluster may look like:
APIVersion: "v1"
Kind: "Cluster"
Strategy:
- Provider:
Name: "Google Cloud Platform"
Clusters:
- Cluster:
FullName: "gke_{{.projectname}}_{{.projectregion}}_{{.clustername}}"
ShortName: "{{.clustername}}"
Project: "{{.projectname}}"
Region: "us-east4"
Zones: ["us-east4-a"]
Labels:
type: "test"
InitialNodeCount: 1
InitialNodeType: "n1-standard-1"
OauthScopes: "https://www.googleapis.com/auth/monitoring.write,
https://www.googleapis.com/auth/logging.write,
https://www.googleapis.com/auth/trace.append,
https://www.googleapis.com/auth/devstorage.full_control,
https://www.googleapis.com/auth/compute"
NodePools:
- NodePool:
Name: "np1"
Count: 3
NodeType: "n1-standard-1"
Labels:
k8s-node-type: "test"
PostInstallHook:
- Execute:
Path: "."
Shell: "gcloud container clusters get-credentials {{.clustername}} --region={{.projectregion}} --project={{.projectname}}"
PostDeleteHook:
- Execute:
Path: "."
Shell: "pwd"
And this is an example application:
APIVersion: "v1"
Kind: "Application"
Strategy:
- Cluster:
FullName: "gke_{{.projectname}}_{{.projectregion}}_{{.clustername}}"
ShortName: "{{.clustername}}"
Applications:
- Application:
Name: "kubernetes-apache-tika"
Namespace: "tika"
VCS:
FileSystem: # Optional
Git: "git@github.com:AlexsJones/kubernetes-apache-tika.git"
#Optional tree reference selectors - use one at a time and follow format
# refs/heads/{branchname}
# refs/tags/{tagname}
# refs/remotes/
GitReference: refs/heads/master #Optional and this example just pulls master
ActionList:
- Execute:
Kind: "shell"
Configuration:
Command: pwd
Path: ../ #Optional value
- Execute:
Kind: "shell"
Configuration:
Command: ./build_environment.sh default
- Execute:
Kind: "kubernetes"
Configuration:
Path: deployment #Optional value
AwaitDeployment: true #Optional defaults to false
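The final "kubernetes" action points at a deployment directory inside the cloned repository, which holds ordinary Kubernetes manifests. As a hypothetical illustration of what such a file might contain (the real kubernetes-apache-tika repo ships its own manifests):
# deployment/tika-deployment.yaml - hypothetical example manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-tika
  namespace: tika
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache-tika
  template:
    metadata:
      labels:
        app: apache-tika
    spec:
      containers:
        - name: tika
          image: apache/tika:latest   # hypothetical image tag
          ports:
            - containerPort: 9998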
Action lists can also be moved into the repositories being deployed, to keep things clean!
Or use a combination of inline, local and remote, e.g.:
APIVersion: "v1"
Kind: "Application"
Strategy:
- Cluster:
FullName: "gke_{{.projectname}}_{{.projectregion}}_{{.clustername}}"
ShortName: "{{.clustername}}"
Applications:
- Application:
Name: "kubernetes-apache-tika"
Namespace: "tika"
VCS:
FileSystem: # Optional
Git: "git@github.com:AlexsJones/kubernetes-apache-tika.git"
#Optional tree reference selectors - use one at a time and follow format
# refs/heads/{branchname}
# refs/tags/{tagname}
# refs/remotes/
GitReference: refs/heads/master #Optional and this example just pulls master
ActionList:
Executions:
- Execute:
Kind: "Shell"
Configuration:
Command: kubectl create ns zk
- Execute:
Kind: "RunActionList"
Configuration:
LocalPath: templates/external/gwdeploymentconfig.yaml
- Execute:
Kind: "RunActionList"
Configuration:
RemotePath: tika-extras/additional-actionlist.yaml
This is where you can have an action list defined elsewhere; action lists can call other action lists in a chain, helping to create templated commands.
See an example here:
# ./templates/external/gwdeploymentconfig.yaml
APIVersion: "v1"
Kind: "ActionList"
ActionList:
  - Execute:
      Kind: "shell"
      Configuration:
        Command: ./build_environment.sh default
  - Execute:
      Kind: "RunActionList"
      Configuration:
        Path: example-gcp/templates/actionlist/actionlist-deployment.yaml
Flags
DryRun bool `short:"d" long:"dryrun" description:"Performs a dryrun."`
FileName string `short:"f" long:"filename" description:"filename to execute, also accepts a path."`
SSHKeyPath string `short:"s" long:"sshkeypath" description:"Custom ssh key path."`
MaxTimeout string `short:"m" long:"maxtimeout" description:"Max rollout time e.g. 60s or 1m"`
Verbose bool `short:"v" long:"verbose" description:"Enable verbose logging"`
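Putting a few of those flags together - the values here are illustrative, and the SSH key is only needed when the VCS entries use a private git@ remote:
# Verbose run over a whole directory, capping rollout waits at 2 minutes,
# with a custom SSH key for cloning the git@github.com:... repositories
gravitywell create -f deployment/ -v -m 2m -s ~/.ssh/id_rsa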
Running the tests
go test ./... -v
from the gravitywell directory on your GOPATH.
Contributing
Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.
Versioning
We use SemVer for versioning. For the versions available, see the tags on this repository.
Authors
- Alex Jones - Initial work
See also the list of contributors who participated in this project.
License
This project is licensed under the MIT License - see the LICENSE.md file for details
Acknowledgments
- Helm & terraform both great projects
- kubicorn does alot of very cool stuff
- https://eksctl.io/ was a fantastic reference for golang AWS sdk API
Special thanks