Strops - A Simple Command Line Dictionary Tool

What is Strops?

Strops is a simple demo gRPC application that performs three basic functions (a sketch of the string operations follows the list):

  • Order: Take string input and order the characters alphabetically. This will also work for numbers within strings.
  • Reverse: Reverse a string and return the result.
  • Define: Retrieve the definition of a word, alongside words that commonly follow that word in a sentence.
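
For orientation, the sketch below shows one way the two string operations might be implemented; the package and function names are illustrative and not necessarily those used in the repository.

    package strops

    import "sort"

    // orderString returns the characters of s sorted by code point, which is
    // alphabetical for lowercase input and places digits before letters, so
    // numbers within strings are handled too.
    func orderString(s string) string {
        runes := []rune(s)
        sort.Slice(runes, func(i, j int) bool { return runes[i] < runes[j] })
        return string(runes)
    }

    // reverseString returns s with its characters in reverse order.
    func reverseString(s string) string {
        runes := []rune(s)
        for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
            runes[i], runes[j] = runes[j], runes[i]
        }
        return string(runes)
    }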

How does it work?

Strops is built using gRPC. A server runs on a Google Kubernetes Engine cluster, and a client communicates with that endpoint; this can be modified to use your own server implementation. The output of the commands is returned to the user in a JSON structure for use in other applications. The example client provided in the repository uses the Cobra framework, which is commonly used across the industry, with projects and organisations such as Kubernetes, DigitalOcean and many more building their command-line tools with it.
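
For illustration, here is a minimal sketch of a client call, assuming generated gRPC code with a StropsClient stub and a Define RPC; the import path, service, method and message names are assumptions based on the description above, not taken from the repository.

    package main

    import (
        "context"
        "encoding/json"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"

        pb "gitlab.com/jackhughes/grpc-example/proto" // assumed import path for the generated code
    )

    func main() {
        // Dial the server endpoint (the GKE service, or a local container).
        conn, err := grpc.Dial("127.0.0.1:8282", grpc.WithInsecure())
        if err != nil {
            log.Fatalf("could not connect: %v", err)
        }
        defer conn.Close()

        client := pb.NewStropsClient(conn) // assumed generated constructor

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Call the define RPC and print the response as JSON, mirroring the
        // JSON output described above.
        resp, err := client.Define(ctx, &pb.DefineRequest{Word: "hello"}) // assumed message type
        if err != nil {
            log.Fatalf("define failed: %v", err)
        }

        out, _ := json.MarshalIndent(resp, "", "  ")
        fmt.Println(string(out))
    }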

Local Installation

Prerequisites

  • Docker
  • Minikube
  • Go 1.12.6 (the latest at the time of writing; previous versions will likely still work)
  • Go Modules enabled

Server

Clone the repository to your local machine. Run make build to build the server Docker image. When this is complete, run the server Docker image on the host network using the following command:

docker run --net host -e SERVER_OWL_URL={url} -e SERVER_OWL_TOKEN={token} -e SERVER_PORT={port} -e SERVER_DATAMUSE_URL={url} registry.gitlab.com/jackhughes/grpc-example:latest

When the container is running, use docker inspect to get its IP address to use with the client.
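
For orientation, here is a rough sketch of what the server bootstrap might look like, reading SERVER_PORT from the environment as in the docker run command above; the generated registration call and the handler type are assumptions.

    package main

    import (
        "log"
        "net"
        "os"

        "google.golang.org/grpc"

        pb "gitlab.com/jackhughes/grpc-example/proto" // assumed import path for the generated code
    )

    // server would implement the generated Strops service interface
    // (the RPC methods are omitted here).
    type server struct{}

    func main() {
        // SERVER_PORT is supplied as an environment variable, as shown in the
        // docker run command above.
        port := os.Getenv("SERVER_PORT")

        lis, err := net.Listen("tcp", ":"+port)
        if err != nil {
            log.Fatalf("failed to listen on port %s: %v", port, err)
        }

        s := grpc.NewServer()
        pb.RegisterStropsServer(s, &server{}) // assumed generated registration function

        if err := s.Serve(lis); err != nil {
            log.Fatalf("failed to serve: %v", err)
        }
    }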

Client

You can either build a binary of the client using make build-client or "go run" the main.go file from the cmd/client path. The client defaults to my personal production cluster; you can override this using the -a and -p flags. -a is the address of the server (the one you got from docker inspect) and -p is the port that you passed to the container as an environment variable. A sketch of this flag wiring follows the examples below.

The -h or --help flags will give you an idea of how the client works, but in summary there are three commands available to you:

  • ./strops define hello hits the define function call.
  • ./strops reverse hello hits the reverse function call.
  • ./strops order hello hits the order function call.

An example of passing address and port flags:

  • ./strops define hello -a 127.0.0.1 -p 8282
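
The flag handling described above maps naturally onto Cobra. A minimal sketch of the wiring follows; only the -a and -p shorthands come from the client described here, while the long flag names, defaults and command body are illustrative.

    package main

    import (
        "fmt"
        "os"

        "github.com/spf13/cobra"
    )

    var (
        address string
        port    string
    )

    func main() {
        rootCmd := &cobra.Command{Use: "strops"}

        // -a and -p are available on every subcommand; the long names and
        // defaults here are illustrative.
        rootCmd.PersistentFlags().StringVarP(&address, "address", "a", "127.0.0.1", "server address")
        rootCmd.PersistentFlags().StringVarP(&port, "port", "p", "8282", "server port")

        defineCmd := &cobra.Command{
            Use:  "define [word]",
            Args: cobra.ExactArgs(1),
            Run: func(cmd *cobra.Command, args []string) {
                // A real command would dial address:port and call the define RPC.
                fmt.Printf("define %q against %s:%s\n", args[0], address, port)
            },
        }
        rootCmd.AddCommand(defineCmd)

        if err := rootCmd.Execute(); err != nil {
            os.Exit(1)
        }
    }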

Testing

The application comes with a full suite of unit tests and benchmark tests for the server. To generate a code coverage report, use the make coverage command. This will place an HTML coverage report in the artifacts directory.

Alongside this, you can run benchmark tests with make benchmark; these tests measure the efficiency of the order and reverse commands.
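
make benchmark presumably wraps go test -bench under the hood; a benchmark in this suite would look roughly like the sketch below, which reuses the illustrative reverseString from the earlier sketch (the real function name may differ).

    package strops

    import "testing"

    // BenchmarkReverse measures how quickly the reverse operation handles a
    // fixed input; b.N is chosen by the testing framework.
    func BenchmarkReverse(b *testing.B) {
        for i := 0; i < b.N; i++ {
            reverseString("hello world")
        }
    }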

CI/CD

The application is fully integrated with GitLab's CI/CD suite. Three stages are performed on every push to a branch:

  • Verify
  • Test
  • Build

The verification step runs several tools to ensure code quality is at the desired level, including:

  • gocyclo
  • golint
  • deadcode
  • misspell

The test step runs the unit tests and uploads the coverage report to an artifacts directory within GitLab. An example of this can be seen here: Code Coverage.

The build step builds the Docker image of the server and uploads it to the GitLab registry, with the current commit hash as its tag.

The whole CI/CD process can be better understood by looking at the Makefile and .gitlab-ci.yml files. The process of a build looks like this:

[Pipeline diagram]

Please see example jobs here: Jobs

Conforming to 12 Factor App Standards

  1. Codebase
    • The codebase is versioned and controlled using Git and GitLab.
  2. Dependencies
    • Dependencies are managed using Go Modules, see the go.mod and go.sum files.
  3. Config
    • Configuration is held in environment variables. This config is loaded when the server starts and shared where appropriate. Secrets are also used to protect sensitive data.
  4. Backing Services
    • Third party APIs are treated as part of the application.
  5. Build, release, run
    • Each step is clearly defined in the CI/CD description above. Running the application / deploying to Kubernetes is, at this point, a separate process.
  6. Processes
    • The server application does not store state and is volatile; it does not care if it exits and is then restarted by the container orchestrator.
  7. Port Binding
    • Services and containers are exposed via port binding. See the Kubernetes Service config, as well as the Dockerfile.
  8. Concurrency
    • The application makes use of concurrency in the definition function to make two web requests in parallel, much faster than a synchronous implementation (see the sketch after this list).
  9. Disposability
    • The application uses the scratch base image, giving an image size of approximately 6 MB. This makes for ultra-fast boot and push/pull times. The application can exit gracefully or non-gracefully and be rescheduled by the container orchestrator.
  10. Dev/Prod Parity
    • Through the use of environment variables and Minikube/Kubernetes, the same application configuration can be used to run the server in both dev and prod environments. Alongside this, no code ever needs to check which environment it is running in.
  11. Logs
    • The logs in this application are treated as a stream. Logs are output to STDOUT and STDERR where they can later be picked up and parsed by an application like Logstash.
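
Points 3 and 8 can be illustrated together. The sketch below reads the two API base URLs from the environment variables shown in the docker run command above and issues both requests concurrently; the real server's request paths and response handling will differ, this only shows the shape of the pattern.

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "os"
        "sync"
    )

    // fetch performs a single GET and sends the body (or an error note) on out.
    func fetch(url string, out chan<- string, wg *sync.WaitGroup) {
        defer wg.Done()
        resp, err := http.Get(url)
        if err != nil {
            out <- fmt.Sprintf("error fetching %s: %v", url, err)
            return
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        out <- string(body)
    }

    func main() {
        // Config comes from the environment, as in point 3 above.
        owlURL := os.Getenv("SERVER_OWL_URL")
        datamuseURL := os.Getenv("SERVER_DATAMUSE_URL")

        // Both third-party requests run concurrently, as in point 8 above.
        out := make(chan string, 2)
        var wg sync.WaitGroup
        wg.Add(2)
        go fetch(owlURL, out, &wg)
        go fetch(datamuseURL, out, &wg)
        wg.Wait()
        close(out)

        for result := range out {
            fmt.Println(result)
        }
    }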

Cloud Native Design

This application was designed to be cloud native from the ground up. The server runs in an ultra-lightweight Docker container based on a scratch image. This makes deployments, pushes and pulls exceedingly fast. The application is written in Go, a modern language with a key focus on building static binaries and developing software quickly. The application also makes use of Go's straightforward concurrency patterns to deliver faster and more performant software.

The microservices themselves are loosely coupled, and are made discoverable in this instance through the use of KubeDNS. This application could be scaled out to hundreds or thousands of pods and suffer no issues (though the third-party APIs may struggle at that scale).

gRPC provides pre-defined specifications for these microservices to interact with each other and with clients, while also leveraging the benefits of HTTP/2.

As the containers run from images, they are completely independent of the operating system that runs under the hood. They are ready to be deployed onto a host of cloud infrastructure providers, such as AWS and GCE. In fact, there is no reason why these containers could not be deployed to AWS ECS instead of Kubernetes right now. The architecture itself is elastic: more instances can be added to the pool whenever needed if the cluster is maxing out on resources.

This project also makes heavy use of DevOps processes, namely CI/CD pipelines to verify, test and build the code.

To Do

Eventstore

The next part of this application would be to store events and logs as they happen. I'd like to approach this by using a queuing service such as RabbitMQ, Kafka, or AWS SQS. When an event happens, e.g. a user requests a definition or an error log is returned from the application, I would create a topic on that queue and add the relevant metadata. Consumers would then poll these queues for new events. These consumers could be serverless functions or Kubernetes pods that would receive the data from the queue, transform it (or not) and then persist it to a storage engine; something like DynamoDB or Elasticsearch immediately springs to mind. From this, I could generate alerts and graphs using tools like Prometheus and Grafana, and monitor these applications with ease. A rough sketch of the publisher side follows.
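
One possible shape of the publisher side, assuming RabbitMQ via the streadway/amqp client; the exchange name, routing key and event payload are all illustrative.

    package main

    import (
        "encoding/json"
        "log"

        "github.com/streadway/amqp"
    )

    // event is an illustrative payload for a "definition requested" event.
    type event struct {
        Type string `json:"type"`
        Word string `json:"word"`
    }

    func main() {
        conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
        if err != nil {
            log.Fatalf("could not connect to broker: %v", err)
        }
        defer conn.Close()

        ch, err := conn.Channel()
        if err != nil {
            log.Fatalf("could not open channel: %v", err)
        }
        defer ch.Close()

        // Declare an illustrative topic exchange; consumers would bind queues to
        // it and persist the events to a storage engine.
        if err := ch.ExchangeDeclare("strops.events", "topic", true, false, false, false, nil); err != nil {
            log.Fatalf("could not declare exchange: %v", err)
        }

        body, _ := json.Marshal(event{Type: "definition_requested", Word: "hello"})

        err = ch.Publish("strops.events", "definition.requested", false, false, amqp.Publishing{
            ContentType: "application/json",
            Body:        body,
        })
        if err != nil {
            log.Fatalf("could not publish event: %v", err)
        }
    }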

Consumer Access / Ingress Controllers

At the moment, the application creates a Service which exposes a load balancer in GCE to the public internet. I'd prefer this access to be controlled by a Kubernetes Ingress controller, with DNS in front of the service. Users would then be able to access the application via a human-readable domain name, and this would also allow greater access control on the cluster. The application could potentially not be exposed to the public internet at all, with a proxy sitting in front of it to determine access, e.g. from trusted servers/locations.

Kubernetes Deployment

I need to add the deployment step to the CI/CD pipeline.
