kibanacatalog

package v0.9.1
Published: Dec 6, 2017 License: Apache-2.0 Imports: 8 Imported by: 4

README

FireCamp Kibana Internals

The FireCamp Kibana container is based on the official elastic.co Kibana image, docker.elastic.co/kibana/kibana:5.6.3. The data volume is mounted to the /data directory inside the container, and the Kibana data is stored under /data/kibana.

Kibana Cluster

Topology

Each Kibana member runs in a single availability zone. To tolerate one availability zone failure, the Kibana service can have 2 replicas; FireCamp distributes one replica to each availability zone. When one availability zone goes down, you can direct the Kibana client to the member in the other availability zone.

Balance load across multiple ElasticSearch nodes

Currently Kibana simply connects to the first member of the ElasticSearch service. Usually the ElasticSearch service has more than one data node. As Kibana recommends, to balance load across the ElasticSearch nodes, the FireCamp Kibana service will run a local ElasticSearch instance in coordinating-only mode in a coming release.
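For reference, a coordinating-only node in ElasticSearch 5.x is configured by disabling the other node roles in elasticsearch.yml. The snippet below is a minimal sketch of such a configuration, not FireCamp's actual setup:

```yaml
# elasticsearch.yml: a coordinating-only node (ElasticSearch 5.x).
# Such a node holds no data and is not master-eligible; it only
# routes requests and merges results across the data nodes.
node.master: false
node.data: false
node.ingest: false
```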

Memory

At startup, Kibana may use up to 1.5GB of memory, even when the ElasticSearch service does not hold much data. By default, FireCamp reserves 2GB of memory for the Kibana container.

Security

The Kibana service inherits the FireCamp general security model. The Kibana cluster runs in the internal service security group, which is not exposed to the internet. Nodes in the application security group can only access the Kibana service port, 5601. The Bastion nodes are the only nodes that can SSH to the Kibana nodes.

Proxy

FireCamp Kibana is not exposed to the internet. Currently FireCamp ElasticSearch does not support security, so it is better to run Kibana inside the internal service security group. For a Kibana client outside AWS, such as your local laptop, you need to set up a proxy service in the application security group. The proxy service forwards the requests to Kibana, and the Kibana client connects to the proxy service.

The proxy service should set up basic authentication to protect access to Kibana. For example, Nginx could use an htpasswd.users file.
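A minimal Nginx server block along these lines could look as follows; the listen port, the htpasswd file path, and the Kibana member address are placeholders for your own values:

```nginx
server {
    listen 8080;

    # Require basic authentication; the htpasswd.users file is
    # created with the htpasswd tool, e.g.:
    #   htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
    auth_basic "Kibana Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        # Forward all requests to the internal Kibana member
        # (placeholder DNS name) on the Kibana port 5601.
        proxy_pass http://kibana-0.example-firecamp.com:5601;
    }
}
```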

Enabling Kibana SSL is also suggested for better protection.

SSL

SSL is supported to encrypt communications between the client, such as a browser, and the Kibana server. You can provide the SSL key and certificate files when creating the Kibana service.
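In Kibana 5.6, server-side SSL is controlled by the server.ssl settings in kibana.yml; a minimal sketch, with placeholder file paths, looks like:

```yaml
# kibana.yml: serve Kibana over HTTPS (Kibana 5.6 settings).
server.ssl.enabled: true
server.ssl.certificate: /path/to/kibana-server.crt
server.ssl.key: /path/to/kibana-server.key
```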

Logging

The Kibana logs are sent to the cloud logging service, such as AWS CloudWatch Logs.

Refs:

  1. Using Kibana in a Production Environment

Documentation

Index

Constants

const (

	// ContainerImage is the main running container.
	ContainerImage = common.ContainerNamePrefix + "kibana:" + defaultVersion

	// DefaultReserveMemoryMB is the default reserved memory size for Kibana
	DefaultReserveMemoryMB = 2048
)

Variables

This section is empty.

Functions

func GenDefaultCreateServiceRequest

func GenDefaultCreateServiceRequest(platform string, region string, azs []string, cluster string,
	service string, res *common.Resources, opts *manage.CatalogKibanaOptions, esNode string) *manage.CreateServiceRequest

GenDefaultCreateServiceRequest returns the default service creation request. Kibana simply connects to the first ElasticSearch node. TODO: create a local coordinating ElasticSearch instance.

func GenReplicaConfigs

func GenReplicaConfigs(platform string, cluster string, service string, azs []string, res *common.Resources, opts *manage.CatalogKibanaOptions, esNode string) []*manage.ReplicaConfig

GenReplicaConfigs generates the replica configs.

func ValidateRequest

func ValidateRequest(r *manage.CatalogCreateKibanaRequest) error

ValidateRequest checks if the request is valid.

Types

This section is empty.
