logging-operator
Logging operator for Kubernetes based on Fluentd and Fluent-bit. For more details, see the introductory blog post.
What is this operator for?
This operator helps you bundle logging configuration together with your applications. With the help of Custom Resource Definitions, you can describe the logging behaviour of your application within its charts; the operator does the rest.
Motivation
The logging operator automates the deployment and configuration of a Kubernetes logging pipeline. Under the hood, the operator configures a fluent-bit daemonset that collects container logs from the node file system. Fluent-bit enriches the logs with Kubernetes metadata and forwards them to fluentd. Fluentd receives, filters and transfers logs to multiple outputs. The whole flow can be defined in a single custom resource, and your logs are always transferred over authenticated and encrypted channels.
Blogs
Logging-operator is a core part of the Pipeline platform, a Cloud Native application and devops platform that natively supports multi- and hybrid-cloud deployments with multiple authentication backends. Check out the developer beta.
Deploying with Helm chart
The following steps set up an example configuration for sending nginx logs to S3.
Add the Banzai Cloud chart repository:
$ helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
$ helm repo update
Install the logging-operator chart:
$ helm install banzaicloud-stable/logging-operator
Install the Fluentd and Fluent-bit CRs from the chart:
$ helm install banzaicloud-stable/logging-operator-fluent
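With the operator and the fluent CRs in place, the nginx-to-S3 example mentioned above is described by a `Plugin` custom resource. The sketch below illustrates the general shape only; the exact field names, parameter keys, and the `logging-s3` Secret are assumptions and should be checked against the examples shipped with the chart.

```yaml
# Hypothetical sketch of a Plugin CR routing nginx logs to S3.
# Field names and the referenced Secret are assumptions; verify
# against the CRD examples bundled with the chart.
apiVersion: logging.banzaicloud.com/v1alpha1
kind: Plugin
metadata:
  name: nginx-logging
spec:
  input:
    label:
      app: nginx                  # collect logs from pods labelled app=nginx
  output:
    - type: s3
      name: outputS3
      parameters:
        - name: aws_key_id
          valueFrom:
            secretKeyRef:
              name: logging-s3    # hypothetical Secret holding AWS credentials
              key: awsAccessKeyId
        - name: aws_sec_key
          valueFrom:
            secretKeyRef:
              name: logging-s3
              key: awsSecretAccessKey
        - name: s3_bucket
          value: my-nginx-logs    # placeholder bucket name
        - name: s3_region
          value: us-east-1
```

Applying a resource like this is what wires the collected nginx logs to the S3 output; no fluentd configuration has to be written by hand.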
Deploying with Kubernetes Manifest
# Create all the CRDs used by the Operator
kubectl create -f deploy/crds/logging_v1alpha1_plugin_crd.yaml
kubectl create -f deploy/crds/logging_v1alpha1_fluentbit_crd.yaml
kubectl create -f deploy/crds/logging_v1alpha1_fluentd_crd.yaml
# If RBAC enabled create the required resources
kubectl create -f deploy/clusterrole.yaml
kubectl create -f deploy/clusterrole_binding.yaml
kubectl create -f deploy/service_account.yaml
# Create the Operator
kubectl create -f deploy/operator.yaml
# Create the fluent-bit daemonset by submitting a fluent-bit CR
kubectl create -f deploy/crds/logging_v1alpha1_fluentbit_cr.yaml
# Create the fluentd deployment by submitting a fluentd CR
kubectl create -f deploy/crds/logging_v1alpha1_fluentd_cr.yaml
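After submitting the CRs, the operator should create the corresponding workloads. The following commands are one way to verify this against a live cluster; the resource and deployment names are assumptions and may differ depending on how the operator was deployed.

```shell
# Verify that the operator created the logging components.
# Names below are assumptions; adjust to your deployment.
kubectl get pods                           # expect logging-operator, fluentd and fluent-bit pods
kubectl get fluentbits,fluentds            # list the custom resources just submitted
kubectl logs deployment/logging-operator   # inspect the operator's own logs for errors
```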
Supported Plugins
| Name | Type | Description | Status | Version |
|------|------|-------------|--------|---------|
| Alibaba | Output | Store logs in the Alibaba Cloud Object Storage Service | GA | 0.0.2 |
| Amazon S3 | Output | Store logs in Amazon S3 | GA | 1.1.10 |
| Azure | Output | Store logs in Azure Storage | GA | 0.1.1 |
| Google Storage | Output | Store logs in Google Cloud Storage | GA | 0.4.0.beta1 |
| Grafana Loki | Output | Transfer logs to Loki | Testing | 0.2 |
| ElasticSearch | Output | Send your logs to Elasticsearch | GA | 3.5.2 |
| HDFS | Output | Fluentd output plugin to write data into Hadoop HDFS over WebHDFS/HttpFS | GA | 1.2.3 |
| Kubernetes Metadata Filter | Filter | Filter plugin to add Kubernetes metadata | GA | 2.2.0 |
| Parser | Parser | Parse logs with the parser plugin | GA | |
Troubleshooting
If you encounter any problems that the documentation does not address, please file an issue or talk to us on the Banzai Cloud Slack channel #logging-operator.
Contributing
If you find this project useful, here's how you can help:
- Send a pull request with your new features and bug fixes
- Help new users with issues they may encounter
- Support the development of this project and star this repo!
License
Copyright (c) 2017-2019 Banzai Cloud, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.