# Kubectl Prof

This is a kubectl plugin that allows you to profile applications with low overhead in Kubernetes environments by
generating FlameGraphs and many other output types, such as JFR, thread dump, heap dump and class histogram for Java
applications (by using jcmd). For Python applications, thread dump output and speedscope format files are also
supported. See the Usage section. More functionality will be added in the future.

Running `kubectl-prof` does not require any modification to existing pods.

This is an open source fork of kubectl-flame with several new features and bug fixes.
## Requirements

- Supported languages: Go, Java (any JVM based language), Python, Ruby, NodeJS, Clang and Clang++.
- Kubernetes clusters that use one of the following container runtimes:
    - CRI-O (default)
    - Containerd
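To find out which container runtime your cluster nodes use, one option is to list the nodes in wide output, which includes a runtime column:

```shell
# The CONTAINER-RUNTIME column shows the runtime and version,
# e.g. cri-o://1.24.1 or containerd://1.6.8
kubectl get nodes -o wide
```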
## Usage

### Profiling Java Pod

In order to profile a Java application in pod `mypod` for 1 minute and save the flamegraph into `/tmp`, run:

```shell
kubectl prof mypod -t 1m -l java -o flamegraph --local-path=/tmp
```

NOTICE: if `--local-path` is omitted, the flamegraph result will be saved into the current directory.
### Profiling Alpine based container

Profiling a Java application in Alpine based containers requires the `--alpine` flag:

```shell
kubectl prof mypod -t 1m --lang java -o flamegraph --alpine
```

NOTICE: this is only required for Java apps; the `--alpine` flag is unnecessary for other languages.
### Profiling Java Pod and generate JFR output

Profiling a Java Pod and generating JFR output requires the `-o`/`--output jfr` option:

```shell
kubectl prof mypod -t 5m -l java -o jfr
```
### Profiling Java Pod and generate JFR output by using jcmd

In this case, profiling a Java Pod and generating JFR output requires the `-o`/`--output jfr` and `--tool jcmd` options:

```shell
kubectl prof mypod -t 5m -l java -o jfr --tool jcmd
```
### Profiling Java Pod and generate thread dump output

In this case, profiling a Java Pod and generating the thread dump output requires the `-o`/`--output threaddump` option:

```shell
kubectl prof mypod -l java -o threaddump
```
### Profiling Java Pod and generate heap dump output

In this case, profiling a Java Pod and generating the heap dump output requires the `-o`/`--output heapdump` option:

```shell
kubectl prof mypod -l java -o heapdump --tool jcmd
```
### Profiling Java Pod and generate heap histogram output

In this case, profiling a Java Pod and generating the heap histogram output requires the `-o`/`--output heaphistogram` option:

```shell
kubectl prof mypod -l java -o heaphistogram --tool jcmd
```
### Profiling specifying the container runtime

Supported container runtime values are: `crio`, `containerd`.

```shell
kubectl prof mypod -t 1m --lang java --runtime crio
```
### Profiling Python Pod

In order to profile a Python application in pod `mypod` for 1 minute and save the flamegraph into `/tmp`, run:

```shell
kubectl prof mypod -t 1m --lang python -o flamegraph --local-path=/tmp
```
### Profiling Python Pod and generate thread dump output

In this case, profiling a Python Pod and generating the thread dump output requires the `-o`/`--output threaddump` option:

```shell
kubectl prof mypod -t 1m --lang python --local-path=/tmp -o threaddump
```

### Profiling Python Pod and generate speedscope output

In this case, profiling a Python Pod and generating speedscope output requires the `-o`/`--output speedscope` option:

```shell
kubectl prof mypod -t 1m --lang python --local-path=/tmp -o speedscope
```
### Profiling Golang Pod

In order to profile a Golang application in pod `mypod` for 1 minute, run:

```shell
kubectl prof mypod -t 1m --lang go -o flamegraph
```
### Profiling Node Pod

In order to profile a NodeJS application in pod `mypod` for 1 minute, run:

```shell
kubectl prof mypod -t 1m --lang node -o flamegraph
```
### Profiling Ruby Pod

In order to profile a Ruby application in pod `mypod` for 1 minute, run:

```shell
kubectl prof mypod -t 1m --lang ruby -o flamegraph
```
### Profiling Clang Pod

In order to profile a Clang application in pod `mypod` for 1 minute, run:

```shell
kubectl prof mypod -t 1m --lang clang -o flamegraph
```
### Profiling Clang++ Pod

In order to profile a Clang++ application in pod `mypod` for 1 minute, run:

```shell
kubectl prof mypod -t 1m --lang clang++ -o flamegraph
```
### Profiling with several options

Profiling a pod for 5 minutes in intervals of 60 seconds for the Java language, setting the CPU limits, the container runtime, the agent image and the image pull policy:

```shell
kubectl prof mypod -l java -o flamegraph -t 5m --interval 60s --cpu-limits=1 -r containerd --image=localhost/my-agent-image-jvm:latest --image-pull-policy=IfNotPresent
```

Profiling in the `contprof` namespace a pod running in the `contprof-apps` namespace, using the `profiler` service account, for the Go language:

```shell
kubectl prof mypod -n contprof --service-account=profiler --target-namespace=contprof-apps -l go
```

Profiling with custom resource requests and limits for the agent pod (default: neither requests nor limits are set) for the Python language:

```shell
kubectl prof mypod --cpu-requests 100m --cpu-limits 200m --mem-requests 100Mi --mem-limits 200Mi -l python
```

For more detailed options run:

```shell
kubectl prof --help
```
## Installation

### Pre-built binaries

See the release page for the full list of pre-built assets, and download the binary according to your architecture.

#### Installing for Linux x86_64

```shell
curl -sL https://github.com/josepdcs/kubectl-prof/releases/download/v1.0.0/kubectl-prof_v1.0.0_linux_x86_64.tar.gz -o kubectl-prof.tar.gz
tar xvfz kubectl-prof.tar.gz && sudo install kubectl-prof /usr/local/bin/
```
### Building

Install the source code and golang dependencies:

```shell
$ go get -d github.com/josepdcs/kubectl-prof
$ cd $GOPATH/src/github.com/josepdcs/kubectl-prof
$ make install-deps
```

Build the binary:

```shell
$ make
```

Build the agent containers by modifying the `DOCKER_BASE_IMAGE` property of the Makefile and running:

```shell
$ make agents
```
## How it works

`kubectl-prof` launches a Kubernetes Job on the same node as the target pod. Under the hood, `kubectl-prof` can use the following tools according to the programming language:

- For Java:
    - async-profiler, in order to generate flame graphs or JFR files and the rest of the output types supported by this tool.
    - jcmd, in order to generate: JFR files, thread dumps, heap dumps and heap histograms.
        - For generating JFR files use the options: `--tool jcmd` and `-o jfr`.
        - For generating thread dumps use the options: `--tool jcmd` and `-o threaddump`.
        - For generating heap dumps use the options: `--tool jcmd` and `-o heapdump`.
        - For generating heap histograms use the options: `--tool jcmd` and `-o heaphistogram`.
    - Note: the default tool is async-profiler if no `--tool` option is given, and the default output is flame graphs if no `-o`/`--output` option is given.
- For Golang: eBPF profiling.
- For Python: py-spy.
    - For generating thread dumps use the option: `-o threaddump`.
    - For generating speedscope output use the option: `-o speedscope`.
- For Ruby: rbspy.
- For NodeJS: eBPF profiling and perf, though the latter is not recommended.
    - In order for JavaScript symbols to be resolved, the node process needs to be run with the `--perf-basic-prof` flag.
- For Clang and Clang++: perf is the default profiler, but eBPF profiling is also supported.
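As a sketch of the NodeJS requirement above, the container's entrypoint would start node with the symbol-resolution flag (`app.js` is a hypothetical entry point, not something this project prescribes):

```shell
# Start the Node process with perf symbol maps enabled so that
# JavaScript frames can be resolved in the resulting profiles.
node --perf-basic-prof app.js
```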
`kubectl-prof` also supports working in two modes: discrete and continuous.

- In discrete mode, only one profiling result is requested. Once this result is obtained, the profiling process finishes. This is the default behaviour when only the `-t time` option is used.
- In continuous mode, more than one result can be produced. Given a session duration and an interval, a result is produced every interval until the profiling session finishes. Only the last produced result is available; it is the client's responsibility to store all the session results.
    - To use this mode, pass the `--interval time` option in addition to `-t time`.
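For instance, a continuous session could run for 5 minutes and emit a flamegraph every 60 seconds (the durations here are illustrative, and all flags are ones documented above):

```shell
# Continuous mode: a 5-minute session producing one result per 60s interval;
# each result is written to /tmp as it arrives.
kubectl prof mypod -l java -o flamegraph -t 5m --interval 60s --local-path=/tmp
```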
## Contribute

Please refer to the contributing.md file for information about how to get involved. We welcome issues, questions, and pull requests.

## Maintainers

Special thanks to the original author of kubectl-flame.

## License

This project is licensed under the terms of the Apache 2.0 open source license. Please refer to LICENSE for the full terms.

## Project status