Knative performance tests are tests geared towards producing useful performance
metrics of the Knative system. As such, they can take a blackbox point of view
of the system and exercise it just as an end user would, or take a more whitebox
approach to narrow down the components under test.
## Load Generator
Knative has a load generator library that can be used to generate load. The
load generator uses Fortio to generate load based on the generator options set
in the test. The load generator provides the generator results, which include
all the data points for the load generation request as well as any calculated
latencies.
For example:
```go
opts := loadgenerator.GeneratorOptions{Duration: 1 * time.Minute, NumThreads: 1}
resp, err := opts.RunLoadTest(false /* resolvableDomain */)
if err != nil {
	t.Fatalf("Generating traffic via fortio failed: %v", err)
}
```
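The returned results can then be inspected for the calculated latencies. A
minimal sketch, assuming the results wrap Fortio's duration histogram with
precomputed percentiles (the exact field layout is an assumption; check the
generator results type for the real structure):

```go
// Assumption: resp.Result holds the Fortio run results, each carrying a
// DurationHistogram with precomputed percentiles (values in seconds).
for _, res := range resp.Result {
	for _, p := range res.DurationHistogram.Percentiles {
		if p.Percentile == 99 {
			t.Logf("p99 latency: %fs", p.Value)
		}
	}
}
```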
## Prometheus metrics
Knative provides a prometheus wrapper that provides methods to wait for
Prometheus to scrape the metrics once the test is finished. It also provides a
way to query the Prometheus server for any server-side metrics and then display
those in testgrid.
For example:
```go
promAPI, err := prometheus.PromAPI()
if err != nil {
	logger.Error("Cannot setup prometheus API")
}
query := fmt.Sprintf("%s{namespace_name=%q, configuration_name=%q, revision_name=%q}", metric, test.ServingNamespace, names.Config, names.Revision)
val, err := prometheus.RunQuery(context.Background(), logger, promAPI, query)
if err != nil {
	logger.Infof("Error querying metric %s: %v", metric, err)
}
```
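Because Prometheus scrapes on a fixed interval, the query should only run after
the metrics produced by the test have actually been scraped. A minimal sketch
of that wait step; the helper name AllowPrometheusSync is an assumption here,
so check the wrapper for the exact API:

```go
// Assumption: the wrapper exposes a sync helper that blocks long enough
// for the next Prometheus scrape cycle to complete before we query.
prometheus.AllowPrometheusSync(logger)
```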
## Zipkin trace
Zipkin tracing can be enabled during setup if the performance test needs it.

```go
perfClients, err := Setup(t, EnableZipkinTracing)
```
Once enabled, all requests made by the `SpoofingClient` will have an additional
trace header. This header can be used to get the entire request trace and store
it in a trace file in the artifacts directory using the `AddTrace()` method.
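A hedged sketch of that flow; the trace header name and the `AddTrace()`
signature are assumptions here, so check the helper's definition for the real
API:

```go
// Assumption: the spoofed response carries the Zipkin trace ID in a
// "ZIPKIN-TRACE-ID" header, and AddTrace takes the logger, the test name,
// and the trace ID, writing the trace file under the artifacts directory.
traceID := resp.Header.Get("ZIPKIN-TRACE-ID")
AddTrace(logger, t.Name(), traceID)
```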
Sample Trace:
```json
[
  ...
  {
    "traceId": "f5bd1989e056eec7aa790bbb914b77f9",
    "parentId": "b7dfefa0fe93308b",
    "id": "c994972581898fcd",
    "kind": "SERVER",
    "name": "activator-service.knative-serving.svc.cluster.local:80/*",
    "timestamp": 1552595276313951,
    "duration": 1872929,
    "localEndpoint": {
      "serviceName": "activator",
      "ipv4": "10.16.4.89"
    },
    "tags": {
      "component": "proxy",
      "downstream_cluster": "-",
      "error": "true",
      "guid:x-request-id": "efacbe52-9a03-40a7-a7f2-d35f9c8b1dff",
      "http.method": "GET",
      "http.protocol": "HTTP/1.1",
      "http.status_code": "500",
      "http.url": "http://scale-to-n-scale-100-40-xxackadp.serving-tests.example.com/",
      "node_id": "sidecar~10.16.4.89~activator-68664559c9-zl2sl.knative-serving~knative-serving.svc.cluster.local",
      "request_size": "0",
      "response_flags": "-",
      "response_size": "0",
      "upstream_cluster": "inbound|80||activator-service.knative-serving.svc.cluster.local",
      "user_agent": "Go-http-client/1.1"
    },
    "shared": true
  }
]
```
## Displaying metrics
Once the test is done, each test can define which metrics it wants captured and
shown on testgrid. For each metric, create a testgrid testcase by using the
`CreatePerfTestCase()` method.
For example:
```go
import perf "knative.dev/test-infra/shared/performance"

testName := "TestPerformanceLatency"
var tc []testgrid.TestCase
for name, val := range metrics {
	tc = append(tc, perf.CreatePerfTestCase(val, name, testName))
}
```
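Here `metrics` is simply a map from metric name to measured value, assembled
earlier in the test, for example from the load generator results or the
Prometheus query above. An illustrative sketch (the names and the value source
are placeholders):

```go
// Illustrative only: collect whatever values the test measured; each
// entry becomes its own testgrid test case via CreatePerfTestCase.
metrics := map[string]float64{
	"perc99_latency": val, // e.g. the value returned by prometheus.RunQuery
}
```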
Once we create all the test cases for the metrics, we can use the
`CreateTestgridXML()` method to create the output XML that will be used as
input to testgrid. If the test is run locally, it will create a file called
`junit_<testName>.xml` and save it locally under
`knative/serving/test/performance/artifacts/` (this creates the artifacts
directory if it is not present, and reuses the existing one if it is). When the
test is run with Prow, this file will be stored with the other artifacts
generated by Prow, like the build log.
For example:
```go
if err = testgrid.CreateTestgridXML(tc, testName); err != nil {
	t.Fatalf("Cannot create output xml: %v", err)
}
```
All the metrics are appended to the `junit_knative.xml` file and can be seen on
testgrid.
Performance tests are simple Go tests that use Fortio for generating load and
Testgrid for viewing the metrics on a continuous basis. For whitebox tests, the
performance tests also bring up the prometheus service along with
knative-serving, so you can query Prometheus for server-side metrics alongside
the load generator metrics.
## Testgrid
Knative uses testgrid to show all its metrics. Performance metrics are shown on
a separate tab for each repo, such as serving. Testgrid picks up the
`junit_knative.xml` file generated by the tests and automatically displays each
metric and its value in the grid on the performance tab.