Investigating performance issues
Start your investigation with the “Revision - HTTP Requests” dashboard.
To open this dashboard, open the Grafana UI as described in Accessing Metrics and navigate to “Knative Serving - Revision HTTP Requests”.
Select your configuration and revision from the menu at the top left of the page. You will see a page like this:
This dashboard gives visibility into the following for each revision:
- Request volume per HTTP response code
- Response time
- Response time per HTTP response code
- Request and response sizes
This dashboard can reveal discrepancies in traffic volume or latency between revisions. If, for example, one revision’s latency is higher than that of the other revisions, focus your investigation on that revision as you work through the rest of this guide.
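If you prefer to check these numbers outside the Grafana UI, the following minimal sketch queries the Prometheus HTTP API directly. It assumes Prometheus is reachable at localhost:9090 (for example via kubectl port-forward); the metric and label names in the query are illustrative, so copy the exact PromQL from the dashboard panel you care about (panel menu → Edit shows the query behind each chart).

```python
# Sketch: compare per-revision request latency straight from the Prometheus HTTP API.
# Assumes Prometheus is port-forwarded to localhost:9090. The metric name below is
# hypothetical -- replace it with the histogram your installation actually exports.
import requests

PROMETHEUS = "http://localhost:9090"

def query(promql: str):
    """Run an instant PromQL query and return the result vector."""
    resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": promql})
    resp.raise_for_status()
    return resp.json()["data"]["result"]

# Hypothetical query: 95th percentile request latency per revision over 5 minutes.
latency_by_revision = query(
    'histogram_quantile(0.95, sum(rate(revision_request_latencies_bucket[5m])) '
    'by (revision_name, le))'
)

for series in latency_by_revision:
    revision = series["metric"].get("revision_name", "<unknown>")
    value = float(series["value"][1])  # value is in the metric's native unit
    print(f"{revision}: p95 latency ~ {value:.1f}")
```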
Next, look into request traces to find out where the time is spent for a single request.
Select your revision from the “Service Name” dropdown, and then click the “Find Traces” button. You’ll get a view that looks like this:
In this example, you can see that the request spent most of its time in the span right before the last one, so focus your investigation on that specific span.
Click that span to see a view like the following:
This view shows detailed information about the specific span, such as the microservice or external URL that was called. In this example, the call to a Grafana URL is taking the most time. Focus your investigation on why that URL is taking so long.
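The same data can be pulled programmatically from the tracing backend’s API. The sketch below assumes a Zipkin-compatible backend port-forwarded to localhost:9411 and that your revision appears under the same name you picked in the “Service Name” dropdown; both are assumptions to verify against your installation.

```python
# Sketch: list the slowest spans of recent traces via the Zipkin v2 API.
# Assumes a Zipkin-compatible backend on localhost:9411; SERVICE_NAME is
# hypothetical -- use the value shown in the "Service Name" dropdown.
import requests

ZIPKIN = "http://localhost:9411"
SERVICE_NAME = "my-revision-service"

resp = requests.get(
    f"{ZIPKIN}/api/v2/traces",
    params={"serviceName": SERVICE_NAME, "limit": 10},
)
resp.raise_for_status()

for trace in resp.json():                # each trace is a list of spans
    spans = sorted(trace, key=lambda s: s.get("duration", 0), reverse=True)
    print(f"trace {spans[0]['traceId']}:")
    for span in spans[:3]:               # the three slowest spans
        millis = span.get("duration", 0) / 1000.0   # Zipkin durations are microseconds
        print(f"  {span.get('name', '<unnamed>')}: {millis:.1f} ms")
```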
If request metrics or traces do not show any obvious hot spots, or if they show that most of the time is spent in your own code, look at autoscaler metrics next.
The autoscaler dashboard shows four key metrics from the Knative Serving autoscaler:
- Actual pod count: # of pods that are running a given revision
- Desired pod count: # of pods that autoscaler thinks should serve the revision
- Requested pod count: # of pods that the autoscaler requested from Kubernetes
- Panic mode: If 0, the autoscaler is operating in stable mode. If 1, the autoscaler is operating in panic mode.
A large gap between the actual pod count and the requested pod count indicates that the Kubernetes cluster cannot allocate new resources fast enough, or that it has run out of the requested resources.
A large gap between the requested pod count and the desired pod count indicates that the Knative Serving autoscaler is unable to communicate with the Kubernetes API to make the request.
In the preceding example, the autoscaler requested 18 pods to optimally serve the traffic but was granted only 8 because the cluster was out of resources.
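To watch these gaps without the dashboard, you can read the autoscaler metrics straight from Prometheus. The sketch below assumes Prometheus on localhost:9090; the metric names match those used by Knative Serving autoscaler dashboards, but verify them against what your installation exports.

```python
# Sketch: print the four autoscaler metrics per revision from Prometheus.
# Assumes Prometheus is port-forwarded to localhost:9090; metric and label
# names should be checked against your Knative Serving version.
import requests

PROMETHEUS = "http://localhost:9090"
METRICS = [
    "autoscaler_actual_pods",
    "autoscaler_desired_pods",
    "autoscaler_requested_pods",
    "autoscaler_panic_mode",
]

for metric in METRICS:
    resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": metric})
    resp.raise_for_status()
    for series in resp.json()["data"]["result"]:
        revision = series["metric"].get("revision_name", "<unknown>")
        print(f"{metric}{{revision={revision}}} = {series['value'][1]}")
```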
You can access the total CPU and memory usage of your revision from the “Knative Serving - Revision CPU and Memory Usage” dashboard, which breaks usage down by the following containers:
- user-container: This container runs the user code (application, function, or container).
- istio-proxy: Sidecar container used to form an Istio service mesh.
- queue-proxy: Knative Serving-owned sidecar container that enforces request concurrency limits.
- fluentd-proxy: Sidecar container that collects logs from /var/log.
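The same per-container breakdown can be computed from the standard cAdvisor metrics that Kubernetes exposes to Prometheus. This sketch assumes Prometheus on localhost:9090; the label names (`pod`, `container`) vary between Kubernetes versions (older clusters use `pod_name` and `container_name`), and the pod prefix is hypothetical, so adjust both to match your cluster.

```python
# Sketch: per-container CPU usage for one revision's pods via cAdvisor metrics.
# Assumes Prometheus on localhost:9090; REVISION_POD_PREFIX is hypothetical --
# Knative revision pods are usually named after the revision.
import requests

PROMETHEUS = "http://localhost:9090"
REVISION_POD_PREFIX = "my-revision"

promql = (
    'sum(rate(container_cpu_usage_seconds_total{'
    f'pod=~"{REVISION_POD_PREFIX}.*", container!=""'
    '}[5m])) by (container)'
)
resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": promql})
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    container = series["metric"].get("container", "<unknown>")
    cores = float(series["value"][1])
    print(f"{container}: ~{cores:.3f} CPU cores")
```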
…To be filled…