Kubernetes

Rolling restart

kubectl rollout restart deployment $DEPLOYMENT_NAME

Edit last deployment manifest

kubectl apply edit-last-applied deployment $DEPLOYMENT_NAME

Get the name and image

All pods

kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" \
| tr -s '[[:space:]]' '\n' \
| sort \
| uniq -c

Single pod

kubectl get pods $POD_NAME -o json \
| jq '.status.containerStatuses[] | { "image": .image, "imageID": .imageID }'

All deployments

kubectl get deploy --all-namespaces -o jsonpath="{range .items[*]}{@.metadata.name},{@.spec.template.spec.containers[*].image} {end}" \
| tr -s '[[:space:]]' '\n' \
| sort

Get resource request and limits

CPU

Memory
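
A jsonpath query along these lines can list CPU and memory requests and limits per pod (a sketch; the column order is illustrative, and containers with no value set print an empty field):

```shell
# namespace, pod, cpu request, cpu limit, memory request, memory limit
kubectl get pods --all-namespaces -o jsonpath="{range .items[*]}{.metadata.namespace}{'\t'}{.metadata.name}{'\t'}{.spec.containers[*].resources.requests.cpu}{'\t'}{.spec.containers[*].resources.limits.cpu}{'\t'}{.spec.containers[*].resources.requests.memory}{'\t'}{.spec.containers[*].resources.limits.memory}{'\n'}{end}"
```

`kubectl describe node $NODE_NAME` also shows the aggregated requests/limits per node.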

How can I keep a Pod from crashing so that I can debug it?
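
One approach (a sketch, not a full recipe): override the container's command so it sleeps instead of running the crashing entrypoint, then exec in and investigate. Assumes the image's `sleep` accepts `infinity` (GNU coreutils does; some minimal images may not).

```shell
# Replace the entrypoint of the first container so the pod stays up
# ($DEPLOYMENT_NAME is a placeholder):
kubectl patch deployment $DEPLOYMENT_NAME --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["sleep", "infinity"]}]'

# Then open a shell in the now-idle pod:
kubectl exec -it $POD_NAME -- sh
```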

Accessing the Kubernetes node

Use kubectl debug [1]
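
For node access, kubectl debug can start an interactive pod on the node with the node's root filesystem mounted at /host ($NODE_NAME is a placeholder):

```shell
kubectl debug node/$NODE_NAME -it --image=busybox
# Inside the pod, the host filesystem is under /host, e.g.:
# chroot /host
```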

Delete pods in an error state
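
A field selector on the pod phase covers pods in the Failed phase; for pods that show Error in the STATUS column a grep/awk variant works too (a sketch):

```shell
# Delete pods whose phase is Failed:
kubectl delete pods --field-selector=status.phase=Failed

# Or match on the STATUS column that kubectl prints:
kubectl get pods --no-headers | awk '$3 == "Error" {print $1}' | xargs -r kubectl delete pod
```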

View logs for a crashed pod

Use --previous [2]
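
That is, fetch the logs of the previous (crashed) instance of the container rather than the current one:

```shell
kubectl logs $POD_NAME --previous
```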

Horizontal pod autoscaling

For CPU scaling use averageValue not averageUtilization.

averageValue specifies the exact CPU amount to scale on, which makes the behaviour obvious and easy to reason about, and it keeps working unchanged if the CPU request or limit is later adjusted (averageUtilization is a percentage of the request, so changing the request silently changes the scaling threshold).
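
A minimal HPA spec using averageValue might look like this (a sketch; names and values are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: AverageValue
          averageValue: 500m   # scale on absolute CPU usage, not % of request
EOF
```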

Logging and mounting

The reason that both /var/log/containers and /var/lib/docker are mounted:

Symlinks are just like pointers to the real location. If that location is unreadable the symlink can't be followed. – jimmidyson

Taken from https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-image/td-agent.conf

This configuration file for Fluentd / td-agent is used to watch changes to Docker log files. The kubelet creates symlinks that capture the pod name, namespace, container name & Docker container ID to the docker logs for pods in the /var/log/containers directory on the host. If running this fluentd configuration in a Docker container, the /var/log directory should be mounted in the container.
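
The mount layout described above would look roughly like this in a fluentd pod spec (a sketch; names and image are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fluentd-demo
spec:
  containers:
    - name: fluentd
      image: fluent/fluentd
      volumeMounts:
        - name: varlog
          mountPath: /var/log              # where the kubelet's symlinks live
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true                   # the real log files the symlinks point at
  volumes:
    - name: varlog
      hostPath:
        path: /var/log
    - name: varlibdockercontainers
      hostPath:
        path: /var/lib/docker/containers
EOF
```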

QoS

Memory is an incompressible resource and so let's discuss the semantics of memory management a bit.

  • Best-Effort pods will be treated as lowest priority. Processes in these pods are the first to get killed if the system runs out of memory. These containers can use any amount of free memory in the node though.

  • Guaranteed pods are considered top-priority and are guaranteed to not be killed until they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted.

  • Burstable pods have some form of minimal resource guarantee, but can use more resources when available. Under system memory pressure, these containers are more likely to be killed once they exceed their requests and no Best-Effort pods exist.
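
The QoS class is derived from requests and limits: neither set on any container gives Best-Effort, requests equal to limits for every container (CPU and memory) gives Guaranteed, and anything in between gives Burstable. A Guaranteed pod would look like this (a sketch):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo
spec:
  containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:            # equal to requests => Guaranteed QoS class
          cpu: 100m
          memory: 128Mi
EOF

# Verify the assigned class:
kubectl get pod qos-guaranteed-demo -o jsonpath='{.status.qosClass}'
```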

node name

The following works in v1.4.5.

This is not yet in the downward API documentation (1/12/2016).
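
Exposing the node name to a container via the downward API looks like this (a sketch; pod and variable names are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodename-demo
spec:
  containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo running on $NODE_NAME && sleep 3600"]
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName   # downward API: the node the pod landed on
EOF
```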

Scaling

For example, if the current metric value is 200m, and the desired value is 100m, the number of replicas will be doubled, since 200.0 / 100.0 == 2.0. If the current value is instead 50m, you'll halve the number of replicas, since 50.0 / 100.0 == 0.5. The control plane skips any scaling action if the ratio is sufficiently close to 1.0 (within a globally-configurable tolerance, 0.1 by default).

https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details
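
The quoted rule is desiredReplicas = ceil(currentReplicas * currentMetric / desiredMetric); worked through in shell (variable names and values are illustrative):

```shell
current_replicas=4
current_metric=200    # e.g. 200m average CPU across pods
desired_metric=100    # target of 100m

# Integer ceiling division implements ceil(a / b) as (a + b - 1) / b:
desired_replicas=$(( (current_replicas * current_metric + desired_metric - 1) / desired_metric ))
echo "$desired_replicas"    # 4 * 200 / 100 = 8 replicas
```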

References
