Kubernetes

Rolling restart

kubectl rollout restart deployment $DEPLOYMENT_NAME
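
To watch the restart complete:

kubectl rollout status deployment $DEPLOYMENT_NAME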

Edit last deployment manifest

kubectl apply edit-last-applied deployment $DEPLOYMENT_NAME
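
To view the last applied configuration without opening an editor:

kubectl apply view-last-applied deployment $DEPLOYMENT_NAME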

Get the name and image

All pods

kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s '[[:space:]]' '\n' \
  | sort \
  | uniq -c

Single pod

kubectl get pods $POD_NAME -o json \
| jq '.status.containerStatuses[] | { "image": .image, "imageID": .imageID }'

All deployments

kubectl get deploy --all-namespaces -o jsonpath="{range .items[*]}{@.metadata.name},{@.spec.template.spec.containers[*].image} {end}" \
| tr -s '[[:space:]]' '\n' \
| sort

Get resource requests and limits

CPU

kubectl get pods -o=jsonpath='{.items[*]..resources.requests.cpu}' -A
kubectl get pods -o=jsonpath='{.items[*]..resources.limits.cpu}' -A

Memory

kubectl get pods -o=jsonpath='{.items[*]..resources.requests.memory}' -A
kubectl get pods -o=jsonpath='{.items[*]..resources.limits.memory}' -A
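
To see requests and limits side by side per pod, a custom-columns variant should also work (the column names are arbitrary):

kubectl get pods -A -o custom-columns='NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,CPU_LIM:.spec.containers[*].resources.limits.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory,MEM_LIM:.spec.containers[*].resources.limits.memory'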

How can I keep a Pod from crashing so that I can debug it?

 containers:
  - name: something
    image: some-image
    # `sh -c` evaluates the following string as shell input
    command: ["sh", "-c"]
    # loop forever, outputting "yo" every 5 seconds
    args: ["while true; do echo 'yo' && sleep 5; done;"]

Accessing the Kubernetes node

Use kubectl debug [1]

kubectl debug node/my-node -it --image busybox
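
The node's root filesystem should be mounted at /host inside the debug container, so from there:

chroot /host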

Delete pods in an error state

kubectl delete pods $(kubectl get pods -o wide | grep Error | awk '{print $1}')
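
If the pods are in the Failed phase, a field selector can replace the grep (this matches on the pod phase, not the STATUS column, so coverage may differ slightly):

kubectl delete pods -A --field-selector=status.phase=Failed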

View logs for a crashed pod

Use --previous [2]

kubectl logs <POD_NAME> --previous

Horizontal pod autoscaling

For CPU scaling use averageValue not averageUtilization.

averageValue specifies the exact CPU quantity to scale on, so the target is explicit, easy to reason about, and unaffected if the CPU request or limit changes (averageUtilization is calculated as a percentage of the request).
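
A minimal sketch of such an HPA using the autoscaling/v2 API; the deployment name my-app and the numbers are placeholders:

 apiVersion: autoscaling/v2
 kind: HorizontalPodAutoscaler
 metadata:
   name: my-app
 spec:
   scaleTargetRef:
     apiVersion: apps/v1
     kind: Deployment
     name: my-app
   minReplicas: 2
   maxReplicas: 10
   metrics:
     - type: Resource
       resource:
         name: cpu
         target:
           type: AverageValue
           averageValue: 500m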

Logging and mounting

The reason that both /var/log/containers and /var/lib/docker are mounted:

Symlinks are just like pointers to the real location. If that location is unreadable the symlink can't be followed. – jimmidyson

Taken from https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-image/td-agent.conf

This configuration file for Fluentd / td-agent is used to watch changes to Docker log files. The kubelet creates symlinks that capture the pod name, namespace, container name & Docker container ID to the docker logs for pods in the /var/log/containers directory on the host. If running this fluentd configuration in a Docker container, the /var/log directory should be mounted in the container.

The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
record & add labels to the log record if properly configured. This enables users
to filter & search logs on any metadata.
For example a Docker container's logs might be in the directory:

 /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b

and in the file:

 997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log

where 997599971ee6... is the Docker ID of the running container.
The Kubernetes kubelet makes a symbolic link to this file on the host machine
in the /var/log/containers directory which includes the pod name and the Kubernetes
container name:

   synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
   ->
  /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log

The /var/log directory on the host is mapped to the /var/log directory in the container
running this instance of Fluentd and we end up collecting the file:

/var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log

This results in the tag:

 var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log

The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name
which are added to the log message as a kubernetes field object & the Docker container ID
is also added under the docker field object.
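
A minimal sketch of the corresponding DaemonSet volume mounts, assuming a Docker-based node and hostPath volumes (names and image are illustrative); without the second mount, the symlinks under /var/log/containers point at files the container cannot read:

 containers:
   - name: fluentd
     image: fluent/fluentd-kubernetes-daemonset
     volumeMounts:
       - name: varlog
         mountPath: /var/log
       - name: varlibdockercontainers
         mountPath: /var/lib/docker/containers
         readOnly: true
 volumes:
   - name: varlog
     hostPath:
       path: /var/log
   - name: varlibdockercontainers
     hostPath:
       path: /var/lib/docker/containers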

QoS

Memory is an incompressible resource, so let's discuss the semantics of memory management a bit (illustrative resources blocks for each class follow the list).

  • Best-Effort pods will be treated as lowest priority. Processes in these pods are the first to get killed if the system runs out of memory. These containers can use any amount of free memory in the node though.

  • Guaranteed pods are considered top-priority and are guaranteed to not be killed until they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted.

  • Burstable pods have some form of minimal resource guarantee, but can use more resources when available. Under system memory pressure, these containers are more likely to be killed once they exceed their requests and no Best-Effort pods exist.
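
Illustrative resources blocks for each class (all values are placeholders):

 # Guaranteed: every container sets requests == limits for both CPU and memory
 resources:
   requests: { cpu: 500m, memory: 256Mi }
   limits:   { cpu: 500m, memory: 256Mi }

 # Burstable: requests lower than limits (or only requests set)
 resources:
   requests: { cpu: 100m, memory: 128Mi }
   limits:   { cpu: 500m, memory: 512Mi }

 # Best-Effort: no requests or limits on any container in the pod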

Node name

The following works in v1.4.5:

          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName

This was not yet documented for the Downward API at the time of writing (1/12/2016).
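
To check the value from inside a running pod (assuming the env var above):

kubectl exec $POD_NAME -- printenv NODE_NAME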

References
