
resetting restartCount of pods with static names

Published on 2020-03-27 15:42:03

For monitoring purposes, I want to rely on a pod's restartCount. However, I cannot seem to do that for certain apps, as restartCount is not reset even after rebooting the whole node the pod is scheduled to run on.
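For reference, the counter in question can be read straight from the pod's status; a minimal example (the pod name here is illustrative):

kubectl -n kube-system get pod kube-apiserver-some.node.name -o jsonpath='{.status.containerStatuses[0].restartCount}'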

Usually, recreating a pod resets this counter, unless the new pod keeps the same name as the old one (as is the case, e.g., for etcd, kube-controller-manager, kube-scheduler and kube-apiserver).
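For instance, deleting a Deployment-managed pod is enough, because the replacement comes back under a new name (the names here are made up):

kubectl -n my-namespace delete pod my-app-7d4b9c6f4-x2x9p

The ReplicaSet then creates a replacement pod with a fresh name, whose restartCount starts at 0.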

For those cases, there is a long-running minor issue as well as the idea of using kubectl patch.

To sum up the information there, kubectl edit will not allow changing anything in status. Unfortunately, neither does, for example:

kubectl -n kube-system patch pod kube-controller-manager-some.node.name --type='json' -p='[{"op": "replace", "path": "/status/containerStatuses/0/restartCount", "value": 14}]'

The Pod "kube-controller-manager-some.node.name" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)

So, has anyone found a workaround?

Thanks!

Robert

HelloWorld 2020-01-31 22:06

This seems to be quite an old issue (2017). Take a look here.

I believe the intended solution was to implement unique UIDs for static pods. The issue was reopened a few days ago as another GitHub issue, and the fix has not been implemented to this day.

I have found a workaround for it: change the static pod's manifest file, e.g. by adding some random annotation to the pod; see the sketch below.
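For example, on a kubeadm cluster the manifests live in /etc/kubernetes/manifests on the control-plane node (the path and the annotation name are just examples):

# on the node that runs the static pod
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml

# add or bump an annotation under metadata, e.g.:
#   metadata:
#     annotations:
#       restart-count-reset: "1"

The kubelet watches the manifest directory, so once the file changes it tears down the old mirror pod and starts a new one, and the new pod's restartCount begins at 0.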

Let me know if it was helpful.