I have a Kubernetes cluster with autoscaling enabled, and I deploy pods dynamically when needed (they are not stateful). Some pods are failing because their node runs out of memory (expected). When that happens the pod enters status: Failed, reason: Evicted, message: node was low on resource, even though new nodes can be spawned.
Is there a way to reschedule/retry such a pod on a different/new node automatically? If not, how can I re-deploy the same pod without changing any of its config?
Pods by themselves don't migrate to another node; an evicted Pod stays in the Failed state and is never rescheduled. To get automatic rescheduling, run the Pod under a controller: the controller creates a replacement Pod whenever the current one fails, and the scheduler places the replacement on another (possibly newly autoscaled) node.
Some examples of controllers are: Deployment, ReplicaSet, StatefulSet, DaemonSet, and Job.
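For illustration, here is a minimal Deployment sketch; the name, image, and resource values are placeholders, not taken from your setup. Setting a memory request also gives the scheduler accurate information, so replacement Pods are less likely to land on a node that is already under memory pressure:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder name
spec:
  replicas: 1                  # the Deployment keeps this many Pods running at all times
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest   # placeholder image
        resources:
          requests:
            memory: "256Mi"    # placeholder value: lets the scheduler pick a node with enough free memory
          limits:
            memory: "512Mi"    # placeholder value: caps usage so one Pod can't starve the node
```

If the managed Pod is evicted, the Deployment's ReplicaSet notices the missing replica and creates a new Pod, which can in turn trigger the cluster autoscaler to spawn a node when no existing node has capacity.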
Check this link for more information.
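As for re-deploying the same bare Pod without changing its config: assuming you still have the manifest you originally applied (pod.yaml below is a placeholder path), you can force-recreate it. This deletes the Failed Pod and submits an identical one, which the scheduler then places fresh:

```
kubectl replace --force -f pod.yaml
```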