Updating DaemonSets
You need to update a log agent DaemonSet to a new image version across 20 nodes. Unlike a Deployment, you cannot take a DaemonSet offline during the update. The agents must keep running on as many nodes as possible throughout. DaemonSets support two update strategies: RollingUpdate, which handles this automatically, and OnDelete, which gives you manual control per node.
First, create a DaemonSet to work with:
Open a new file in an editor:

```shell
nano update-agent.yaml
```

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: update-agent
spec:
  selector:
    matchLabels:
      app: update-agent
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: update-agent
    spec:
      containers:
      - name: agent
        image: busybox:1.35
        # Keep the container running; a bare busybox shell exits immediately
        command: ["sleep", "infinity"]
```

Apply it and check the Pods:

```shell
kubectl apply -f update-agent.yaml
kubectl get pods -o wide
```

Note the current image version in the Pod spec. You will update it in a moment.
RollingUpdate: one node at a time
With RollingUpdate and maxUnavailable: 1, the DaemonSet controller deletes one Pod and waits for its replacement to become Ready before moving to the next node. At most one node is missing its agent at any point during the update.
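For reference, this behavior is controlled by the updateStrategy stanza in the manifest above. maxUnavailable also accepts a percentage of nodes, which scales with cluster size (the percentage value below is illustrative):

```yaml
# Excerpt of the DaemonSet spec shown earlier.
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    # Either an absolute number of nodes...
    maxUnavailable: 1
    # ...or a percentage of nodes, e.g.:
    # maxUnavailable: 10%
```

On 20 nodes, maxUnavailable: 10% allows up to 2 nodes to be without their agent at once, so the update finishes in fewer rounds than with a fixed count of 1.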
Trigger the update by changing the image:
```shell
kubectl set image daemonset/update-agent agent=busybox:1.36
```

Watch the rollout progress:
```shell
kubectl rollout status daemonset/update-agent
```

The output shows each node as the update proceeds. Once the command returns, all nodes are running the new image.
A DaemonSet has maxUnavailable: 2 and runs on 6 nodes. During a rolling update, how many nodes can be missing their Pod simultaneously?
Reveal answer
2. maxUnavailable for a DaemonSet sets the maximum number of nodes that can have their Pod down at the same time during a rolling update. With maxUnavailable: 2 on 6 nodes, at most 2 nodes are unprotected at any moment.
Inspecting the rollout
```shell
kubectl describe daemonset update-agent
```

Look at the Events section. You will see lines like SuccessfulCreate for each new Pod and SuccessfulDelete for each old one. During a rolling update, these events alternate as the controller cycles through nodes.
Check the current image across all Pods:
```shell
kubectl get pods -l app=update-agent -o yaml
```

All Pods should now show image: busybox:1.36. If any Pod still shows the old image, the rollout stalled. The most common reason is a node condition that prevents the new Pod from reaching Ready.
Rolling back
If the new image has a problem, roll back with the same command you use for Deployments:
```shell
kubectl rollout undo daemonset/update-agent
```

This restores the previous image on all nodes using the same rolling process. The rollback respects maxUnavailable just like a forward update. If you do run the undo now, set the image back to busybox:1.36 afterward so the next section starts from the updated version.
DaemonSet rollout history works much like a Deployment's, but the mechanism differs: instead of keeping old ReplicaSets as rollback points, the controller saves each template change as a ControllerRevision. kubectl rollout history daemonset/update-agent lists the saved revisions, kubectl rollout undo returns to the previous one, and the --to-revision flag reaches older revisions. Retention is capped by spec.revisionHistoryLimit (10 by default), so for long-lived rollback points, maintain your manifests in version control and apply an older version manually.
OnDelete: manual per-node control
The OnDelete strategy updates a Pod only when you explicitly delete it. The DaemonSet controller will not touch running Pods. This gives you node-by-node control over when the update happens.
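If you manage the manifest declaratively, the same switch can be made in the YAML rather than with the patch command below (a fragment of the update-agent manifest from earlier):

```yaml
# In update-agent.yaml, replace the RollingUpdate stanza with:
updateStrategy:
  type: OnDelete
```

OnDelete takes no sub-fields; maxUnavailable applies only to the RollingUpdate strategy.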
Change the strategy:
```shell
kubectl patch daemonset update-agent -p '{"spec":{"updateStrategy":{"type":"OnDelete"}}}'
```

Now update the image:
```shell
kubectl set image daemonset/update-agent agent=busybox:1.35
```

Check the Pods. They are still running busybox:1.36. The image change was recorded, but no Pods were updated. To apply the update on a specific node, delete the Pod on that node:
```shell
kubectl delete pod -l app=update-agent --field-selector spec.nodeName=sim-worker
```

The DaemonSet controller immediately creates a replacement Pod, this time with the new image. You control exactly when each node gets updated.
You are running a DaemonSet with OnDelete strategy and update the image. You check the Pods and they all still show the old image. Is this a bug?
Reveal answer
No. OnDelete is intentional behavior. The DaemonSet records the new desired image but does not touch existing Pods. Each Pod is only updated when you manually delete it. This is useful when you need to update nodes one at a time with human oversight between each step.
Clean up the DaemonSet:

```shell
kubectl delete daemonset update-agent
```

RollingUpdate handles most production scenarios well: it keeps coverage high, moves automatically through all nodes, and supports rollback. OnDelete is the choice when your update process requires manual validation between nodes, such as a driver or firmware update that needs a maintenance window per node. The next module covers Jobs and CronJobs, which handle batch and scheduled workloads instead of continuously running processes.