What Is a DaemonSet
You need to collect logs from every node in your cluster. You create a Deployment with three replicas. But three replicas on a five-node cluster means two nodes will not be covered. You scale to five. A sixth node joins. Now you are under-covered again. You spend your time chasing a moving target.
A DaemonSet eliminates this problem. It ensures exactly one copy of a Pod runs on every node in the cluster, automatically. When a new node joins, the DaemonSet places a Pod on it. When a node is removed, the Pod is garbage-collected. You declare the intent once. Coverage becomes a fact rather than a task.
Look at the node count first:

```shell
kubectl get nodes
```

A DaemonSet will place exactly one Pod on each of those nodes. Run that command again after creating a DaemonSet and you will see one Pod per node, correlated by the NODE column.
How it differs from a Deployment
A Deployment maintains a fixed replica count and lets the scheduler decide which nodes receive Pods. The scheduler optimizes for resource fit and spreading, but the number of Pods is fixed. A DaemonSet delegates to a different controller: instead of counting replicas, it counts nodes. The spec says nothing about how many Pods to run. It says one Pod per node.
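The "count nodes, not replicas" idea can be sketched as a toy reconciler. This is illustrative only; the real controller lives in kube-controller-manager and handles scheduling, taints, and many more cases:

```python
def reconcile(nodes, pods_by_node):
    """Toy DaemonSet reconciliation: one Pod per node, no more, no fewer.

    nodes: set of node names currently in the cluster
    pods_by_node: dict mapping node name -> number of daemon Pods on it
    Returns (nodes that need a Pod created, nodes whose Pods should go away).
    """
    # Any node without a daemon Pod gets one.
    create = {n for n in nodes if pods_by_node.get(n, 0) == 0}
    # Pods on nodes that have left the cluster are garbage-collected.
    delete = {n for n in pods_by_node if n not in nodes}
    return create, delete

# A third node joins a cluster where two nodes are already covered:
create, delete = reconcile({"node-1", "node-2", "node-3"},
                           {"node-1": 1, "node-2": 1})
print(create, delete)  # {'node-3'} set()
```

Note that the desired state is derived entirely from the node set; there is no replica number anywhere in the loop.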
A cluster has 4 nodes. You create a DaemonSet. How many Pods are running?
- 1 (DaemonSets run a single instance)
- 4 (one per node)
- Depends on the replicas field
Reveal answer
4, one per node. DaemonSets have no replicas field. The replica count is determined entirely by the number of nodes.
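You can confirm this at the API level: a DaemonSet exposes no scale subresource, so scaling commands are rejected (this assumes the log-agent DaemonSet created later in this lesson):

```shell
# Fails: DaemonSets cannot be scaled; the desired count comes from the node count.
kubectl scale daemonset log-agent --replicas=3
```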
The DaemonSet manifest
The structure closely resembles a Deployment. The key difference is what is absent: there is no replicas field and no strategy section. Both belong to a controller that manages a fixed count of interchangeable Pods, and a DaemonSet has neither. (Rollout behavior for a DaemonSet is governed by a separate updateStrategy field instead.)
Create the file:

```shell
nano log-agent.yaml
```

Start with the outer shell:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
```

The spec section requires two fields: selector and template. Their structure is identical to a Deployment.
```yaml
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: agent
        image: busybox:1.36
```

The full manifest:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: agent
        image: busybox:1.36
```

Apply it:

```shell
kubectl apply -f log-agent.yaml
```

Observing the placement
```shell
kubectl get pods -o wide
```

The -o wide flag adds a NODE column. Verify that one log-agent Pod appears on each node and that no node has two. This is the guarantee a DaemonSet provides.
```shell
kubectl get daemonset log-agent
```

The output shows DESIRED, CURRENT, READY, UP-TO-DATE, and AVAILABLE. DESIRED equals the number of nodes. There is no concept of “scale to 10” because the desired count is always derived from the node count.
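The same numbers are also available as raw status fields, which is handy for scripting. A sketch using jsonpath (field names come from the DaemonSetStatus API):

```shell
kubectl get daemonset log-agent \
  -o jsonpath='desired={.status.desiredNumberScheduled} current={.status.currentNumberScheduled} ready={.status.numberReady}{"\n"}'
```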
If a Pod fails to schedule on a specific node due to resource pressure or taints, the DaemonSet's DESIRED count will be higher than CURRENT. This looks like a stuck rollout. The Events section of kubectl describe daemonset log-agent will name the specific node and the reason the Pod could not be placed there.
You describe a DaemonSet and see DESIRED: 3 but CURRENT: 2. What does that indicate?
Reveal answer
One Pod failed to schedule on one of the three nodes. The DaemonSet wants one Pod per node, so DESIRED equals the node count. A mismatch between DESIRED and CURRENT means at least one node is missing its Pod, usually because of a taint, resource constraint, or node not-ready condition. Check Events for the specific reason.
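When the blocker is a taint, the usual fix is a toleration in the Pod template so the DaemonSet can cover the tainted node too. A minimal sketch, using the standard control-plane taint key from recent Kubernetes versions:

```yaml
    spec:
      tolerations:
      # Allow this daemon Pod onto control-plane nodes despite their taint.
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: agent
        image: busybox:1.36
```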
Clean up before the next lesson:
```shell
kubectl delete daemonset log-agent
```

A DaemonSet is the right controller whenever the workload is node-scoped rather than replica-scoped: every node must run the thing, and no node should run more than one. The next lesson covers the most common real-world use cases and why they share this node-scoped requirement.