You notice that the desired number of replicas is not met, yet the kube-system pods are all running, so the problem is likely not with the control-plane components. Start by describing the deployment and checking its conditions and events.
k describe deployments.apps backend-api
Name: backend-api
Namespace: default
CreationTimestamp: Fri, 14 Nov 2025 12:26:42 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=backend-api
Replicas: 3 desired | 2 updated | 2 total | 2 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=backend-api
Containers:
backend-api:
Image: nginx
Port: <none>
Host Port: <none>
Limits:
cpu: 150m
memory: 150Mi
Requests:
cpu: 100m
memory: 128Mi
Environment: <none>
Mounts: <none>
Volumes: <none>
Node-Selectors: <none>
Tolerations: <none>
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
ReplicaFailure True FailedCreate
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: backend-api-6b6454f5cf (2/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 5m36s deployment-controller Scaled up replica set backend-api-6b6454f5cf from 0 to 3
The conditions show ReplicaFailure is True with reason FailedCreate, and only 2 of 3 replicas exist. List the events to see why pod creation is failing.
k get events
LAST SEEN TYPE REASON OBJECT MESSAGE
5m9s Normal Scheduled pod/backend-api-6b6454f5cf-nwl48 Successfully assigned default/backend-api-6b6454f5cf-nwl48 to cluster3-controlplane
5m8s Normal Pulling pod/backend-api-6b6454f5cf-nwl48 Pulling image "nginx"
5m5s Normal Pulled pod/backend-api-6b6454f5cf-nwl48 Successfully pulled image "nginx" in 2.862s (2.862s including waiting). Image size: 59774010 bytes.
5m5s Normal Created pod/backend-api-6b6454f5cf-nwl48 Created container: backend-api
5m5s Normal Started pod/backend-api-6b6454f5cf-nwl48 Started container backend-api
5m9s Normal Scheduled pod/backend-api-6b6454f5cf-sp4v8 Successfully assigned default/backend-api-6b6454f5cf-sp4v8 to cluster3-controlplane
5m8s Normal Pulling pod/backend-api-6b6454f5cf-sp4v8 Pulling image "nginx"
5m5s Normal Pulled pod/backend-api-6b6454f5cf-sp4v8 Successfully pulled image "nginx" in 3.161s (3.161s including waiting). Image size: 59774010 bytes.
5m5s Normal Created pod/backend-api-6b6454f5cf-sp4v8 Created container: backend-api
5m5s Normal Started pod/backend-api-6b6454f5cf-sp4v8 Started container backend-api
5m9s Normal SuccessfulCreate replicaset/backend-api-6b6454f5cf Created pod: backend-api-6b6454f5cf-sp4v8
5m9s Warning FailedCreate replicaset/backend-api-6b6454f5cf Error creating: pods "backend-api-6b6454f5cf-xbc8f" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi
5m9s Normal SuccessfulCreate replicaset/backend-api-6b6454f5cf Created pod: backend-api-6b6454f5cf-nwl48
5m9s Warning FailedCreate replicaset/backend-api-6b6454f5cf Error creating: pods "backend-api-6b6454f5cf-cr9hp" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi
5m9s Warning FailedCreate replicaset/backend-api-6b6454f5cf Error creating: pods "backend-api-6b6454f5cf-2496p" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi
5m9s Warning FailedCreate replicaset/backend-api-6b6454f5cf Error creating: pods "backend-api-6b6454f5cf-stzzm" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi
5m9s Warning FailedCreate replicaset/backend-api-6b6454f5cf Error creating: pods "backend-api-6b6454f5cf-zdz4z" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi
5m9s Warning FailedCreate replicaset/backend-api-6b6454f5cf Error creating: pods "backend-api-6b6454f5cf-jbb6t" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi
5m9s Warning FailedCreate replicaset/backend-api-6b6454f5cf Error creating: pods "backend-api-6b6454f5cf-q82wg" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi
5m9s Warning FailedCreate replicaset/backend-api-6b6454f5cf Error creating: pods "backend-api-6b6454f5cf-bs9dg" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi
5m8s Warning FailedCreate replicaset/backend-api-6b6454f5cf Error creating: pods "backend-api-6b6454f5cf-g7b96" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi
58s Warning FailedCreate replicaset/backend-api-6b6454f5cf (combined from similar events): Error creating: pods "backend-api-6b6454f5cf-sjqrs" is forbidden: exceeded quota: cpu-mem-quota, requested: requests.memory=128Mi, used: requests.memory=256Mi, limited: requests.memory=300Mi
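With this many near-duplicate entries, the relevant Warnings can be pulled out directly; a convenience sketch using standard kubectl flags:

```shell
# Show only Warning events in the current namespace, newest last.
kubectl get events --field-selector type=Warning --sort-by=.lastTimestamp
```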
The FailedCreate warnings say that creating another pod is forbidden because it would exceed the cpu-mem-quota: only 300Mi of requested memory is allowed in the default namespace, and 256Mi is already in use. From the pod template above, each replica requests 128Mi. With two pods already admitted (2 × 128Mi = 256Mi), the third pod's 128Mi request would push the total to 384Mi, so it is rejected. Now, check the quota.
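The arithmetic behind the rejection, as a quick sanity check (plain shell, no cluster needed):

```shell
# Quota math: 2 admitted pods x 128Mi requested = 256Mi charged against the quota;
# admitting a 3rd pod would need 384Mi, over the 300Mi hard limit.
used=$((2 * 128))
needed=$((used + 128))
echo "used=${used}Mi needed=${needed}Mi limit=300Mi"
```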
k get quota cpu-mem-quota -oyaml
apiVersion: v1
kind: ResourceQuota
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"ResourceQuota","metadata":{"annotations":{},"name":"cpu-mem-quota","namespace":"default"},"spec":{"hard":{"requests.cpu":"0.3","requests.memory":"300Mi"}}}
creationTimestamp: "2025-11-14T12:26:42Z"
name: cpu-mem-quota
namespace: default
resourceVersion: "1956"
uid: 17eaa5d2-7c34-4cfd-8128-88acf2dda25c
spec:
hard:
requests.cpu: 300m
requests.memory: 300Mi
status:
hard:
requests.cpu: 300m
requests.memory: 300Mi
used:
requests.cpu: 200m
requests.memory: 256Mi
To fix this, either increase the ResourceQuota limits or reduce the memory request of each replica. After applying the latter, all desired replicas are created and the quota is fully consumed.
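Either fix can be applied imperatively; a sketch, where the 400Mi quota value is an assumed example, and 100Mi per replica is chosen so that 3 × 100Mi fits the 300Mi limit exactly:

```shell
# Option 1 (assumed new limit): raise the quota so 3 x 128Mi = 384Mi fits.
kubectl patch resourcequota cpu-mem-quota --type=merge \
  -p '{"spec":{"hard":{"requests.memory":"400Mi"}}}'

# Option 2: shrink each replica's request so 3 x 100Mi = 300Mi fits exactly.
kubectl set resources deployment backend-api -c backend-api \
  --requests=cpu=100m,memory=100Mi
```

The output below reflects the second option: with three replicas requesting 100m CPU and 100Mi memory each, usage lands exactly on the 300m/300Mi hard limits.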
k get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
backend-api 3/3 3 3 11m
k get quota cpu-mem-quota -oyaml
apiVersion: v1
kind: ResourceQuota
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"ResourceQuota","metadata":{"annotations":{},"name":"cpu-mem-quota","namespace":"default"},"spec":{"hard":{"requests.cpu":"0.3","requests.memory":"300Mi"}}}
creationTimestamp: "2025-11-14T12:26:42Z"
name: cpu-mem-quota
namespace: default
resourceVersion: "2646"
uid: 17eaa5d2-7c34-4cfd-8128-88acf2dda25c
spec:
hard:
requests.cpu: 300m
requests.memory: 300Mi
status:
hard:
requests.cpu: 300m
requests.memory: 300Mi
used:
requests.cpu: 300m
requests.memory: 300Mi