# failed to fit in any node #13
**@kayrus:** @wumt which k8s version do you use, and how did you deploy k8s? Also, could you please use markdown formatting for logs?
**@wumt:**

```
[root@10-2-8-230 ~]# kubectl describe pod es-data-2875003034-b6q8g --namespace=monitoring
Name:           es-data-2875003034-b6q8g
Namespace:      monitoring
Node:           /
Labels:         component=elasticsearch
                pod-template-hash=2875003034
                role=data
Status:         Pending
IP:
Controllers:    ReplicaSet/es-data-2875003034
Containers:
  es-data:
    Image:      kayrus/docker-elasticsearch-kubernetes:2.4.4
    Ports:      9300/TCP, 28651/TCP
    Args:
      /run.sh
      -Des.path.conf=/etc/elasticsearch
    Readiness:  tcp-socket :9300 delay=0s timeout=1s period=10s #success=3 #failure=3
    Volume Mounts:
      /data from storage (rw)
      /etc/elasticsearch from es-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tcpbr (ro)
    Environment Variables:
      NAMESPACE:          monitoring (v1:metadata.namespace)
      CLUSTER_NAME:       <set to the key 'es-cluster-name' of config map 'es-env'>
      NUMBER_OF_REPLICAS: <set to the key 'es-number-of-replicas' of config map 'es-env'>
      NODE_MASTER:        false
      NODE_DATA:          true
      HTTP_ENABLE:        false
      ES_HEAP_SIZE:       <set to the key 'es-data-heap' of config map 'es-env'>
      ES_CLIENT_ENDPOINT: <set to the key 'es-client-endpoint' of config map 'es-env'>
      ES_PERSISTENT:      <set to the key 'es-persistent-storage' of config map 'es-env'>
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  storage:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  es-config:
    Type:       ConfigMap (a volume populated by a ConfigMap)
    Name:       es-config
  default-token-tcpbr:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-tcpbr
QoS Class:      BestEffort
Tolerations:    <none>
Events:
  FirstSeen  LastSeen  Count  From                 SubObjectPath  Type     Reason            Message
  ---------  --------  -----  ----                 -------------  ----     ------            -------
  1d         25s       5726   {default-scheduler }                Warning  FailedScheduling  pod (es-data-2875003034-b6q8g) failed to fit in any node
fit failure summary on nodes : MatchInterPodAffinity (2), PodFitsHostPorts (2), PodToleratesNodeTaints (1)
```
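The fit-failure summary names three scheduler predicates: `PodToleratesNodeTaints` means one node is tainted (a kubeadm 1.5 cluster taints the master with `dedicated=master:NoSchedule` by default), `PodFitsHostPorts` means the pod's host ports (9300, 28651) are already claimed on the remaining nodes, and `MatchInterPodAffinity` means pod (anti-)affinity rules exclude them. A minimal sketch of a toleration for the master taint, in the alpha-annotation form that k8s 1.5 used (the taint key and value below are assumptions based on kubeadm 1.5 defaults; confirm with `kubectl describe node`):

```yaml
# Sketch only: add to the es-data pod template metadata. In k8s 1.5,
# tolerations are an alpha annotation, not a spec field. The key/value
# ("dedicated=master") is the kubeadm 1.5 default taint -- verify it
# against your own master node before applying.
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/tolerations: |
      [{"key": "dedicated", "value": "master", "effect": "NoSchedule"}]
```

With the toleration in place the scheduler may still reject the master if the other predicates (host ports, anti-affinity) also fail there.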
**@wumt:** @kayrus My k8s version is 1.5.1; I used kubeadm to deploy it.
**@kayrus:** Is it a new k8s cluster? Can you provide the output of the following command: `kubectl get pods -o wide --all-namespaces`? At first glance it looks like there are already deployed es-data pods.
**@kayrus:** @wumt did you resolve the issue?
**@wumt (original issue description):**

```
[root@10-2-8-230 elk-kubernetes]# kubectl describe pod es-data-2875003034-qpfq7 --namespace=monitoring
Name:           es-data-2875003034-qpfq7
Namespace:      monitoring
Node:           /
Labels:         component=elasticsearch
                pod-template-hash=2875003034
                role=data
Status:         Pending
IP:
Controllers:    ReplicaSet/es-data-2875003034
Containers:
  es-data:
    Image:      kayrus/docker-elasticsearch-kubernetes:2.4.4
    Ports:      9300/TCP, 28651/TCP
    Args:
      /run.sh
      -Des.path.conf=/etc/elasticsearch
    Readiness:  tcp-socket :9300 delay=0s timeout=1s period=10s #success=3 #failure=3
    Volume Mounts:
      /data from storage (rw)
      /etc/elasticsearch from es-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tcpbr (ro)
    Environment Variables:
      NAMESPACE:          monitoring (v1:metadata.namespace)
      CLUSTER_NAME:       <set to the key 'es-cluster-name' of config map 'es-env'>
      NUMBER_OF_REPLICAS: <set to the key 'es-number-of-replicas' of config map 'es-env'>
      NODE_MASTER:        false
      NODE_DATA:          true
      HTTP_ENABLE:        false
      ES_HEAP_SIZE:       <set to the key 'es-data-heap' of config map 'es-env'>
      ES_CLIENT_ENDPOINT: <set to the key 'es-client-endpoint' of config map 'es-env'>
      ES_PERSISTENT:      <set to the key 'es-persistent-storage' of config map 'es-env'>
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  storage:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  es-config:
    Type:       ConfigMap (a volume populated by a ConfigMap)
    Name:       es-config
  default-token-tcpbr:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-tcpbr
QoS Class:      BestEffort
Tolerations:
Events:
  FirstSeen  LastSeen  Count  From                 SubObjectPath  Type     Reason            Message
  ---------  --------  -----  ----                 -------------  ----     ------            -------
  17m        31s       63     {default-scheduler }                Warning  FailedScheduling  pod (es-data-2875003034-qpfq7) failed to fit in any node
fit failure summary on nodes : MatchInterPodAffinity (2), PodFitsHostPorts (2), PodToleratesNodeTaints (1)
```
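The `PodFitsHostPorts (2)` entry in the summary suggests the es-data container binds host ports on both non-tainted nodes, so at most one such pod fits per node and additional replicas cannot be placed. If host networking is not actually required, one option is to expose only container ports so replicas can share a node. Whether this deployment really sets `hostPort` (or `hostNetwork: true`) is an assumption worth checking first with `kubectl get deployment es-data -o yaml --namespace=monitoring`; the field names below are the standard pod spec:

```yaml
# Sketch: container ports declared without hostPort no longer count
# against the PodFitsHostPorts predicate. Verify the original manifest
# actually sets hostPort or hostNetwork before editing it.
containers:
- name: es-data
  ports:
  - containerPort: 9300
  - containerPort: 28651
```

The remaining `MatchInterPodAffinity (2)` failures would then point at a pod anti-affinity rule (or already-running es-data pods, as suggested above) keeping multiple replicas off the same node.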