Tags: deployment, kubectl, minikube

Why is an application's deployment status "Available: 0" when the service is deployed properly in minikube?

Published on 2020-03-27 10:16:23

I am trying to deploy the back-end component of my application in order to test its REST APIs. I have dockerized the component and built the image in minikube, and I have created a YAML file for the Deployment and the Service. When I deploy it with sudo kubectl create -f frontend-deployment.yaml, it is created without any error, but when I check the status of the deployments this is what is shown:

NAME   READY   UP-TO-DATE   AVAILABLE   AGE
back   0/3     3            0           2m57s

Interestingly, the service corresponding to this deployment is available:

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
back         ClusterIP   10.98.73.249   <none>        8080/TCP         3m9s

I also tried to create the deployment by running the deployment statement individually, like sudo kubectl run back --image=back --port=8080 --image-pull-policy Never, but the result was the same.

Here is what my `deployment.yaml` file looks like:

kind: Service
apiVersion: v1
metadata:
  name: back
spec:
  selector:
    app: back
  ports:
  - protocol: TCP
    port: 8080
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: back
spec:
  selector:
    matchLabels:
      app: back
  replicas: 3
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
        - name: back
          image: back
          imagePullPolicy: Never
          ports:
            - containerPort: 8080

How can I get this deployment up and running, given that it causes an internal server error on the front-end side of my application?

Description of the pod back:

Name:           back-7fd9995747-nlqhq
Namespace:      default
Priority:       0
Node:           minikube/10.0.2.15
Start Time:     Mon, 15 Jul 2019 12:49:52 +0200
Labels:         pod-template-hash=7fd9995747
                run=back
Annotations:    <none>
Status:         Running
IP:             172.17.0.7
Controlled By:  ReplicaSet/back-7fd9995747
Containers:
  back:
    Container ID:   docker://8a46e16c52be24b12831bb38d2088b8059947d099299d15755d77094b9cb5a8b
    Image:          back:latest
    Image ID:       docker://sha256:69218763696932578e199b9ab5fc2c3e9087f9482ac7e767db2f5939be98a534
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 15 Jul 2019 12:49:54 +0200
      Finished:     Mon, 15 Jul 2019 12:49:54 +0200
    Ready:          False
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-c247f (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-c247f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-c247f
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age              From               Message
  ----     ------     ----             ----               -------
  Normal   Scheduled  6s               default-scheduler  Successfully assigned default/back-7fd9995747-nlqhq to minikube
  Normal   Pulled     4s (x2 over 5s)  kubelet, minikube  Container image "back:latest" already present on machine
  Normal   Created    4s (x2 over 5s)  kubelet, minikube  Created container back
  Normal   Started    4s (x2 over 5s)  kubelet, minikube  Started container back
  Warning  BackOff    2s (x2 over 3s)  kubelet, minikube  Back-off restarting failed container
Questioner: rehan
mebius99 2019-07-03 21:19

As you can see, zero of the three Pods have the Ready status:

NAME   READY   AVAILABLE
back   0/3     0 

To find out what is going on, you should check the underlying Pods:

$ kubectl get pods -l app=back

and then look at the Events in their description:

$ kubectl describe pod back-...
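
The pod description quoted in the question already shows the container terminating with exit code 1 and going into CrashLoopBackOff, which is why the Deployment reports 0 available replicas (a replica only counts as available once its containers are Ready), while the Service still gets a ClusterIP regardless. A minimal sketch of the usual next step, assuming the pod name back-7fd9995747-nlqhq taken from the description above (yours will differ):

# Read the application's own output from the crashing container;
# --previous shows the logs of the last terminated instance.
$ kubectl logs back-7fd9995747-nlqhq
$ kubectl logs back-7fd9995747-nlqhq --previous

# Confirm that the Service currently has no ready endpoints behind it.
$ kubectl get endpoints back

Whatever the application prints before exiting with code 1 is usually the real cause of both the failed deployment and the internal server error seen on the front end.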