Kubernetes – Jenkins integration

I’ve bootstrapped a Kubernetes 1.9 RBAC cluster with kubeadm and started Jenkins in a pod, based on jenkins/jenkins:lts. I would like to try out https://github.com/jenkinsci/kubernetes-plugin .
I have already created a service account based on the proposal in https://gist.github.com/lachie83/17c1fff4eb58cf75c5fb11a4957a64d2

> kubectl -n dev-infra create sa jenkins
> kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=dev-infra:jenkins
> kubectl -n dev-infra get sa jenkins -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-02-16T12:06:26Z
  name: jenkins
  namespace: dev-infra
  resourceVersion: "1295580"
  selfLink: /api/v1/namespaces/dev-infra/serviceaccounts/jenkins
  uid: d040041c-1311-11e8-a4f8-005056039a14
secrets:
- name: jenkins-token-vmt79

> kubectl -n dev-infra get secret jenkins-token-vmt79 -o yaml
apiVersion: v1
data:
  ca.crt: LS0tL...0tLQo=
  namespace: ZGV2LWluZnJh
  token: ZXlK...tdVE=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: jenkins
    kubernetes.io/service-account.uid: d040041c-1311-11e8-a4f8-005056039a14
  creationTimestamp: 2018-02-16T12:06:26Z
  name: jenkins-token-vmt79
  namespace: dev-infra
  resourceVersion: "1295579"
  selfLink: /api/v1/namespaces/dev-infra/secrets/jenkins-token-vmt79
  uid: d041fa6c-1311-11e8-a4f8-005056039a14
type: kubernetes.io/service-account-token

After that I go to Manage Jenkins -> Configure System -> Cloud -> Kubernetes and set the Kubernetes URL to the cluster API server that I also use in my kubectl KUBECONFIG (server: url:port).

When I hit test connection I get “Error testing connection https://url:port: Failure executing: GET at: https://url:port/api/v1/namespaces/dev-infra/pods. Message: Forbidden!Configured service account doesn’t have access. Service account may have been revoked. pods is forbidden: User “system:serviceaccount:dev-infra:default” cannot list pods in the namespace “dev-infra”.

I don’t want to give the dev-infra:default service account a cluster-admin role; I want to use the jenkins service account I created instead. What I can’t figure out is how to configure the credentials in Jenkins. When I hit Add next to Credentials (as in https://github.com/jenkinsci/kubernetes-plugin/blob/master/configuration.png) I get a dropdown with these options:

- Username with password
- Docker Host Certificate Authentication
- Kubernetes Service Account
- OpenShift OAuth token
- OpenShift Username and Password
- SSH Username with private key
- Secret file
- Secret text
- Certificate

I could not find a clear example of how to configure the Jenkins Kubernetes cloud connector to authenticate with the jenkins service account.
Could you please point me to a step-by-step guide – what kind of credentials do I need?
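For reference, this is how I can extract the raw JWT of the jenkins service account from the secret shown above, in case one of the credential kinds (e.g. Secret text) expects the token to be pasted in – I’m not sure that is the right kind, though:

# decode the service account token stored in the secret above (jenkins-token-vmt79)
kubectl -n dev-infra get secret jenkins-token-vmt79 -o jsonpath='{.data.token}' | base64 -d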

Regards,
Pavel

GKE kubernetes delayed_job pod logs

I have a deployment object with the following rake jobs:work command:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: staging-delayed-job-deployment
  namespace: staging
spec:
  template:
    metadata:
      labels:
        env: staging
        name: delayed-job
    spec:
      containers:
        - name: job
          image: gcr.io/ej-gc-dev/redacted:<%= ENV['IMAGE_TAG'] %>
          command: ["/bin/bash", "-l", "-c"]
          args: ["RAILS_ENV=production bundle exec rake jobs:work"]

When I run kubectl logs I get nothing. How do I get the rake jobs:work output to show up in kubectl logs? If I run the command directly in the pod, it gives output like this:

[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Starting job worker
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job UpdateHubspotPersonaJob (id=67) RUNNING
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job UpdateHubspotPersonaJob (id=67) COMPLETED after 0.4903
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job Mailerjack.registration_created_user_welcome (id=68) RUNNING
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job Mailerjack.registration_created_user_welcome (id=68) COMPLETED after 0.9115
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job UpdateHubspotPersonaJob (id=69) RUNNING
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job UpdateHubspotPersonaJob (id=69) COMPLETED after 0.1752
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job Mailerjack.registration_created_user_welcome (id=70) RUNNING
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] Job Mailerjack.registration_created_user_welcome (id=70) COMPLETED after 0.4770
[Worker(host:staging-deployment-cc7dc559f-bswvr pid:2381)] 4 jobs processed at 1.7649 j/s, 0 failed

I want that output to show up when I run kubectl logs.
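For reference, this is roughly how I am fetching the logs (names are from the staging manifest above):

# follow the logs of the "job" container in the delayed_job deployment
kubectl -n staging logs deployment/staging-delayed-job-deployment -c job --follow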

Deploy Hyperledger composer network to Heroku OR Kubernetes

I am new to Hyperledger. I have defined my model for the network and successfully deployed it locally (on my machine). Everything is working as expected. I want to replicate the same setup and make it public so that other team members can use it too.

How can I deploy the same over cloud hosting services like Heroku or Kubernetes?

I just want the blockchain services to be publicly available.
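On the Kubernetes side, I assume the public-exposure part would look something like the Service below (the name, label and port are placeholders for my composer-rest-server endpoint); what I’m less sure about is the rest of the deployment:

apiVersion: v1
kind: Service
metadata:
  name: composer-rest-server        # placeholder name
spec:
  type: LoadBalancer                # most cloud providers assign a public IP for this type
  selector:
    app: composer-rest-server       # placeholder label matching my deployment's pods
  ports:
  - port: 80
    targetPort: 3000                # 3000 is the composer-rest-server default port, I believe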

Querying external etcd with prometheus

I’ve got Prometheus running on top of Kubernetes with the following scrape config, as described in the documentation; the .pem files are located on disk within the Prometheus container.

https://prometheus.io/docs/prometheus/latest/configuration/configuration/#

scrape_configs:
- job_name: etcd
  static_configs:
  - targets: ['10.0.0.222:2379','10.0.0.221:2379','10.0.0.220:2379']
  tls_config:
  # CA certificate to validate API server certificate with.
    ca_file: /prometheus/ca.pem
    cert_file: /prometheus/cert.pem
    key_file: /prometheus/key.pem

I see etcd as a target in Prometheus; however, it’s returning garbage.

https://i.imgur.com/rdRI4V7.png

I am able to hit the metrics endpoint with a local curl by passing in the client certificate information like so:

sudo curl --cacert /etc/ssl/etcd/ssl/ca.pem https://127.0.0.1:2379/metrics -L --cert /etc/ssl/etcd/ssl/node-kubemaster-rwva1-prod-2.pem --key /etc/ssl/etcd/ssl/node-kubemaster-rwva1-prod-2-key.pem

What am I doing wrong?
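One thing I have not tried yet is setting the scheme explicitly in the scrape config; a minimal variant of the config above (same file paths) would be:

scrape_configs:
- job_name: etcd
  scheme: https   # etcd serves /metrics over TLS on 2379, so the default http scheme may be the problem
  static_configs:
  - targets: ['10.0.0.222:2379','10.0.0.221:2379','10.0.0.220:2379']
  tls_config:
    # CA certificate to validate the etcd server certificate with.
    ca_file: /prometheus/ca.pem
    cert_file: /prometheus/cert.pem
    key_file: /prometheus/key.pem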

gke removing instance groups from internal load balancers

I have a sandbox gke cluster, with some services and some internal load balancers.

Services are mostly defined like this:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    cloud.google.com/load-balancer-type: "Internal"
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-app
  sessionAffinity: None
  type: LoadBalancer

But every so often (like twice a week) someone reports that an endpoint is not working anymore; when I go investigate, the load balancer has no instance groups attached anymore.

The only “weird” things we do are scaling all our apps’ pods down to 0 replicas outside business hours and using preemptible instances on the node pool… I thought it could be related to the first, but I just forced a scale-down of some services and their load balancers are still fine.

It may be related to the preemptible instances, though: it seems that if the pods are all on one instance (especially the kube-system pods), they all go down at once when the node goes down, and it looks like the cluster cannot recover properly from that.

Another weird thing I see happening is the k8s-ig--foobar instance group ending up with 0 instances.
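For reference, this is roughly how I inspect the GCP side when it happens (the backend service name and region below are placeholders):

# list the instance groups GKE manages for the cluster (the k8s-ig--<hash> groups)
gcloud compute instance-groups list
# check which instance groups the internal load balancer's backend service currently points at
gcloud compute backend-services describe <backend-service-name> --region <region>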

Has anyone experienced something like this? I couldn’t find any docs about this.

kube-dns Failed to list *v1.Service: Unauthorized

I am attempting to create a HA Kubernetes cluster in Azure using kubeadm as documented here https://kubernetes.io/docs/setup/independent/high-availability/

I have everything working when using only 1 master node, but when changing to 3 master nodes, kube-dns fails to start with the error Failed to list *v1.Endpoints: Unauthorized.

Here are the logs from kube-dns

I0215 13:27:54.164354       1 dns.go:48] version: 1.14.6-3-gc36cb11
I0215 13:27:54.165256       1 server.go:69] Using configuration read from directory: /kube-dns-config with period 10s
I0215 13:27:54.165297       1 server.go:112] FLAG: --alsologtostderr="false"
I0215 13:27:54.165317       1 server.go:112] FLAG: --config-dir="/kube-dns-config"
I0215 13:27:54.165323       1 server.go:112] FLAG: --config-map=""
I0215 13:27:54.165328       1 server.go:112] FLAG: --config-map-namespace="kube-system"
I0215 13:27:54.165332       1 server.go:112] FLAG: --config-period="10s"
I0215 13:27:54.165345       1 server.go:112] FLAG: --dns-bind-address="0.0.0.0"
I0215 13:27:54.165349       1 server.go:112] FLAG: --dns-port="10053"
I0215 13:27:54.165356       1 server.go:112] FLAG: --domain="cluster.local."
I0215 13:27:54.165362       1 server.go:112] FLAG: --federations=""
I0215 13:27:54.165374       1 server.go:112] FLAG: --healthz-port="8081"
I0215 13:27:54.165379       1 server.go:112] FLAG: --initial-sync-timeout="1m0s"
I0215 13:27:54.165384       1 server.go:112] FLAG: --kube-master-url=""
I0215 13:27:54.165389       1 server.go:112] FLAG: --kubecfg-file=""
I0215 13:27:54.165400       1 server.go:112] FLAG: --log-backtrace-at=":0"
I0215 13:27:54.165407       1 server.go:112] FLAG: --log-dir=""
I0215 13:27:54.165412       1 server.go:112] FLAG: --log-flush-frequency="5s"
I0215 13:27:54.165416       1 server.go:112] FLAG: --logtostderr="true"
I0215 13:27:54.165421       1 server.go:112] FLAG: --nameservers=""
I0215 13:27:54.165425       1 server.go:112] FLAG: --stderrthreshold="2"
I0215 13:27:54.165429       1 server.go:112] FLAG: --v="2"
I0215 13:27:54.165434       1 server.go:112] FLAG: --version="false"
I0215 13:27:54.165440       1 server.go:112] FLAG: --vmodule=""
I0215 13:27:54.165545       1 server.go:194] Starting SkyDNS server (0.0.0.0:10053)
I0215 13:27:54.165786       1 server.go:213] Skydns metrics enabled (/metrics:10055)
I0215 13:27:54.165797       1 dns.go:146] Starting endpointsController
I0215 13:27:54.165802       1 dns.go:149] Starting serviceController
I0215 13:27:54.165933       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0215 13:27:54.165957       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
E0215 13:27:54.175979       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Unauthorized
E0215 13:27:54.176489       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Unauthorized
I0215 13:27:54.665978       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0215 13:27:55.166135       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
E0215 13:27:55.178554       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Unauthorized
E0215 13:27:55.181275       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Unauthorized

kube-dns pod description (kubectl describe)

Name:           kube-dns-6f4fd4bdf-6lsn5
Namespace:      kube-system
Node:           k8s-master-0/10.240.0.32
Start Time:     Thu, 15 Feb 2018 13:11:24 +0000
Labels:         k8s-app=kube-dns
                pod-template-hash=290980689
Annotations:    <none>
Status:         Running
IP:             10.240.0.9
Controlled By:  ReplicaSet/kube-dns-6f4fd4bdf
Containers:
  kubedns:
    Container ID:  docker://b4198de951a4ec59a95f41e2cfbff585ad33e349a42af0c17007e884118c7591
    Image:         gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
    Image ID:      docker-pullable://gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:f5bddc71efe905f4e4b96f3ca346414be6d733610c1525b98fff808f93966680
    Ports:         10053/UDP, 10053/TCP, 10055/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    State:          Running
      Started:      Thu, 15 Feb 2018 13:15:48 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Thu, 15 Feb 2018 13:14:18 +0000
      Finished:     Thu, 15 Feb 2018 13:15:18 +0000
    Ready:          False
    Restart Count:  3
    Limits:
      memory:  170Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-zzc2r (ro)
  dnsmasq:
    Container ID:  docker://27adbf0655e2f1ad8088f578865743591e46dbf7f45cbab05c0fd3ccebf324d3
    Image:         gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
    Image ID:      docker-pullable://gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:6cfb9f9c2756979013dbd3074e852c2d8ac99652570c5d17d152e0c0eb3321d6
    Ports:         53/UDP, 53/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --no-negcache
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    State:          Running
      Started:      Thu, 15 Feb 2018 13:14:17 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 15 Feb 2018 13:11:57 +0000
      Finished:     Thu, 15 Feb 2018 13:14:17 +0000
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:        150m
      memory:     20Mi
    Liveness:     http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-zzc2r (ro)
  sidecar:
    Container ID:  docker://7d0037548078b94ecf064a38637b034989f33c9013d0c46b8ae922574334c8cb
    Image:         gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
    Image ID:      docker-pullable://gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:f80f5f9328107dc516d67f7b70054354b9367d31d4946a3bffd3383d83d7efe8
    Port:          10054/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
    State:          Running
      Started:      Thu, 15 Feb 2018 13:12:03 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     20Mi
    Liveness:     http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-zzc2r (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  kube-dns-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  true
  kube-dns-token-zzc2r:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-dns-token-zzc2r
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age              From                   Message
  ----     ------                 ----             ----                   -------
  Normal   SuccessfulMountVolume  6m               kubelet, k8s-master-0  MountVolume.SetUp succeeded for volume "kube-dns-config"
  Normal   SuccessfulMountVolume  6m               kubelet, k8s-master-0  MountVolume.SetUp succeeded for volume "kube-dns-token-zzc2r"
  Normal   Scheduled              6m               default-scheduler      Successfully assigned kube-dns-6f4fd4bdf-6lsn5 to k8s-master-0
  Normal   Pulling                6m               kubelet, k8s-master-0  pulling image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7"
  Normal   Pulled                 5m               kubelet, k8s-master-0  Successfully pulled image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7"
  Normal   Pulling                5m               kubelet, k8s-master-0  pulling image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7"
  Normal   Pulled                 5m               kubelet, k8s-master-0  Successfully pulled image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7"
  Normal   Created                5m               kubelet, k8s-master-0  Created container
  Normal   Started                5m               kubelet, k8s-master-0  Started container
  Normal   Pulling                5m               kubelet, k8s-master-0  pulling image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7"
  Normal   Pulled                 5m               kubelet, k8s-master-0  Successfully pulled image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7"
  Normal   Started                5m               kubelet, k8s-master-0  Started container
  Normal   Created                5m               kubelet, k8s-master-0  Created container
  Normal   Created                4m (x2 over 5m)  kubelet, k8s-master-0  Created container
  Normal   Started                4m (x2 over 5m)  kubelet, k8s-master-0  Started container
  Normal   Pulled                 4m               kubelet, k8s-master-0  Container image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7" already present on machine
  Warning  Unhealthy              4m (x6 over 5m)  kubelet, k8s-master-0  Readiness probe failed: Get http://10.240.0.9:8081/readiness: dial tcp 10.240.0.9:8081: getsockopt: connection refused
  Warning  Unhealthy              4m (x2 over 4m)  kubelet, k8s-master-0  Liveness probe failed: HTTP probe failed with statuscode: 503

kube-apiserver logs

I0215 13:11:31.098781       1 server.go:121] Version: v1.9.3
I0215 13:11:31.098876       1 cloudprovider.go:59] --external-hostname was not specified. Trying to get it from the cloud provider.
I0215 13:11:32.125058       1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0215 13:11:32.126389       1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0215 13:11:32.127860       1 feature_gate.go:190] feature gates: map[Initializers:true]
I0215 13:11:32.127896       1 initialization.go:90] enabled Initializers feature as part of admission plugin setup
I0215 13:11:32.129903       1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0215 13:11:32.135428       1 master.go:225] Using reconciler: lease
W0215 13:11:33.634736       1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
W0215 13:11:33.648464       1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0215 13:11:33.649385       1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0215 13:11:33.876072       1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/02/15 13:11:34 log.go:33: [restful/swagger] listing is available at https://10.240.0.96:6443/swaggerapi
[restful] 2018/02/15 13:11:34 log.go:33: [restful/swagger] https://10.240.0.96:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/02/15 13:11:35 log.go:33: [restful/swagger] listing is available at https://10.240.0.96:6443/swaggerapi
[restful] 2018/02/15 13:11:35 log.go:33: [restful/swagger] https://10.240.0.96:6443/swaggerui/ is mapped to folder /swagger-ui/
I0215 13:11:36.002627       1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0215 13:11:39.587991       1 serve.go:89] Serving securely on [::]:6443
I0215 13:11:39.588156       1 controller.go:84] Starting OpenAPI AggregationController
I0215 13:11:39.588251       1 crd_finalizer.go:242] Starting CRDFinalizer
I0215 13:11:39.588439       1 apiservice_controller.go:112] Starting APIServiceRegistrationController
I0215 13:11:39.588467       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0215 13:11:39.590264       1 customresource_discovery_controller.go:152] Starting DiscoveryController
I0215 13:11:39.590313       1 naming_controller.go:274] Starting NamingConditionController
I0215 13:11:39.590348       1 crdregistration_controller.go:110] Starting crd-autoregister controller
I0215 13:11:39.590383       1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
I0215 13:11:39.596108       1 available_controller.go:262] Starting AvailableConditionController
I0215 13:11:39.596207       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
W0215 13:11:39.772499       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [10.240.0.32 10.240.0.64 10.240.0.96]
I0215 13:11:39.789361       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0215 13:11:39.790586       1 controller_utils.go:1026] Caches are synced for crd-autoregister controller
I0215 13:11:39.790687       1 autoregister_controller.go:136] Starting autoregister controller
I0215 13:11:39.790718       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0215 13:11:39.797365       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0215 13:11:39.891025       1 cache.go:39] Caches are synced for autoregister controller
I0215 13:11:43.677686       1 trace.go:76] Trace[776235327]: "Create /api/v1/nodes" (started: 2018-02-15 13:11:39.665275463 +0000 UTC m=+8.654833726) (total time: 4.012349422s):
Trace[776235327]: [4.003379614s] [4.001304813s] About to store object in database
I0215 13:11:43.692112       1 trace.go:76] Trace[387714741]: "Create /api/v1/namespaces/default/events" (started: 2018-02-15 13:11:39.664852862 +0000 UTC m=+8.654411025) (total time: 4.027240736s):
Trace[387714741]: [4.000939412s] [4.000902412s] About to store object in database
I0215 13:11:45.572082       1 trace.go:76] Trace[88905993]: "Create /api/v1/namespaces/kube-system/configmaps" (started: 2018-02-15 13:11:41.555835981 +0000 UTC m=+10.545394144) (total time: 4.016224183s):
Trace[88905993]: [4.00121587s] [4.00084167s] About to store object in database
I0215 13:11:46.391837       1 controller.go:538] quota admission added evaluator for: {rbac.authorization.k8s.io roles}
I0215 13:11:46.595514       1 controller.go:538] quota admission added evaluator for: {rbac.authorization.k8s.io rolebindings}
I0215 13:11:46.648369       1 controller.go:538] quota admission added evaluator for: { serviceaccounts}
I0215 13:11:46.733498       1 controller.go:538] quota admission added evaluator for: {apps deployments}
I0215 13:11:46.868194       1 controller.go:538] quota admission added evaluator for: {apps daemonsets}
I0215 13:11:47.799634       1 trace.go:76] Trace[1007777561]: "Create /api/v1/namespaces/default/events" (started: 2018-02-15 13:11:43.693288299 +0000 UTC m=+12.682846462) (total time: 4.106318613s):
Trace[1007777561]: [4.000786622s] [4.000527722s] About to store object in database
Trace[1007777561]: [4.106260513s] [105.473891ms] Object stored in database
E0215 13:11:50.173172       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:50.192650       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
I0215 13:11:50.580495       1 logs.go:41] http: TLS handshake error from 168.63.129.16:60525: EOF
E0215 13:11:51.176679       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:51.208875       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:52.183980       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:52.210901       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:53.186609       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:53.213319       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:54.189189       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:54.215265       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
I0215 13:11:55.131278       1 trace.go:76] Trace[512345981]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-02-15 13:11:53.526743795 +0000 UTC m=+22.516302058) (total time: 1.604495136s):
Trace[512345981]: [1.604394936s] [1.604222736s] Object stored in database
E0215 13:11:55.191423       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:55.217414       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:56.193658       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:56.219994       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:57.196629       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:57.221852       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:58.199296       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:58.224227       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:59.201463       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:11:59.226377       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:12:00.203840       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0215 13:12:00.228291       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
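Given the crypto/rsa: verification error lines, a check I’m planning to run is comparing the service-account signing keys across the three masters (k8s-master-0 is from the output above; the other hostnames and the default kubeadm paths are assumptions):

# compare the service-account signing key pair across the masters
for host in k8s-master-0 k8s-master-1 k8s-master-2; do
  ssh "$host" 'sudo md5sum /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub'
done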

Error from server (BadRequest): error when creating “pod.yaml”:

I am getting the following error when I run

kubectl create -f pod.yaml

error

Error from server (BadRequest): error when creating "pod.yaml": Pod in version "ratings:v1" cannot be handled as a Pod: no kind "Pod" is registered for version "ratings:v1"

Minikube is up and running, and I even tried changing it to kind: Deployment, but I got another error saying:

error: unable to recognize "pod.yaml": no matches for /, Kind=Deployment

yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-ratings
  labels:
    app: product-ratings-vue
spec:
  replicas: 1
  selector:
    matchLabels:
      app: product-ratings-vue
  template:
    metadata:
      labels:
        app: product-ratings-vue 
    spec:
      containers: 
      - name: api-service
        image: api-service
        ports:
          - containerPort: 8080
          - containerPort: 8000
        resources: {}
        volumeMounts:
          - mountPath: /usr/local/tomcat/logs
            name: api-service-claim 

# ekomi-import       
      - name: ekomi-import
        image: ekomi-import
        resources: {}

# cache
      - name: cache
        image: cache
        resources:
          limits:
            memory: "536870912"

# storage
      - name: storage
        image: storage
        ports:
         - containerPort: 7000
         - containerPort: 7001
         - containerPort: 7199
         - containerPort: 9042
         - containerPort: 9160
        resources: {}
# view
      - name: view
        image: view
        ports:
         - containerPort: 3000
        resources: {}

# tomcat
      - name: tomcat
        image: tomcat
# node
      - name: node
        image: node
        resources: {}
# openJdk
      - name: node
        image: node
        resources: {}

      volumes:
        - name: api-service-claim
          persistentVolumeClaim:
            claimName: api-service-claim
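In case it’s a version mismatch, this is how I’m checking what my client and the Minikube cluster actually support (I’m not sure this is the cause):

# compare kubectl client version with the cluster's server version
kubectl version --short
# list the API groups/versions the cluster serves, to see whether apps/v1 is available
kubectl api-versions | grep apps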

Nginx not responsive enough when used as reverse proxy for node app inside Kubernetes cluster

I have the below architecture

UI (served by nginx) -> Gateway (Node app, with nginx as a reverse proxy in front) -> Backend Python service served by uWSGI.

All three services are running as pods in a Kubernetes cluster.

When I upload a file from the UI, which hits the reverse-proxy nginx of the gateway app, the upload is very slow: a 30 MB file takes 10 minutes (not a network issue, for sure).
The eventual file upload flow is the following:

ui -> nginx_gateway (reverse proxy) -> gateway node app -> python service on uwsgi

The upload progress bar only reflects the time taken for the first hop, i.e. from the UI’s nginx server to the gateway’s nginx server, which acts as a reverse proxy to the node application.

Does anybody have any pointers as to where the issue could be?

PS: If I deploy these services standalone on my local system along with nginx, I do not see any performance issue. But if I deploy them as a Kubernetes cluster on Minikube locally on Windows, I see the same performance issue during file upload.
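To try to isolate which hop is slow, I’m thinking of timing an upload directly against the gateway from inside the cluster, something like the following (the service name and upload path are placeholders):

# run a throwaway pod with curl and time an upload straight at the gateway service,
# bypassing the UI nginx
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  sh -c 'dd if=/dev/zero of=/tmp/test.bin bs=1M count=30 && \
         curl -o /dev/null -s -w "total: %{time_total}s\n" \
              -F "file=@/tmp/test.bin" http://<gateway-service>/upload'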