3. Google Kubernetes Engine

1. GCP/GitLab $500 offer

Every new Google Cloud Platform account receives $300 in credit at signup. In partnership with Google, GitLab offers an additional $200 to new GCP accounts getting started with GitLab's GKE integration, bringing the total to $500. Here's a link to apply for your $200 credit.

2. Install and configure google-cloud-sdk (gcloud) and kubectl

Reference: https://cloud.google.com/kubernetes-engine/docs/quickstart

apt-get update && apt-get -y upgrade
apt-get install -y sudo curl gnupg lsb-release    # curl, gnupg and lsb-release are needed by the steps below
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
apt-get update && apt-get install -y google-cloud-sdk
apt-get install -y kubectl
gcloud init
gcloud config set compute/region europe-west1
gcloud config set compute/zone europe-west1-c
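To confirm that both CLIs are installed and that the defaults were recorded, each tool can report its own state; a quick sanity-check sketch:

```shell
# Verify the SDK install and the configured defaults
gcloud version
gcloud config get-value compute/zone    # should print europe-west1-c
kubectl version --client                # client version only; no cluster needed yet
```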

3. Create a standard cluster

gcloud container clusters create demo-cluster
gcloud container clusters get-credentials demo-cluster
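With no flags, `demo-cluster` is created with GKE's defaults (three n1-standard-1 nodes at the time of writing, as the node listing below confirms). A sketch of the same creation with explicit sizing, assuming the `--num-nodes` and `--machine-type` flags of current gcloud releases:

```shell
# Equivalent creation with the defaults made explicit
gcloud container clusters create demo-cluster \
    --num-nodes 3 \
    --machine-type n1-standard-1
gcloud container clusters get-credentials demo-cluster   # writes the kubeconfig entry
```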

4. Start a Hello application

kubectl run hello-server --image gcr.io/google-samples/hello-app:1.0 --port 8080
kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080
kubectl get service hello-server
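`kubectl run` and `kubectl expose` create API objects imperatively. The same result can be sketched declaratively; the manifest below is an assumed equivalent (an apps/v1 Deployment plus a LoadBalancer Service) that could be applied with `kubectl apply -f hello.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-server
spec:
  type: LoadBalancer
  selector:
    app: hello-server
  ports:
  - port: 80          # external port of the load balancer
    targetPort: 8080  # container port
```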

5. Context

kubectl config current-context
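When the kubeconfig holds several clusters, contexts can be listed and switched as well; a short sketch (`<name>` is a placeholder for one of the listed context names):

```shell
kubectl config get-contexts          # lists all contexts; '*' marks the current one
kubectl config use-context <name>    # switch to another context
```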

6. Local version and API server version

kubectl version

7. Cluster status

kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
etcd-1               Healthy   {"health": "true"}
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
  • controller-manager: runs the various controllers (for example, keeping the right number of healthy replicas).
  • scheduler: places pods on the different nodes.
  • etcd: stores all API objects.
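The API server itself does not appear in this list; its health can be probed directly through the raw API (a sketch, assuming the standard /healthz endpoint):

```shell
kubectl get --raw='/healthz'    # prints "ok" when the API server is healthy
```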

8. Worker nodes

kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
gke-demo-cluster-default-pool-e0c8d40f-30p2   Ready    <none>   6m    v1.11.7-gke.12
gke-demo-cluster-default-pool-e0c8d40f-jm8b   Ready    <none>   6m    v1.11.7-gke.12
gke-demo-cluster-default-pool-e0c8d40f-q0vk   Ready    <none>   6m    v1.11.7-gke.12
kubectl describe node gke-demo-cluster-default-pool-e0c8d40f-30p2
Name:               gke-demo-cluster-default-pool-e0c8d40f-30p2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/fluentd-ds-ready=true
                    beta.kubernetes.io/instance-type=n1-standard-1
                    beta.kubernetes.io/os=linux
                    cloud.google.com/gke-nodepool=default-pool
                    cloud.google.com/gke-os-distribution=cos
                    failure-domain.beta.kubernetes.io/region=europe-west1
                    failure-domain.beta.kubernetes.io/zone=europe-west1-d
                    kubernetes.io/hostname=gke-demo-cluster-default-pool-e0c8d40f-30p2
Annotations:        container.googleapis.com/instance_id: 1214842643001598663
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 17 Apr 2019 07:58:11 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                          Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                          ------  -----------------                 ------------------                ------                       -------
  ReadonlyFilesystem            False   Wed, 17 Apr 2019 08:08:55 +0000   Wed, 17 Apr 2019 07:56:46 +0000   FilesystemIsNotReadOnly      Filesystem is not read-only
  FrequentUnregisterNetDevice   False   Wed, 17 Apr 2019 08:08:55 +0000   Wed, 17 Apr 2019 08:01:47 +0000   UnregisterNetDevice          node is functioning properly
  FrequentKubeletRestart        False   Wed, 17 Apr 2019 08:08:55 +0000   Wed, 17 Apr 2019 08:01:47 +0000   FrequentKubeletRestart       kubelet is functioning properly
  FrequentDockerRestart         False   Wed, 17 Apr 2019 08:08:55 +0000   Wed, 17 Apr 2019 08:01:48 +0000   FrequentDockerRestart        docker is functioning properly
  FrequentContainerdRestart     False   Wed, 17 Apr 2019 08:08:55 +0000   Wed, 17 Apr 2019 08:01:49 +0000   FrequentContainerdRestart    containerd is functioning properly
  CorruptDockerOverlay2         False   Wed, 17 Apr 2019 08:08:55 +0000   Wed, 17 Apr 2019 08:01:47 +0000   CorruptDockerOverlay2        docker overlay2 is functioning properly
  KernelDeadlock                False   Wed, 17 Apr 2019 08:08:55 +0000   Wed, 17 Apr 2019 07:56:46 +0000   KernelHasNoDeadlock          kernel has no deadlock
  NetworkUnavailable            False   Wed, 17 Apr 2019 07:58:36 +0000   Wed, 17 Apr 2019 07:58:36 +0000   RouteCreated                 RouteController created a route
  OutOfDisk                     False   Wed, 17 Apr 2019 08:09:23 +0000   Wed, 17 Apr 2019 07:58:11 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure                False   Wed, 17 Apr 2019 08:09:23 +0000   Wed, 17 Apr 2019 07:58:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure                  False   Wed, 17 Apr 2019 08:09:23 +0000   Wed, 17 Apr 2019 07:58:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure                   False   Wed, 17 Apr 2019 08:09:23 +0000   Wed, 17 Apr 2019 07:58:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                         True    Wed, 17 Apr 2019 08:09:23 +0000   Wed, 17 Apr 2019 07:58:21 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.132.0.4
  ExternalIP:  35.187.168.205
  Hostname:    gke-demo-cluster-default-pool-e0c8d40f-30p2
Capacity:
 cpu:                1
 ephemeral-storage:  98868448Ki
 hugepages-2Mi:      0
 memory:             3787664Ki
 pods:               110
Allocatable:
 cpu:                940m
 ephemeral-storage:  47093746742
 hugepages-2Mi:      0
 memory:             2702224Ki
 pods:               110
System Info:
 Machine ID:                 89bcf47837b646d37776b4be9bc86b0d
 System UUID:                89BCF478-37B6-46D3-7776-B4BE9BC86B0D
 Boot ID:                    2b4ef023-eb00-479c-b2f9-4e96141212a5
 Kernel Version:             4.14.91+
 OS Image:                   Container-Optimized OS from Google
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.3.2
 Kubelet Version:            v1.11.7-gke.12
 Kube-Proxy Version:         v1.11.7-gke.12
PodCIDR:                     10.48.2.0/24
ProviderID:                  gce://k8s-test-237907/europe-west1-d/gke-demo-cluster-default-pool-e0c8d40f-30p2
Non-terminated Pods:         (4 in total)
  Namespace                  Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits   AGE
  ---------                  ----                                                      ------------  ----------  ---------------  -------------   ---
  kube-system                fluentd-gcp-v3.2.0-qwfh6                                  100m (10%)    1 (106%)    200Mi (7%)       500Mi (18%)     10m
  kube-system                heapster-v1.6.0-beta.1-6c7f45769-9h95h                    138m (14%)    138m (14%)  301856Ki (11%)   301856Ki (11%)  10m
  kube-system                kube-proxy-gke-demo-cluster-default-pool-e0c8d40f-30p2    100m (10%)    0 (0%)      0 (0%)           0 (0%)          10m
  kube-system                metrics-server-v0.2.1-fd596d746-gnmx2                     53m (5%)      148m (15%)  154Mi (5%)       404Mi (15%)     10m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests        Limits
  --------           --------        ------
  cpu                391m (41%)      1286m (136%)
  memory             664352Ki (24%)  1227552Ki (45%)
  ephemeral-storage  0 (0%)          0 (0%)
Events:
  Type    Reason                     Age                From                                                          Message
  ----    ------                     ----               ----                                                          -------
  Normal  Starting                   11m                kubelet, gke-demo-cluster-default-pool-e0c8d40f-30p2          Starting kubelet.
  Normal  NodeHasSufficientDisk      11m (x2 over 11m)  kubelet, gke-demo-cluster-default-pool-e0c8d40f-30p2          Node gke-demo-cluster-default-pool-e0c8d40f-30p2 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory    11m (x2 over 11m)  kubelet, gke-demo-cluster-default-pool-e0c8d40f-30p2          Node gke-demo-cluster-default-pool-e0c8d40f-30p2 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure      11m (x2 over 11m)  kubelet, gke-demo-cluster-default-pool-e0c8d40f-30p2          Node gke-demo-cluster-default-pool-e0c8d40f-30p2 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID       11m (x2 over 11m)  kubelet, gke-demo-cluster-default-pool-e0c8d40f-30p2          Node gke-demo-cluster-default-pool-e0c8d40f-30p2 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced    11m                kubelet, gke-demo-cluster-default-pool-e0c8d40f-30p2          Updated Node Allocatable limit across pods
  Normal  NodeReady                  11m                kubelet, gke-demo-cluster-default-pool-e0c8d40f-30p2          Node gke-demo-cluster-default-pool-e0c8d40f-30p2 status is now: NodeReady
  Normal  Starting                   10m                kube-proxy, gke-demo-cluster-default-pool-e0c8d40f-30p2       Starting kube-proxy.
  Normal  FrequentKubeletRestart     7m41s              systemd-monitor, gke-demo-cluster-default-pool-e0c8d40f-30p2  Node condition FrequentKubeletRestart is now: False, reason: FrequentKubeletRestart
  Normal  CorruptDockerOverlay2      7m41s              docker-monitor, gke-demo-cluster-default-pool-e0c8d40f-30p2   Node condition CorruptDockerOverlay2 is now: False, reason: CorruptDockerOverlay2
  Normal  UnregisterNetDevice        7m41s              kernel-monitor, gke-demo-cluster-default-pool-e0c8d40f-30p2   Node condition FrequentUnregisterNetDevice is now: False, reason: UnregisterNetDevice
  Normal  FrequentDockerRestart      7m40s              systemd-monitor, gke-demo-cluster-default-pool-e0c8d40f-30p2  Node condition FrequentDockerRestart is now: False, reason: FrequentDockerRestart
  Normal  FrequentContainerdRestart  7m39s              systemd-monitor, gke-demo-cluster-default-pool-e0c8d40f-30p2  Node condition FrequentContainerdRestart is now: False, reason: FrequentContainerdRestart
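The full `describe` output is verbose; to pull out just the allocatable resources across all nodes, a jsonpath query can be sketched:

```shell
# Name, allocatable CPU and memory for every node (tab-separated)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.cpu}{"\t"}{.status.allocatable.memory}{"\n"}{end}'
```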

9. Components

kubectl cluster-info
Kubernetes master is running at https://104.199.32.91
GLBCDefaultBackend is running at https://104.199.32.91/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://104.199.32.91/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://104.199.32.91/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://104.199.32.91/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

10. Applications

kubectl get pods
kubectl get services
kubectl get deployments
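The three resource types can also be listed in a single call, and filtered by label (here assuming the `run=hello-server` label that `kubectl run` sets on the objects it creates):

```shell
kubectl get deployments,services,pods          # all three resource types at once
kubectl get pods -l run=hello-server -o wide   # filter by label; shows node and pod IP
```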

11. Tear down the application deployment and the cluster

kubectl delete service hello-server
gcloud container clusters delete demo-cluster
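Deleting the service first releases the Cloud Load Balancer that `--type LoadBalancer` provisioned; the deployment itself disappears with the cluster, but it can also be removed explicitly. A sketch of a full non-interactive cleanup (the `--quiet` flag suppresses gcloud's confirmation prompt):

```shell
kubectl delete service hello-server       # releases the external load balancer
kubectl delete deployment hello-server    # optional: deleting the cluster removes it anyway
gcloud container clusters delete demo-cluster --quiet
```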