Running our first containers on Kubernetes
- Starting a simple pod with kubectl run
- Behind the scenes of kubectl run
- What are these different things?
- Viewing container output
- Streaming logs in real time
- Scaling our application
- What if we wanted something different?
- What about that deprecation warning?
- Various ways of creating resources
- Viewing logs of multiple pods
- Why can't we stream the logs of many pods?
- Shortcomings of kubectl logs
- kubectl logs -l ... --tail N
- Aren't we flooding 1.1.1.1?
- First things first: we cannot run a container
- We are going to run a pod, and in that pod there will be a single container
- In that container in the pod, we are going to run a simple ping command
- Then we are going to start additional copies of the pod
Starting a simple pod with kubectl run
- We need to specify at least a name and the image we want to use
- Let's ping 1.1.1.1, Cloudflare's public DNS resolver:
kubectl run pingpong --image alpine ping 1.1.1.1
(Starting with Kubernetes 1.12, we get a message telling us that
kubectl run is deprecated. Let's ignore it for now.)
Behind the scenes of kubectl run
- Let's look at the resources that were created by kubectl run
List most resource types:
kubectl get all
We should see the following things:
deployment.apps/pingpong (the deployment that we just created)
replicaset.apps/pingpong-xxxxxxxxxx (a replica set created by the deployment)
pod/pingpong-xxxxxxxxxx-yyyyy (a pod created by the replica set)
Note: as of 1.10.1, resource types are displayed in more detail.
What are these different things?
- A deployment is a high-level construct
  - allows scaling, rolling updates, rollbacks
  - multiple deployments can be used together to implement a canary deployment
  - delegates pod management to replica sets
- A replica set is a low-level construct
  - makes sure that a given number of identical pods are running
  - rarely used directly
- A replication controller is the (deprecated) predecessor of a replica set
- Our kubectl run created a deployment, deployment.apps/pingpong
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pingpong   1         1         1            1           10m
- That deployment created a replica set, replicaset.apps/pingpong-xxxxxxxxxx
NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/pingpong-7c8bbcd9bc   1         1         1       10m
- That replica set created a pod, pod/pingpong-xxxxxxxxxx-yyyyy
NAME                            READY   STATUS    RESTARTS   AGE
pod/pingpong-7c8bbcd9bc-6c9qz   1/1     Running   0          10m
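If we want to check this parent/child chain ourselves, kubectl describe reports who controls each object (a minimal sketch; substitute the actual names shown by kubectl get all for the placeholder names below):
kubectl describe replicaset pingpong-7c8bbcd9bc | grep "Controlled By"
kubectl describe pod pingpong-7c8bbcd9bc-6c9qz | grep "Controlled By"
The replica set should report being controlled by the deployment, and the pod by the replica set.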
We'll see later how these folks play together for:
- scaling, high availability, rolling updates
Viewing container output
Let's use the kubectl logs command
We will pass either a pod name, or a type/name
(E.g. if we specify a deployment or replica set, it will get the first pod in it)
Unless specified otherwise, it will only show logs of the first container in the pod
(Good thing there's only one in ours!)
View the result of our ping command:
kubectl logs deploy/pingpong
Streaming logs in real time
kubectl logs supports convenient options:
--follow to stream logs in real time (à la tail -f)
--tail to indicate how many lines you want to see (from the end)
--since to get logs only after a given timestamp
View the latest logs of our ping command:
kubectl logs deploy/pingpong --tail 1 --follow
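For instance, to see only the output produced during the last minute (the duration here is arbitrary):
kubectl logs deploy/pingpong --since=1m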
Scaling our application
- We can create additional copies of our container (I mean, our pod) with kubectl scale:
kubectl scale deploy/pingpong --replicas 3
Note that this command does exactly the same thing:
kubectl scale deployment pingpong --replicas 3
Note: what if we tried to scale replicaset.apps/pingpong-xxxxxxxxxx?
We could! But the deployment would notice it right away, and scale back to the initial level.
The deployment pingpong watches its replica set
The replica set ensures that the right number of pods are running
What happens if pods disappear?
In a separate window, list pods, and keep watching them:
kubectl get pods -w
- Destroy a pod:
kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
What if we wanted something different?
What if we wanted to start a "one-shot" container that doesn't get restarted?
We could use kubectl run --restart=OnFailure or kubectl run --restart=Never
These commands would create jobs or pods instead of deployments
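For example, a hypothetical one-shot pod that sends three pings and then exits (the name oneshot and the -c 3 count are just for illustration):
kubectl run oneshot --image alpine --restart=Never -- ping -c 3 1.1.1.1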
Under the hood, kubectl run invokes "generators" to create resource descriptions
We could also write these resource descriptions ourselves (typically in YAML),
and create them on the cluster with kubectl apply -f (discussed later)
With kubectl run --schedule=..., we can also create cronjobs
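As a sketch, a cron-style ping could look like this (the name and schedule are made up, and the exact flags depend on the kubectl version):
kubectl run pingpong-cron --image alpine --restart=OnFailure --schedule="*/5 * * * *" -- ping -c 1 1.1.1.1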
What about that deprecation warning?
As we can see from the previous slide,
kubectl run can do many things
The exact type of resource created is not obvious
To make things more explicit, it is better to use
kubectl create deployment to create a deployment
kubectl create job to create a job
kubectl create cronjob to run a job periodically
(since Kubernetes 1.14)
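For example, hedged sketches of these explicit commands (the names, images, and schedule below are our own, and kubectl create deployment does not let us pass a command to run):
kubectl create deployment web --image nginx
kubectl create job pingpong-once --image alpine -- ping -c 1 1.1.1.1
kubectl create cronjob pingpong-cron --image alpine --schedule="*/5 * * * *" -- ping -c 1 1.1.1.1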
kubectl run will be used only to start one-shot pods
Various ways of creating resources
- kubectl run
  - easy way to get started
- kubectl create <resource>
  - explicit, but lacks some features
  - can't create a CronJob before Kubernetes 1.14
  - can't pass command-line arguments to deployments
- kubectl create -f foo.yaml or kubectl apply -f foo.yaml
  - all features are available
  - requires writing YAML
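To give an idea of what that YAML looks like, here is a minimal sketch roughly equivalent to our pingpong deployment (the exact fields generated by kubectl may differ):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pingpong
spec:
  replicas: 3
  selector:
    matchLabels:
      run: pingpong
  template:
    metadata:
      labels:
        run: pingpong
    spec:
      containers:
      - name: pingpong
        image: alpine
        command: ["ping", "1.1.1.1"]
We could save this as pingpong.yaml and load it with kubectl apply -f pingpong.yaml.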
Viewing logs of multiple pods
When we specify a deployment name, only one single pod's logs are shown
We can view the logs of multiple pods by specifying a selector
A selector is a logic expression using labels
Conveniently, when you kubectl run somename, the associated objects have a run=somename label
View the last line of log from all pods with the run=pingpong label:
kubectl logs -l run=pingpong --tail 1
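If we want to see these labels for ourselves, we can list them (or filter on them) with:
kubectl get pods --show-labels
kubectl get pods -l run=pingpong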
Streaming logs of multiple pods
- Can we stream the logs of all our pingpong pods?
kubectl logs -l run=pingpong --tail 1 -f
-f is only possible since Kubernetes 1.14!
Let's try to understand why ...
Streaming logs of many pods
- Let's see what happens if we try to stream the logs for more than 5 pods
Scale up our deployment:
kubectl scale deployment pingpong --replicas=8
Stream the logs:
kubectl logs -l run=pingpong --tail 1 -f
We see a message like the following one:
error: you are attempting to follow 8 log streams, but maximum allowed concurency is 5, use --max-log-requests to increase the limit
Why can't we stream the logs of many pods?
kubectl opens one connection to the API server per pod
For each pod, the API server opens one extra connection to the corresponding kubelet
If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server
This could easily put a lot of stress on the API server
Prior to Kubernetes 1.14, it was decided not to allow multiple connections
From Kubernetes 1.14, it is allowed, but limited to 5 connections
(this can be changed with --max-log-requests)
For more details about the rationale, see PR #67573
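For instance, assuming Kubernetes 1.14 or later, we could raise the limit to follow all 8 of our pods:
kubectl logs -l run=pingpong --tail 1 -f --max-log-requests 8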
Shortcomings of kubectl logs
We don't see which pod sent which log line
If pods are restarted / replaced, the log stream stops
If new pods are added, we don't see their logs
To stream the logs of multiple pods, we need to write a selector
There are external tools to address these shortcomings
kubectl logs -l ... --tail N
If we run this with Kubernetes 1.12, the last command shows multiple lines
This is a regression when --tail is used together with -l/--selector
It always shows the last 10 lines of output for each container
(instead of the number of lines specified on the command line)
The problem was fixed in Kubernetes 1.13
See #70554 for details.
Aren't we flooding 1.1.1.1?
If you're wondering this, good question!
Don't worry, though:
APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.
It's very unlikely that our concerted pings manage to produce even a modest blip at Cloudflare's NOC!