- Options for our first production cluster
- One big cluster vs. multiple small ones
- Relevant sections
- Stateful services (databases etc.)
- Stateful services (second take)
- HTTP traffic handling
- Managing the configuration of our applications
- Managing stack deployments
- Cluster federation
- Developer experience
Alright, how do I get started and containerize my apps?
Suggested containerization checklist:
- write a Dockerfile for one service in one app
- write Dockerfiles for the other (buildable) services
- write a Compose file for that whole app
- make sure that devs are empowered to run the app in containers
- set up automated builds of container images from the code repo
- set up a CI pipeline using these container images
- set up a CD pipeline (for staging/QA) using these images
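Assuming the app has a single web service plus a Redis backend (names, ports, and tags here are placeholders), the Compose file from the checklist might be a minimal sketch like:

```yaml
# Hypothetical Compose file for the whole app.
# Service names, ports, and images are assumptions.
version: "3"
services:
  web:
    build: .            # built from the Dockerfile written in the first step
    ports:
      - "8000:8000"     # placeholder port mapping
    depends_on:
      - redis
  redis:
    image: redis        # placeholder image
```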
And then it is time to look at orchestration!
Options for our first production cluster
Get a managed cluster from a major cloud provider (AKS, EKS, GKE...)
(cost: +, difficulty: medium)
Hire someone to deploy it for us
(cost: ++, difficulty: easy)
Do it ourselves
(cost: +/+++, difficulty: hard)
One big cluster vs. multiple small ones
Yes, it is possible to have prod+dev in a single cluster
(and implement good isolation and security with RBAC, network policies...)
But it is not a good idea to do that for our first deployment
Start with a production cluster + at least a test cluster
Implement and check RBAC and isolation on the test cluster
(e.g. deploy multiple test versions side-by-side)
Make sure that all our devs have usable dev clusters
(whether it's a local minikube or a full-blown multi-node cluster)
Namespaces let you run multiple identical stacks side by side
Two namespaces (e.g. `blue` and `green`) can each have their own `redis` service
Each of the two `redis` services has its own `ClusterIP`
CoreDNS creates two entries, mapping to these two `ClusterIP` addresses
Pods in the `blue` namespace get a search suffix of `blue.svc.cluster.local`
As a result, resolving `redis` from a pod in the `blue` namespace yields the "local" `redis`
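The two-namespace setup can be sketched with a Namespace and a Service manifest (the `app: redis` selector label is an assumption; an identical pair would exist for `green`):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: blue
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: blue
spec:
  selector:
    app: redis          # assumed pod label
  ports:
    - port: 6379
```

CoreDNS then serves `redis.blue.svc.cluster.local` (and `redis.green.svc.cluster.local` for the other namespace).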
This does not provide isolation! That would be the job of network policies.
See also: authentication and authorization (covers permissions model, user and service accounts management ...)
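A minimal sketch of such a network policy, assuming we want pods in `blue` to accept traffic only from pods in the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: blue
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # allow traffic only from pods in the same namespace
```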
Stateful services (databases etc.)
As a first step, it is wiser to keep stateful services outside of the cluster
Exposing them to pods can be done with multiple solutions:
- `ExternalName` services (`redis.blue.svc.cluster.local` will be a `CNAME` record)
- `ClusterIP` services with explicit `Endpoints`
  (instead of letting Kubernetes generate the endpoints from a selector)
- Ambassador services
  (application-level proxies that can provide credentials injection and more)
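A sketch of the selector-less option: a `ClusterIP` service with explicit `Endpoints` (the IP address below is a placeholder for an external Redis server):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
    - port: 6379          # no selector: endpoints are managed by hand
---
apiVersion: v1
kind: Endpoints
metadata:
  name: redis             # must match the service name
subsets:
  - addresses:
      - ip: 192.0.2.10    # placeholder external address
    ports:
      - port: 6379
```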
Stateful services (second take)
If we want to host stateful services on Kubernetes, we can use:
- a storage provider
- persistent volumes, persistent volume claims
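For instance, a persistent volume claim for a hypothetical Redis deployment might look like this (the name and storage size are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi        # placeholder size
```

The claim is then referenced from the pod template, and the storage provider binds it to an actual volume.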
Good questions to ask:
what's the operational cost of running this service ourselves?
what do we gain by deploying this stateful service on Kubernetes?
HTTP traffic handling
Services are layer 4 constructs
HTTP is a layer 7 protocol
It is handled by ingresses (a different resource kind)
- virtual host routing
- session stickiness
- URI mapping
- and much more!
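A minimal ingress sketch showing virtual host routing and URI mapping (the hostname, service names, and the `extensions/v1beta1` API version current at the time are assumptions):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: app.example.com        # virtual host routing
      http:
        paths:
          - path: /api             # URI mapping
            backend:
              serviceName: api     # placeholder service
              servicePort: 80
          - path: /
            backend:
              serviceName: web     # placeholder service
              servicePort: 80
```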
Logging is delegated to the container engine
Logs are exposed through the API
Logs are also accessible through local files on the nodes
Log shipping to a central platform is usually done through these files
(e.g. with an agent bind-mounting the log directory)
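Such an agent is typically deployed as a DaemonSet, so that one copy runs on every node; a hypothetical sketch (the shipper image is a placeholder):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-shipper
spec:
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      containers:
        - name: agent
          image: fluent/fluentd     # placeholder image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log          # node's log directory, bind-mounted read-only
```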
The kubelet embeds cAdvisor, which exposes container metrics
(cAdvisor might be separated in the future for more flexibility)
It is a good idea to start with Prometheus
(even if you end up using something else)
Starting from Kubernetes 1.8, we can use the Metrics API
Heapster was a popular add-on
(but is being deprecated starting with Kubernetes 1.11)
Managing the configuration of our applications
Two constructs are particularly useful: secrets and config maps
They let us expose arbitrary information to our containers
Avoid storing configuration in container images
(There are some exceptions to that rule, but it's generally a Bad Idea)
Never store sensitive information in container images
(It's the container equivalent of the password on a post-it note on your screen)
This section shows how to manage app config with config maps (among others)
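A minimal sketch, assuming a hypothetical app that reads a `LOG_LEVEL` environment variable:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx             # placeholder image
      envFrom:
        - configMapRef:
            name: app-config   # exposes LOG_LEVEL to the container
```

Secrets are consumed the same way (with `secretRef` instead of `configMapRef`), but their values are kept out of the pod spec and image.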
Managing stack deployments
The best deployment tool will vary, depending on:
- the size and complexity of your stack(s)
- how often you change them (i.e. add/remove components)
- the size and skills of your team
A few examples: shell scripts invoking `kubectl`, YAML manifests committed to a repo, Helm charts, or full CD platforms like Spinnaker
Sorry Star Trek fans, this is not the federation you're looking for!
(If I add "Your cluster is in another federation" I might get a 3rd fandom wincing!)
Kubernetes master operation relies on etcd
etcd uses the Raft protocol
Raft recommends low latency between nodes
What if our cluster spreads to multiple regions?
Break it down into local clusters
Regroup them in a cluster federation
Synchronize resources across clusters
Discover resources across clusters
We've put this last, but it's pretty important!
How do you on-board a new developer?
What do they need to install to get a dev stack?
How does a code change make it from dev to prod?
How does someone add a component to a stack?