Kubernetes Basics: A Comprehensive Guide to Container Orchestration
A comprehensive guide to Kubernetes fundamentals, covering core concepts, implementation strategies, and best practices for modern web development in 2025.
In the rapidly evolving landscape of web development, Kubernetes has established itself as a cornerstone technology for developers in 2025. Whether you're building small personal projects or large-scale enterprise applications, understanding the nuances of container orchestration is essential for deploying scalable, resilient, and maintainable systems.
This comprehensive guide will take you from basic concepts to advanced techniques, with real-world examples and code snippets you can apply immediately.
Why Kubernetes Matters in 2025
The Evolution of Deployment
Before diving into Kubernetes, let's understand what problems it solves and how deployment has evolved:
Deployment Timeline
| Technology | Challenges | Typical Scale |
|---|---|---|
| Physical Servers | Manual configuration, resource underutilization | 1-10 servers |
| Virtual Machines | Better resource utilization, still manual | 10-100 VMs |
| Docker Containers | Consistent environments, portability | 100-1,000 containers |
| Kubernetes | Automated orchestration, self-healing | 1,000-1,000,000+ pods |
What Kubernetes Solves
- Automated Scaling: Automatically add or remove containers based on load
- Self-Healing: Restart failed containers, replace failed nodes
- Load Balancing: Distribute traffic across multiple instances
- Rolling Updates: Deploy new versions without downtime
- Resource Efficiency: Optimize resource usage across clusters
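Every one of these capabilities is driven by the same underlying pattern: a controller continuously compares desired state against observed state and reconciles the difference. A minimal sketch of that loop (the function and names here are illustrative, not Kubernetes API objects):

```python
# Minimal sketch of Kubernetes' reconciliation pattern: a controller
# repeatedly diffs desired state against observed state and acts on
# the difference. All names are illustrative.

def reconcile(desired_replicas, observed_pods):
    """Return the actions needed to move observed state to desired state."""
    actions = []
    diff = desired_replicas - len(observed_pods)
    if diff > 0:
        actions += ["create_pod"] * diff      # scale up / replace failed pods
    elif diff < 0:
        actions += ["delete_pod"] * (-diff)   # scale down
    return actions

# Self-healing falls out of the same loop: a crashed pod disappears from
# observed state, so the next reconcile creates a replacement.
print(reconcile(3, ["pod-a"]))             # two pods missing -> create two
print(reconcile(3, ["a", "b", "c", "d"]))  # one too many -> delete one
```

Scaling, self-healing, and rolling updates are all implemented as variations of this reconcile loop running against the cluster's stored desired state.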
Real-World Impact
Consider a typical web application without vs. with Kubernetes:
Without Kubernetes
```yaml
# docker-compose.yml
version: '3.8'
services:
  web:
    image: myapp:latest
    ports:
      - "80:80"
    environment:
      - NODE_ENV=production
    deploy:
      replicas: 3  # Static, manual scaling
```
Problems:
- Manual scaling: Must update and restart services
- No self-healing: Failed containers stay down
- Resource waste: Static allocation doesn't match demand
- Manual load balancing: Need external load balancer
- Downtime during updates: All containers restart at once
With Kubernetes
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```
Benefits:
- Automatic scaling: 3-10 replicas based on CPU usage
- Self-healing: Failed pods automatically restarted
- Resource optimization: Dynamic allocation based on actual needs
- Built-in load balancing: Service distributes traffic
- Zero-downtime updates: Rolling updates with health checks
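The autoscaler's target-tracking behavior follows a documented formula: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue), clamped to the min/max bounds. A quick sketch (the function name and default bounds of 3-10 mirror the example manifest, but the code itself is illustrative):

```python
import math

def hpa_desired_replicas(current_replicas, current_utilization,
                         target_utilization, min_replicas=3, max_replicas=10):
    """Horizontal Pod Autoscaler formula:
    desired = ceil(current * currentMetric / targetMetric), clamped to bounds."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# CPU at 160% against a target of 80% -> double the replicas
print(hpa_desired_replicas(3, 160, 80))   # 6
# CPU well under target, but never below minReplicas
print(hpa_desired_replicas(3, 10, 80))    # 3
```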
"The only way to go fast, is to go well." — Robert C. Martin
Core Concepts and Architecture
1. The Kubernetes Control Plane
Understanding the control plane is crucial for grasping how Kubernetes makes decisions:
```python
class KubernetesControlPlane:
    """Kubernetes control plane components (conceptual model)."""

    def __init__(self):
        # API Server: the central management entity
        self.api_server = {
            "function": "Exposes Kubernetes API",
            "port": 6443,
            "stores": "etcd",
            "authenticates": "All requests",
        }
        # Scheduler: assigns pods to nodes
        self.scheduler = {
            "function": "Schedules pods to nodes",
            "considers": [
                "resource_requirements",
                "hardware_policy",
                "affinity_anti_affinity",
                "taints_tolerations",
                "pod_priority",
            ],
            "runs_on": "Control plane node",
        }
        # Controller Manager: maintains desired state
        self.controller_manager = {
            "function": "Runs controller processes",
            "controllers": [
                "node_controller",
                "replication_controller",
                "endpoints_controller",
                "service_account_controller",
                "daemonset_controller",
                "deployment_controller",
            ],
            "watches": "API server for changes",
        }
        # etcd: consistent key-value store
        self.etcd = {
            "function": "Stores cluster state",
            "type": "distributed_key_value_store",
            "consistency": "strong",
            "backed_by": "etcd3",
        }

    def describe_flow(self, request):
        """Describe a request's flow through the control plane."""
        flow = [
            "1. User/Client sends request to API Server",
            "2. API Server authenticates and authorizes request",
            "3. API Server stores/updates data in etcd",
            "4. Controller Manager detects change via watch",
            "5. Controller Manager takes action to reconcile state",
            "6. Scheduler assigns pods to available nodes",
            "7. Kubelet on node receives pod spec",
            "8. Kubelet creates pod via container runtime",
            "9. Container runtime pulls image and starts containers",
        ]
        return flow


control_plane = KubernetesControlPlane()
print("\nRequest Flow:")
for step in control_plane.describe_flow("create_deployment"):
    print(f"  {step}")
```
2. Kubernetes Objects
Understanding the fundamental objects is key to working with Kubernetes:
Pod: The Smallest Deployable Unit
```yaml
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
      volumeMounts:
        - name: html-volume
          mountPath: /usr/share/nginx/html
          readOnly: true
  volumes:
    - name: html-volume
      configMap:
        name: nginx-config
```
Service: Network Abstraction
```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
```
Deployment: Declarative Updates
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "128Mi"
              cpu: "200m"
```
ConfigMap & Secret: Configuration Management
```yaml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_host: "postgres-service"
  cache_host: "redis-service"
  log_level: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  database_password: cG9zdHdvcmQxN2Rhd2g=  # base64 encoded
  api_key: YWJjZGVmNzg5MDIzNDU2Nzg5Cg==
```
3. Node Architecture
Understanding how Kubernetes manages nodes is crucial for capacity planning:
```python
class Node:
    """Kubernetes node (simplified model)."""

    def __init__(self, node_name):
        self.name = node_name
        self.taints = []
        # Node resources
        self.resources = {
            "cpu_cores": 64,
            "memory_gb": 256,
            "storage_gb": 2000,
            "network_gbps": 10,
        }
        # Allocated resources
        self.allocated = {"cpu_cores": 0, "memory_gb": 0, "storage_gb": 0}
        # Running pods
        self.pods = []

    def can_schedule_pod(self, pod_requirements):
        """Check whether this node has room for the pod."""
        if self.allocated["cpu_cores"] + pod_requirements["cpu"] > self.resources["cpu_cores"]:
            return {"can_schedule": False, "reason": "insufficient_cpu"}
        if self.allocated["memory_gb"] + pod_requirements["memory"] > self.resources["memory_gb"]:
            return {"can_schedule": False, "reason": "insufficient_memory"}
        if self.allocated["storage_gb"] + pod_requirements["storage"] > self.resources["storage_gb"]:
            return {"can_schedule": False, "reason": "insufficient_storage"}
        return {"can_schedule": True}

    def allocate_pod(self, pod):
        """Allocate resources for the pod on this node."""
        req = pod["requirements"]
        self.allocated["cpu_cores"] += req["cpu"]
        self.allocated["memory_gb"] += req["memory"]
        self.allocated["storage_gb"] += req["storage"]
        self.pods.append(pod)
        return {
            "success": True,
            "remaining_cpu": self.resources["cpu_cores"] - self.allocated["cpu_cores"],
            "remaining_memory": self.resources["memory_gb"] - self.allocated["memory_gb"],
        }


class Scheduler:
    """Kubernetes scheduler (simplified model)."""

    def __init__(self):
        self.nodes = [Node("node-1"), Node("node-2"), Node("node-3")]

    def schedule_pod(self, pod):
        """Schedule the pod to the best available node."""
        # Find nodes that can accommodate the pod
        suitable_nodes = [
            node for node in self.nodes
            if node.can_schedule_pod(pod["requirements"])["can_schedule"]
        ]
        if not suitable_nodes:
            return {"scheduled": False, "reason": "no_suitable_node"}
        # Score nodes and select the highest-scoring one
        scores = [{"node": n, "score": self._score_node(n, pod)} for n in suitable_nodes]
        best_node = max(scores, key=lambda x: x["score"])["node"]
        result = best_node.allocate_pod(pod)
        return {"scheduled": True, "node": best_node.name, "result": result}

    def _score_node(self, node, pod):
        """Score a node based on multiple factors."""
        score = 0
        # Prefer least-loaded nodes (spread pods)
        score -= node.allocated["cpu_cores"]
        score -= node.allocated["memory_gb"]
        # Prefer nodes named in the pod's (simplified) affinity
        if pod.get("affinity") and node.name in pod["affinity"]["node_names"]:
            score += 100
        # Penalize nodes whose taints the pod does not tolerate
        for taint in node.taints:
            if taint not in pod.get("tolerations", []):
                score -= 1000
        return score
```
Practical Implementation
1. Setting Up Your First Cluster
Using Minikube for Local Development
```bash
# Install Minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start cluster
minikube start --cpus=4 --memory=8192 --driver=docker

# Verify cluster is running
kubectl cluster-info
kubectl get nodes

# Enable dashboard
minikube dashboard
```
Deploying Your First Application
```yaml
# hello-kubernetes.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
  labels:
    app: hello-kubernetes
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: gcr.io/google-samples/node-hello:1.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 200m
              memory: 128Mi
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-service
spec:
  type: LoadBalancer
  selector:
    app: hello-kubernetes
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```
```bash
# Apply configuration
kubectl apply -f hello-kubernetes.yaml

# Check deployment status
kubectl get deployments
kubectl get pods
kubectl get services

# Access application
minikube service hello-kubernetes-service
```
2. Scaling Applications
Manual Scaling
```bash
# Scale deployment to 5 replicas
kubectl scale deployment hello-kubernetes --replicas=5

# Verify scaling
kubectl get pods -l app=hello-kubernetes
kubectl describe deployment hello-kubernetes
```
Horizontal Pod Autoscaler
```yaml
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-kubernetes-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-kubernetes
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 50
          periodSeconds: 15
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
```
```bash
# Install Metrics Server (required for HPA)
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Apply HPA
kubectl apply -f hpa.yaml

# Test autoscaling
kubectl get hpa
kubectl top pods
```
3. ConfigMaps and Secrets
Managing Application Configuration
```yaml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  # Key-value pairs
  app.name: "My Application"
  app.version: "1.0.0"
  log.level: "info"
  # File-like keys
  nginx.conf: |
    upstream backend {
      server backend-service:3000;
    }
    server {
      listen 80;
      location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
      }
    }
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  # Values must be base64 encoded
  database.url: cG9zdGdyZXMuZXhhbXBsZTo1UzMzMDY=
  database.username: YWRtaW51c2Vy  # adminuser
  database.password: c2VjdXJlUGFzc3dvcmQxMjM=  # securePassword123
  api.key: YWJjZGVmNzg5MDIzNDU2Nzg5Cg==
```
```bash
# Encode secrets
echo -n "securePassword123" | base64

# Apply configurations
kubectl apply -f configmap.yaml

# Verify
kubectl get configmaps
kubectl get secrets
kubectl describe secret app-secrets
```
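Remember that base64 is an encoding, not encryption: anyone with read access to a Secret can decode its values. The same round trip shown with `base64` above, in Python:

```python
import base64

# Kubernetes stores Secret values base64-encoded -- encoding, not encryption.
encoded = base64.b64encode(b"adminuser").decode()
print(encoded)  # YWRtaW51c2Vy

# Decoding what kubectl shows you is just as easy,
# which is why Secrets still need RBAC and encryption at rest.
print(base64.b64decode("YWRtaW51c2Vy").decode())  # adminuser
```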
Using ConfigMaps and Secrets in Pods
```yaml
# pod-with-config.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: myapp:latest
      env:
        # Use ConfigMap values as environment variables
        - name: APP_NAME
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: app.name
        - name: APP_VERSION
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: app.version
        # Use Secret values as environment variables
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database.url
        - name: DATABASE_USERNAME
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database.username
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database.password
      # Mount ConfigMap as volume
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
          readOnly: true
  volumes:
    - name: config-volume
      configMap:
        name: app-config
```
4. Rolling Updates and Rollbacks
Updating Applications
```bash
# Update image version (container-name=image:tag)
kubectl set image deployment/hello-kubernetes hello-kubernetes=hello-kubernetes:v2.0

# Watch rollout status
kubectl rollout status deployment/hello-kubernetes

# Check pods
kubectl get pods -l app=hello-kubernetes
```
Rolling Update Strategy
```yaml
# deployment-with-rolling-update.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # Can have 1 extra pod during update
      maxUnavailable: 1  # Only 1 pod can be unavailable
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: hello-kubernetes:v2.0
          ports:
            - containerPort: 8080
          # Health checks
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 3
          # Graceful shutdown
          terminationMessagePath: /tmp/shutdown-message
          terminationMessagePolicy: File
```
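With replicas: 3, maxSurge: 1, and maxUnavailable: 1, the controller never runs more than 4 pods total and never drops below 2 available. A toy simulation of those bounds (illustrative only, not the actual controller logic, and it assumes new pods become ready immediately):

```python
def rolling_update(replicas, max_surge, max_unavailable):
    """Toy rolling-update simulation: yields (old, new) pod counts while
    honoring maxSurge and maxUnavailable. Assumes new pods are ready at
    once; requires max_surge + max_unavailable > 0 (as Kubernetes does)."""
    old, new = replicas, 0
    steps = []
    while old > 0 or new < replicas:
        # Create new pods, up to replicas + maxSurge total
        create = min(replicas + max_surge - (old + new), replicas - new)
        new += create
        # Delete old pods, keeping at least replicas - maxUnavailable available
        delete = min(old, (old + new) - (replicas - max_unavailable))
        old -= delete
        steps.append((old, new))
    return steps

for old, new in rolling_update(3, 1, 1):
    print(f"old={old} new={new}")
```

Setting maxUnavailable: 0 forces a surge-only update (new pod must be ready before an old one is removed), which is the zero-downtime configuration used earlier in this guide.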
Rolling Back
```bash
# View rollout history
kubectl rollout history deployment/hello-kubernetes

# Rollback to previous version
kubectl rollout undo deployment/hello-kubernetes

# Rollback to specific revision
kubectl rollout undo deployment/hello-kubernetes --to-revision=2

# Watch rollback
kubectl rollout status deployment/hello-kubernetes
```
5. Persistent Storage
PersistentVolume and PersistentVolumeClaim
```yaml
# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /mnt/data/postgres
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
```
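Binding follows a simple matching rule: a claim binds to a volume whose storageClassName matches, whose accessModes cover the claim's, and whose capacity is at least the requested size. A sketch of that matching (simplified; real binding also considers volume mode, node affinity, and so on, and the dict keys here are illustrative):

```python
def pv_matches_pvc(pv, pvc):
    """Simplified PV/PVC matching: storage class, access modes, capacity."""
    return (
        pv["storageClassName"] == pvc["storageClassName"]
        and set(pvc["accessModes"]) <= set(pv["accessModes"])
        and pv["capacity_gi"] >= pvc["request_gi"]
    )

pv = {"storageClassName": "standard", "accessModes": ["ReadWriteOnce"], "capacity_gi": 10}
pvc = {"storageClassName": "standard", "accessModes": ["ReadWriteOnce"], "request_gi": 10}
print(pv_matches_pvc(pv, pvc))  # True
```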
Using PVC in Deployment
```yaml
# postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: password
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
```
6. Ingress and Load Balancing
Setting Up Ingress
```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls
```
```bash
# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.5/deploy/static/provider/cloud/deploy.yaml

# Apply ingress
kubectl apply -f ingress.yaml

# Verify
kubectl get ingress
kubectl describe ingress app-ingress
```
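Path matching with pathType: Prefix behaves like longest-prefix routing: /api/users hits the /api backend, while everything else falls through to /. A sketch of that dispatch (the rules mirror the manifest above; the function itself is illustrative):

```python
def route(path, rules):
    """Longest-prefix match over (prefix, backend) pairs,
    as with Ingress pathType: Prefix (matching on path segments)."""
    best = None
    for prefix, backend in rules:
        # /api matches /api itself and anything under /api/
        if path == prefix or path.startswith(prefix.rstrip("/") + "/"):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, backend)
    return best[1] if best else None

rules = [("/api", "api-service"), ("/", "web-service")]
print(route("/api/users", rules))  # api-service
print(route("/about", rules))      # web-service
```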
Advanced Topics
1. Custom Resource Definitions (CRDs)
```yaml
# crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: string
                engine:
                  type: string
                  enum: ["mysql", "postgres", "mongodb"]
            status:
              type: object
              properties:
                phase:
                  type: string
                  enum: ["Creating", "Ready", "Failed"]
                message:
                  type: string
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
    shortNames:
      - db
```
2. Operators
```yaml
# database-operator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database-operator
  template:
    metadata:
      labels:
        app: database-operator
    spec:
      serviceAccountName: database-operator
      containers:
        - name: operator
          image: database-operator:latest
          imagePullPolicy: Always
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "database-operator"
```
3. Service Mesh with Istio
```bash
# Download Istio
curl -L https://istio.io/downloadIstio | sh -

# Install Istio into the cluster (istioctl is in the downloaded bin/ directory)
istioctl install --set profile=demo -y

# Configure namespace for automatic injection
kubectl label namespace default istio-injection=enabled

# Deploy application (automatic sidecar injection)
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.19/samples/bookinfo/platform/kube/bookinfo.yaml

# Verify sidecar injection
kubectl get pods -o jsonpath='{.items[*].spec.containers[*].name}'
```
Monitoring and Debugging
1. Logging with EFK Stack
```yaml
# elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 1  # discovery.type=single-node requires a single replica
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.9.0
          env:
            - name: discovery.type
              value: single-node
          ports:
            - containerPort: 9200
            - containerPort: 9300
          resources:
            requests:
              cpu: 500m
              memory: 2Gi
            limits:
              cpu: 1
              memory: 4Gi
```
```yaml
# fluentd.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
```
2. Metrics with Prometheus
```yaml
# prometheus-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_label_app]
            regex: '(.+)'
            target_label: application
          - source_labels: [__meta_kubernetes_pod_namespace]
            regex: '(.+)'
            target_label: namespace
```
```bash
# Add the chart repository and install Prometheus
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set grafana.enabled=true \
  --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false

# Access Grafana
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
```
3. Debugging Tools
```bash
# Get pod details
kubectl describe pod <pod-name>

# View pod logs
kubectl logs <pod-name>

# View logs for specific container
kubectl logs <pod-name> -c <container-name>

# Execute command in pod
kubectl exec -it <pod-name> -- /bin/bash

# Check events
kubectl get events --sort-by='.lastTimestamp'

# Network debugging
kubectl run -it --rm debug --image=nicolaka/netshoot --restart=Never

# DNS debugging
kubectl exec -it <pod-name> -- nslookup <service-name>
```
Best Practices
1. Resource Management
```yaml
# Good: proper resource requests and limits
apiVersion: v1
kind: Pod
metadata:
  name: resource-managed-pod
spec:
  containers:
    - name: app
      image: myapp:latest
      resources:
        requests:
          cpu: 100m      # Minimum guaranteed CPU
          memory: 128Mi  # Minimum guaranteed memory
        limits:
          cpu: 500m      # Maximum CPU allowed
          memory: 256Mi  # Maximum memory allowed
```
2. Security
```yaml
# Good: security contexts
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: myapp:latest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
          add:
            - NET_BIND_SERVICE
```
3. Health Checks
```yaml
# Good: proper health checks
apiVersion: v1
kind: Pod
metadata:
  name: healthy-pod
spec:
  containers:
    - name: app
      image: myapp:latest
      # Liveness probe: check if container needs restart
      livenessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 30  # Wait 30s before first check
        periodSeconds: 10        # Check every 10s
        timeoutSeconds: 1        # Timeout after 1s
        failureThreshold: 3      # Restart after 3 consecutive failures
        successThreshold: 1      # 1 success is healthy
      # Readiness probe: check if container is ready for traffic
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5
        timeoutSeconds: 1
        failureThreshold: 3
        successThreshold: 1
```
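The failureThreshold setting is effectively a small state machine: a liveness probe only triggers a restart after that many consecutive failures, so a single transient blip does nothing. A sketch of that logic (illustrative; a real kubelet also applies initialDelaySeconds and timeouts):

```python
def probe_decisions(results, failure_threshold=3):
    """Return the check index at which the container would be restarted,
    or None. A restart fires only after failure_threshold *consecutive*
    probe failures; any success resets the counter."""
    consecutive = 0
    for i, ok in enumerate(results):
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= failure_threshold:
            return i
    return None

# Two failures, then recovery: counter resets, no restart
print(probe_decisions([True, False, False, True, True]))   # None
# Three consecutive failures: restart triggers at index 4
print(probe_decisions([True, True, False, False, False]))  # 4
```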
Frequently Asked Questions (FAQ)
Q: What's the difference between Docker and Kubernetes?
A: Docker packages applications into containers and runs them on a single machine; Kubernetes orchestrates those containers across many machines, adding scaling, self-healing, and load balancing.
Q: How do I choose between Kubernetes and serverless?
A: Choose Kubernetes when:
- You need more control over the runtime environment
- You have long-running workloads
- You need complex networking or storage
- You want to use existing Docker containers
Choose serverless when:
- You have unpredictable traffic patterns
- You want to pay only for actual usage
- You don't want to manage infrastructure
- You have short-running tasks
Q: How do I secure my Kubernetes cluster?
A: Implement:
- Role-based access control (RBAC)
- Network policies
- Secrets management
- Image security scanning
- Pod security policies
- Regular security audits
Q: What's the learning curve for Kubernetes?
A: Kubernetes has a steep learning curve, but you can start with basic concepts and gradually learn more advanced topics. Most developers become productive with basic operations in 2-4 weeks of regular use.
Conclusion
Mastering Kubernetes is more than learning a new tool; it's about understanding how to orchestrate containerized applications at scale. With that understanding, you gain the ability to:
- Deploy applications reliably and efficiently
- Scale applications automatically based on demand
- Achieve high availability with self-healing
- Implement continuous delivery with rolling updates
- Monitor and debug distributed systems
The container orchestration revolution is here, and Kubernetes is leading it. Start small, experiment often, and gradually build your expertise.
Happy container orchestrating!