Study Guide: Kubernetes and Cloud Native Associate (KCNA)
My KCNA certification is up for renewal, and as I’m preparing to retake the exam, it dawned on me that I wrote up a study guide for it but never actually published it. Doh! 🫣
So here it is 🤗
The KCNA is a great entry point for anyone looking to validate their foundational knowledge of Kubernetes and the cloud native ecosystem. I thought it would be helpful to document my study notes and share them with others who are also preparing for the exam.
This article is a bit long, but it covers everything you need to know to pass the KCNA exam. I’ll be updating it as I find additional content to share, so make sure to bookmark it and check back often.
Exam Overview
The first thing you should do is read through the KCNA Exam Domains & Competencies to get a better understanding of what topics are covered, and register for the exam if you haven’t already.
Here are some key details about the exam:
- Format: Multiple choice, online, proctored
- Duration: 90 minutes
- Validity: 2 years
- Retakes: One free retake included
- Experience Level: Beginner
The exam covers five main domains:
| Domain | Weight |
|---|---|
| Kubernetes Fundamentals | 46% |
| Container Orchestration | 22% |
| Cloud Native Architecture | 16% |
| Cloud Native Observability | 8% |
| Cloud Native Application Delivery | 8% |
Unlike the other Kubernetes certification exams (CKA, CKAD, CKS), the KCNA is multiple choice and does not require you to complete any hands-on tasks. Still, to build a real understanding of the material, I highly recommend spinning up a local Kubernetes cluster and practicing the concepts covered in the exam.
So let’s get that out of the way.
Kubernetes Cluster Setup
For most of the practice exercises, you can use Minikube. Minikube is a tool that you can use to create a single-node Kubernetes cluster on your local machine.
Follow the installation instructions appropriate for your operating system to install Minikube.
Once you have Minikube installed, you can start a local Kubernetes cluster by running the following command:
```bash
minikube start
```
Enable the metrics server by running the following command:
```bash
minikube addons enable metrics-server
```
To interact with your cluster you will need to have kubectl installed. You can install kubectl by following the installation instructions appropriate for your operating system.
Verify that kubectl is installed and configured correctly by running the following command:
```bash
kubectl cluster-info
```
Let’s run a quick sample application to test that our cluster is up and running. Create a simple deployment by running the following command:
```bash
kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080
```
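If you want to actually hit the app, you can optionally expose it as a Service and have Minikube open a tunnel to it (this follows the same flow as the hello-minikube tutorial):

```bash
# Expose the deployment and open a tunnel to it
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
minikube service hello-node
```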
Using the minikube dashboard, you can view the deployment you just created. Run the following command to open the dashboard:
```bash
minikube dashboard
```
Hit Ctrl+C to exit the dashboard.
Optionally, you can install Headlamp to manage your Kubernetes cluster via a web UI or k9s for a terminal UI.
Kubernetes Fundamentals - 46%
This is the largest domain of the exam, covering nearly half of all questions. You’ll need a solid understanding of Kubernetes resources, architecture, the API, containers, and scheduling. Let’s dive into each of these topics.
Kubernetes Resources
Kubernetes resources are the objects that you create to run your applications. Understanding the different types of resources and when to use them is essential. You don’t need to memorize every field in every resource, but you should know what each resource does and when you’d reach for it.
Here’s a quick rundown of the workload resources you’ll encounter:
Workload Resources:
| Resource | Description |
|---|---|
| Pod | The smallest deployable unit; one or more containers sharing network and storage |
| ReplicaSet | Ensures a specified number of Pod replicas are running |
| Deployment | Manages ReplicaSets and provides declarative updates for Pods |
| StatefulSet | Manages stateful applications with stable network identities and persistent storage |
| DaemonSet | Ensures a Pod runs on all (or selected) nodes |
| Job | Creates Pods that run to completion |
| CronJob | Creates Jobs on a schedule |
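To make these concrete, here’s a minimal Deployment manifest. The name and image are placeholders, but the shape, a Deployment selecting Pods by label, is what you should recognize for the exam:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical name
spec:
  replicas: 3                # the ReplicaSet it creates keeps 3 Pods running
  selector:
    matchLabels:
      app: hello-web
  template:                  # Pod template stamped out by the ReplicaSet
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.27
        ports:
        - containerPort: 80
```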
For networking, you’ll want to understand how traffic flows in and out of your cluster:
Service & Networking Resources:
| Resource | Description |
|---|---|
| Service | Exposes Pods to network traffic (ClusterIP, NodePort, LoadBalancer) |
| Ingress | Manages external HTTP/HTTPS access to Services |
| NetworkPolicy | Controls traffic flow between Pods |
And finally, configuration and storage resources are how you manage application settings and persistent data:
Configuration & Storage Resources:
| Resource | Description |
|---|---|
| ConfigMap | Stores non-sensitive configuration data as key-value pairs |
| Secret | Stores sensitive data (passwords, tokens) as base64-encoded values |
| PersistentVolume (PV) | Cluster-level storage resource |
| PersistentVolumeClaim (PVC) | Request for storage by a user |
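And here’s a sketch of how configuration flows into a workload: a ConfigMap holding a key-value pair, injected into a Pod as environment variables (names and image are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "echo LOG_LEVEL=$LOG_LEVEL && sleep 3600"]
    envFrom:
    - configMapRef:
        name: app-config    # every key in the ConfigMap becomes an env var
```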
Additional Reading:
- https://kubernetes.io/docs/concepts/workloads/
- https://kubernetes.io/docs/concepts/services-networking/
- https://kubernetes.io/docs/concepts/configuration/
Practice Exercises:
- https://kubernetes.io/docs/tutorials/kubernetes-basics/
- https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/
Kubernetes Architecture
Kubernetes clusters are split into a control plane and worker nodes. If you’re using a managed Kubernetes service like AKS, EKS, or GKE, the cloud provider manages the control plane for you. But it’s still important to understand what’s happening under the hood.
The control plane is the brain of your cluster. Here are the key components:
Control Plane Components:
| Component | Description |
|---|---|
| kube-apiserver | Front-end for the Kubernetes control plane; all communication goes through it |
| etcd | Distributed key-value store that holds all cluster state and configuration |
| kube-scheduler | Watches for newly created Pods and assigns them to nodes |
| kube-controller-manager | Runs controller processes (Node, Replication, Endpoints, ServiceAccount controllers) |
| cloud-controller-manager | Integrates with cloud provider APIs (optional, for cloud deployments) |
On the worker nodes, you have components that actually run your workloads:
Node Components:
| Component | Description |
|---|---|
| kubelet | Agent that runs on each node; communicates with kube-apiserver and ensures containers are running in Pods |
| kube-proxy | Maintains network rules for Pod communication |
| Container Runtime | Software that runs containers (containerd, CRI-O) |
Understanding how these components interact is crucial. For example:
- You submit a Deployment via `kubectl` to the API server
- The API server validates and stores it in etcd
- The controller manager creates a ReplicaSet which creates Pod objects
- The scheduler assigns Pods to nodes
- The kubelet on each node pulls images and starts containers
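You can watch this chain of events on your Minikube cluster; a quick sketch (the deployment name is arbitrary):

```bash
# Create a Deployment and list the objects the control plane created from it
kubectl create deployment web --image=nginx
kubectl get deployments,replicasets,pods

# The Pod's events show the scheduler assigning a node and the kubelet pulling the image
kubectl describe pod -l app=web
```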
Additional Reading:
- https://kubernetes.io/docs/concepts/overview/components/
- https://kubernetes.io/docs/concepts/architecture/
Kubernetes API
The Kubernetes API is the foundation of the declarative configuration model. Everything in Kubernetes is an API object.
Key Concepts:
- API Groups: Resources are organized into groups (core, apps, batch, networking.k8s.io)
- API Versions: alpha → beta → stable (v1)
- Resource Versioning: Objects have a `resourceVersion` for optimistic concurrency
- Declarative vs Imperative: Kubernetes prefers declarative (YAML manifests) over imperative commands
Common kubectl Commands:
```bash
# Get API resources
kubectl api-resources

# Get API versions
kubectl api-versions

# Explain a resource
kubectl explain deployment
kubectl explain deployment.spec.replicas

# Apply a manifest
kubectl apply -f deployment.yaml

# Get resources
kubectl get pods
kubectl get pods -o wide
kubectl get pods -o yaml
```
Additional Reading:
- https://kubernetes.io/docs/concepts/overview/kubernetes-api/
- https://kubernetes.io/docs/reference/using-api/
Practice Exercises:
- https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/
- https://kubernetes.io/docs/reference/kubectl/quick-reference/
Containers
Containers are the foundation of Kubernetes workloads. Understanding how containers work is essential.
Key Concepts:
- Container Images: Immutable templates that include application code and dependencies
- Container Registries: Repositories for storing and distributing images (Docker Hub, gcr.io, quay.io)
- Image Tags vs Digests: Tags are mutable; digests (sha256) are immutable
- Multi-container Pods: Sidecar, init, and ambassador patterns
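Of those patterns, init containers are the one you’re most likely to be quizzed on. A minimal sketch (image and command are illustrative): the init container must exit successfully before the app container starts.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait-for-dns        # runs to completion before "app" starts
    image: busybox:1.36
    command: ["sh", "-c", "nslookup kubernetes.default.svc.cluster.local"]
  containers:
  - name: app
    image: nginx:1.27
```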
Container Runtimes:
Kubernetes uses the Container Runtime Interface (CRI) to interact with container runtimes. In practice, you’ll mostly see containerd these days since it’s the default for most managed Kubernetes services:
| Runtime | Description |
|---|---|
| containerd | Industry-standard container runtime, widely used |
| CRI-O | Lightweight runtime designed specifically for Kubernetes |
Note: Docker Engine support via dockershim was deprecated in Kubernetes v1.20 and removed in v1.24. Most clusters now use containerd.
Additional Reading:
- https://kubernetes.io/docs/concepts/containers/
- https://kubernetes.io/docs/concepts/architecture/cri/
- https://opencontainers.org/
Practice Exercises:
- https://kubernetes.io/docs/tutorials/hello-minikube/
- https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
Scheduling
The Kubernetes scheduler is responsible for placing Pods on nodes. Understanding how scheduling decisions are made is important.
Scheduling Factors:
- Resource Requests and Limits: CPU and memory requirements
- Node Selectors: Simple key-value matching for node selection
- Affinity and Anti-Affinity: More expressive rules for Pod placement
- Taints and Tolerations: Taints keep Pods off certain nodes; tolerations let specific Pods schedule onto tainted nodes
- Pod Topology Spread: Distribute Pods across failure domains
Example: Node Selector
```yaml
spec:
  nodeSelector:
    disktype: ssd
```
Example: Taint and Toleration
```bash
# Taint a node
kubectl taint nodes node1 key=value:NoSchedule
```

```yaml
# Pod toleration
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
```
Additional Reading:
- https://kubernetes.io/docs/concepts/scheduling-eviction/
- https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
Practice Exercises:
- https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/
Container Orchestration - 22%
Container orchestration is about automating the deployment, scaling, and management of containerized applications. This domain covers the fundamentals of orchestration, runtimes, security, networking, service mesh, and storage. While Kubernetes is the focus of this exam, it’s helpful to understand how it fits into the broader container orchestration landscape.
Container Orchestration Fundamentals
Container orchestration solves the challenges of running containers at scale:
- Scheduling: Automatically placing containers on available resources
- Scaling: Adding or removing container instances based on demand
- Service Discovery: Finding and connecting to other services
- Load Balancing: Distributing traffic across container instances
- Self-healing: Restarting failed containers and replacing unhealthy instances
- Rolling Updates: Updating applications without downtime
While Kubernetes dominates the container orchestration space today, it’s worth knowing about the alternatives. You might encounter these in legacy environments or specific use cases:
Popular Container Orchestrators:
| Platform | Description |
|---|---|
| Kubernetes | Industry-standard, CNCF graduated project |
| Docker Swarm | Docker’s native orchestration (simpler, less feature-rich) |
| Apache Mesos | Data center resource manager (can run Kubernetes) |
| Nomad | HashiCorp’s workload orchestrator |
Runtime
Container runtimes are responsible for running containers. Kubernetes uses the Container Runtime Interface (CRI) to support multiple runtimes.
Runtime Hierarchy:
- High-level runtime: Manages container lifecycle (containerd, CRI-O)
- Low-level runtime: Actually runs the container (runc, kata-containers, gVisor)
For workloads that need extra isolation (think multi-tenant environments or running untrusted code), sandboxed runtimes add an additional security layer:
Sandboxed Runtimes:
| Runtime | Description |
|---|---|
| gVisor | User-space kernel for container isolation |
| Kata Containers | Lightweight VMs for containers |
RuntimeClass allows you to select different runtimes for different workloads:
```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
```
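A Pod then opts into that runtime by name. A sketch, assuming the gVisor runtime (runsc) is actually installed on the node, which it isn’t by default on Minikube:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor    # must match the RuntimeClass metadata.name above
  containers:
  - name: app
    image: nginx:1.27
```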
Additional Reading:
- https://kubernetes.io/docs/concepts/containers/runtime-class/
- https://github.com/opencontainers/runtime-spec
Security
Security in container orchestration spans multiple layers. This is a topic that deserves its own deep dive (check out my KCSA study guide if you want to go further), but here are the fundamentals you need to know for the KCNA.
Pod Security Standards define three profiles that range from permissive to restrictive:
Pod Security Standards:
| Profile | Description |
|---|---|
| Privileged | Unrestricted, no isolation |
| Baseline | Minimally restrictive, prevents known privilege escalations |
| Restricted | Heavily restricted, follows Pod hardening best practices |
Key Security Concepts:
- RBAC (Role-Based Access Control): Controls who can do what in the cluster
- Network Policies: Control traffic between Pods
- Security Contexts: Configure Pod and container security settings
- Secrets Management: Secure storage for sensitive data
- Image Security: Scanning, signing, and using trusted registries
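As a taste of what security contexts look like in practice, here’s a sketch of a hardened Pod; my-app:1.0 is a placeholder for an image built to run as a non-root user:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true              # Pod-level: refuse containers running as root
    runAsUser: 1000
  containers:
  - name: app
    image: my-app:1.0               # hypothetical non-root image
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]               # drop all Linux capabilities
```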
Additional Reading:
- https://kubernetes.io/docs/concepts/security/
- https://kubernetes.io/docs/concepts/security/pod-security-standards/
Practice Exercises:
- https://kubernetes.io/docs/tutorials/security/
- https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/
Networking
Kubernetes networking is built on a few fundamental principles:
- Every Pod gets its own IP address
- Pods can communicate with all other Pods without NAT
- Nodes can communicate with all Pods without NAT
- The IP a Pod sees itself as is the same IP that others see it as
These principles are implemented by various components working together:
Networking Components:
| Component | Description |
|---|---|
| CNI Plugins | Implement the Container Network Interface (Calico, Flannel, Cilium, Weave) |
| kube-proxy | Manages iptables/IPVS rules for Service routing |
| CoreDNS | Provides DNS-based service discovery |
Services are how you expose your applications. The type you choose depends on where the traffic is coming from:
Service Types:
| Type | Description |
|---|---|
| ClusterIP | Internal cluster IP (default) |
| NodePort | Exposes service on each node’s IP at a static port |
| LoadBalancer | Provisions external load balancer (cloud providers) |
| ExternalName | Maps service to external DNS name |
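Here’s what a Service manifest looks like; it selects Pods by label the same way a Deployment does (the names assume the hello-web Deployment sketched earlier):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  type: ClusterIP          # swap for NodePort or LoadBalancer to expose externally
  selector:
    app: hello-web         # traffic is routed to Pods carrying this label
  ports:
  - port: 80               # the Service's port
    targetPort: 80         # the container port to forward to
```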
Additional Reading:
- https://kubernetes.io/docs/concepts/services-networking/
- https://kubernetes.io/docs/concepts/cluster-administration/networking/
Practice Exercises:
- https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
- https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/
Service Mesh
A service mesh is an infrastructure layer that handles service-to-service communication. Honestly, you probably don’t need a service mesh for most applications, but they become valuable when you’re running a lot of microservices and need consistent observability, security, and traffic management across all of them.
Here’s what a service mesh typically provides:
- Mutual TLS (mTLS): Encrypted communication between services
- Traffic Management: Load balancing, routing, retries, timeouts
- Observability: Metrics, logs, and distributed tracing
- Access Control: Fine-grained authorization policies
There are several service mesh options out there. Istio is feature-rich but can be complex to operate. Linkerd is lighter weight and easier to get started with. I’ve written about my service mesh considerations if you want to go deeper on this topic.
Popular Service Meshes:
| Mesh | Description |
|---|---|
| Istio | Feature-rich, CNCF project |
| Linkerd | Lightweight, CNCF graduated project |
| Consul Connect | HashiCorp’s service mesh |
| Cilium | eBPF-based networking and service mesh |
Architecture: Most service meshes use a sidecar proxy pattern where a proxy (like Envoy) runs alongside each application container.
Additional Reading:
- https://kubernetes.io/docs/concepts/services-networking/service-mesh/
- https://www.cncf.io/blog/2020/03/04/what-is-a-service-mesh/
- https://paulyu.dev/article/service-mesh-considerations/
Storage
Kubernetes provides abstractions for managing storage. This is one of those areas where the abstraction really shines—you can write your application to use a PersistentVolumeClaim and the underlying storage can be anything from local disk to cloud block storage to a distributed file system.
Here are the key concepts you need to understand:
Key Concepts:
| Concept | Description |
|---|---|
| Volume | Directory accessible to containers in a Pod |
| PersistentVolume (PV) | Cluster-level storage resource provisioned by admin |
| PersistentVolumeClaim (PVC) | Request for storage by a user |
| StorageClass | Describes “classes” of storage with provisioners |
| CSI (Container Storage Interface) | Standard for exposing storage systems to containers |
Access modes determine how a volume can be mounted. This is important to understand because not all storage backends support all modes:
Access Modes:
| Mode | Description |
|---|---|
| ReadWriteOnce (RWO) | Mounted as read-write by a single node |
| ReadOnlyMany (ROX) | Mounted as read-only by many nodes |
| ReadWriteMany (RWX) | Mounted as read-write by many nodes |
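Putting it together, here’s a sketch of a PVC requesting 1Gi of single-node read-write storage. The "standard" StorageClass is Minikube’s default; adjust for your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce              # must be supported by the storage backend
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard   # Minikube's default StorageClass
```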
Practice Exercises:
- https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
- https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
Cloud Native Architecture - 16%
Cloud native architecture is about designing applications that fully exploit the advantages of the cloud computing model. This domain covers autoscaling, serverless, the CNCF community, roles and personas, and open standards. This is where you’ll learn about the broader ecosystem beyond just Kubernetes itself.
Autoscaling
Kubernetes provides multiple ways to scale workloads automatically:
Horizontal Pod Autoscaler (HPA): Scales the number of Pod replicas based on CPU, memory, or custom metrics.
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```
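The imperative equivalent, if you just want to try it against a Deployment on your cluster:

```bash
kubectl autoscale deployment my-app --cpu-percent=50 --min=2 --max=10
```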
Vertical Pod Autoscaler (VPA): Adjusts resource requests and limits for Pods.
Cluster Autoscaler: Adds or removes nodes based on pending Pods and resource utilization.
KEDA (Kubernetes Event-Driven Autoscaling): Scales based on event sources like message queues, databases, and custom metrics.
Additional Reading:
- https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
- https://github.com/kubernetes/autoscaler
- https://keda.sh/
Serverless
Serverless computing abstracts infrastructure management, allowing developers to focus on code. In the Kubernetes ecosystem, serverless means running workloads without managing the underlying infrastructure—your code scales to zero when not in use and spins up on demand.
Knative is the most prominent serverless platform for Kubernetes, but there are others worth knowing about:
Serverless Kubernetes Platforms:
| Platform | Description |
|---|---|
| Knative | Kubernetes-native serverless platform (CNCF incubating) |
| OpenFaaS | Functions as a Service on Kubernetes |
| Kubeless | Kubernetes-native serverless framework |
| Fission | Fast serverless functions for Kubernetes |
Knative Components:
- Knative Serving: Request-driven compute that scales to zero
- Knative Eventing: Event-driven architecture with loose coupling
Additional Reading:
- https://knative.dev/docs/
- https://www.cncf.io/blog/2022/03/02/knative-accepted-as-a-cncf-incubating-project/
Community and Governance
The CNCF (Cloud Native Computing Foundation) is the home of Kubernetes and many other cloud native projects. Understanding how the CNCF works is part of the exam, so take some time to explore the CNCF landscape—it’s massive!
CNCF projects go through maturity levels as they prove themselves:
CNCF Project Maturity Levels:
| Level | Description |
|---|---|
| Sandbox | Early stage, experimental projects |
| Incubating | Growing adoption, production usage |
| Graduated | Mature, widely adopted projects (e.g., Kubernetes, Prometheus, Envoy) |
Kubernetes Governance:
- Special Interest Groups (SIGs): Focus on specific areas (SIG-Network, SIG-Storage, SIG-Security)
- Working Groups: Cross-SIG initiatives
- Kubernetes Enhancement Proposals (KEPs): Process for proposing new features
- Release Cadence: 3 releases per year (~4 months between releases)
Roles and Personas
Understanding the different roles in a cloud native organization helps you understand who uses what and why. In reality, these roles often overlap—especially in smaller teams where you might wear multiple hats:
| Role | Responsibilities |
|---|---|
| Application Developer | Writes code, builds containers, defines Kubernetes manifests |
| Platform Engineer | Builds and maintains the platform, manages clusters |
| DevOps Engineer | Bridges development and operations, CI/CD pipelines |
| SRE (Site Reliability Engineer) | Ensures reliability, manages incidents, capacity planning |
| Security Engineer | Implements security policies, audits, compliance |
| Cluster Administrator | Manages Kubernetes clusters, upgrades, monitoring |
Open Standards
Cloud native computing relies on open standards for interoperability. This is actually one of the things I love about the cloud native ecosystem—you’re not locked into a single vendor’s implementation. These standards ensure that tools and platforms can work together:
| Standard | Description |
|---|---|
| OCI (Open Container Initiative) | Standards for container formats and runtimes |
| CRI (Container Runtime Interface) | Kubernetes interface for container runtimes |
| CNI (Container Network Interface) | Standard for network plugins |
| CSI (Container Storage Interface) | Standard for storage plugins |
| SMI (Service Mesh Interface) | Standard for service mesh implementations |
OCI Specifications:
- Image Spec: Defines container image format
- Runtime Spec: Defines container runtime behavior
- Distribution Spec: Defines how images are distributed
Additional Reading:
- https://opencontainers.org/
- https://github.com/container-storage-interface/spec
- https://github.com/containernetworking/cni
Cloud Native Observability - 8%
Observability is about understanding the internal state of your systems by examining their outputs. This domain covers telemetry, Prometheus, and cost management. Even though it’s only 8% of the exam, observability is crucial for operating cloud native applications in production.
Telemetry & Observability
The three pillars of observability are:
| Pillar | Description |
|---|---|
| Metrics | Numerical measurements over time (CPU usage, request count) |
| Logs | Discrete events with timestamps and context |
| Traces | Request paths through distributed systems |
OpenTelemetry: OpenTelemetry is the CNCF standard for collecting telemetry data. It provides:
- Unified APIs for metrics, logs, and traces
- Language-specific SDKs
- Collector for processing and exporting data
The CNCF ecosystem has a rich set of observability tools. You don’t need to know all of them in depth, but you should be familiar with what each one does:
Observability Tools in the CNCF Ecosystem:
| Tool | Purpose |
|---|---|
| Prometheus | Metrics collection and alerting |
| Grafana | Visualization and dashboards |
| Jaeger | Distributed tracing |
| Fluentd/Fluent Bit | Log collection and forwarding |
| OpenTelemetry | Unified telemetry collection |
Prometheus
Prometheus is the de facto standard for monitoring in the cloud native ecosystem. It’s a CNCF graduated project.
Key Concepts:
- Pull-based model: Prometheus scrapes metrics from targets
- Time-series database: Stores metrics with timestamps
- PromQL: Query language for metrics
- Alertmanager: Handles alerts from Prometheus
- Exporters: Expose metrics from third-party systems
Prometheus supports different metric types, and understanding the difference matters when you’re writing PromQL queries:
Metric Types:
| Type | Description |
|---|---|
| Counter | Cumulative value that only increases |
| Gauge | Value that can go up or down |
| Histogram | Samples observations and counts them in buckets |
| Summary | Similar to histogram but calculates quantiles |
Example PromQL Queries:
```promql
# CPU usage rate over 5 minutes
rate(container_cpu_usage_seconds_total[5m])

# Memory usage
container_memory_usage_bytes

# Request rate by status code
sum(rate(http_requests_total[5m])) by (status_code)
```
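Histogram metrics pair with the histogram_quantile() function. Assuming an app exposes a typical http_request_duration_seconds histogram, a p95 latency query looks like this:

```promql
# 95th percentile request latency over the last 5 minutes
histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))
```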
Cost Management
Managing cloud native infrastructure costs is essential for sustainable operations:
Cost Optimization Strategies:
- Right-sizing: Match resource requests/limits to actual usage
- Autoscaling: Scale down during low demand
- Spot/Preemptible Instances: Use cheaper, interruptible VMs for fault-tolerant workloads
- Resource Quotas: Limit resource consumption per namespace
- Limit Ranges: Set default and maximum resource limits
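Limit ranges in particular are worth seeing once. Here’s a sketch that gives every container in a namespace default requests and limits if it doesn’t set its own:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: development
spec:
  limits:
  - type: Container
    defaultRequest:        # applied when a container specifies no request
      cpu: 100m
      memory: 128Mi
    default:               # applied when a container specifies no limit
      cpu: 500m
      memory: 256Mi
```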
There are also tools specifically designed to help you understand and optimize your Kubernetes costs:
Cost Visibility Tools:
| Tool | Description |
|---|---|
| Kubecost | Kubernetes cost monitoring and optimization |
| OpenCost | CNCF sandbox project for cost monitoring |
| Cloud provider tools | AWS Cost Explorer, Azure Cost Management, GCP Cost Management |
Resource Quotas Example:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: development
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
```
Additional Reading:
- https://www.kubecost.com/
- https://www.opencost.io/
- https://kubernetes.io/docs/concepts/policy/resource-quotas/
Cloud Native Application Delivery - 8%
Application delivery encompasses the practices and tools for deploying applications reliably and efficiently. This domain covers application delivery fundamentals, GitOps, and CI/CD. This is where the rubber meets the road in terms of getting your applications into production.
Application Delivery Fundamentals
Modern application delivery focuses on:
- Declarative Configuration: Define desired state, not imperative steps
- Version Control: All configuration in Git
- Automation: Reduce manual steps and human error
- Reproducibility: Same process produces same results
- Auditability: Track who changed what and when
Kubernetes supports several deployment strategies out of the box. Rolling updates are the default, but you might reach for blue-green or canary deployments when you need more control over how traffic shifts to new versions:
Deployment Strategies:
| Strategy | Description |
|---|---|
| Rolling Update | Gradually replace old Pods with new ones (Kubernetes default) |
| Blue-Green | Run two environments, switch traffic between them |
| Canary | Route a small percentage of traffic to the new version |
| A/B Testing | Route based on user attributes for experimentation |
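For rolling updates specifically, you can tune how aggressively Kubernetes swaps Pods via the Deployment’s strategy block; a zero-downtime-leaning sketch:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra Pod above the desired count
      maxUnavailable: 0      # never dip below the desired count during a rollout
```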
For managing Kubernetes manifests, you’ll likely use one of these tools. Personally, I tend to reach for Kustomize for simpler projects since it’s built into kubectl, and Helm when I need to package and distribute applications or deal with more complex templating:
Configuration Management Tools:
| Tool | Description |
|---|---|
| Helm | Package manager for Kubernetes (charts) |
| Kustomize | Template-free YAML customization (built into kubectl) |
| Jsonnet | Data templating language |
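As a quick Kustomize illustration, here’s a hypothetical overlay’s kustomization.yaml that points at a shared base and retags the image per environment; you’d apply it with kubectl apply -k:

```yaml
# overlays/staging/kustomization.yaml (hypothetical layout)
resources:
- ../../base               # directory holding the shared manifests
namespace: staging         # place all resources in the staging namespace
images:
- name: my-app             # hypothetical image name to retag
  newTag: v1.2.3
```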
Additional Reading:
- https://helm.sh/docs/
- https://kustomize.io/
- https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
Practice Exercises:
- https://helm.sh/docs/intro/quickstart/
- https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/
GitOps
GitOps is a paradigm where Git is the single source of truth for declarative infrastructure and applications.
GitOps Principles:
- Declarative: Entire system described declaratively
- Versioned and Immutable: Desired state stored in Git
- Pulled Automatically: Agents pull desired state from Git
- Continuously Reconciled: Agents ensure actual state matches desired state
Argo CD and Flux are the two big players in this space. Both are CNCF graduated projects and both do the job well. I’ve used both and they each have their strengths:
GitOps Tools:
| Tool | Description |
|---|---|
| Argo CD | Declarative GitOps CD for Kubernetes (CNCF graduated) |
| Flux | GitOps toolkit for Kubernetes (CNCF graduated) |
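To make the pull-based model concrete, here’s a sketch of an Argo CD Application pointing at a hypothetical Git repo; Argo CD continuously reconciles the cluster against that path:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config   # hypothetical config repo
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc              # deploy into this same cluster
    namespace: production
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```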
Benefits of GitOps:
- Auditability: Git history shows all changes
- Rollback: Revert to previous Git commit
- Consistency: Same process for all environments
- Security: No direct cluster access needed for deployments
CI/CD
Continuous Integration and Continuous Delivery/Deployment automate the software delivery process.
CI (Continuous Integration):
- Developers frequently merge code changes
- Automated builds and tests run on every change
- Fast feedback on code quality
CD (Continuous Delivery/Deployment):
- Continuous Delivery: Automated release process, manual deployment approval
- Continuous Deployment: Fully automated deployment to production
There are a lot of CI/CD tools out there. The ones you’ll encounter most often in cloud native environments are:
CI/CD Tools:
| Tool | Description |
|---|---|
| Jenkins | Open-source automation server |
| GitHub Actions | CI/CD built into GitHub |
| GitLab CI | CI/CD built into GitLab |
| Tekton | Kubernetes-native CI/CD (CNCF project) |
| Argo Workflows | Kubernetes-native workflow engine |
Kubernetes-Native CI/CD: Tekton provides Kubernetes Custom Resources for defining pipelines:
- Task: A collection of steps
- Pipeline: A series of Tasks
- PipelineRun: An execution of a Pipeline
- Trigger: Event-based Pipeline execution
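A minimal Tekton Task sketch, to give you a feel for the resource model (the image and test command are illustrative):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: run-tests
spec:
  steps:
  - name: test
    image: golang:1.22      # hypothetical build image
    script: |
      go test ./...
```

You’d execute it by creating a TaskRun that references the Task by name.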
Additional Reading:
- https://tekton.dev/
- https://argoproj.github.io/argo-workflows/
- https://www.cncf.io/blog/2021/03/09/cncf-ci-cd-landscape/
Conclusion
That was a lot of information to cover, but I hope you found it helpful as you prepare for the Kubernetes and Cloud Native Associate (KCNA) exam. Remember, the KCNA is a foundational exam that tests your understanding of concepts rather than hands-on skills. Focus on understanding the “why” behind each topic, not just the “how.”
Good luck with your studies, and reach out if you have any questions about the exam experience or any of the topics covered in this study guide.
To Kubestronaut and beyond! 🚀
Resources
In addition to the official KCNA Exam Domains & Competencies and the content above, here are some additional resources you may find helpful as you prepare for the KCNA exam:
Official Resources:
- KCNA Curriculum PDF
- FREE Introduction to Kubernetes Course (LFS158)
- Kubernetes and Cloud Native Essentials (LFS250)
Next Steps:
After passing the KCNA, consider pursuing these certifications:
- Kubernetes and Cloud Native Security Associate (KCSA) - Check out my study guide for this one!
- Certified Kubernetes Application Developer (CKAD)
- Certified Kubernetes Administrator (CKA)