PAUL'S BLOG

Learn. Build. Share. Repeat.

Study Guide: Kubernetes and Cloud Native Associate (KCNA)

2025-12-19 19 min read Certification

My KCNA certification is up for renewal, and as I’m preparing to retake the exam, it dawned on me that I wrote up a study guide for it but never actually published it. Doh! 🫣

So here it is 🤗

The KCNA is a great entry point for anyone looking to validate their foundational knowledge of Kubernetes and the cloud native ecosystem. I thought it would be helpful to document my study notes and share them with others who are also preparing for the exam.

This article is a bit long, but it covers everything you need to know to pass the KCNA exam. I’ll be updating this article as I find additional content to share, so make sure to bookmark it and check back often.

Exam Overview

The first thing you should do is read through the KCNA Exam Domains & Competencies to understand which topics the exam covers, and register for the exam if you haven’t already.

Here are some key details about the exam:

  • Format: Multiple choice, online, proctored
  • Duration: 90 minutes
  • Validity: 2 years
  • Retakes: One free retake included
  • Experience Level: Beginner

The exam covers five main domains:

| Domain | Weight |
|--------|--------|
| Kubernetes Fundamentals | 46% |
| Container Orchestration | 22% |
| Cloud Native Architecture | 16% |
| Cloud Native Observability | 8% |
| Cloud Native Application Delivery | 8% |

Unlike other Kubernetes certification exams (CKA, CKAD, CKS), the KCNA exam is multiple choice and does not require you to complete any hands-on tasks. But to get a better understanding of the concepts, I highly recommend you spin up a local Kubernetes cluster and practice the concepts covered in the exam.

So let’s get that out of the way.


Kubernetes Cluster Setup

For most of the practice exercises, you can use Minikube, a tool for running a single-node Kubernetes cluster on your local machine.

Follow the installation instructions appropriate for your operating system to install Minikube.

Once you have Minikube installed, you can start a local Kubernetes cluster by running the following command:

minikube start

Enable the metrics server by running the following command:

minikube addons enable metrics-server

To interact with your cluster you will need to have kubectl installed. You can install kubectl by following the installation instructions appropriate for your operating system.

Verify that kubectl is installed and configured correctly by running the following command:

kubectl cluster-info

Let’s run a quick sample application to test that our cluster is up and running. Create a simple deployment by running the following command:

kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080

Using the minikube dashboard, you can view the deployment you just created. Run the following command to open the dashboard:

minikube dashboard

Hit Ctrl+C to exit the dashboard.

Optionally, you can install Headlamp to manage your Kubernetes cluster via a web UI or k9s for a terminal UI.


Kubernetes Fundamentals - 46%

This is the largest domain of the exam, covering nearly half of all questions. You’ll need a solid understanding of Kubernetes resources, architecture, the API, containers, and scheduling. Let’s dive into each of these topics.

Kubernetes Resources

Kubernetes resources are the objects that you create to run your applications. Understanding the different types of resources and when to use them is essential. You don’t need to memorize every field in every resource, but you should know what each resource does and when you’d reach for it.

Here’s a quick rundown of the workload resources you’ll encounter:

Workload Resources:

| Resource | Description |
|----------|-------------|
| Pod | The smallest deployable unit; one or more containers sharing network and storage |
| ReplicaSet | Ensures a specified number of Pod replicas are running |
| Deployment | Manages ReplicaSets and provides declarative updates for Pods |
| StatefulSet | Manages stateful applications with stable network identities and persistent storage |
| DaemonSet | Ensures a Pod runs on all (or selected) nodes |
| Job | Creates Pods that run to completion |
| CronJob | Creates Jobs on a schedule |
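
To make these concrete, here’s a minimal Deployment manifest that keeps three replicas of an nginx Pod running (the name and labels are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web               # illustrative name
spec:
  replicas: 3             # the Deployment's ReplicaSet keeps 3 Pods running
  selector:
    matchLabels:
      app: web
  template:               # the Pod template
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
        ports:
        - containerPort: 80
```

Apply it with `kubectl apply -f deployment.yaml` and watch the Pods come up with `kubectl get pods`.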

For networking, you’ll want to understand how traffic flows in and out of your cluster:

Service & Networking Resources:

| Resource | Description |
|----------|-------------|
| Service | Exposes Pods to network traffic (ClusterIP, NodePort, LoadBalancer) |
| Ingress | Manages external HTTP/HTTPS access to Services |
| NetworkPolicy | Controls traffic flow between Pods |
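
As a quick sketch, this Service manifest (selector and ports are illustrative) puts a stable virtual IP in front of any Pods labeled `app: web`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP         # default type; reachable only inside the cluster
  selector:
    app: web              # traffic is routed to Pods carrying this label
  ports:
  - port: 80              # port the Service listens on
    targetPort: 80        # port the container listens on
```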

And finally, configuration and storage resources are how you manage application settings and persistent data:

Configuration & Storage Resources:

| Resource | Description |
|----------|-------------|
| ConfigMap | Stores non-sensitive configuration data as key-value pairs |
| Secret | Stores sensitive data (passwords, tokens) as base64-encoded values |
| PersistentVolume (PV) | Cluster-level storage resource |
| PersistentVolumeClaim (PVC) | Request for storage by a user |
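
Here’s a small example of the ConfigMap pattern, with made-up names and values: the key-value pairs land in the Pod as environment variables.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "echo $LOG_LEVEL && sleep 3600"]
    envFrom:
    - configMapRef:
        name: app-config   # every key becomes an environment variable
```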

Additional Reading:

Practice Exercises:

  1. https://kubernetes.io/docs/tutorials/kubernetes-basics/
  2. https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/

Kubernetes Architecture

Kubernetes follows a master-worker architecture with a control plane and worker nodes. If you’re using a managed Kubernetes service like AKS, EKS, or GKE, the cloud provider manages the control plane for you. But it’s still important to understand what’s happening under the hood.

The control plane is the brain of your cluster. Here are the key components:

Control Plane Components:

| Component | Description |
|-----------|-------------|
| kube-apiserver | Front-end for the Kubernetes control plane; all communication goes through it |
| etcd | Distributed key-value store that holds all cluster state and configuration |
| kube-scheduler | Watches for newly created Pods and assigns them to nodes |
| kube-controller-manager | Runs controller processes (Node, Replication, Endpoints, ServiceAccount controllers) |
| cloud-controller-manager | Integrates with cloud provider APIs (optional, for cloud deployments) |

On the worker nodes, you have components that actually run your workloads:

Node Components:

| Component | Description |
|-----------|-------------|
| kubelet | Agent that runs on each node; communicates with kube-apiserver and ensures containers are running in Pods |
| kube-proxy | Maintains network rules for Pod communication |
| Container Runtime | Software that runs containers (containerd, CRI-O) |

Understanding how these components interact is crucial. For example:

  1. You submit a Deployment via kubectl to the API server
  2. The API server validates and stores it in etcd
  3. The controller manager creates a ReplicaSet which creates Pod objects
  4. The scheduler assigns Pods to nodes
  5. The kubelet on each node pulls images and starts containers

Additional Reading:

Practice Exercises:

  1. https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/

Kubernetes API

The Kubernetes API is the foundation of the declarative configuration model. Everything in Kubernetes is an API object.

Key Concepts:

  • API Groups: Resources are organized into groups (core, apps, batch, networking.k8s.io)
  • API Versions: alpha → beta → stable (v1)
  • Resource Versioning: Objects have resourceVersion for optimistic concurrency
  • Declarative vs Imperative: Kubernetes prefers declarative (YAML manifests) over imperative commands

Common kubectl Commands:

# Get API resources
kubectl api-resources

# Get API versions
kubectl api-versions

# Explain a resource
kubectl explain deployment
kubectl explain deployment.spec.replicas

# Apply a manifest
kubectl apply -f deployment.yaml

# Get resources
kubectl get pods
kubectl get pods -o wide
kubectl get pods -o yaml

Additional Reading:

Practice Exercises:

  1. https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/
  2. https://kubernetes.io/docs/reference/kubectl/quick-reference/

Containers

Containers are the foundation of Kubernetes workloads. Understanding how containers work is essential.

Key Concepts:

  • Container Images: Immutable templates that include application code and dependencies
  • Container Registries: Repositories for storing and distributing images (Docker Hub, gcr.io, quay.io)
  • Image Tags vs Digests: Tags are mutable; digests (sha256) are immutable
  • Multi-container Pods: Sidecar, init, and ambassador patterns
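
The init container pattern is worth seeing in YAML. In this sketch (the `db` hostname is hypothetical), the init container blocks until a dependency resolves in DNS before the main container starts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:            # run to completion, in order, before app containers
  - name: wait-for-db
    image: busybox:1.36
    command: ["sh", "-c", "until nslookup db; do sleep 2; done"]
  containers:
  - name: app
    image: nginx:1.27
```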

Container Runtimes:

Kubernetes uses the Container Runtime Interface (CRI) to interact with container runtimes. In practice, you’ll mostly see containerd these days since it’s the default for most managed Kubernetes services:

| Runtime | Description |
|---------|-------------|
| containerd | Industry-standard container runtime, widely used |
| CRI-O | Lightweight runtime designed specifically for Kubernetes |

Note: Docker’s dockershim was deprecated in Kubernetes v1.20 and removed in v1.24. Most clusters now use containerd.

Additional Reading:

Practice Exercises:

  1. https://kubernetes.io/docs/tutorials/hello-minikube/
  2. https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

Scheduling

The Kubernetes scheduler is responsible for placing Pods on nodes. Understanding how scheduling decisions are made is important.

Scheduling Factors:

  • Resource Requests and Limits: CPU and memory requirements
  • Node Selectors: Simple key-value matching for node selection
  • Affinity and Anti-Affinity: More expressive rules for Pod placement
  • Taints and Tolerations: Prevent Pods from scheduling on certain nodes
  • Pod Topology Spread: Distribute Pods across failure domains

Example: Node Selector

spec:
  nodeSelector:
    disktype: ssd

Example: Taint and Toleration

# Taint a node
kubectl taint nodes node1 key=value:NoSchedule
# Pod toleration
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
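
Example: Resource Requests and Limits

Since requests drive scheduling decisions, it helps to see them in a container spec (the values here are arbitrary):

```yaml
spec:
  containers:
  - name: app
    image: nginx:1.27
    resources:
      requests:              # the scheduler uses requests to pick a node
        cpu: 250m
        memory: 128Mi
      limits:                # the kubelet enforces limits at runtime
        cpu: 500m
        memory: 256Mi
```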

Additional Reading:

Practice Exercises:

  1. https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
  2. https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/

Container Orchestration - 22%

Container orchestration is about automating the deployment, scaling, and management of containerized applications. This domain covers the fundamentals of orchestration, runtimes, security, networking, service mesh, and storage. While Kubernetes is the focus of this exam, it’s helpful to understand how it fits into the broader container orchestration landscape.

Container Orchestration Fundamentals

Container orchestration solves the challenges of running containers at scale:

  • Scheduling: Automatically placing containers on available resources
  • Scaling: Adding or removing container instances based on demand
  • Service Discovery: Finding and connecting to other services
  • Load Balancing: Distributing traffic across container instances
  • Self-healing: Restarting failed containers and replacing unhealthy instances
  • Rolling Updates: Updating applications without downtime

While Kubernetes dominates the container orchestration space today, it’s worth knowing about the alternatives. You might encounter these in legacy environments or specific use cases:

Popular Container Orchestrators:

| Platform | Description |
|----------|-------------|
| Kubernetes | Industry-standard, CNCF graduated project |
| Docker Swarm | Docker’s native orchestration (simpler, less feature-rich) |
| Apache Mesos | Data center resource manager (can run Kubernetes) |
| Nomad | HashiCorp’s workload orchestrator |

Additional Reading:

Practice Exercises:

  1. https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/

Runtime

Container runtimes are responsible for running containers. Kubernetes uses the Container Runtime Interface (CRI) to support multiple runtimes.

Runtime Hierarchy:

  1. High-level runtime: Manages container lifecycle (containerd, CRI-O)
  2. Low-level runtime: Actually runs the container (runc, kata-containers, gVisor)

For workloads that need extra isolation (think multi-tenant environments or running untrusted code), sandboxed runtimes add an additional security layer:

Sandboxed Runtimes provide additional isolation:

| Runtime | Description |
|---------|-------------|
| gVisor | User-space kernel for container isolation |
| Kata Containers | Lightweight VMs for containers |

RuntimeClass allows you to select different runtimes for different workloads:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
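
A Pod opts in by referencing the RuntimeClass by name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed
spec:
  runtimeClassName: gvisor   # must match the RuntimeClass metadata.name
  containers:
  - name: app
    image: nginx:1.27
```

Keep in mind this only works on nodes where the named handler (here, gVisor’s runsc) is actually installed.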

Additional Reading:

Practice Exercises:

  1. https://kubernetes.io/docs/concepts/containers/runtime-class/#usage

Security

Security in container orchestration spans multiple layers. This is a topic that deserves its own deep dive (check out my KCSA study guide if you want to go further), but here are the fundamentals you need to know for the KCNA.

Pod Security Standards define three profiles that range from permissive to restrictive:

Pod Security Standards:

| Profile | Description |
|---------|-------------|
| Privileged | Unrestricted, no isolation |
| Baseline | Minimally restrictive, prevents known privilege escalations |
| Restricted | Heavily restricted, follows Pod hardening best practices |

Key Security Concepts:

  • RBAC (Role-Based Access Control): Controls who can do what in the cluster
  • Network Policies: Control traffic between Pods
  • Security Contexts: Configure Pod and container security settings
  • Secrets Management: Secure storage for sensitive data
  • Image Security: Scanning, signing, and using trusted registries
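
Pod Security Standards are typically enforced per namespace via labels. A sketch (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: secure-apps
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject non-compliant Pods
    pod-security.kubernetes.io/audit: restricted     # record violations in the audit log
```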

Additional Reading:

Practice Exercises:

  1. https://kubernetes.io/docs/tutorials/security/
  2. https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/

Networking

Kubernetes networking is built on a few fundamental principles:

  1. Every Pod gets its own IP address
  2. Pods can communicate with all other Pods without NAT
  3. Nodes can communicate with all Pods without NAT
  4. The IP a Pod sees itself as is the same IP that others see it as

These principles are implemented by various components working together:

Networking Components:

| Component | Description |
|-----------|-------------|
| CNI Plugins | Implement the Container Network Interface (Calico, Flannel, Cilium, Weave) |
| kube-proxy | Manages iptables/IPVS rules for Service routing |
| CoreDNS | Provides DNS-based service discovery |

Services are how you expose your applications. The type you choose depends on where the traffic is coming from:

Service Types:

| Type | Description |
|------|-------------|
| ClusterIP | Internal cluster IP (default) |
| NodePort | Exposes service on each node’s IP at a static port |
| LoadBalancer | Provisions external load balancer (cloud providers) |
| ExternalName | Maps service to external DNS name |
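
To tie NetworkPolicy into this picture, here’s a hedged example (labels and port are made up) that only lets `frontend` Pods reach `api` Pods on TCP 8080:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: api              # the Pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # only these Pods may connect
    ports:
    - protocol: TCP
      port: 8080
```

Note that NetworkPolicies are only enforced if your CNI plugin supports them (Calico and Cilium do; plain Flannel does not).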

Additional Reading:

Practice Exercises:

  1. https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
  2. https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/

Service Mesh

A service mesh is an infrastructure layer that handles service-to-service communication. Honestly, you probably don’t need a service mesh for most applications, but they become valuable when you’re running a lot of microservices and need consistent observability, security, and traffic management across all of them.

Here’s what a service mesh typically provides:

  • Mutual TLS (mTLS): Encrypted communication between services
  • Traffic Management: Load balancing, routing, retries, timeouts
  • Observability: Metrics, logs, and distributed tracing
  • Access Control: Fine-grained authorization policies

There are several service mesh options out there. Istio is feature-rich but can be complex to operate. Linkerd is lighter weight and easier to get started with. I’ve written about my service mesh considerations if you want to go deeper on this topic.

Popular Service Meshes:

| Mesh | Description |
|------|-------------|
| Istio | Feature-rich, CNCF project |
| Linkerd | Lightweight, CNCF graduated project |
| Consul Connect | HashiCorp’s service mesh |
| Cilium | eBPF-based networking and service mesh |

Architecture: Most service meshes use a sidecar proxy pattern where a proxy (like Envoy) runs alongside each application container.

Additional Reading:

Practice Exercises:

  1. https://istio.io/latest/docs/setup/getting-started/
  2. https://linkerd.io/2/getting-started/

Storage

Kubernetes provides abstractions for managing storage. This is one of those areas where the abstraction really shines—you can write your application to use a PersistentVolumeClaim and the underlying storage can be anything from local disk to cloud block storage to a distributed file system.

Here are the key concepts you need to understand:

Key Concepts:

| Concept | Description |
|---------|-------------|
| Volume | Directory accessible to containers in a Pod |
| PersistentVolume (PV) | Cluster-level storage resource provisioned by admin |
| PersistentVolumeClaim (PVC) | Request for storage by a user |
| StorageClass | Describes “classes” of storage with provisioners |
| CSI (Container Storage Interface) | Standard for exposing storage systems to containers |

Access modes determine how a volume can be mounted. This is important to understand because not all storage backends support all modes:

Access Modes:

| Mode | Description |
|------|-------------|
| ReadWriteOnce (RWO) | Mounted as read-write by a single node |
| ReadOnlyMany (ROX) | Mounted as read-only by many nodes |
| ReadWriteMany (RWX) | Mounted as read-write by many nodes |
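
Putting these together, a typical PVC request looks like this (the StorageClass name depends on your cluster; `standard` is Minikube’s default):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
  - ReadWriteOnce            # a single node mounts it read-write
  storageClassName: standard # cluster-specific; check kubectl get storageclass
  resources:
    requests:
      storage: 1Gi
```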

Additional Reading:

Practice Exercises:

  1. https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
  2. https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/

Cloud Native Architecture - 16%

Cloud native architecture is about designing applications that fully exploit the advantages of the cloud computing model. This domain covers autoscaling, serverless, the CNCF community, roles and personas, and open standards. This is where you’ll learn about the broader ecosystem beyond just Kubernetes itself.

Autoscaling

Kubernetes provides multiple ways to scale workloads automatically:

Horizontal Pod Autoscaler (HPA): Scales the number of Pod replicas based on CPU, memory, or custom metrics.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

Vertical Pod Autoscaler (VPA): Adjusts resource requests and limits for Pods.

Cluster Autoscaler: Adds or removes nodes based on pending Pods and resource utilization.

KEDA (Kubernetes Event-Driven Autoscaling): Scales based on event sources like message queues, databases, and custom metrics.

Additional Reading:

Practice Exercises:

  1. https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/

Serverless

Serverless computing abstracts infrastructure management, allowing developers to focus on code. In the Kubernetes ecosystem, serverless means running workloads without managing the underlying infrastructure—your code scales to zero when not in use and spins up on demand.

Knative is the most prominent serverless platform for Kubernetes, but there are others worth knowing about:

Serverless Kubernetes Platforms:

| Platform | Description |
|----------|-------------|
| Knative | Kubernetes-native serverless platform (CNCF incubating) |
| OpenFaaS | Functions as a Service on Kubernetes |
| Kubeless | Kubernetes-native serverless framework (now archived) |
| Fission | Fast serverless functions for Kubernetes |

Knative Components:

  • Knative Serving: Request-driven compute that scales to zero
  • Knative Eventing: Event-driven architecture with loose coupling
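
For flavor, here’s roughly what a Knative Service looks like. It handles deployment, routing, and scale-to-zero in a single object; the image is Knative’s sample app, so treat the exact path as an assumption:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go  # sample image; path may change
        env:
        - name: TARGET
          value: "KCNA"
```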

Additional Reading:

Practice Exercises:

  1. https://knative.dev/docs/getting-started/

Community and Governance

The CNCF (Cloud Native Computing Foundation) is the home of Kubernetes and many other cloud native projects. Understanding how the CNCF works is part of the exam, so take some time to explore the CNCF landscape—it’s massive!

CNCF projects go through maturity levels as they prove themselves:

CNCF Project Maturity Levels:

| Level | Description |
|-------|-------------|
| Sandbox | Early stage, experimental projects |
| Incubating | Growing adoption, production usage |
| Graduated | Mature, widely adopted projects (e.g., Kubernetes, Prometheus, Envoy) |

Kubernetes Governance:

  • Special Interest Groups (SIGs): Focus on specific areas (SIG-Network, SIG-Storage, SIG-Security)
  • Working Groups: Cross-SIG initiatives
  • Kubernetes Enhancement Proposals (KEPs): Process for proposing new features
  • Release Cadence: 3 releases per year (~4 months between releases)

Additional Reading:

Roles and Personas

Understanding the different roles in a cloud native organization helps you understand who uses what and why. In reality, these roles often overlap—especially in smaller teams where you might wear multiple hats:

| Role | Responsibilities |
|------|------------------|
| Application Developer | Writes code, builds containers, defines Kubernetes manifests |
| Platform Engineer | Builds and maintains the platform, manages clusters |
| DevOps Engineer | Bridges development and operations, CI/CD pipelines |
| SRE (Site Reliability Engineer) | Ensures reliability, manages incidents, capacity planning |
| Security Engineer | Implements security policies, audits, compliance |
| Cluster Administrator | Manages Kubernetes clusters, upgrades, monitoring |

Additional Reading:

Open Standards

Cloud native computing relies on open standards for interoperability. This is actually one of the things I love about the cloud native ecosystem—you’re not locked into a single vendor’s implementation. These standards ensure that tools and platforms can work together:

| Standard | Description |
|----------|-------------|
| OCI (Open Container Initiative) | Standards for container formats and runtimes |
| CRI (Container Runtime Interface) | Kubernetes interface for container runtimes |
| CNI (Container Network Interface) | Standard for network plugins |
| CSI (Container Storage Interface) | Standard for storage plugins |
| SMI (Service Mesh Interface) | Standard for service mesh implementations |

OCI Specifications:

  • Image Spec: Defines container image format
  • Runtime Spec: Defines container runtime behavior
  • Distribution Spec: Defines how images are distributed

Additional Reading:


Cloud Native Observability - 8%

Observability is about understanding the internal state of your systems by examining their outputs. This domain covers telemetry, Prometheus, and cost management. Even though it’s only 8% of the exam, observability is crucial for operating cloud native applications in production.

Telemetry & Observability

The three pillars of observability are:

| Pillar | Description |
|--------|-------------|
| Metrics | Numerical measurements over time (CPU usage, request count) |
| Logs | Discrete events with timestamps and context |
| Traces | Request paths through distributed systems |

OpenTelemetry: OpenTelemetry is the CNCF standard for collecting telemetry data. It provides:

  • Unified APIs for metrics, logs, and traces
  • Language-specific SDKs
  • Collector for processing and exporting data
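
A minimal Collector configuration illustrates the receive → process → export pipeline idea (the endpoint and component choices are illustrative):

```yaml
receivers:
  otlp:                      # accept OTLP over gRPC from instrumented apps
    protocols:
      grpc: {}
processors:
  batch: {}                  # batch telemetry before export
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889" # expose metrics for Prometheus to scrape
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```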

The CNCF ecosystem has a rich set of observability tools. You don’t need to know all of them in depth, but you should be familiar with what each one does:

Observability Tools in the CNCF Ecosystem:

| Tool | Purpose |
|------|---------|
| Prometheus | Metrics collection and alerting |
| Grafana | Visualization and dashboards |
| Jaeger | Distributed tracing |
| Fluentd/Fluent Bit | Log collection and forwarding |
| OpenTelemetry | Unified telemetry collection |

Additional Reading:

Practice Exercises:

  1. https://opentelemetry.io/docs/demo/

Prometheus

Prometheus is the de facto standard for monitoring in the cloud native ecosystem. It’s a CNCF graduated project.

Key Concepts:

  • Pull-based model: Prometheus scrapes metrics from targets
  • Time-series database: Stores metrics with timestamps
  • PromQL: Query language for metrics
  • Alertmanager: Handles alerts from Prometheus
  • Exporters: Expose metrics from third-party systems

Prometheus supports different metric types, and understanding the difference matters when you’re writing PromQL queries:

Metric Types:

| Type | Description |
|------|-------------|
| Counter | Cumulative value that only increases |
| Gauge | Value that can go up or down |
| Histogram | Samples observations and counts them in buckets |
| Summary | Similar to histogram but calculates quantiles |

Example PromQL Queries:

# CPU usage rate over 5 minutes
rate(container_cpu_usage_seconds_total[5m])

# Memory usage
container_memory_usage_bytes

# Request rate by status code
sum(rate(http_requests_total[5m])) by (status_code)

Additional Reading:

Practice Exercises:

  1. https://prometheus.io/docs/prometheus/latest/getting_started/
  2. https://prometheus.io/docs/tutorials/

Cost Management

Managing cloud native infrastructure costs is essential for sustainable operations:

Cost Optimization Strategies:

  • Right-sizing: Match resource requests/limits to actual usage
  • Autoscaling: Scale down during low demand
  • Spot/Preemptible Instances: Use cheaper, interruptible VMs for fault-tolerant workloads
  • Resource Quotas: Limit resource consumption per namespace
  • Limit Ranges: Set default and maximum resource limits
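
Limit ranges complement quotas by filling in defaults for containers that don’t set their own. A sketch (values are arbitrary):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: defaults
  namespace: development
spec:
  limits:
  - type: Container
    default:                 # applied as limits when a container sets none
      cpu: 500m
      memory: 256Mi
    defaultRequest:          # applied as requests when a container sets none
      cpu: 100m
      memory: 128Mi
```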

There are also tools specifically designed to help you understand and optimize your Kubernetes costs:

Cost Visibility Tools:

| Tool | Description |
|------|-------------|
| Kubecost | Kubernetes cost monitoring and optimization |
| OpenCost | CNCF sandbox project for cost monitoring |
| Cloud provider tools | AWS Cost Explorer, Azure Cost Management, GCP Cost Management |

Resource Quotas Example:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: development
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi

Additional Reading:

Practice Exercises:

  1. https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/

Cloud Native Application Delivery - 8%

Application delivery encompasses the practices and tools for deploying applications reliably and efficiently. This domain covers application delivery fundamentals, GitOps, and CI/CD. This is where the rubber meets the road in terms of getting your applications into production.

Application Delivery Fundamentals

Modern application delivery focuses on:

  • Declarative Configuration: Define desired state, not imperative steps
  • Version Control: All configuration in Git
  • Automation: Reduce manual steps and human error
  • Reproducibility: Same process produces same results
  • Auditability: Track who changed what and when

Kubernetes supports several deployment strategies out of the box. Rolling updates are the default, but you might reach for blue-green or canary deployments when you need more control over how traffic shifts to new versions:

Deployment Strategies:

| Strategy | Description |
|----------|-------------|
| Rolling Update | Gradually replace old Pods with new ones (Kubernetes default) |
| Blue-Green | Run two environments, switch traffic between them |
| Canary | Route a small percentage of traffic to the new version |
| A/B Testing | Route based on user attributes for experimentation |

For managing Kubernetes manifests, you’ll likely use one of these tools. Personally, I tend to reach for Kustomize for simpler projects since it’s built into kubectl, and Helm when I need to package and distribute applications or deal with more complex templating:

Configuration Management Tools:

| Tool | Description |
|------|-------------|
| Helm | Package manager for Kubernetes (charts) |
| Kustomize | Template-free YAML customization (built into kubectl) |
| Jsonnet | Data templating language |
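
As a taste of Kustomize, a `kustomization.yaml` layers changes over base manifests without any templating (the file names and values here are assumptions):

```yaml
# kustomization.yaml
resources:
- deployment.yaml            # base manifests, assumed to exist alongside
- service.yaml
namespace: staging           # applied to every resource
commonLabels:
  env: staging
images:
- name: nginx
  newTag: "1.27"             # override the image tag across all manifests
```

Apply it with `kubectl apply -k .`.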

Additional Reading:

Practice Exercises:

  1. https://helm.sh/docs/intro/quickstart/
  2. https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/

GitOps

GitOps is a paradigm where Git is the single source of truth for declarative infrastructure and applications.

GitOps Principles:

  1. Declarative: Entire system described declaratively
  2. Versioned and Immutable: Desired state stored in Git
  3. Pulled Automatically: Agents pull desired state from Git
  4. Continuously Reconciled: Agents ensure actual state matches desired state

Argo CD and Flux are the two big players in this space. Both are CNCF graduated projects and both do the job well. I’ve used both and they each have their strengths:

GitOps Tools:

| Tool | Description |
|------|-------------|
| Argo CD | Declarative GitOps CD for Kubernetes (CNCF graduated) |
| Flux | GitOps toolkit for Kubernetes (CNCF graduated) |
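
An Argo CD Application captures the GitOps loop in one resource: watch a Git path, keep the cluster in sync with it. The repo URL and paths here are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app.git  # placeholder repo
    targetRevision: main
    path: deploy                                    # directory of manifests
  destination:
    server: https://kubernetes.default.svc          # the local cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes to the cluster
```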

Benefits of GitOps:

  • Auditability: Git history shows all changes
  • Rollback: Revert to previous Git commit
  • Consistency: Same process for all environments
  • Security: No direct cluster access needed for deployments

Additional Reading:

Practice Exercises:

  1. https://argo-cd.readthedocs.io/en/stable/getting_started/
  2. https://fluxcd.io/flux/get-started/

CI/CD

Continuous Integration and Continuous Delivery/Deployment automate the software delivery process.

CI (Continuous Integration):

  • Developers frequently merge code changes
  • Automated builds and tests run on every change
  • Fast feedback on code quality

CD (Continuous Delivery/Deployment):

  • Continuous Delivery: Automated release process, manual deployment approval
  • Continuous Deployment: Fully automated deployment to production

There are a lot of CI/CD tools out there. The ones you’ll encounter most often in cloud native environments are:

CI/CD Tools:

| Tool | Description |
|------|-------------|
| Jenkins | Open-source automation server |
| GitHub Actions | CI/CD built into GitHub |
| GitLab CI | CI/CD built into GitLab |
| Tekton | Kubernetes-native CI/CD (CNCF project) |
| Argo Workflows | Kubernetes-native workflow engine |

Kubernetes-Native CI/CD: Tekton provides Kubernetes Custom Resources for defining pipelines:

  • Task: A collection of steps
  • Pipeline: A series of Tasks
  • PipelineRun: An execution of a Pipeline
  • Trigger: Event-based Pipeline execution
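
A toy Task and TaskRun illustrate the model (using the `tekton.dev/v1` API; the image and script are arbitrary):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: run-tests
spec:
  steps:                     # each step runs as a container, in order
  - name: test
    image: golang:1.22
    script: |
      go test ./...
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: run-tests-once
spec:
  taskRef:
    name: run-tests
```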

Additional Reading:

Practice Exercises:

  1. https://tekton.dev/docs/getting-started/

Conclusion

That was a lot of information to cover, but I hope you found it helpful as you prepare for the Kubernetes and Cloud Native Associate (KCNA) exam. Remember, the KCNA is a foundational exam that tests your understanding of concepts rather than hands-on skills. Focus on understanding the “why” behind each topic, not just the “how.”

Good luck with your studies and reach out if you have any questions on the exam experience or with any of the topics covered in this study guide.

To Kubestronaut and beyond! 🚀

Resources

In addition to the official KCNA Exam Domains & Competencies and content above, here are some additional resources that you may find helpful as you prepare for the KCNA exam:

Official Resources:

Kubernetes Documentation:

CNCF Resources:

Practice Labs:

Next Steps:

After passing the KCNA, consider pursuing these certifications: