KCNA Online Practice Questions


Latest KCNA Exam Practice Questions

The practice questions for the KCNA exam were last updated on 2026-04-10.


Question#1

What is CloudEvents?

A. It is a specification for describing event data in common formats for Kubernetes network traffic management and cloud providers.
B. It is a specification for describing event data in common formats in all cloud providers including major cloud providers.
C. It is a specification for describing event data in common formats to provide interoperability across services, platforms and systems.
D. It is a Kubernetes specification for describing events data in common formats for iCloud services, iOS platforms and iMac.

Explanation:
CloudEvents is an open specification for describing event data in a common way to enable interoperability across services, platforms, and systems, so C is correct. In cloud-native architectures, many components communicate asynchronously via events (message brokers, event buses, webhooks). Without a standard envelope, each producer and consumer invents its own event structure, making integration brittle. CloudEvents addresses this by standardizing core context attributes (such as id, source, type, specversion, and time) and defining how event payloads are carried.
This helps systems interoperate regardless of transport. CloudEvents can be serialized as JSON or other encodings and carried over HTTP, messaging systems, or other protocols. By using a shared spec, you can route, filter, validate, and transform events more consistently.
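To make the "standard envelope" idea concrete, the sketch below builds a minimal CloudEvents-style JSON envelope in Python. The concrete values (the source URI, event type, and payload) are invented for illustration; only the attribute names come from the CloudEvents 1.0 specification, which requires id, source, type, and specversion.

```python
import json

# A minimal CloudEvents 1.0 envelope. Required context attributes are
# id, source, type, and specversion; time and datacontenttype are
# optional. The concrete values below are invented for illustration.
event = {
    "specversion": "1.0",
    "id": "a8f2c5e0-0001",
    "source": "/orders/service",
    "type": "com.example.order.created",
    "time": "2026-04-10T12:00:00Z",
    "datacontenttype": "application/json",
    "data": {"orderId": 1234, "status": "created"},
}

# Serialized as JSON, the same envelope can travel over HTTP, a message
# broker, or any other transport; consumers only need to understand the
# shared attribute names, not producer-specific event shapes.
print(json.dumps(event, indent=2))
```

Because every producer emits the same envelope shape, routing and filtering can key on standard attributes such as type instead of parsing producer-specific payloads.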
Option A is too narrow and incorrectly ties CloudEvents to Kubernetes traffic management; CloudEvents is broader than Kubernetes.
Option B is closer but still framed incorrectly―CloudEvents is not merely “for all cloud providers,” it is an interoperability spec across services and platforms, including but not limited to cloud provider event systems.
Option D is clearly incorrect: CloudEvents is a vendor-neutral specification and has nothing to do with Apple products such as iCloud, iOS, or iMac.
In Kubernetes ecosystems, CloudEvents is relevant to event-driven systems and serverless platforms (e.g., Knative Eventing and other eventing frameworks) because it provides a consistent event contract across producers and consumers. That consistency reduces coupling, supports better tooling (schema validation, tracing correlation), and makes event-driven architectures easier to operate at scale.
So, the correct definition is C: a specification for common event formats to enable interoperability across systems.

Question#2

In a Kubernetes cluster, which scenario best illustrates the use case for a StatefulSet?

A. A web application that requires multiple replicas for load balancing.
B. A service that routes traffic to various microservices in the cluster.
C. A background job that runs periodically and does not maintain state.
D. A database that requires persistent storage and stable network identities.

Explanation:
A StatefulSet is a Kubernetes workload API object specifically designed to manage stateful applications. Unlike Deployments or ReplicaSets, which are intended for stateless workloads, StatefulSets provide guarantees about the ordering, uniqueness, and persistence of Pods. These guarantees are critical for applications that rely on stable identities and durable storage, such as databases, message brokers, and distributed systems.
The defining characteristics of a StatefulSet include stable network identities, persistent storage, and ordered deployment and scaling. Each Pod created by a StatefulSet receives a unique and predictable name (for example, database-0, database-1), which remains consistent across Pod restarts. This stable identity is essential for stateful applications that depend on fixed hostnames for leader election, replication, or peer discovery. Additionally, StatefulSets are commonly used with PersistentVolumeClaims, ensuring that each Pod is bound to its own persistent storage that is retained even if the Pod is rescheduled or restarted.
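The characteristics above can be sketched as a hypothetical StatefulSet manifest; the names, image, and storage size below are placeholders, not part of any official example. With replicas: 3, the Pods would be named database-0, database-1, and database-2, and each would get its own PersistentVolumeClaim from the template.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
spec:
  serviceName: database-headless    # headless Service that gives Pods stable DNS names
  replicas: 3
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: db
        image: example/db:1.0       # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/db
  volumeClaimTemplates:             # one PersistentVolumeClaim per Pod,
  - metadata:                       # retained across restarts and rescheduling
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Contrast this with a Deployment, which has no volumeClaimTemplates or serviceName fields: its Pods are interchangeable and get random name suffixes, which is exactly what a replicated database cannot tolerate.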
Option A is incorrect because web applications that scale horizontally for load balancing are typically stateless and are best managed by Deployments, which allow Pods to be created and destroyed freely without preserving identity.
Option B is incorrect because traffic routing to microservices is handled by Services or Ingress resources, not StatefulSets.
Option C is incorrect because periodic background jobs that do not maintain state are better suited for Jobs or CronJobs.
Option D correctly represents the ideal use case for a StatefulSet. Databases require persistent data storage, stable network identities, and predictable startup and shutdown behavior. StatefulSets ensure that Pods are started, stopped, and updated in a controlled order, which helps maintain data consistency and application reliability. According to Kubernetes documentation, whenever an application requires stable identities, ordered deployment, and persistent state, a StatefulSet is the recommended solution, making option D the correct answer.

Question#3

Which of the following cloud native proxies is used for ingress/egress in a service mesh and can also serve as an application gateway?

A. Frontend proxy
B. Kube-proxy
C. Envoy proxy
D. Reverse proxy

Explanation:
Envoy Proxy is a high-performance, cloud-native proxy widely used for ingress and egress traffic management in service mesh architectures, and it can also function as an application gateway. It is the foundational data-plane component for popular service meshes such as Istio, Consul, and AWS App Mesh, making option C the correct answer.
In a service mesh, Envoy is typically deployed as a sidecar proxy alongside each application Pod. This allows Envoy to transparently intercept and manage all inbound and outbound traffic for the service. Through this model, Envoy enables advanced traffic management features such as load balancing, retries, timeouts, circuit breaking, mutual TLS, and fine-grained observability without requiring application code changes.
Envoy is also commonly used at the mesh boundary to handle ingress and egress traffic. When deployed as an ingress gateway, Envoy acts as the entry point for external traffic into the mesh, performing TLS termination, routing, authentication, and policy enforcement. As an egress gateway, it controls outbound traffic from the mesh to external services, enabling security controls and traffic visibility. These capabilities allow Envoy to serve effectively as an application gateway, not just an internal proxy.
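For illustration only, a heavily trimmed Envoy static configuration for a standalone ingress/gateway deployment might look like the sketch below. The listener port, route, and upstream service name are invented; in a real service mesh, the control plane (e.g., Istio) generates this configuration dynamically rather than shipping a static file.

```yaml
static_resources:
  listeners:
  - name: ingress_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }        # route all paths to one upstream
                route: { cluster: app_service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: app_service
    type: STRICT_DNS                           # resolve the upstream via DNS
    load_assignment:
      cluster_name: app_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: app.default.svc.cluster.local, port_value: 80 }
```

Even this minimal sketch shows the gateway role: a listener accepting external traffic, HTTP routing rules, and an upstream cluster pointing at an in-cluster Service.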
Option A, “Frontend proxy,” is a generic term and not a specific cloud-native component.
Option B, kube-proxy, is responsible for implementing Kubernetes Service networking rules at the node level and does not provide service mesh features or gateway functionality.
Option D, “Reverse proxy,” is a general architectural pattern rather than a specific cloud-native proxy implementation.
Envoy’s extensibility, performance, and deep integration with Kubernetes and service mesh control planes make it the industry-standard proxy for modern cloud-native networking. Its ability to function both as a sidecar proxy and as a centralized ingress or egress gateway clearly establishes Envoy proxy (option C) as the correct answer.

Question#4

What does the liveness probe in Kubernetes help detect?

A. When a container is ready to serve traffic.
B. When a container has started successfully.
C. When a container exceeds resource limits.
D. When a container is unresponsive.

Explanation:
The liveness probe in Kubernetes is designed to detect whether a container is still running correctly or has entered a failed or unresponsive state. Its primary purpose is to determine whether a container should be restarted. When a liveness probe fails repeatedly, Kubernetes assumes the container is unhealthy and automatically restarts it to restore normal operation.
Option D correctly describes this behavior. Liveness probes are used to identify situations where an application is running but no longer functioning as expected―for example, a deadlock, infinite loop, or hung process that cannot recover on its own. In such cases, restarting the container is often the most effective remediation, and Kubernetes handles this automatically through the liveness probe mechanism.
Option A is incorrect because readiness probes―not liveness probes―determine whether a container is ready to receive traffic. A container can be alive but not ready, such as during startup or temporary maintenance.
Option B is incorrect because startup success is handled by startup probes, which are specifically designed to manage slow-starting applications and delay liveness and readiness checks until initialization is complete.
Option C is incorrect because exceeding resource limits is managed by the container runtime and kubelet (for example, OOMKills), not by probes.
Liveness probes can be implemented using HTTP requests, TCP socket checks, or command execution inside the container. If the probe fails beyond a configured threshold, Kubernetes restarts the container according to the Pod’s restart policy. This self-healing behavior is a core feature of Kubernetes and contributes significantly to application reliability.
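A minimal sketch of the HTTP variant, with a readiness probe alongside it for contrast, is shown below. The image name, paths, and port are placeholders; the probe fields themselves (httpGet, initialDelaySeconds, periodSeconds, failureThreshold) are standard Pod spec fields.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: app
    image: example/app:1.0        # placeholder image
    livenessProbe:                # failing this triggers a container restart
      httpGet:
        path: /healthz            # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10     # give the app time to start before probing
      periodSeconds: 5
      failureThreshold: 3         # 3 consecutive failures mark it unhealthy
    readinessProbe:               # contrast: gates traffic, never restarts
      httpGet:
        path: /ready              # assumed readiness endpoint
        port: 8080
      periodSeconds: 5
```

The two probes answer different questions: the readiness probe decides whether the Pod receives Service traffic, while the liveness probe decides whether the container must be killed and restarted.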
Kubernetes documentation emphasizes using liveness probes carefully, as misconfiguration can cause unnecessary restarts. However, when used correctly, they provide a powerful way to automatically recover from application-level failures that Kubernetes cannot otherwise detect.
In summary, the liveness probe’s role is to detect when a container is unresponsive and needs to be restarted, making option D the correct answer.

Question#5

What are the most important resources to guarantee the performance of an etcd cluster?

A. CPU and disk capacity.
B. Network throughput and disk I/O.
C. CPU and RAM memory.
D. Network throughput and CPU.

Explanation:
etcd is the strongly consistent key-value store backing Kubernetes cluster state. Its performance directly affects the entire control plane because most API operations require reads/writes to etcd. The most critical resources for etcd performance are disk I/O (especially latency) and network throughput/latency between etcd members and API servers―so B is correct.
etcd is write-ahead-log (WAL) based and relies heavily on stable, low-latency storage. Slow disks increase commit latency, which slows down object updates, watches, and controller loops. In busy clusters, poor disk performance can cause request backlogs and timeouts, showing up as slow kubectl operations and delayed controller reconciliation. That’s why production guidance commonly emphasizes fast SSD-backed storage and careful monitoring of fsync latency.
Network performance matters because etcd uses the Raft consensus protocol. Writes must be replicated to a quorum of members, and leader-follower communication is continuous. High network latency or low throughput can slow replication and increase the time to commit writes. Unreliable networking can also cause leader elections or cluster instability, further degrading performance and availability.
CPU and memory are still relevant, but they are usually not the first bottleneck compared to disk and network. CPU affects request processing and encryption overhead if enabled, while memory affects caching and compaction behavior. Disk “capacity” alone (size) is less relevant than disk I/O characteristics (latency, IOPS), because etcd performance is sensitive to fsync and write latency.
In Kubernetes operations, ensuring etcd health includes: using dedicated fast disks, keeping network stable, enabling regular compaction/defragmentation strategies where appropriate, sizing correctly (typically odd-numbered members for quorum), and monitoring key metrics (commit latency, fsync duration, leader changes). Because etcd is the persistence layer of the API, disk I/O and network quality are the primary determinants of control-plane responsiveness―hence B.
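The "odd-numbered members" guidance follows from Raft's quorum arithmetic: a write commits only after a strict majority of members acknowledge it. The short sketch below (plain Python, no etcd-specific API) works through the numbers.

```python
def quorum(members: int) -> int:
    """Smallest strict majority of the cluster (Raft quorum)."""
    return members // 2 + 1

def fault_tolerance(members: int) -> int:
    """How many members can fail while writes can still commit."""
    return members - quorum(members)

for n in (3, 4, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")

# A 4-member cluster needs a quorum of 3 and tolerates only 1 failure,
# which is no better than a 3-member cluster; hence etcd clusters are
# typically sized with an odd member count.
```

This also reinforces why network quality matters: every committed write waits on acknowledgements crossing the network to reach that quorum.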

Disclaimer

This page is for educational and exam preparation reference only. It is not affiliated with The Linux Foundation, Kubernetes and Cloud Native, or the official exam provider. Candidates should refer to official documentation and training for authoritative information.

Exam Code: KCNA | Q&As: 240 | Updated: 2026-04-10
