A. Provides service discovery across multiple clusters.
B. Provides an infrastructure layer that makes communication between applications possible, structured, and observable.
C. Provides dynamic application load balancing and autoscaling across multiple clusters and multiple sites.
D. Provides a centralized, global routing table to simplify and optimize traffic management.
Explanation:
A service mesh is an application communication layer that standardizes service-to-service traffic inside Kubernetes. Instead of each development team building custom logic for retries, timeouts, encryption, and telemetry, the mesh provides these capabilities consistently across workloads. This is typically done by inserting a data plane (often sidecar proxies or node-level proxies) that intercepts inbound and outbound traffic for each microservice, plus a control plane that distributes configuration and identity material.
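To make the data-plane idea concrete, here is a minimal, purely illustrative Python sketch of the kind of retry logic a sidecar proxy applies on the application's behalf. The `SidecarProxy` class and its parameters are hypothetical, not any real mesh's API; the point is that the application code stays free of resilience logic.

```python
import time

class SidecarProxy:
    """Hypothetical sketch of a mesh data-plane proxy adding retries
    with backoff, transparently to the application."""

    def __init__(self, max_retries=3, backoff_s=0.01):
        self.max_retries = max_retries
        self.backoff_s = backoff_s

    def forward(self, send_request):
        """Forward a request upstream, retrying on connection failure.
        A real proxy would also enforce timeouts and emit metrics/traces
        for this hop (the observability part of the mesh)."""
        last_error = None
        for attempt in range(self.max_retries + 1):
            try:
                return send_request()
            except ConnectionError as err:
                last_error = err
                time.sleep(self.backoff_s * (2 ** attempt))  # exponential backoff
        raise last_error

# Usage: the app just "calls the service"; the proxy supplies resilience.
calls = {"n": 0}
def flaky_upstream():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("upstream reset")
    return "200 OK"

proxy = SidecarProxy()
print(proxy.forward(flaky_upstream))  # succeeds on the third attempt
```

In a real mesh this logic lives in the proxy (e.g. Envoy) and is configured by the control plane, so every workload gets the same behavior without code changes.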
The key outcomes align directly with option B: communication becomes possible (reliable connectivity patterns), structured (consistent routing rules, policies, and identity), and observable (metrics, logs, and distributed tracing for east-west traffic). A service mesh commonly adds controls such as mTLS encryption, fine-grained traffic policy (allow/deny rules, rate limits, circuit breaking), and progressive delivery patterns (canary or blue-green releases) without changing application code.
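One of those traffic-policy controls, circuit breaking, can be sketched in a few lines. This is an illustrative model of the pattern a mesh enforces at the proxy layer, with hypothetical class and parameter names; after repeated failures the circuit "opens" and requests fail fast until a cooldown elapses.

```python
import time

class CircuitBreaker:
    """Hypothetical sketch of the circuit-breaking pattern a mesh
    traffic policy can enforce; thresholds are illustrative."""

    def __init__(self, failure_threshold=3, cooldown_s=30.0):
        self.failure_threshold = failure_threshold
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (healthy)

    def allow_request(self, now=None):
        now = now if now is not None else time.monotonic()
        if self.opened_at is None:
            return True
        if now - self.opened_at >= self.cooldown_s:
            # Cooldown elapsed: half-open, let a trial request through.
            self.opened_at = None
            self.failures = 0
            return True
        return False  # circuit open: fail fast without calling upstream

    def record_failure(self, now=None):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = now if now is not None else time.monotonic()

    def record_success(self):
        self.failures = 0

cb = CircuitBreaker(failure_threshold=2, cooldown_s=10.0)
cb.record_failure(now=0.0)
cb.record_failure(now=1.0)          # threshold reached: circuit opens
print(cb.allow_request(now=2.0))    # False: failing fast
print(cb.allow_request(now=12.0))   # True: cooldown elapsed, half-open
```

In a mesh, this state machine runs in the proxy per upstream destination, so an unhealthy service is shed from traffic automatically, again with no application changes.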
By contrast, service discovery (A) is usually a built-in Kubernetes function, load balancing/autoscaling across sites (C) is not the primary definition of a service mesh, and a single centralized global routing table (D) is not how meshes are typically described or implemented.