A. A physical network fabric in a leaf-spine configuration with dual Cisco switches within each availability zone.
B. A highly available gateway that supports the failure of an entire availability zone.
C. A 25-GbE port on each Top of Rack (ToR) switch connected to the ESXi host uplinks.
D. A single NSX Overlay Transport Zone for all clusters to carry the traffic between the ESXi hosts.
Explanation:
The VCF 5.2 design uses a Single Instance - Multiple Availability Zones topology (i.e., a stretched cluster), which requires centralized management across two AZs, hosts in one rack per AZ, and workload mobility between AZs. The logical design focuses on high-level networking architecture, not physical implementation details.
Let’s evaluate:
Option A: A physical network fabric in a leaf-spine configuration with dual Cisco switches within each availability zone
A leaf-spine fabric enhances physical network scalability and redundancy, aligning with rack-based deployments. However, it’s a physical design detail (switch topology), not a logical networking decision, per the VCF 5.2 Design Guide.
Option B: A highly available gateway that supports the failure of an entire availability zone
A gateway (e.g., an NSX Edge Tier-0) that can fail over between AZs provides North-South traffic resilience. While valuable, it doesn’t directly enable workload mobility across AZs (East-West traffic), which is the core requirement. The VCF 5.2 Networking Guide treats such gateways as supplementary to, not foundational for, a stretched-cluster design.
Option C: A 25-GbE port on each Top of Rack (ToR) switch connected to the ESXi host uplinks
Specifying 25-GbE ports is a physical network detail (bandwidth, cabling), not a logical design element. The VCF 5.2 Design Guide relegates port speeds to physical implementation, not logical architecture.
Option D: A single NSX Overlay Transport Zone for all clusters to carry the traffic between the ESXi hosts
In a stretched cluster topology, a single NSX Overlay Transport Zone enables VM mobility across AZs via overlay networking (Geneve encapsulation). It ensures workloads can run on hosts in either AZ by stretching the same Layer 2 segments over the Layer 3 underlay, all managed by NSX. The VCF 5.2 Architectural Guide mandates a single Overlay Transport Zone for stretched deployments to support vMotion and workload distribution, directly meeting the requirement.
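To make this decision concrete, below is a minimal Python sketch (not taken from the VCF guides) that lists transport zones on an NSX Manager and checks that a single Overlay transport zone is presented to the host transport nodes in both AZs. The manager FQDN, credentials, and use of basic authentication against the NSX Manager API endpoint /api/v1/transport-zones are illustrative assumptions, not part of the design guidance.

    # Minimal sketch: verify that one Overlay transport zone exists on NSX.
    # Hostname and credentials are placeholders for a lab environment.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # placeholder NSX Manager FQDN
    AUTH = ("admin", "changeme")                     # placeholder credentials

    resp = requests.get(
        f"{NSX_MANAGER}/api/v1/transport-zones",
        auth=AUTH,
        verify=False,   # lab only; use a trusted CA certificate in production
    )
    resp.raise_for_status()

    # Keep only Overlay (Geneve) transport zones; VLAN transport zones are ignored.
    overlay_zones = [
        tz for tz in resp.json().get("results", [])
        if tz.get("transport_type") == "OVERLAY"
    ]

    # A stretched VCF deployment should present exactly one Overlay transport zone
    # shared by the ESXi host transport nodes in both availability zones.
    print(f"Overlay transport zones found: {len(overlay_zones)}")
    for tz in overlay_zones:
        print(f"  {tz.get('display_name')} ({tz.get('id')})")

If the script reports more than one Overlay transport zone for the stretched clusters, workloads attached to segments in different zones would lose mobility between AZs, which is exactly what the single-zone decision is meant to prevent.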
Conclusion:
Option D is the logical design decision, enabling workload mobility across AZs in a stretched VCF topology via NSX overlay networking.
Reference: VMware Cloud Foundation 5.2 Architectural Guide (docs.vmware.com): Multi-AZ Topology and NSX Overlay.
VMware Cloud Foundation 5.2 Networking Guide (docs.vmware.com): Transport Zones in Stretched Clusters.
VMware Cloud Foundation 5.2 Design Guide (docs.vmware.com): Logical vs. Physical Design.