A security engineer is designing a solution that will provide end-to-end encryption between clients and Docker containers running in Amazon Elastic Container Service (Amazon ECS). This solution must also handle volatile traffic patterns.
Which solution would have the MOST scalability and LOWEST latency?
A. Configure a Network Load Balancer to terminate the TLS traffic and then re-encrypt the traffic to the containers.
B. Configure an Application Load Balancer to terminate the TLS traffic and then re-encrypt the traffic to the containers.
C. Configure a Network Load Balancer with a TCP listener to pass through TLS traffic to the containers.
D. Configure Amazon Route 53 to use multivalue answer routing to send traffic to the containers.
Explanation:
Network Load Balancers operate at Layer 4 and are optimized for extreme performance, ultra-low latency, and sudden traffic spikes. According to AWS Certified Security - Specialty documentation, configuring a TCP listener on an NLB passes TLS traffic through to the backend containers without termination, so the TLS session is negotiated directly between the client and the container, preserving true end-to-end encryption.
This approach eliminates the overhead of decrypting and re-encrypting traffic at the load balancer, reducing latency and maximizing throughput. NLBs scale automatically to handle volatile traffic patterns and millions of requests per second.
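As a hedged illustration, the pass-through setup in option C could be sketched with a CloudFormation fragment like the one below. The resource names, subnet and VPC IDs, and the ECS wiring are hypothetical placeholders, not details from the question:

```yaml
# Illustrative sketch of option C: an internet-facing NLB with a plain
# TCP listener on 443 that forwards TLS traffic untouched to the containers.
# Subnet IDs, VPC ID, and logical names are placeholders.
Resources:
  PassThroughNLB:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Type: network
      Scheme: internet-facing
      Subnets:
        - subnet-0example1
        - subnet-0example2

  TlsTargets:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Protocol: TCP        # TCP, not TLS: no termination at the load balancer
      Port: 443
      TargetType: ip       # typical for ECS tasks using awsvpc networking
      VpcId: vpc-0example

  TcpListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref PassThroughNLB
      Protocol: TCP        # a TLS listener here would terminate the session
      Port: 443
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TlsTargets
```

Because both the listener and the target group use protocol TCP, the TLS handshake completes directly between the client and the container, which must therefore hold the server certificate.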
Application Load Balancers operate at Layer 7 and introduce additional latency through TLS termination and HTTP processing; terminating and re-encrypting at the load balancer (options A and B) also exposes plaintext on the load balancer itself, so the encryption is no longer truly end to end. Route 53 multivalue answer routing operates only at the DNS level: it does not balance connections at the transport layer and plays no role in TLS handling.
AWS recommends NLB TCP pass-through for high-performance, end-to-end encrypted container workloads.
Referenced AWS Specialty Documents:
AWS Certified Security - Specialty Official Study Guide
Elastic Load Balancing Architecture
Network Load Balancer Performance Characteristics