Resilience, Observability & Modern Trends in Load Balancers Quiz

Q1. A web service is deployed on two load-balancer nodes for high availability. In one configuration, both nodes share traffic simultaneously; in another, one node handles traffic while the second stays on standby until needed. Which statement about these two configurations is correct?




Q2. During a routine deployment, you need to temporarily remove one of the application servers from the load balancer's pool. There are active user sessions on that server. How should the load balancer handle this to avoid dropping users’ connections?
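Hint, with a toy illustration: the usual mechanism is to stop sending *new* connections to the server while letting existing sessions finish. A minimal Python sketch of that idea (all class and method names are illustrative, not from any real load balancer):

```python
import time

class Backend:
    def __init__(self, name):
        self.name = name
        self.draining = False      # draining backends accept no new connections
        self.active_sessions = 0   # sessions still in flight

class Pool:
    def __init__(self, backends):
        self.backends = backends
        self._rr = 0

    def pick(self):
        # Round-robin over backends that are NOT draining.
        eligible = [b for b in self.backends if not b.draining]
        chosen = eligible[self._rr % len(eligible)]
        self._rr += 1
        return chosen

    def drain(self, backend, poll=0.01):
        # Mark the backend, then wait for in-flight sessions to complete
        # before removing it from the pool.
        backend.draining = True
        while backend.active_sessions > 0:
            time.sleep(poll)
        self.backends.remove(backend)
```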




Q3. The SRE team monitors the load balancer using four key metrics: traffic (requests per second), latency, errors, and saturation (resource usage). They set up alerts when error rates or latency exceed SLO thresholds. What monitoring approach are they following with these metrics?
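For context, the alerting logic described in the question can be sketched in a few lines. The SLO threshold values below are hypothetical placeholders, not recommendations:

```python
# Hypothetical SLO thresholds; real values come from the service's own SLOs.
SLO = {"error_rate": 0.01, "latency_p99_ms": 300}

def check_signals(metrics):
    """Return alert names for any monitored signal breaching its SLO threshold."""
    alerts = []
    if metrics["error_rate"] > SLO["error_rate"]:
        alerts.append("error-rate-above-slo")
    if metrics["latency_p99_ms"] > SLO["latency_p99_ms"]:
        alerts.append("latency-above-slo")
    return alerts
```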




Q4. Your web application is under attack from malicious requests containing SQL-injection attempts and other exploits. You want the load balancer to inspect and block these dangerous HTTP payloads before they reach your servers. Which feature should be enabled?
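To make the inspection idea concrete, here is a deliberately simplified payload filter. Real products use maintained rule sets (e.g. the OWASP Core Rule Set), not two ad-hoc regexes; this sketch only shows the shape of the mechanism:

```python
import re

# Toy signatures for illustration only.
SIGNATURES = [
    re.compile(r"(?i)\bunion\s+select\b"),   # classic SQL-injection probe
    re.compile(r"(?i)<script\b"),            # reflected-XSS attempt
]

def should_block(body: str) -> bool:
    """Return True if the HTTP payload matches a known-bad signature."""
    return any(sig.search(body) for sig in SIGNATURES)
```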




Q5. One client is calling your API thousands of times per minute, causing performance issues for other users. The traffic is legitimate but overwhelming. What load-balancer feature can prevent a single client from consuming all resources at the expense of others?
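One widely used algorithm for this kind of per-client throttling is the token bucket. A minimal sketch, assuming one bucket per client ID (parameters are illustrative):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: each request spends one token; tokens
    refill at `rate` per second up to a maximum of `burst`."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: float(burst))
        self.stamp = defaultdict(time.monotonic)

    def allow(self, client):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens[client] = min(
            self.burst,
            self.tokens[client] + (now - self.stamp[client]) * self.rate,
        )
        self.stamp[client] = now
        if self.tokens[client] >= 1:
            self.tokens[client] -= 1
            return True
        return False
```

Requests beyond the burst are rejected (typically with HTTP 429) until tokens refill.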




Q6. In an active-passive HA load-balancer setup, the standby instance must detect when the active instance goes down. How is this typically accomplished?
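The standby side of such a scheme usually amounts to a small timeout-based failure detector. A sketch, assuming the active node sends heartbeats at a fixed interval (the `missed=3` tolerance is an illustrative choice):

```python
import time

class FailureDetector:
    """Standby-side monitor: declare the peer dead only after `missed`
    consecutive heartbeat intervals pass without a heartbeat."""
    def __init__(self, interval_s, missed=3):
        self.timeout = interval_s * missed
        self.last_seen = time.monotonic()

    def heartbeat_received(self):
        self.last_seen = time.monotonic()

    def peer_alive(self, now=None):
        now = time.monotonic() if now is None else now
        return (now - self.last_seen) <= self.timeout
```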




Q7. You have two load-balancer appliances in an active-active cluster. If the network link between them fails, each one might assume the other is dead and take full control, resulting in both acting as primary (a split-brain scenario). What design measure can prevent this?
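A common safeguard here is a majority (quorum) rule, often implemented by adding a third witness node as tiebreaker. The decision itself is one line; the sketch below assumes the node counts itself among the reachable nodes:

```python
def may_become_active(reachable_nodes: int, cluster_size: int) -> bool:
    """Claim the active role only when a strict majority of the cluster is
    visible (including ourselves). With two balancers plus a witness,
    cluster_size is 3, so an isolated node that sees only itself must
    stand down instead of going split-brain."""
    return reachable_nodes > cluster_size // 2
```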




Q8. Your web service handles normal traffic fine, but when a sudden surge of clients all connect at once, the back-end servers are momentarily overwhelmed by the initial flood of new requests. Which load-balancer feature helps protect the servers by smoothing out such traffic bursts?
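The mechanism in question buffers the burst and releases it at a pace the backends can absorb. A toy sketch of such a surge queue (the dispatch rate and depth limit are illustrative parameters):

```python
from collections import deque

class SurgeQueue:
    """Buffer incoming bursts and forward requests at a fixed pace."""
    def __init__(self, dispatch_per_tick, max_depth):
        self.q = deque()
        self.dispatch_per_tick = dispatch_per_tick
        self.max_depth = max_depth

    def enqueue(self, request):
        if len(self.q) >= self.max_depth:
            return False          # shed load once the queue is full
        self.q.append(request)
        return True

    def tick(self):
        # Forward at most `dispatch_per_tick` requests per interval,
        # no matter how large the incoming burst was.
        n = min(self.dispatch_per_tick, len(self.q))
        return [self.q.popleft() for _ in range(n)]
```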




Q9. The team has implemented distributed tracing for their microservices. They want the load balancer to participate in traces as well by adding a unique trace-ID header and recording forwarding latency. What does this enable?
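The load balancer's part of this is small: propagate (or mint) a trace ID and time its own forwarding hop. A sketch, assuming a custom `X-Trace-Id` header for simplicity (the W3C Trace Context standard uses `traceparent` instead):

```python
import time
import uuid

TRACE_HEADER = "X-Trace-Id"  # illustrative header name

def forward(request_headers: dict, send_upstream):
    # Reuse the caller's trace ID if present; otherwise start a new trace.
    trace_id = request_headers.get(TRACE_HEADER) or uuid.uuid4().hex
    request_headers[TRACE_HEADER] = trace_id
    start = time.monotonic()
    response = send_upstream(request_headers)
    # Record this hop as its own span for the tracing backend.
    span = {"trace_id": trace_id, "duration_s": time.monotonic() - start}
    return {"response": response, "span": span}
```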




Q10. A company deploys a service mesh with a sidecar proxy (such as Envoy) alongside each microservice, instead of relying only on a central load balancer for service-to-service traffic. Which is an advantage of this sidecar-based approach?




Q11. In a zero-trust architecture, all client-service connections must use mutual TLS (mTLS) for authentication. How can you configure the load balancer to enforce this while keeping the TLS connection end-to-end between client and server?




Q12. A startup expects a huge traffic spike during a one-day product launch. Their cloud load balancer can scale up, but it takes a few minutes to provision new capacity once the surge begins. What should they do beforehand?




Q13. A microservices platform notices high CPU usage and added latency from running a sidecar proxy alongside each service for load balancing. To reduce user-space overhead, they consider performing load balancing in the Linux kernel instead. Which technology enables this?




Q14. An e-commerce company wants to run custom code (for user authorization and dynamic content) at the edge of the network within CDN or load-balancer nodes. What is this architectural trend called?




Q15. A security policy mandates end-to-end encryption, so the load balancer is configured for TLS passthrough (it forwards encrypted traffic without decrypting). What is one significant limitation of this setup?




Q16. A popular API is served via load balancers in several regions. The team wants to enforce a global rate limit per user across all these distributed instances. What is a common solution?
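The usual pattern is a shared counter store (commonly Redis) that every load-balancer instance consults. The sketch below uses a plain dict standing in for that shared store and a fixed-window counter; a production setup would add key expiry and often a sliding window:

```python
import time

class GlobalRateLimiter:
    """Fixed-window counter against a shared store. `store` stands in for
    a network-accessible store (e.g. Redis) shared by all instances."""
    def __init__(self, store, limit, window_s=60):
        self.store, self.limit, self.window_s = store, limit, window_s

    def allow(self, user, now=None):
        now = time.time() if now is None else now
        window = int(now // self.window_s)
        key = f"{user}:{window}"   # counters for old windows simply go stale
        count = self.store.get(key, 0) + 1
        self.store[key] = count
        return count <= self.limit
```

Because every instance increments the same key, the limit holds globally, not per region.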




Q17. A company’s infrastructure includes global anycast load balancing, a service mesh with sidecar proxies, and eBPF-based optimizations in the kernel. Engineers are finding it challenging to manage. What is a primary drawback of such a highly complex, multi-layer load-balancing architecture?




Q18. In a high-availability pair of load balancers, the secondary needs to seamlessly take over traffic if the primary fails, without clients noticing. What technique is used to achieve this transparent failover?




system-design