7 Cloud Architecture Patterns You Should Know
Use Cases, Real-World Examples, and Key Interview Questions

Hey inner circle,
Do you want to know how tech giants build amazing apps that handle millions of users, ensure high availability, and keep adding cool new features?
Well, it’s not magic! They use something called architecture patterns—proven blueprints or recipes for building and delivering software & services.
Why should you care about them? These patterns help you:
Grow your systems easily (scale)
Stay online even when things go wrong (resilience)
Add new features without breaking everything (evolvability)
Here are 7 patterns with their use cases, examples, and interview questions you should be prepared for:

1. Microservices: Breaking Apps Into Small Blocks
What: Split your app into independent services that do one thing well, like LEGO blocks.
When to use: Big apps with many teams or when you need faster updates.
Examples:
E-commerce: Separate services for cart, catalog, payments, and shipping. One team can improve the checkout without touching product listings.
Streaming platforms: Independent microservices for user profiles, recommendations, video encoding, and billing.
Interview Questions & Answers:
What problems do microservices solve compared to monoliths?
They reduce code complexity, allow teams to work independently, and enable easier scaling of individual features.
How do microservices communicate, and what protocols are common?
They use HTTP REST, gRPC, or messaging protocols like AMQP/Kafka for async communication.
What are some common pitfalls when adopting microservices?
Too many services increase ops overhead; distributed transactions become tricky; debugging across services is harder.
How do you handle data consistency across microservices?
Use eventual consistency with patterns like event sourcing; avoid distributed transactions when possible.
Why would a company move back from microservices to a monolith?
Microservices add overhead in small teams or simple apps. A monolith can be faster to develop, easier to deploy, and cheaper to maintain if scalability and team size don’t justify the complexity.
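To make the idea concrete, here’s a minimal Python sketch of a single service boundary: a tiny cart service that owns its own data and is called over HTTP by another service. The service name, port, and in-memory data are illustrative assumptions, not a production setup.

```python
# Minimal sketch of one microservice boundary using only the standard library.
# The "cart service", its port, and the in-memory data are illustrative.
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CARTS = {"user-1": ["laptop", "mouse"]}  # stand-in for the cart service's own database


class CartService(BaseHTTPRequestHandler):
    """Owns cart data only; catalog, payments, and shipping would be separate services."""

    def do_GET(self):
        user_id = self.path.strip("/")
        body = json.dumps({"user": user_id, "items": CARTS.get(user_id, [])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence per-request logging for the demo
        pass


if __name__ == "__main__":
    server = HTTPServer(("localhost", 8001), CartService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    time.sleep(0.3)  # give the service a moment to start

    # Another service (say, checkout) talks to the cart service only through its API.
    with urllib.request.urlopen("http://localhost:8001/user-1") as resp:
        print(json.loads(resp.read()))  # {'user': 'user-1', 'items': ['laptop', 'mouse']}
```

Each such service can be deployed, scaled, and owned by a different team, as long as the API contract stays stable.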
2. Event-Driven: Apps That React Instantly
What: Components send “events” when things happen, and other parts listen and react.
When to use: Need real-time reactions or loosely coupled systems.
Examples:
Food delivery: When an order is placed, events notify the kitchen, driver, and payment systems at the same time.
Finance: “TransactionCompleted” events trigger fraud checks, update balances, and notify customers.
Interview Questions & Answers:
How does event-driven architecture improve scalability?
Systems can scale consumers independently, process events in parallel, and decouple services.
What’s the difference between event sourcing and event-driven design?
Event-driven: Reacts to events for communication.
Event sourcing: Stores every state change as an event for rebuilding system state.
How do you guarantee message delivery?
Use brokers like Kafka with durability, acknowledgments, retries, and dead-letter queues.
What’s eventual consistency, and when is it acceptable?
Data becomes consistent over time. Acceptable when immediate accuracy isn’t critical (e.g., updating analytics dashboards).
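Here’s a minimal Python sketch of the publish/subscribe idea behind this pattern, using an in-process event bus. In a real system the bus would be a broker like Kafka or RabbitMQ; the event name and handlers below are just illustrative.

```python
# Minimal in-process sketch of publish/subscribe, the core of event-driven design.
from collections import defaultdict
from typing import Callable


class EventBus:
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # The publisher doesn't know (or care) who reacts -- that's the decoupling.
        for handler in self._subscribers[event_type]:
            handler(payload)


bus = EventBus()
bus.subscribe("OrderPlaced", lambda e: print(f"Kitchen: start preparing order {e['order_id']}"))
bus.subscribe("OrderPlaced", lambda e: print(f"Dispatch: find a driver near {e['address']}"))
bus.subscribe("OrderPlaced", lambda e: print(f"Payments: authorize {e['total']}"))

bus.publish("OrderPlaced", {"order_id": 42, "address": "12 Main St", "total": 18.50})
```

Adding a new reaction (say, loyalty points) means subscribing one more handler; the order service never changes.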
3. Sidecar: Your App’s Helper
What: A helper runs next to your app, offloading tasks like logging, auth, or proxying.
When to use: When you need shared features across microservices without bloating app code.
Examples:
Logging: A sidecar collects logs from your app and forwards them to systems like ELK or Datadog.
Service mesh: Sidecars like Envoy handle retries, circuit-breaking, and secure communication.
Interview Questions & Answers:
What are common use cases for sidecar containers?
Logging, monitoring, security proxies, service mesh features like traffic routing or circuit-breaking.
How do sidecars fit into a Kubernetes architecture?
They’re additional containers in the same pod as the main app, sharing networking and storage.
How do you secure communication between your app and its sidecar?
Use local sockets or loopback networking; enforce TLS within the pod if needed.
What’s the difference between a sidecar and a daemon process?
Sidecars are tightly coupled to a specific app instance; daemons run independently on the host.
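For a feel of what a logging sidecar actually does, here’s a minimal Python sketch of a separate process that tails the log file the main app writes (a shared volume in Kubernetes) and forwards each new line. The file path and the forwarding target are assumptions for illustration only.

```python
# Minimal sketch of a logging sidecar: a separate process that tails the app's log
# file and ships each line onward. Path and destination are illustrative.
import time

LOG_PATH = "/var/log/app/app.log"   # shared between the app container and the sidecar


def forward(line: str) -> None:
    # A real sidecar would ship this to ELK, Datadog, etc.; here we just print it.
    print(f"[sidecar] shipping: {line.rstrip()}")


def tail_and_forward(path: str) -> None:
    with open(path, "r") as log_file:
        log_file.seek(0, 2)          # jump to end of file, like `tail -f`
        while True:
            line = log_file.readline()
            if line:
                forward(line)
            else:
                time.sleep(0.5)      # nothing new yet; poll again


if __name__ == "__main__":
    tail_and_forward(LOG_PATH)
```

The main app stays completely unaware of where its logs end up, which is exactly the point of the pattern.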
4. Strangler Fig: Replace Old Systems Gradually
What: Build new parts bit by bit, redirect traffic, and retire old parts over time.
When to use: Modernizing legacy systems without risky big-bang rewrites.
Examples:
Payment gateway: Move only payment authorization to a new service, keeping settlement logic in the old system initially.
Banking apps: Slowly replace parts of a legacy core banking system by routing new features through modern microservices.
Interview Questions & Answers:
How would you plan a migration using the strangler fig pattern?
Identify low-risk features to migrate first, deploy them, redirect traffic, and iterate.
What tools help implement this pattern?
API gateways or reverse proxies like NGINX/Envoy to route traffic to new or old components.
How do you manage data consistency during gradual migration?
Sync data between old and new systems using CDC (Change Data Capture) or dual writes.
How can you monitor traffic between old and new systems?
Use logging and tracing at the proxy/API gateway; monitor metrics on response times and error rates.
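Here’s a minimal Python sketch of the routing facade at the heart of this pattern: requests for already-migrated endpoints go to the new service, everything else still hits the legacy system. The URLs and path prefixes are illustrative.

```python
# Minimal sketch of a strangler-fig routing facade. As migration progresses, more
# prefixes move into MIGRATED_PREFIXES until the legacy system can be retired.
MIGRATED_PREFIXES = {"/payments/authorize", "/payments/refund"}  # moved so far
NEW_SERVICE = "https://payments.new.internal"
LEGACY_SYSTEM = "https://legacy.internal"


def route(path: str) -> str:
    """Return the backend that should serve this request."""
    if any(path.startswith(prefix) for prefix in MIGRATED_PREFIXES):
        return f"{NEW_SERVICE}{path}"
    return f"{LEGACY_SYSTEM}{path}"


print(route("/payments/authorize"))  # -> new service
print(route("/settlement/run"))      # -> legacy system
```

In practice this routing table lives in an API gateway or reverse proxy, not in application code, but the decision logic is the same.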
5. Sharding: Splitting Data for Speed
What: Divide a big database into smaller, faster pieces (shards).
When to use: Massive data or user bases that overwhelm a single database.
Examples:
User databases: Companies shard users by user ID range, so each database only handles a fraction of users.
IoT platforms: Store sensor data by device group or time period in separate shards.
Interview Questions & Answers:
How would you decide your sharding key?
Choose a key with uniform distribution (e.g., user ID) to avoid hot spots.
What problems can arise from unbalanced shards?
Some shards get overloaded (hot shards) causing performance bottlenecks.
How do you perform a resharding operation safely?
Use consistent hashing, migrate data gradually with dual writes, and minimize downtime.
What’s the difference between horizontal and vertical sharding?
Horizontal: Splitting rows across multiple databases.
Vertical: Splitting tables by functionality into separate databases.
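Here’s a minimal Python sketch of hash-based (horizontal) sharding, where a stable hash of the shard key picks the database that owns a user’s rows. The shard names and count are illustrative.

```python
# Minimal sketch of hash-based shard selection for a user database.
import hashlib

SHARDS = ["users_db_0", "users_db_1", "users_db_2", "users_db_3"]


def shard_for(user_id: str) -> str:
    # A stable hash (not Python's randomized hash()) keeps routing consistent across restarts.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]


for uid in ["alice", "bob", "carol", "dave"]:
    print(uid, "->", shard_for(uid))
# A well-distributed key (like user ID) spreads load evenly; a skewed key (like country)
# creates hot shards.
```

Changing the number of shards with simple modulo arithmetic remaps most keys, which is why real systems lean on consistent hashing and gradual migration when resharding.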
6. Serverless: Code Without Operational Overhead
What: Write small functions that the cloud runs only when needed — no servers to manage.
When to use: Event-based tasks, unpredictable traffic, or quick MVPs.
Examples:
Image processing: When a user uploads an image, a serverless function resizes it on the fly.
Email notifications: Serverless functions send emails in response to account activity.
Interview Questions & Answers:
What is a cold start, and how do you mitigate it?
Delay on first request due to initializing the function container. Mitigate with provisioned concurrency or warm-up requests.
When should you avoid serverless?
Long-running tasks, constant low-latency requirements, or high, predictable traffic (dedicated servers may be cheaper).
How does billing work with serverless?
Pay per execution duration, memory used, and number of invocations.
How would you build a fully serverless API?
Use API Gateway + Lambda + serverless database (e.g., DynamoDB).
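Here’s a minimal Python sketch of a serverless function in the AWS Lambda handler shape, as it might sit behind an API Gateway route. The event format mirrors the API Gateway proxy payload, and the email-sending helper is a hypothetical stand-in for real work.

```python
# Minimal sketch of a Lambda-style handler: runs only when invoked, billed per invocation.
import json


def send_welcome_email(address: str) -> None:
    print(f"(pretend) sending welcome email to {address}")  # hypothetical helper


def handler(event, context):
    """Entry point the platform calls; no servers for you to provision or patch."""
    body = json.loads(event.get("body") or "{}")
    email = body.get("email")
    if not email:
        return {"statusCode": 400, "body": json.dumps({"error": "email is required"})}
    send_welcome_email(email)
    return {"statusCode": 200, "body": json.dumps({"message": f"welcome sent to {email}"})}


# Local invocation with a fake API Gateway-style event -- handy for testing before deploying.
print(handler({"body": json.dumps({"email": "user@example.com"})}, context=None))
```

The same handler shape works for event triggers like uploads or queue messages; only the contents of `event` change.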
7. API Gateway: Your App’s Traffic Controller
What: One central entry point for all client requests; routes, authenticates, and monitors traffic.
When to use: Many services needing secure, unified access.
Examples:
SaaS platforms: API Gateways handle routing to services for billing, support, analytics, and more.
Gaming backends: Central gateways protect game APIs from abuse and apply rate limits.
Interview Questions & Answers:
What’s the role of an API Gateway in microservices?
Centralize routing, offload cross-cutting concerns (auth, rate limiting, logging).
How do API Gateways improve security?
Handle authentication/authorization, encrypt traffic, enforce rate limits.
What’s the difference between API Gateway and a load balancer?
API Gateway understands and manipulates API-level traffic (HTTP verbs, headers), while load balancers distribute traffic at the transport level (TCP/HTTP).
How do you implement caching in an API Gateway?
Configure response caching by URL/method, set appropriate cache headers, or integrate with CDN.
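To tie it together, here’s a minimal Python sketch of what a gateway does on every request: authenticate the caller, apply a rate limit, then route to the right backend. The API keys, limits, and backend URLs are illustrative.

```python
# Minimal sketch of API Gateway responsibilities: auth, rate limiting, and routing.
import time
from collections import defaultdict

ROUTES = {"/billing": "https://billing.internal", "/support": "https://support.internal"}
API_KEYS = {"key-123": "acme-corp"}
RATE_LIMIT = 5                      # requests per client per 60-second window
_request_log = defaultdict(list)    # client -> timestamps of recent requests


def handle(path: str, api_key: str) -> str:
    client = API_KEYS.get(api_key)
    if client is None:
        return "401 Unauthorized"

    window_start = time.time() - 60
    _request_log[client] = [t for t in _request_log[client] if t > window_start]
    if len(_request_log[client]) >= RATE_LIMIT:
        return "429 Too Many Requests"
    _request_log[client].append(time.time())

    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return f"forwarding to {backend}{path}"
    return "404 Not Found"


print(handle("/billing/invoices", "key-123"))   # forwarded to the billing service
print(handle("/billing/invoices", "bad-key"))   # 401
```

Because these concerns live in one place, the backend services stay focused on business logic instead of each re-implementing auth and throttling.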
That’s a wrap! You did well!!
See you next Thursday with more cloud insights.