Exam Guide: Solutions Architect - Associate
🧱 Domain 2: Design Resilient Architectures
📘 Task Statement 2.1
🎯 Designing Scalable And Loosely Coupled Architectures is about building systems that can:
1 Scale up and down with demand
2 Keep working when a component fails
3 Avoid tight dependencies so changes and failures don’t cascade
Loose coupling reduces blast radius.
Scaling reduces bottlenecks.
Managed services reduce operational burden.
Knowledge
1 | API Creation And Management
API Gateway And REST API
Amazon API Gateway is commonly used to:
1 Create REST/HTTP APIs for backend services
2 Front Lambda or private services
3 Handle auth, throttling, caching, request validation, and stages
“Need a managed API front door with throttling/auth” → API Gateway.
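Throttling is one of the features API Gateway handles for you; under the hood it follows a token-bucket model (a steady rate plus a burst allowance). A minimal sketch of that idea, purely illustrative — API Gateway manages this for you, and the numbers here are made up:

```python
import time

class TokenBucket:
    """Simplified token-bucket throttle, similar in spirit to API Gateway's
    rate + burst limits (illustrative only, not the service's implementation)."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec       # steady-state refill rate
        self.capacity = burst          # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at burst capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                   # caller would answer with HTTP 429

bucket = TokenBucket(rate_per_sec=10, burst=2)
results = [bucket.allow() for _ in range(3)]
# First two requests fit in the burst; the third is throttled
```

With API Gateway you only configure the rate and burst values; the point of the sketch is the behavior, not code you would write yourself.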
2 | Managed Services With Appropriate Use Cases
SQS, Transfer Family, Secrets Manager
The exam likes “use a managed service instead of building it.”
1 Amazon SQS: decouple services, buffer traffic spikes, async processing
2 AWS Transfer Family: managed SFTP/FTPS/FTP into S3/EFS (file ingestion)
3 AWS Secrets Manager: store + rotate secrets
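The SQS pattern — a producer bursts messages into a buffer while a consumer drains them at its own pace — can be sketched in-process with the standard library. This is the shape of the pattern only; real SQS access goes through the AWS SDK:

```python
import queue
import threading

# In-process stand-in for the SQS decoupling pattern: the producer bursts
# messages into a buffer; a separate consumer drains them asynchronously.
buffer = queue.Queue()
processed = []

def consumer():
    while True:
        msg = buffer.get()
        if msg is None:                       # sentinel: stop consuming
            break
        processed.append(msg.upper())         # simulate per-message work
        buffer.task_done()

t = threading.Thread(target=consumer)
t.start()

for i in range(5):                            # traffic spike: enqueue quickly
    buffer.put(f"order-{i}")
buffer.put(None)
t.join()
# processed now holds all five messages, handled asynchronously
```

The producer never waits on the consumer, and a slow consumer only grows the buffer instead of dropping requests — exactly the decoupling/buffering benefit SQS gives you as a managed service (plus durability, retries, and DLQs).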
3 | Caching Strategies
Caching improves performance and reduces load on databases/backends.
1 CloudFront: cache content at the edge (static + dynamic with rules)
2 ElastiCache (Redis/Memcached): app/data caching, sessions, leaderboards
3 API Gateway caching: cache API responses
“Reduce DB reads / improve latency for repeated requests” → caching
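The most common app-level caching strategy with ElastiCache is cache-aside: check the cache first, fall back to the database on a miss, then populate the cache. A minimal sketch, with a dict standing in for Redis and a hypothetical `db_query` standing in for an expensive database read:

```python
# Cache-aside pattern sketch: dict = stand-in for ElastiCache/Redis,
# db_query = hypothetical expensive database read.
cache = {}
db_calls = 0

def db_query(user_id):
    global db_calls
    db_calls += 1                      # count expensive backend reads
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    if user_id in cache:               # cache hit: no DB load
        return cache[user_id]
    value = db_query(user_id)          # cache miss: read through to the DB
    cache[user_id] = value             # populate for subsequent reads
    return value

get_user(42); get_user(42); get_user(42)
# Three reads, but only the first one hits the database
```

In a real deployment you would also set a TTL on cached entries so stale data eventually expires.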
4 | Microservices Design Principles
Stateless vs Stateful
- Stateless compute scales horizontally (add more instances/tasks)
- State lives in managed services (databases, caches, object storage)
Examples:
- ECS tasks and Lambda functions should not depend on local disk for important state
- Store files in S3, shared files in EFS, sessions in Redis if needed
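The stateless idea can be shown in a few lines: the handler keeps nothing locally, and session data lives in an external store (a dict standing in for Redis/ElastiCache), so any instance or task can serve any request:

```python
import uuid

# Stateless-compute sketch: the handler holds no local state; sessions live
# in an external store (dict = stand-in for Redis/ElastiCache), so any
# instance behind the load balancer can serve any request.
session_store = {}                          # shared by all instances

def login(user):
    token = str(uuid.uuid4())
    session_store[token] = {"user": user}   # state goes to the shared store
    return token

def handle_request(token):
    session = session_store.get(token)      # any instance can look this up
    if session is None:
        return "401 Unauthorized"
    return f"hello, {session['user']}"

token = login("alice")
# A "different instance" (same function, no local state) serves the request
assert handle_request(token) == "hello, alice"
```

Because no instance owns the session, instances can be added, removed, or replaced freely — which is what makes horizontal scaling and self-healing safe.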
5 | Event-Driven Architectures
Event-driven design improves decoupling and resilience.
1 EventBridge for event bus/routing
2 SNS for pub/sub fan-out
3 SQS for queues and buffering
4 Lambda for consumers
“Multiple consumers need the same event” → SNS fan-out
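EventBridge's core job — match events against rules and deliver them to the matching targets — can be sketched as pattern matching over dicts. Real EventBridge rules use JSON event patterns; this shows only the routing idea, and the event/field names are invented:

```python
# EventBridge-style routing sketch: each rule is (pattern, target); an event
# is delivered to every target whose pattern it matches. Illustrative only.
received = {"billing": [], "shipping": []}

rules = [
    ({"type": "invoice.created"}, lambda e: received["billing"].append(e)),
    ({"type": "order.shipped"},   lambda e: received["shipping"].append(e)),
]

def put_event(event):
    for pattern, target in rules:
        if all(event.get(k) == v for k, v in pattern.items()):
            target(event)              # deliver only to matching targets

put_event({"type": "invoice.created", "id": 1})
put_event({"type": "order.shipped", "id": 2})
# Each consumer sees only the events its rule matched
```

The producer only emits events; it has no idea who consumes them — which is the decoupling win of an event bus.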
6 | Horizontal Scaling vs Vertical Scaling
- Horizontal scaling: add more instances/tasks (preferred for resilience)
- Vertical scaling: make one server bigger (simple, but has limits)
“Need high availability and elasticity” → horizontal scaling + load balancing + managed services.
7 | Edge Accelerators
CDN use
- CloudFront reduces latency and protects origins (caching, TLS termination, WAF integration)
- Global Accelerator improves performance for TCP/UDP apps via static anycast IPs and the AWS global network
8 | How To Migrate Apps Into Containers
When Containers Are A Fit:
- You need portability, consistent runtime, or microservices packaging
- You want controlled scaling without managing servers (Fargate)
Main options:
- Amazon ECS: simpler AWS-native container orchestration
- Amazon EKS: managed Kubernetes control plane
9 | Load Balancing Concepts
Application Load Balancer
- ALB (Application Load Balancer): HTTP/HTTPS, path-based routing, host-based routing
- Scales horizontally and improves availability by spreading traffic
10 | Multi-tier Architectures
Classic Tiers:
1 Presentation:
- CloudFront
- ALB
2 Application:
- EC2
- ECS
- EKS
- Lambda
3 Data:
- RDS
- DynamoDB
- ElastiCache
- S3
“Separate web/app/db tiers” → multi-tier in private subnets with scaling where needed.
11 | Queuing and Messaging Concepts
Key Distinctions:
- SQS: queue (point-to-point), buffering, retries, DLQs
- SNS: publish/subscribe (fan-out), multiple subscribers
- SNS → SQS for fan-out + durability per consumer
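The SNS → SQS fan-out pattern means one publish delivers an independent copy of the message to every subscribed queue, so each consumer drains its own copy at its own pace. A sketch with deques standing in for SQS queues (the pattern, not the AWS API):

```python
from collections import deque

# SNS → SQS fan-out sketch: one publish, one independent copy per subscriber.
# Deques stand in for SQS queues; queue names are illustrative.
email_queue, analytics_queue = deque(), deque()
subscriptions = [email_queue, analytics_queue]

def publish(message):
    for q in subscriptions:
        q.append(dict(message))        # each subscriber gets its own copy

publish({"event": "user.signup", "user": "alice"})
# Both queues hold a copy; draining one doesn't affect the other
email_queue.popleft()
```

Because each subscriber owns its copy, a slow or failed consumer never blocks the others — the per-consumer durability the bullet above refers to.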
12 | Serverless Technologies And Patterns
Lambda And Fargate
Serverless means “no servers to manage,” and scaling is built in.
- AWS Lambda: event-driven compute, short-running, scales automatically
- AWS Fargate: serverless containers; you run tasks and services without managing EC2 instances
13 | Storage Types And Characteristics
- Object (S3): durable, cheap, great for static assets, logs, backups
- File (EFS): shared POSIX file system for multiple instances
- Block (EBS): low-latency block storage attached to one EC2 instance (per volume)
14 | The Orchestration of Containers
Container orchestration is how you deploy, run, scale, and heal containers across compute capacity without manually managing container placement.
What Orchestration Solves
- Scheduling containers onto capacity
- Horizontal scaling (more tasks/pods)
- Self-healing (replace failed containers)
- Rolling deployments
- Load balancing integration
Amazon ECS (Elastic Container Service)
AWS-native orchestrator. You run:
- Task definitions: blueprints
- Tasks: running copies
- Services: keep desired task count running, integrate with ALB
Capacity Choices:
- ECS on Fargate: serverless containers, so minimal ops
- ECS on EC2: you manage the instances, so more control
If the problem doesn’t require Kubernetes, ECS (often on Fargate) is usually the simplest correct answer.
Amazon EKS (Elastic Kubernetes Service)
Managed Kubernetes. You run:
- Pods: smallest deployable unit
- Deployments: replicas + rolling updates
- Services/Ingress: expose workloads
Worker Capacity:
Managed node groups (EC2) or EKS on Fargate for pods (where suitable)
If the question explicitly says Kubernetes, standardization across clouds, or Kubernetes tooling → EKS.
15 | When To Use Read Replicas
- You have read-heavy workloads
- You want to offload reads from the primary database instance
- You can tolerate eventual consistency on replica reads
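Offloading reads usually means a small routing decision in the data layer: writes go to the primary endpoint, reads are spread across replica endpoints. A sketch with placeholder endpoint strings (the hostnames are invented, not real RDS endpoints):

```python
import random

# Read/write splitting sketch for read replicas: writes hit the primary,
# reads are load-balanced across replicas. Hostnames are illustrative.
PRIMARY = "primary.db.example.internal"
REPLICAS = ["replica-1.db.example.internal", "replica-2.db.example.internal"]

def endpoint_for(sql):
    verb = sql.strip().split()[0].upper()
    if verb == "SELECT":                 # reads can tolerate replica lag
        return random.choice(REPLICAS)
    return PRIMARY                       # writes must go to the primary

assert endpoint_for("INSERT INTO orders VALUES (1)") == PRIMARY
assert endpoint_for("select * from orders") in REPLICAS
```

The "tolerate eventual consistency" caveat matters here: a SELECT routed to a replica may briefly miss a write that just committed on the primary.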
16 | Workflow Orchestration
Step Functions
Step Functions is used to:
- Orchestrate multi-step workflows (retries, timeouts, branching)
- Coordinate microservices without building a custom state machine
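What Step Functions replaces is essentially a hand-rolled loop like the one below: run steps in order, retry transient failures, fail the workflow when retries are exhausted. A sketch of that custom state machine (illustrative pattern only, not the Step Functions API; the step names are invented):

```python
# Tiny sequential workflow runner with per-step retries — the kind of custom
# state machine Step Functions saves you from building and operating.
def run_workflow(steps, payload, max_attempts=3):
    for step in steps:
        for attempt in range(1, max_attempts + 1):
            try:
                payload = step(payload)   # one step's output feeds the next
                break
            except Exception:
                if attempt == max_attempts:
                    raise                 # retries exhausted: fail workflow
    return payload

calls = {"n": 0}

def flaky_charge(order):
    calls["n"] += 1
    if calls["n"] < 2:                    # fail once, then succeed
        raise RuntimeError("transient error")
    return {**order, "charged": True}

def ship(order):
    return {**order, "shipped": True}

result = run_workflow([flaky_charge, ship], {"id": 7})
# result: {'id': 7, 'charged': True, 'shipped': True} after one retry
```

Step Functions gives you the same semantics declaratively — plus timeouts, branching, backoff policies, and execution history — without owning this code.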
Skills
A | Design Event-Driven, Microservice, And/Or Multi-tier Architectures
Pick Based On Requirements:
1 Monolith → multi-tier: simplest scaling boundaries, common for web apps
2 Microservices: independent deploy and scale per service
3 Event-driven: best for decoupling, async workflows, variable traffic
“Must decouple producers and consumers” → events/queues.
B | Determine Scaling Strategies For Architecture Components
Typical Scaling Choices
1 ALB + Auto Scaling for EC2
2 ECS Service Auto Scaling (tasks)
3 Lambda concurrency scaling
4 DB scaling: read replicas, caching, sharding patterns
C | Determine Services Required To Achieve Loose Coupling
Loose coupling toolkit:
1 SQS: buffer + retry + DLQ
2 SNS: fan-out
3 EventBridge: event routing
4 Step Functions: workflow + retries/timeouts
5 S3: durable handoff pattern (“drop file, trigger event”)
D | Determine When To Use Containers
Containers are a good answer when:
1 You need custom runtimes or dependencies
2 You want microservices packaging with predictable scaling
3 You want long-running services (vs Lambda time limits)
E | Determine When To Use Serverless Technologies And Patterns
Serverless Is A Good Answer
1 Event-driven workloads
2 Spiky/unknown traffic
3 Want minimal ops and fast scaling
4 You can fit within service constraints (timeouts, cold starts, etc.)
F | Recommend Compute, Storage, Networking, And Database Technologies
1 Compute: Lambda vs ECS/Fargate vs EC2
2 Storage: S3 vs EFS vs EBS
3 Data: DynamoDB vs RDS (relational needs)
4 Network entry: CloudFront/ALB/API Gateway
G | Use Purpose-Built AWS Services
1 SQS for queues, SNS for pub/sub
2 API Gateway for managed API
3 Step Functions for orchestration
4 ElastiCache for caching
Cheat Sheet
| Requirement | Choice |
|---|---|
| Need to buffer spikes / decouple services | SQS (+ DLQ) |
| Fan-out to multiple consumers | SNS → multiple SQS |
| Route events to different targets | EventBridge |
| Managed API front door | API Gateway |
| HTTP path-based routing to services | ALB |
| Global caching and lower latency | CloudFront |
| Orchestrate steps with retries/timeouts | Step Functions |
| Spiky events, minimal ops | Lambda |
| Containerized microservices without servers | ECS on Fargate |
| Read-heavy database workload | RDS read replicas |
| Store files durably and cheaply | S3 |
Recap Checklist ✅
If you can explain these ideas in simple terms, you’re in good shape for Task Statement 2.1:
1. [ ] Compute is designed to be stateless so it can scale horizontally
2. [ ] Traffic is distributed using ALB/API Gateway where appropriate
3. [ ] Spikes and failures are absorbed using queues/streams (often SQS)
4. [ ] Services are loosely coupled (async messaging, events, durable handoffs)
5. [ ] Caching is used to reduce latency and backend load (CloudFront/ElastiCache)
6. [ ] Storage choice matches the need (S3 object vs EFS file vs EBS block)
7. [ ] Databases scale reads with read replicas when needed
8. [ ] Workflows are orchestrated with Step Functions when coordination is required
9. [ ] Containers/serverless are chosen based on runtime + ops + scaling requirements
AWS Whitepapers and Official Documentation
These are the primary AWS documents behind Task Statement 2.1.
You do not need to memorize them; use them to understand why scalable and loosely coupled architectures work the way they do.
APIs and Integration
1. API Gateway
2. Step Functions
3. EventBridge
4. SNS
5. SQS
Load Balancing and Edge
1. Application Load Balancer (ALB)
2. CloudFront
3. Global Accelerator
Compute (Serverless and Containers)
1. Lambda
2. ECS
3. Fargate
4. EKS
Storage and Databases
1. S3
2. EBS
3. EFS
4. RDS read replicas
Other Managed Services Mentioned
1. AWS Transfer Family
2. AWS Secrets Manager
3. ElastiCache (Redis)
🚀