Time: ~10 hours · Difficulty: Intermediate · Stack: AWS · CloudTrail · SIEM (Wazuh / Elastic / Matano) · Sigma · Stratus Red Team
Detection engineering is the cloud security specialty with the lowest barrier to entry for a portfolio project. The pattern: get attack telemetry into a SIEM, write detections, prove they fire on the real attack technique. Doing this once on your own account, and publishing it, puts you ahead of 80% of detection-engineer applicants.
What you'll have at the end
- A working SIEM (Wazuh, Elastic, or Matano) ingesting CloudTrail from your own account.
- 5 Sigma rules (or vendor-specific equivalents) mapped to MITRE ATT&CK Cloud techniques.
- Validation evidence: a Stratus Red Team run that triggered each detection.
- A write-up explaining the rationale, the false-positive analysis, and the tuning you did.
- Bonus: a one-pager mapping your coverage against the MITRE ATT&CK Cloud matrix.
Prerequisites
- An AWS account with CloudTrail enabled.
- Home lab guardrails – this project generates extra log volume and you don't want a runaway bill.
- Comfort with Linux and at least one query language (KQL, SPL, Lucene, or SQL).
- Docker (for the SIEM stack).
Step-by-step
1. Pick your SIEM
Three solid free options:
- Wazuh – free, self-hosted, includes the analytics layer. Good if you want to run everything on one EC2 box.
- Elastic Security – free tier on Elastic Cloud or self-hosted. Mature rule library; what most enterprises run.
- Matano – serverless, S3-native, written for cloud detection from the ground up. Best signal-to-resume-noise ratio.
2. Stand up the SIEM
For Wazuh: docker compose up -d with the official compose file gets you running locally in 10 minutes. For Matano: deploy via AWS CDK to a new sub-account.
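For the Wazuh route, the sequence looks roughly like this. Treat it as a sketch: the repository layout, certificate step, and default credentials change between releases, so verify against the current wazuh-docker README.

```shell
# Sketch of the Wazuh single-node Docker path. Paths and filenames
# below follow the wazuh-docker repo at time of writing -- verify
# against the current README before running.
git clone https://github.com/wazuh/wazuh-docker.git
cd wazuh-docker/single-node

# Generate the indexer certificates, then bring the stack up.
docker compose -f generate-indexer-certs.yml run --rm generator
docker compose up -d

# The dashboard should answer on https://localhost. Default admin
# credentials live in the compose file -- change them before you
# ingest real logs.
```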
3. Wire CloudTrail into it
Configure CloudTrail to write to S3, then have the SIEM read from S3 (Matano reads S3 natively; Wazuh uses its AWS S3 module, Elastic uses Filebeat's S3 input). Confirm events are flowing by signing into the AWS console and verifying the sign-in event lands in the SIEM within ~5 minutes.
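On the Wazuh side, S3 ingestion is configured with the aws-s3 wodle in ossec.conf. A minimal sketch, with a placeholder bucket name and credentials profile:

```xml
<!-- Sketch of a Wazuh aws-s3 wodle for CloudTrail ingestion.
     Bucket name and profile are placeholders for your own values. -->
<wodle name="aws-s3">
  <disabled>no</disabled>
  <interval>10m</interval>
  <run_on_start>yes</run_on_start>
  <bucket type="cloudtrail">
    <name>my-cloudtrail-bucket</name>
    <aws_profile>default</aws_profile>
  </bucket>
</wodle>
```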
4. Pick 5 ATT&CK techniques to cover
Don't pick at random. Choose techniques that are (a) common, (b) detectable in CloudTrail, and (c) mapped to real attacker behavior. Recommended starter set:
- T1078.004 – Valid Accounts: Cloud Accounts (impossible-travel sign-ins).
- T1098.001 – Account Manipulation: Additional Cloud Credentials (CreateAccessKey on an existing user).
- T1530 – Data from Cloud Storage (large S3 GetObject volume from a single principal).
- T1562.008 – Impair Defenses: Disable Cloud Logs (CloudTrail StopLogging or DeleteTrail).
- T1538 – Cloud Service Dashboard (unusual ConsoleLogin from a service account).
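The impossible-travel logic behind the T1078.004 entry reduces to a velocity check between consecutive sign-ins. The event shape and geo-IP resolution are simplified placeholders here; in practice the coordinates come from resolving sourceIPAddress:

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class SignIn:
    user: str
    time: datetime
    lat: float   # resolved from sourceIPAddress by your geo-IP lookup
    lon: float

def km_between(a: SignIn, b: SignIn) -> float:
    # Haversine great-circle distance in kilometres.
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(prev: SignIn, cur: SignIn, max_kmh: float = 900.0) -> bool:
    # Flag when the implied speed between two sign-ins exceeds
    # airliner speed (~900 km/h is a common starting threshold).
    hours = (cur.time - prev.time).total_seconds() / 3600
    if hours <= 0:
        return True
    return km_between(prev, cur) / hours > max_kmh

a = SignIn("alice", datetime(2024, 1, 1, 9, 0), 40.7, -74.0)   # New York
b = SignIn("alice", datetime(2024, 1, 1, 10, 0), 48.9, 2.35)   # Paris, 1h later
print(impossible_travel(a, b))  # transatlantic hop in one hour -> True
```

The threshold is the first thing you will tune: VPNs and corporate egress points teleport users constantly, which is exactly the false-positive work step 7 asks you to document.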
5. Write the detections
For each technique, write the rule in Sigma and compile it to your SIEM's native query language. Commit both versions to your repo. Annotate each rule with the MITRE ID and a short "why this matters" note.
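As a concrete example, a Sigma rule for the T1562.008 entry above might look like the sketch below. The detection block follows the Sigma aws/cloudtrail logsource convention; status, level, and the false-positive note are judgment calls you should adapt:

```yaml
title: AWS CloudTrail Logging Stopped or Trail Deleted
status: experimental
description: Detects StopLogging and DeleteTrail, the classic T1562.008 moves to blind CloudTrail before further activity.
tags:
    - attack.defense-evasion
    - attack.t1562.008
logsource:
    product: aws
    service: cloudtrail
detection:
    selection:
        eventSource: cloudtrail.amazonaws.com
        eventName:
            - StopLogging
            - DeleteTrail
    condition: selection
falsepositives:
    - Infrastructure-as-code pipelines that legitimately reconfigure trails
level: high
```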
6. Test with Stratus Red Team
Stratus Red Team from Datadog emulates real attack techniques against AWS. For each of your 5 detections, run the corresponding stratus technique and confirm your rule fires.
stratus list --platform aws
stratus warmup aws.credential-access.ec2-get-password-data
stratus detonate aws.credential-access.ec2-get-password-data
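To run all five validations in one pass, a loop like the following works; note that detonate performs the warm-up itself if needed, and cleanup removes the infrastructure Stratus created. The technique IDs shown are examples (cloudtrail-stop maps to the T1562.008 rule) – confirm the exact IDs for your five detections with stratus list.

```shell
# Detonate one Stratus technique per detection, then tear down what
# Stratus created. IDs are examples -- confirm yours with `stratus list`.
for t in \
    aws.defense-evasion.cloudtrail-stop \
    aws.credential-access.ec2-get-password-data
do
    stratus detonate "$t"   # detonate warms up automatically if needed
    # ...wait for the event to land in the SIEM, confirm the alert fired...
    stratus cleanup "$t"
done
```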
7. Tune false positives
Run the rules against a week of normal CloudTrail and document every false positive. Tune. Document the tuning logic in your write-up. This is the part that separates real detection engineering from "I copied a rule from a blog."
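In code terms, that replay-and-tune loop is simple: run the rule over historical events and separate true hits from allowlisted (known-good automation) hits. The event shape below is a trimmed CloudTrail record and the allowlisted ARN is a placeholder:

```python
# Replay a detection over historical CloudTrail events and separate
# true alerts from suppressed hits by known-good automation.
WATCHED = {"StopLogging", "DeleteTrail"}
# Placeholder: principals your IaC pipeline legitimately uses.
ALLOWLIST = {"arn:aws:iam::123456789012:role/terraform-deploy"}

def replay(events):
    alerts, suppressed = [], []
    for e in events:
        if e.get("eventName") not in WATCHED:
            continue
        arn = e.get("userIdentity", {}).get("arn", "")
        (suppressed if arn in ALLOWLIST else alerts).append(e)
    return alerts, suppressed

week = [
    {"eventName": "StopLogging",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:role/terraform-deploy"}},
    {"eventName": "DeleteTrail",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:user/mallory"}},
    {"eventName": "DescribeTrails",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"}},
]
alerts, suppressed = replay(week)
print(len(alerts), len(suppressed))  # 1 true alert, 1 suppressed
```

The suppressed list is what goes in your write-up: every allowlist entry is a documented tuning decision, not a silent exclusion.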
8. Build the coverage map
Use the MITRE ATT&CK Navigator to colour the techniques you cover. Export as JSON, commit to the repo, embed the screenshot in the write-up.
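Generating the layer can also be scripted so it stays in sync with your rules. The sketch below emits a minimal Navigator layer JSON for the five techniques from step 4; the schema version is an assumption, so check it against the Navigator release you use:

```python
import json

# The five technique IDs from step 4; score 1 = covered.
COVERED = ["T1078.004", "T1098.001", "T1530", "T1562.008", "T1538"]

layer = {
    "name": "CloudTrail detection coverage",
    "domain": "enterprise-attack",
    # Layer schema version is an assumption -- check your Navigator release.
    "versions": {"layer": "4.5"},
    "techniques": [{"techniqueID": t, "score": 1} for t in COVERED],
}

with open("coverage-layer.json", "w") as f:
    json.dump(layer, f, indent=2)

print(len(layer["techniques"]))  # 5
```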
What hiring managers look for
- You used Sigma as the source of truth for detection logic – portable, reviewable, vendor-agnostic.
- You validated the detections with Stratus Red Team or equivalent. "It fires when the technique runs" is the only proof that matters.
- Your false-positive analysis is honest. Detection engineering is mostly tuning.
- Your rule names and descriptions are good. Detection content is read by humans at 3am – clarity matters.
- You can speak to what you'd add next and how you'd measure coverage over time.
Common mistakes
- Picking 5 techniques because they're easy, not because they're important. Cover what attackers actually do.
- Writing the rule and never running the attack. If you didn't validate it fires, you don't know it fires.
- Skipping the false-positive analysis. Untuned detections are alert-fatigue factories.
- Hardcoding usernames or account IDs into rules. Use variables / parameters.
- Standing up the SIEM and never tearing it down. Wazuh on an idle EC2 instance bills 24/7.
Where to publish
The full publishing playbook is on the portfolio hub page. The short version: a public GitHub repo with a thorough README is the strongest single signal; pair it with a LinkedIn post and (optionally) a 5-minute lightning talk at a CSOH Friday Zoom.
Where next
- All 7 portfolio projects – pick your next one.
- Home lab setup – the safe environment this project runs in.
- Careers guide – how this project fits into the hiring story.
- Friday Zoom + Signal chat – share your write-up with practitioners.
