Time: ~10 hours · Difficulty: Intermediate · Stack: AWS · 3 CNAPP free trials · Markdown
The cloud security tooling market is opaque on purpose: every vendor claims to do everything, comparisons are gated behind sales calls, and most published reviews are sponsored. An honest, hands-on, vendor-neutral comparison from a practitioner is genuinely scarce content. It also signals that you understand the tool category every modern cloud security team uses, which is exactly what hiring managers screen for.
This is the longest project on the list because waiting for sales-engineer scheduling adds calendar time. Plan accordingly.
What you'll have at the end
- A markdown report comparing 3 CNAPP platforms across 6+ dimensions (findings volume, finding quality, false-positive rate, remediation guidance, IaC scanning, runtime detection, IAM analysis, UX, price model).
- Screenshots from each platform of the same finding, side-by-side.
- A scorecard summary at the top that someone can read in 60 seconds.
- A short "who should pick which" section.
- A reflection on what you learned about the CNAPP category itself.
Prerequisites
- An AWS account with a deliberately mixed posture: some real findings, some clean. Use your home lab, ideally one already populated by a Prowler audit or CloudGoat run (a quick footprint inventory is sketched after this list).
- Time to schedule sales calls. Most CNAPPs gate trials behind a 30-min demo. Plan for 2-3 weeks of calendar time.
- Comfort writing balanced product comparisons (not advocacy).
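Before any trial starts, it also helps to record how big the account is, because "number of findings" only means something relative to how much there is to scan. A minimal inventory sketch, assuming boto3 and read-only credentials for the lab account; the resource types counted here are just examples:

```python
# Footprint inventory for the lab account (illustrative resource types).
# Recording this up front lets readers contextualize per-platform findings counts.
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")
iam = boto3.client("iam")

instances = sum(
    len(r["Instances"]) for r in ec2.describe_instances()["Reservations"]
)
buckets = len(s3.list_buckets()["Buckets"])
roles = len(iam.list_roles(MaxItems=1000)["Roles"])  # first page only

print(f"EC2 instances: {instances}")
print(f"S3 buckets:    {buckets}")
print(f"IAM roles:     {roles}")
```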
Step-by-step
1. Pick three CNAPPs to evaluate
Cover the major categories. Recommended starter set:
- Wiz – agentless, broad coverage, market leader.
- Orca – agentless, originated the SideScanning pattern.
- Microsoft Defender for Cloud – bundled with Azure (and now AWS/GCP), strong free tier.
Or substitute Lacework, Prisma Cloud, Aqua, Sysdig, or Tenable Cloud Security. Three is the right number: more becomes a research project, fewer isn't a comparison.
2. Define your evaluation dimensions BEFORE you start
Decide what you'll grade on, in writing, before you see any product. This prevents post-hoc rationalization. Suggested dimensions (a machine-readable version is sketched after this list):
- Time-to-first-finding after onboarding.
- Number of findings on the same account.
- Finding quality / actionability (judged by a rubric).
- False-positive rate (sample 10 random findings, validate each).
- Remediation guidance specificity.
- IaC scanning capability.
- Runtime / workload protection capability.
- IAM analysis capability.
- UX (rate 1–5 with brief notes).
- Price model (where published).
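One way to make the "in writing, before you see any product" rule concrete is to freeze the rubric as a small data file and commit it before the first demo, so the timestamp proves the dimensions predate the evaluation. A sketch with purely illustrative names and weights (adjust to your own priorities; price model is left out of the weighted score here since it isn't rated 1–5):

```python
# Rubric frozen before the first demo. Dimension names, weights, and
# measurement notes are illustrative, not prescriptive.
from dataclasses import dataclass

@dataclass
class Dimension:
    name: str
    weight: float        # relative importance, sums to 1.0
    how_measured: str    # keeps you honest about what a 1-5 score means

RUBRIC = [
    Dimension("Time to first finding", 0.10, "hours from signup to first real finding"),
    Dimension("Finding quality / actionability", 0.20, "rubric-scored sample of 10 findings"),
    Dimension("False-positive rate", 0.20, "10 random findings validated by hand"),
    Dimension("Remediation guidance", 0.15, "specificity of fix steps on the same 5 seed issues"),
    Dimension("IaC scanning", 0.10, "coverage of the Terraform/CloudFormation in the lab repo"),
    Dimension("Runtime / workload protection", 0.10, "what it sees on a running instance or container"),
    Dimension("IAM analysis", 0.10, "does it flag the known over-privileged role"),
    Dimension("UX", 0.05, "1-5 gut score with brief notes"),
]

assert abs(sum(d.weight for d in RUBRIC) - 1.0) < 1e-9
```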
3. Schedule the demos
Be honest with each vendor: "I'm doing a vendor-neutral comparison for a public write-up; I'd like to evaluate against my own AWS account." Most vendors will say yes; you're free practitioner marketing.
4. Onboard each one
Connect each CNAPP to the same AWS account. Document time-from-signup-to-first-result for each.
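Agentless CNAPPs generally onboard an AWS account through a cross-account, read-only IAM role that the vendor's account assumes with an external ID; each vendor ships its own CloudFormation or Terraform template that creates it for you. The sketch below exists only so you understand what you're granting before you click through. The role name, vendor account ID, and external ID are placeholders, not real values, and most vendors attach a small supplemental policy beyond SecurityAudit:

```python
# General shape of agentless CNAPP onboarding: cross-account read-only role
# with an ExternalId condition. All identifiers below are placeholders.
import json
import boto3

iam = boto3.client("iam")

VENDOR_ACCOUNT_ID = "111111111111"                       # placeholder: supplied by the vendor
EXTERNAL_ID = "replace-with-vendor-supplied-external-id"  # placeholder

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

role = iam.create_role(
    RoleName="cnapp-eval-readonly",   # hypothetical name; delete the role after the trial
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Temporary read-only role for CNAPP evaluation",
)

# SecurityAudit is the AWS-managed read-only policy most scanners start from.
iam.attach_role_policy(
    RoleName="cnapp-eval-readonly",
    PolicyArn="arn:aws:iam::aws:policy/SecurityAudit",
)
print("Created:", role["Role"]["Arn"])
```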
5. Compare the same findings side by side
Pick 5 specific issues you know are in the account (a public S3 bucket, an over-privileged IAM role, missing IMDSv2 enforcement on an instance, etc.). For each, capture how each platform surfaces it: severity, description, remediation guidance, screenshot. This is the most-read section of your write-up.
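Knowing the ground truth yourself is what makes the side-by-side fair, so verify each seed issue by hand before reading any vendor's description of it. A sketch for two of the five issues, assuming boto3 read access; the bucket name is a placeholder for the one you made public on purpose:

```python
# Hand-validate seed findings: a deliberately public bucket and any instance
# not enforcing IMDSv2. SEED_BUCKET is a placeholder from your own lab.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

SEED_BUCKET = "my-lab-public-bucket"  # hypothetical name

try:
    status = s3.get_bucket_policy_status(Bucket=SEED_BUCKET)["PolicyStatus"]
    print(f"{SEED_BUCKET} public via policy: {status.get('IsPublic', False)}")
except ClientError as err:
    print(f"{SEED_BUCKET}: {err.response['Error']['Code']}")

# "Missing IMDSv2 enforcement" = HttpTokens is anything other than "required".
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        tokens = instance.get("MetadataOptions", {}).get("HttpTokens")
        if tokens != "required":
            print(f"IMDSv2 not enforced: {instance['InstanceId']}")
```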
6. Sample false positives honestly
Pick 10 random findings per platform. Investigate each. Tag as confirmed-true / confirmed-false / can't-tell. Report the percentages. False-positive rate is the silent killer of CSPM tools and the data point everyone wants but few publish.
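To keep the sample defensible, draw it reproducibly. A sketch assuming each platform can export findings to CSV; the file paths, platform names, and the "title" column are placeholders for whatever the exports actually contain. A fixed seed means anyone disputing your numbers can re-draw the exact same sample:

```python
# Reproducible random sample of findings per platform (placeholder paths/columns).
import csv
import random

def sample_findings(csv_path: str, n: int = 10, seed: int = 42) -> list[dict]:
    with open(csv_path, newline="") as f:
        findings = list(csv.DictReader(f))
    random.seed(seed)
    return random.sample(findings, min(n, len(findings)))

for platform in ["platform-a", "platform-b", "platform-c"]:
    for finding in sample_findings(f"exports/{platform}-findings.csv"):
        # Record a verdict next to each sampled finding:
        # confirmed-true / confirmed-false / can't-tell
        print(platform, finding.get("title", "<untitled>"))
```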
7. Write the scorecard
A one-screen summary at the top of the write-up: scores of 1–5 across your dimensions, plus a ranking, with the caveat that any ranking depends on the buyer's context.
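If you keep the per-platform scores in a small structure, the one-screen table falls out mechanically and stays consistent with the rubric frozen in step 2. A sketch with made-up platform names and numbers, not results:

```python
# Render the scorecard as a markdown table from per-platform scores
# (illustrative values only).
scores = {
    "Platform A": {"Finding quality": 4, "False positives": 3, "Remediation": 4, "UX": 5},
    "Platform B": {"Finding quality": 3, "False positives": 4, "Remediation": 3, "UX": 3},
    "Platform C": {"Finding quality": 4, "False positives": 2, "Remediation": 5, "UX": 4},
}

dimensions = list(next(iter(scores.values())).keys())
header = "| Dimension | " + " | ".join(scores) + " |"
divider = "|---" * (len(scores) + 1) + "|"
rows = [
    "| " + dim + " | " + " | ".join(str(scores[p][dim]) for p in scores) + " |"
    for dim in dimensions
]
print("\n".join([header, divider] + rows))
```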
8. Write the "who should pick which" section
Resist the urge to crown a winner. Different orgs have different needs (Azure-heavy → Defender; cloud-only → Wiz / Orca; multi-cloud + workload → Lacework / Prisma; etc.). Demonstrate that judgment.
9. Publish carefully
Vendors will read this. Be technically accurate, be fair, cite versions and dates, and be willing to update if a vendor disputes a specific data point. The goal is a write-up you can stand behind in 12 months, not a hot take.
What hiring managers look for
- Your evaluation dimensions were defined upfront – shows methodological maturity.
- Your false-positive analysis is honest. Most public comparisons skip this; including it elevates yours instantly.
- You don't crown a winner – you say "depends on context, here's how to choose." This is what staff engineers actually do.
- You scaled the depth to the time available. Good triage, not exhaustive analysis paralysis.
- You can speak to the CNAPP category, not just the three products. Shows you understand the market shape.
Common mistakes
- Picking products you've already worked with. Half the value is the fresh-eyes evaluation.
- Skipping the false-positive sampling. Without it, the comparison is a feature checklist.
- Using only the demo data each vendor provides. Hook each one to the same real account.
- Letting a sales engineer drive the demo for the whole eval period. Use the trial yourself in self-serve mode.
- Publishing a hot take. Vendors will engage; you want to defend a careful position.
- Failing to disclose if any vendor gave you anything (extended trial, swag, free seats). Disclose at the top of the write-up.
Where to publish
The full publishing playbook is on the portfolio hub page. The short version: a public GitHub repo with a thorough README is the strongest single signal; pair it with a LinkedIn post and (optionally) a 5-minute lightning talk at a CSOH Friday Zoom.
Where next
- All 7 portfolio projects – pick your next one.
- Home lab setup – the safe environment this project runs in.
- Careers guide – how this project fits into the hiring story.
- Friday Zoom + Signal chat – share your write-up with practitioners.
