
Write a CNAPP comparison

Trial 3 CNAPPs against the same vulnerable account. Compare findings, false-positive rate, remediation guidance, and price-to-value. Honest write-up wins.


Vendor-neutral · View source on GitHub

Time: ~10 hours · Difficulty: Intermediate · Stack: AWS · 3 CNAPP free trials · Markdown

The cloud security tooling market is opaque on purpose: every vendor claims to do everything, comparisons are gated behind sales calls, and most published reviews are sponsored. An honest, hands-on, vendor-neutral comparison from a practitioner is genuinely scarce content. It also signals that you understand the tool category every modern cloud security team uses, which is exactly what hiring managers screen for.

This is the longest project on the list because waiting for sales-engineer scheduling adds calendar time. Plan accordingly.

📖 On this page

  1. What you'll have at the end
  2. Prerequisites
  3. Step-by-step
  4. What hiring managers look for
  5. Common mistakes
  6. Where to publish
  7. Where next

What you'll have at the end

A published, vendor-neutral write-up comparing three CNAPPs run against the same deliberately vulnerable AWS account: a one-screen scorecard, side-by-side walkthroughs of five known findings, measured false-positive rates, and a "who should pick which" recommendation you can defend.

Prerequisites

An AWS account you control, seeded with known misconfigurations (the same vulnerable account every platform will scan); free-trial access to three CNAPPs; and somewhere public to publish a Markdown write-up.

Step-by-step

1. Pick three CNAPPs to evaluate

Cover the major categories. Recommended starter set:

  - Wiz (agentless, graph-based)
  - Orca (agentless, side-scanning)
  - Microsoft Defender for Cloud (cloud-provider-native)

Or substitute Lacework, Prisma Cloud, Aqua, Sysdig, or Tenable Cloud Security. Three is the right number: more becomes a research project, fewer isn't a comparison.

2. Define your evaluation dimensions BEFORE you start

Decide what you'll grade on, in writing, before you see any product. This prevents post-hoc rationalization. Suggested dimensions: finding coverage against issues you know exist, false-positive rate, quality of remediation guidance, time from signup to first result, and price-to-value.
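If you want the rubric to be verifiably frozen, commit it to the repo before the first demo so it shows up in git history with a pre-demo date. A minimal sketch in Python; the dimension names and weights are illustrative, not prescriptive:

```python
# rubric.py -- commit this before the first demo so the dimensions are
# verifiably frozen in git history. Names and weights are illustrative.
RUBRIC = {
    "finding_coverage":     0.25,  # did it catch the issues you know exist?
    "false_positive_rate":  0.25,  # from the 10-finding sample in step 6
    "remediation_guidance": 0.20,  # specific fix steps vs. generic advice
    "time_to_first_result": 0.15,  # measured during onboarding in step 4
    "price_to_value":       0.15,  # list price vs. what it actually found
}

assert abs(sum(RUBRIC.values()) - 1.0) < 1e-9, "weights must sum to 1"
```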

3. Schedule the demos

Be honest with each vendor: "I'm doing a vendor-neutral comparison for a public write-up; I'd like to evaluate against my own AWS account." Most vendors will say yes; to them, you're free practitioner marketing.

4. Onboard each one

Connect each CNAPP to the same AWS account. Document the time from signup to first result for each.
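"Time from signup to first result" is easy to fudge after the fact, so log the same milestones for every platform as you go. A sketch; the milestone names, platform label, and timestamps are hypothetical placeholders:

```python
# onboarding_log.py -- record identical milestones per platform so the
# "time to first result" numbers are comparable across all three.
from datetime import datetime

# Hypothetical example data; replace with the times you actually observe.
milestones = {
    "platform_a": {
        "signup":        datetime(2024, 5, 6, 9, 0),
        "aws_connected": datetime(2024, 5, 6, 9, 25),
        "first_finding": datetime(2024, 5, 6, 10, 40),
    },
}

for name, m in milestones.items():
    total = m["first_finding"] - m["signup"]
    print(f"{name}: signup -> first finding in {total}")
```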

5. Compare the same findings side by side

Pick 5 specific issues you know are in the account (a public S3 bucket, an over-privileged IAM role, an EC2 instance without IMDSv2 enforced, etc.). For each, capture how each platform surfaces it: severity, description, remediation guidance, screenshot. This is the most-read section of your write-up.
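It's worth confirming the ground truth independently before grading anyone's findings against it. A boto3 sketch checking two of the seeded issues; the bucket name and instance ID are placeholders for resources in your own account, and it assumes boto3 is installed with credentials configured:

```python
# ground_truth.py -- independently confirm the seeded misconfigurations
# so you grade each CNAPP against facts, not against another CNAPP.
# Resource names below are placeholders for your own account.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

# 1. Public S3 bucket: no public-access-block configuration at all.
try:
    s3.get_public_access_block(Bucket="my-deliberately-public-bucket")
    print("bucket has a public access block configured")
except ClientError as e:
    if e.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        print("confirmed: no public access block on bucket")
    else:
        raise

# 2. Missing IMDSv2 enforcement: HttpTokens should be "required" but isn't.
resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
opts = resp["Reservations"][0]["Instances"][0]["MetadataOptions"]
print("IMDSv2 enforced:", opts["HttpTokens"] == "required")
```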

6. Sample false positives honestly

Pick 10 random findings per platform. Investigate each. Tag as confirmed-true / confirmed-false / can't-tell. Report the percentages. False-positive rate is the silent killer of CSPM tools and the data point everyone wants but few publish.
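To make the sample defensible, draw it with a fixed seed from an export rather than scrolling the console. A sketch assuming you can export each platform's findings to CSV (most consoles can); the file path and column names are placeholders for whatever your export actually uses:

```python
# sample_findings.py -- draw a reproducible random sample of findings
# per platform for manual triage. Assumes one CSV export per platform;
# the "id" and "title" columns are placeholders for your export's schema.
import csv
import random

random.seed(42)  # fixed seed so the sample is reproducible in the write-up

def sample(path, n=10):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return random.sample(rows, min(n, len(rows)))

for row in sample("platform_a_findings.csv"):
    # Triage each by hand, then tag: confirmed-true / confirmed-false / can't-tell
    print(row["id"], row["title"])
```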

7. Write the scorecard

One-screen summary at the top of the write-up: score each platform 1–5 across your dimensions, then rank them, with the caveat that any ranking depends on the buyer's context.
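If you froze weights in step 2, the headline numbers fall out mechanically. A sketch of the weighted roll-up; every score below is invented for illustration, not a real evaluation result:

```python
# scorecard.py -- weighted roll-up of 1-5 scores using the rubric from step 2.
RUBRIC = {
    "finding_coverage": 0.25, "false_positive_rate": 0.25,
    "remediation_guidance": 0.20, "time_to_first_result": 0.15,
    "price_to_value": 0.15,
}

# Invented example scores; replace with your own 1-5 grades per platform.
scores = {
    "platform_a": {"finding_coverage": 4, "false_positive_rate": 3,
                   "remediation_guidance": 5, "time_to_first_result": 4,
                   "price_to_value": 2},
}

for name, s in scores.items():
    total = sum(RUBRIC[d] * s[d] for d in RUBRIC)
    print(f"{name}: {total:.2f} / 5")
```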

8. Write the "who should pick which" section

Resist the urge to crown a winner. Different orgs have different needs (Azure-heavy → Defender; cloud-only → Wiz / Orca; multi-cloud + workload → Lacework / Prisma; etc.). Demonstrate that judgment.

9. Publish carefully

Vendors will read this. Be technically accurate, be fair, cite versions and dates, and be willing to update if a vendor disputes a specific data point. The goal is a write-up you can stand behind in 12 months, not a hot take.

What hiring managers look for

Common mistakes

Where to publish

The full publishing playbook is on the portfolio hub page. The short version: a public GitHub repo with a thorough README is the strongest single signal; pair it with a LinkedIn post and (optionally) a 5-minute lightning talk at a CSOH Friday Zoom.

Where next