Container Security: Image Scanning, Runtime, and Kubernetes Hardening
TL;DR
Container security spans four overlapping disciplines: image scanning at build time, supply-chain integrity for what you pull and push, runtime workload protection in production, and Kubernetes admission control at the orchestrator. The category overlaps heavily with cloud security and software composition analysis, and the vendor landscape has consolidated under the CNAPP banner over the past two years. Most teams need at minimum image scanning wired into CI plus a runtime layer protecting production workloads, with admission control following once the platform team has Kubernetes maturity. The discipline is also defined by what it does not catch: container scanners do not see application source code or application-layer dependency vulnerabilities, so a complete picture still requires SAST and SCA against the application itself. This guide walks the four layers, names the public tools honestly, and ends with a sequencing model for teams building a container security program from scratch.
The container revolution moved most production workloads to images, and then to Kubernetes, in roughly five years. Docker reached general availability in 2014, Kubernetes hit 1.0 in 2015, and by the end of the decade running new services anywhere other than a container cluster had become the unusual choice in most engineering organizations. The CNCF annual survey now reports that the majority of production applications at large companies run on Kubernetes, and the rest are running on container platforms (ECS, Cloud Run, App Service, Nomad) that share most of the same security primitives. The default unit of deployment is the image, the default unit of compute is the pod, and the default network boundary is the cluster.
The attack surface moved with it. The most exploited vulnerabilities of the last few years — Log4Shell, the Spring4Shell follow-on, the long tail of OpenSSL CVEs, the supply-chain compromises against base image registries — all played out inside container workloads. The mitigations and the tooling that sit around those workloads have matured into a recognizable category that vendors and analysts now call "container security." This guide explains what the category actually contains, where the boundaries blur into adjacent disciplines like cloud security and SCA, which tools matter and what they do, and — importantly — what container security tools do not catch and where you still need other layers in the stack.
What Container Security Actually Covers
Modern container security solutions usually market themselves as combining container security scanning at build time with container vulnerability management across the registry and runtime layers. In practice that bundle maps onto the four layers below: image scanning during the build, supply-chain integrity for what you pull and push, runtime workload protection in production, and admission policy at the orchestrator. Vendors differ on which layers they cover deeply versus which they treat as table stakes — read each vendor's own documentation for the specifics.
Container security is best understood as four overlapping layers, each with a distinct purpose, a distinct set of tools, and a distinct integration point in the lifecycle. Buying decisions get confused when teams treat the category as a single product; vendors get confused when they market a single product as covering all four. The layers are build-time image scanning, supply-chain integrity, runtime workload protection, and orchestrator-level admission and policy. A serious container security program touches all four eventually, but most start with one or two and add the rest over time.
Build-time image scanning runs against built container images. The scanner enumerates the operating system packages, language libraries, and configuration baked into each layer of the image, then matches every package-version pair against vulnerability advisory feeds (NVD, GitHub Advisory Database, OSV, vendor security trackers) and reports the matching CVEs along with available fixed versions. This is the most mature layer of container security: the open-source tools (Trivy, Grype, Clair) are credible by themselves, the integration pattern is well-understood, and most CI platforms ship a first-party action or plugin to wire it up.
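The matching step at the heart of this layer can be sketched in a few lines. The advisory entries, package names, and versions below are invented for illustration, and real scanners use ecosystem-specific version ordering rather than this simplified dotted-integer comparison:

```python
# Sketch of the core matching loop in an image scanner: every
# (package, version) pair found in the image is checked against
# advisory entries naming a package and the version that fixes it.
# Advisory data here is synthetic; real scanners pull feeds such as
# NVD, GHSA, and OSV, and use ecosystem-aware version comparison.

def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

# (package, first fixed version, advisory id) -- illustrative only
ADVISORIES = [
    ("openssl", "3.0.8", "CVE-EXAMPLE-0001"),
    ("zlib", "1.2.12", "CVE-EXAMPLE-0002"),
]

def match(inventory: list[tuple[str, str]]) -> list[dict]:
    findings = []
    for name, version in inventory:
        for adv_name, fixed_in, advisory_id in ADVISORIES:
            if name == adv_name and parse_version(version) < parse_version(fixed_in):
                findings.append({"package": name, "installed": version,
                                 "fixed_in": fixed_in, "id": advisory_id})
    return findings

inventory = [("openssl", "3.0.2"), ("zlib", "1.2.13"), ("bash", "5.1.16")]
print(match(inventory))
# [{'package': 'openssl', 'installed': '3.0.2', 'fixed_in': '3.0.8', 'id': 'CVE-EXAMPLE-0001'}]
```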
Supply-chain integrity answers the question of whether the image you are about to run is the image your build pipeline actually produced. The mechanisms are signed images (cosign, Notary v2), in-toto attestations, SLSA provenance, and reproducible builds. The threat model is a compromised base image, a compromised registry, or a compromised CI runner injecting a malicious payload into an artifact your team will deploy without noticing. The sigstore ecosystem (cosign, Rekor, Fulcio) has emerged as the dominant open standard for image signing and is increasingly the default in cloud-native CI.
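At its simplest, the sigstore workflow looks like the following CLI sketch. The registry, image name, digest, and identities are placeholders, and the keyless (Fulcio/OIDC) flow shown is one of several flows cosign supports:

```shell
# Sign an image keylessly: cosign obtains a short-lived certificate from
# Fulcio tied to the signer's OIDC identity, and the signature is logged
# in the Rekor transparency log. Image reference is a placeholder.
cosign sign registry.example.com/team/app@sha256:<digest>

# Verify before deploying: require a signature from an expected identity
# issued by an expected OIDC provider (placeholders shown).
cosign verify \
  --certificate-identity ci@example.com \
  --certificate-oidc-issuer https://accounts.example.com \
  registry.example.com/team/app@sha256:<digest>
```

Verifying against a digest rather than a tag matters here: a tag can be repointed by anyone with push access, while a digest names exactly one artifact.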
Runtime workload protection watches a running container for behavior that suggests an in-progress attack or post-exploitation activity. The instrumentation is usually eBPF-based — a kernel hook that emits syscall and network events from the host without requiring application changes — or sidecar-based for less invasive deployments. The signals include unusual process execution, unexpected outbound connections, file modifications outside writable layers, privilege escalation attempts, and reverse-shell patterns. Falco and Cilium Tetragon are the dominant open-source runtime engines; the major commercial CNAPP vendors all sell runtime modules.
Orchestrator admission and policy sits at the Kubernetes API server. When a user or controller submits a pod spec, the admission layer evaluates it against a policy and either allows, denies, or mutates the request. The standard mechanisms are Kubernetes Pod Security Standards (the successor to Pod Security Policies), OPA Gatekeeper, and Kyverno. The policies typically enforce that pods do not run as root, do not mount the Docker socket, do not request privileged mode, do not pull from unapproved registries, and only run images that pass signature verification. This is the layer that operationalizes "shift left to the cluster" — making the cluster itself a security gate rather than relying on every team to follow guidance.
Image Scanning: What It Does and How It Works
Image scanning is a static analysis activity. The scanner pulls a built image from a registry or a local Docker daemon, decomposes it into its constituent layers, and walks each layer to enumerate everything installed: operating system packages from the package manager database (dpkg for Debian-derived bases, rpm for Red Hat derivatives, apk for Alpine), language packages bundled into the image (Python wheels, Node modules, Java JARs, Go binaries with embedded module metadata), and standalone binaries. It then matches every (package, version) tuple against vulnerability advisory feeds and produces a report of CVEs, fixed versions where available, and severity ratings.
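The OS-package enumeration step for a Debian-derived layer can be sketched directly: the dpkg database is a plain-text file of stanzas separated by blank lines, each carrying `Package:` and `Version:` fields among others. Real scanners read `/var/lib/dpkg/status` out of the extracted layer; the sample text below is synthetic:

```python
# Minimal sketch of dpkg-database parsing as an image scanner would do
# it against an extracted layer. The sample stanza text is synthetic.

SAMPLE_STATUS = """\
Package: libssl3
Status: install ok installed
Version: 3.0.2-0ubuntu1.10

Package: zlib1g
Status: install ok installed
Version: 1:1.2.11.dfsg-2ubuntu9
"""

def parse_dpkg_status(text: str) -> list[tuple[str, str]]:
    packages = []
    name = version = None
    # Append a sentinel blank line so the final stanza is flushed.
    for line in text.splitlines() + [""]:
        if line.startswith("Package: "):
            name = line[len("Package: "):]
        elif line.startswith("Version: "):
            version = line[len("Version: "):]
        elif line == "":  # blank line ends a stanza
            if name and version:
                packages.append((name, version))
            name = version = None
    return packages

print(parse_dpkg_status(SAMPLE_STATUS))
# [('libssl3', '3.0.2-0ubuntu1.10'), ('zlib1g', '1:1.2.11.dfsg-2ubuntu9')]
```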
Modern image scanners also catch a second class of issue: Dockerfile and image-configuration anti-patterns. Examples include images that run as root by default, images that include build tools (gcc, npm, pip) in the final layer, images that bundle SSH keys or AWS credentials in a layer, images that pull base images by tag rather than by digest (which makes them vulnerable to upstream supply-chain attacks), and images that disable signature verification. These findings are usually surfaced alongside the CVE list and remediated through Dockerfile changes rather than package upgrades.
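A Dockerfile that avoids the anti-patterns above looks roughly like this sketch. Image names and the digest are placeholders, and the specifics (user name, base image) should be checked against the base you actually use:

```dockerfile
# Build stage: compilers and package managers stay here and never
# reach the final image.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: a minimal base pinned by digest (placeholder digest),
# running as a non-root user, with no shells or build tools to abuse.
FROM gcr.io/distroless/static@sha256:<digest>
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```

The multi-stage split addresses the build-tools finding, the digest pin addresses the tag-mutability finding, and the `USER` directive addresses the run-as-root finding in one file.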
The dominant open-source tools are Trivy (now maintained by Aqua Security and the de facto default for new projects), Grype (Anchore), Clair (originally Red Hat / Quay), and Dockle for Dockerfile linting. Trivy in particular has expanded well beyond container scanning into IaC, secrets, and Kubernetes manifest scanning, and it ships as a single binary that integrates cleanly into virtually any CI system. Commercial vendors (Snyk Container, Aqua Security Trivy Premium, Sysdig Secure, Wiz, Prisma Cloud) layer policy engines, registry integrations, deduplication across heterogeneous sources, and unified reporting on top of the scanning primitives. The integration pattern is consistent: scan at build time, scan again on registry push as a backup catch, and gate promotion between environments on a policy that allows known-acceptable findings while blocking new criticals.
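The promotion gate in that pattern can be a small script over the scanner's JSON report. The report shape below follows Trivy's `Results[].Vulnerabilities[]` layout, but treat the exact field names as an assumption to verify against the scanner version you run:

```python
# Gate promotion on a scanner's JSON report: block on HIGH/CRITICAL
# findings that have a fix available, let everything else through as
# advisory. Field names mirror Trivy's JSON output and should be
# verified against the version in use.

BLOCKING = {"HIGH", "CRITICAL"}

def blocking_findings(report: dict) -> list[str]:
    """Return IDs of fixable HIGH/CRITICAL findings in the report."""
    blocked = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") in BLOCKING and vuln.get("FixedVersion"):
                blocked.append(vuln["VulnerabilityID"])
    return blocked

# Synthetic report in the assumed shape: one fixable critical (blocks)
# and one high with no fix available (advisory only).
report = {"Results": [{"Vulnerabilities": [
    {"VulnerabilityID": "CVE-2024-0001", "Severity": "CRITICAL",
     "FixedVersion": "1.2.3"},
    {"VulnerabilityID": "CVE-2024-0002", "Severity": "HIGH",
     "FixedVersion": ""},
]}]}

print(blocking_findings(report))  # ['CVE-2024-0001']
```

Gating only on findings with a `FixedVersion` is a deliberate choice: it keeps unfixable base-image noise out of the build-breaking path while still blocking anything the team can actually remediate.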
Two practical caveats. First, most container scan findings are in base image OS packages that the application team cannot remediate without a base image rebuild. Rebuilding on a current minimal base (distroless, Chainguard Wolfi, Red Hat UBI Micro) typically eliminates 80 to 95 percent of CVE noise without requiring application changes. Second, image scanners produce findings against the image as built, not against the image as run. A vulnerable library inside an image that is never loaded by the running process represents zero practical risk; a current scanner cannot tell the difference. Reachability analysis in this domain is still immature.
Runtime Workload Protection
Runtime workload protection picks up where image scanning leaves off. Image scanning tells you what vulnerabilities your image carries; runtime protection tells you what the running container is actually doing. The instrumentation has standardized over the past several years on eBPF, the in-kernel virtual machine that lets a tool subscribe to syscall, network, and tracepoint events from the host without modifying application code or running a heavyweight agent inside the container. eBPF is now the foundation for Falco, Cilium Tetragon, the runtime modules of Sysdig Secure and Aqua, and most other serious cloud-native runtime tools.
The signals a runtime engine watches for include: unexpected process execution inside a container that should be running a single binary, syscalls that the application has no legitimate reason to make (ptrace, the unshare family, mount syscalls), outbound network connections to addresses outside the application's expected egress set, file system writes to read-only paths, privilege escalation attempts (writes to /proc, attempts to load kernel modules), and known post-exploitation patterns (reverse shells, base64-encoded payload execution, cryptominer behavior). Detection rules ship pre-loaded with the open-source engines and can be customized per-workload by the platform team.
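A simplified custom rule in Falco's stock rule format illustrates the shape of these detections. Falco's shipped rule set includes a more carefully tuned version of this check; treat the condition fields here as a sketch to verify against the Falco version you deploy:

```yaml
# Sketch of a Falco rule: alert when a shell process starts inside a
# container that should be running a single binary. Simplified from
# the tuned equivalent in Falco's default rules.
- rule: Shell spawned in container
  desc: A shell process started inside a container
  condition: >
    evt.type = execve and container.id != host and
    proc.name in (bash, sh, zsh)
  output: >
    Shell in container (user=%user.name container=%container.name
    command=%proc.cmdline)
  priority: WARNING
```

In practice this exact rule fires on legitimate exec-for-debugging sessions, which is why the tuning period described below matters before any enforcement is attached to it.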
Runtime is also where policy enforcement at the workload level lives, distinct from admission policy at the cluster level. Tools like Kyverno and Gatekeeper, while primarily admission controllers, are increasingly used to enforce mutation and validation policies that affect runtime behavior — image pull policies, required security contexts, mandatory labels for ownership and cost attribution, blocked sysctls. The honest assessment is that runtime is the least standardized of the four container security layers: vendors compete on rule libraries, agent footprint, response automation, and the integration with the rest of their CNAPP stack rather than on a settled set of primitives. Most teams adopting runtime today either start with Falco for visibility and graduate to a commercial tool when response automation becomes a requirement, or start with the runtime module of whichever CNAPP vendor they have already standardized on.
Kubernetes Admission Control
Admission control is the layer that lets the cluster itself reject workloads that violate security policy at the moment they are submitted, before any pod is scheduled. Kubernetes ships built-in admission controllers and exposes a webhook interface that lets external policy engines participate in the admission decision. The two dominant policy engines are Open Policy Agent (OPA) Gatekeeper and Kyverno. Gatekeeper expresses policies in Rego, OPA's declarative policy language; Kyverno expresses policies in YAML that mirrors Kubernetes resource shape. Both are CNCF projects and are widely used in production.
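<!-- placeholder: see replace for L23 below; no insert emitted here -->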
Native to Kubernetes itself is the Pod Security Standards (PSS) framework, which replaced PodSecurityPolicy (deprecated in 1.21 and removed in 1.25). PSS defines three profiles — privileged, baseline, and restricted — that progressively constrain what a pod is allowed to do. The baseline profile blocks the most obvious anti-patterns (privileged containers, host networking, host PID); the restricted profile enforces non-root execution, read-only root filesystems, and a tightly scoped seccomp profile. PSS is enabled per-namespace via labels and is the recommended starting point for clusters that want a meaningful security baseline without immediately deploying a third-party admission engine.
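The per-namespace labels look like the following; the namespace name is a placeholder. A common pattern is to enforce the baseline profile while warning and auditing against restricted, which previews the stricter profile against real workloads before enforcement:

```yaml
# Pod Security Standards via namespace labels: enforce baseline now,
# preview restricted via warn/audit. Namespace name is a placeholder.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```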
Beyond pod-level admission, the supply-chain layer reaches into the cluster via signed-image enforcement. Kyverno and Gatekeeper both ship policies that integrate with cosign and the sigstore ecosystem to require that any image pulled by the cluster carries a valid signature from an approved key or Fulcio identity. Combined with in-toto attestations describing how an image was built, this gives platform teams a mechanism to enforce that production runs only artifacts produced by trusted CI pipelines. The maturity of signed-image enforcement in 2026 is high enough that most regulated organizations are working toward it as a baseline expectation rather than an advanced capability.
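A signed-image policy in Kyverno's style looks roughly like this. Field names follow Kyverno's `verifyImages` rule shape in recent versions, but the registry pattern, identity, and issuer are placeholders, and the exact schema should be checked against the Kyverno release you deploy:

```yaml
# Sketch of a Kyverno policy requiring keyless cosign signatures on
# every image pulled from an internal registry. Registry, subject, and
# issuer are placeholders; verify the schema against your Kyverno version.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"
          attestors:
            - entries:
                - keyless:
                    subject: "ci@example.com"
                    issuer: "https://accounts.example.com"
```

Starting this policy in audit mode against existing workloads, then switching to enforcement, follows the same rollout pattern described in the program-sequencing section below.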
The Container Security Tool Landscape
The vendor and open-source landscape for container security has consolidated significantly since the early years of the discipline. Most commercial buyers now think in terms of CNAPP (Cloud-Native Application Protection Platform) — a single platform covering image scanning, runtime, posture, and often IaC and identity. Open-source tooling remains strong at the individual-layer level. The table below summarizes the major options across the four layers; subsequent paragraphs add context.
| Tool | Image Scan | Runtime | K8s Admission | Model |
|---|---|---|---|---|
| Trivy | Yes | No | Manifest scan | Open source (Aqua) |
| Grype | Yes | No | No | Open source (Anchore) |
| Clair | Yes | No | No | Open source (Red Hat) |
| Falco | No | Yes (eBPF) | No | Open source (CNCF) |
| Cilium Tetragon | No | Yes (eBPF) | No | Open source (Isovalent) |
| Snyk Container | Yes | Limited | No | Commercial |
| Aqua Security | Yes | Yes | Yes | Commercial CNAPP |
| Sysdig Secure | Yes | Yes (Falco-based) | Yes | Commercial CNAPP |
| Wiz | Yes (agentless) | Yes | Yes | Commercial CNAPP |
| Prisma Cloud | Yes | Yes | Yes | Commercial CNAPP (Palo Alto) |
| Lacework | Yes | Yes | Yes | Commercial CNAPP (Fortinet) |
| Falcon Cloud Security | Yes | Yes | Yes | Commercial (CrowdStrike) |
A few caveats. The "Limited" entry for Snyk Container runtime reflects that Snyk's primary positioning is developer-facing scanning rather than full runtime workload protection; their runtime story has evolved through a series of acquisitions and partnerships. Wiz pioneered the agentless approach to image scanning by analyzing snapshots of cloud workload disks rather than running an in-cluster agent, which trades some runtime visibility for dramatically simpler deployment. CrowdStrike Falcon Cloud Security represents the major endpoint vendor pivot into cloud workload protection, leveraging CrowdStrike's existing agent footprint. Lacework was acquired by Fortinet in 2024 and is now positioned as part of Fortinet's broader cloud security portfolio. The CNAPP category is consolidating; expect further M&A through the rest of the decade.
What Container Scanners Do NOT Catch
Container security has well-defined boundaries, and the most expensive program failures come from teams that assume an image scanner is also a code scanner, or that runtime workload protection is also application security. Neither assumption holds. There are three populations of vulnerability that container security tools cannot find no matter how mature the deployment is, and each requires a different category of tool.
Application code vulnerabilities. A container image scanner sees compiled binaries and installed packages; it does not parse the application's source code. A SQL injection bug in your team's Java service, a command injection in your Python API, an authorization bypass in your Go controller — none of these will appear in any container scan, because they live in the application logic the scanner has no semantic visibility into. Catching them requires Static Application Security Testing. GraphNode SAST covers 13+ languages including C#, Java, JavaScript, Python, PHP, Swift, Kotlin, Objective-C, C/C++, VB.NET, and HTML, with 780+ security rules mapped to OWASP Top 10, CWE, SANS Top 25, PCI-DSS, and HIPAA — the application-code layer container scanners cannot see.
Application dependency vulnerabilities. A container scanner can identify vulnerable OS packages installed via apt or yum, and it can usually identify language packages that were installed into well-known global locations. What it does not reliably catch is the application's own dependency tree as resolved by the application's package manager — the npm or Maven or pip dependency graph that lives inside the application directory. The transitive dependency that introduced Log4Shell into thousands of applications was an application-layer dependency, not an OS package, and many container scanners missed it for exactly this reason. Software Composition Analysis covers the application dependency tree explicitly. GraphNode SCA walks the full transitive tree across the major package managers and matches every component-version pair against vulnerability feeds. For a vendor-neutral deep dive on how SCA works, see our SCA scanning guide.
Business-logic flaws. A container scanner cannot tell you that your checkout endpoint allows a negative-quantity order, that your password reset flow leaks the existence of registered users, or that your IDOR-vulnerable API exposes records belonging to other tenants. These are runtime, business-context bugs that require Dynamic Application Security Testing against the deployed application and, ultimately, periodic manual penetration testing by humans who understand the application's threat model. No amount of container security investment substitutes for the application security layer above it.
Building a Practical Container Security Program
The mistake teams make when standing up container security is buying a CNAPP platform and trying to enable every module on day one. The result is a flood of findings — base image CVEs, Dockerfile lint warnings, runtime alerts on routine behavior, admission policy violations on existing workloads — that the platform team has no capacity to triage, and the program loses credibility with engineering before it has demonstrated value. A pragmatic sequencing model starts narrow, proves value, then expands.
Phase 1 — image scanning in CI (months 0-3): wire Trivy or a commercial image scanner into the build pipeline for the most critical applications. Configure it to fail builds only on new high or critical CVEs against the application's contribution to the image (not against the base image), and surface base image findings as advisory. In parallel, start a base image rationalization effort: pick one or two minimal bases (distroless, Chainguard, UBI Micro) that the platform team owns and supports, and migrate applications onto them over the next two quarters. This single change typically eliminates the majority of CVE noise.
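One way to wire the Phase 1 gate into CI is a GitHub Actions step using Aqua's published action. The input names follow `aquasecurity/trivy-action`'s documented interface but should be verified against the action's current README; the image reference is a placeholder:

```yaml
# Sketch of a Phase 1 CI gate: fail the build on fixable HIGH/CRITICAL
# findings only. ignore-unfixed keeps base-image findings without an
# available fix advisory rather than blocking. Image ref is a placeholder.
- name: Scan image
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: registry.example.com/team/app:${{ github.sha }}
    severity: HIGH,CRITICAL
    ignore-unfixed: true
    exit-code: "1"
```

Pinning the action to a release tag or commit SHA rather than `@master` is itself a supply-chain hygiene measure worth adopting from day one.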
Phase 2 — runtime visibility (months 3-9): add runtime workload protection. For most teams the right starting point is Falco with the default rule set, deployed cluster-wide as a DaemonSet, with alerts piped to whatever incident management tool the platform team already uses. Treat the first three months as observability rather than enforcement: tune rules against actual workload behavior, identify which alerts are meaningful and which are false positives from legitimate use, and only after that establish response runbooks. This is also the right time to wire image scanning into your DevSecOps pipeline more broadly — see our DevSecOps tools guide for the larger context.
Phase 3 — admission and supply-chain enforcement (months 9-18): deploy an admission controller (Kyverno or Gatekeeper) and start enforcing policy at the cluster boundary. Begin with audit-mode policies that report violations without blocking, prove the policies are correct against existing workloads, then turn enforcement on namespace by namespace. By this point the platform team has the operational maturity to handle Pod Security Standards baseline enforcement, signed-image policies via cosign, and selective use of more restrictive controls per namespace. Container security at this stage starts to overlap meaningfully with infrastructure-as-code scanning — for the IaC side, see our piece on IaC scanning explained.
Phase 4 — CNAPP consolidation (months 18-30): the team has individual tools running in each layer and an operational rhythm around them. The question now is whether to consolidate into a single CNAPP platform for unified posture management and cross-layer correlation, or to continue with best-of-breed open-source plus a separate ASPM aggregator. The right answer depends on the size of the security team, the budget envelope, and the breadth of cloud platforms in use. Larger organizations with multi-cloud footprints typically consolidate; smaller teams often stay best-of-breed longer.
Frequently Asked Questions
What is the difference between image scanning and runtime container security?
Image scanning is a static activity that runs against a built container image, enumerates the OS packages and language libraries inside it, and matches them against vulnerability advisory feeds to produce a report of CVEs and Dockerfile misconfigurations. It tells you what is in the image. Runtime container security is a dynamic activity that runs against a container as it executes in production, watches syscall and network behavior via eBPF or a sidecar agent, and detects anomalous activity that suggests an in-progress attack or post-exploitation. It tells you what the container is actually doing. Both are necessary in a mature program because they catch fundamentally different populations of issue.
Do I need a commercial container security tool or is Trivy enough?
Trivy is a credible image-scanning solution by itself for most teams up to a meaningful scale. It is fast, single-binary, integrates cleanly into any CI system, and covers OS packages, language dependencies, IaC, secrets, and Kubernetes manifests in one tool. Where commercial tools (Snyk Container, Aqua, Sysdig, Wiz, Prisma Cloud) add value is in centralized policy management across many teams, deduplication of findings across heterogeneous sources, integrations with ticketing and identity systems, and the runtime and admission layers that Trivy does not cover. A common pattern is to start with Trivy in CI, add Falco for runtime visibility, and only adopt a commercial CNAPP when the operational overhead of stitching multiple open-source tools together exceeds the licensing cost of consolidating.
What is CNAPP and how does it relate to container security?
CNAPP stands for Cloud-Native Application Protection Platform. The category was popularized by Gartner around 2021 to describe an integrated platform that combines what had previously been separate products: cloud security posture management (CSPM), cloud workload protection (CWPP), container security (image scanning, runtime, admission), and increasingly IaC scanning and identity entitlement management. Container security is one component of CNAPP; CNAPP is the broader category that places container security alongside cloud configuration, runtime workload monitoring, and identity in a single platform. Most major commercial container security vendors (Wiz, Prisma Cloud, Sysdig, Aqua, Lacework, Falcon Cloud Security) now position themselves as CNAPP platforms rather than pure container security tools.
Is Kubernetes RBAC part of container security?
Kubernetes RBAC sits at the boundary between container security and identity and access management, and most practitioners include it in a complete container security program even though it is technically an authorization layer. RBAC controls who and what can perform actions against the Kubernetes API: who can create pods, who can read secrets, which service accounts a workload can assume. Misconfigured RBAC is the root cause of many cluster compromises — an over-permissive service account, a default namespace with cluster-admin bindings, a CI pipeline with unnecessary write access. Audit tools like KubeAudit, kube-bench (CIS Kubernetes Benchmark), and the RBAC review modules of CNAPP platforms cover this layer. Treat RBAC review as a first-quarter activity in any container security program.
Does container security replace SAST + SCA?
No. Container security and application security cover different bug populations and the layers are complementary rather than substitutable. Container scanners find vulnerabilities in the OS packages and base-image contents that make up the runtime environment; they cannot see the application's source code or reliably analyze the application's own dependency tree. SAST analyzes the code your team wrote and finds injection flaws, broken authentication, insecure cryptography, and the other classes of bug developers introduce. SCA analyzes the application's open-source dependency graph and finds publicly disclosed CVEs in the libraries the application imports — the layer where Log4Shell lived. A complete program runs all three: image scanning for the container layer, SAST for the application code, SCA for the application dependencies. Each catches what the others miss.