Vulnerability Management: CVE, CVSS, NVD, and the AppSec Lifecycle

16 min read | GraphNode Research

TL;DR

Vulnerability management is the operational discipline of identifying, classifying, prioritizing, and remediating security weaknesses across an organization's technology estate. Three frameworks anchor the field: CVE is the universal identifier system run by MITRE, CVSS is the FIRST.org severity scoring system on a 0.0 to 10.0 scale, and the NVD is the United States authoritative database run by NIST that enriches CVEs with scores and product mappings. The discipline spans application code (SAST), open-source dependencies (SCA), infrastructure-as-code, network assets, endpoints, container images, and cloud workloads. A mature program treats vulnerabilities as a measurable, time-bounded queue with SLAs, ownership, and prioritization beyond raw CVSS, and an enterprise vulnerability management platform usually sits above the scanners as the system of record.

Most security programs treat vulnerabilities reactively. A scanner produces a report, somebody emails it to a development team, the highest-severity items get partial attention, and the rest age silently in a spreadsheet until the next audit cycle. The mature programs treat vulnerabilities as a measurable, time-bounded operational discipline — every finding has an identifier, a score, an owner, a target fix date, and a tracking record that closes the loop from discovery to verification. The difference between the two postures is not the quality of the scanners; it is the quality of the workflow that consumes their output.

This guide is the pillar reference for that workflow. It walks through the three foundational frameworks every practitioner needs to speak fluently — CVE, CVSS, and the NVD — clarifies the related vocabulary that gets confused (CWE, CAPEC, EPSS, KEV), explains the six-phase vulnerability management lifecycle, examines how application security findings from SAST and SCA feed into that lifecycle, surveys the enterprise vulnerability management platform landscape, and outlines the practical steps for building an application vulnerability management program that actually closes findings rather than just counting them.

What CVE Is

CVE stands for Common Vulnerabilities and Exposures. It is a public, free identifier system for publicly disclosed cybersecurity vulnerabilities. The CVE Program is operated by the MITRE Corporation and has been running continuously since 1999. Its single purpose is to give every distinct vulnerability a globally unique identifier so that vendors, researchers, scanners, advisories, and incident responders can refer to the same issue without ambiguity. Before CVE existed, the same vulnerability might have three different names across three different scanner vendors; CVE collapsed that into a single canonical reference.

The identifier format is CVE-YYYY-NNNNN — the literal prefix CVE, followed by the four-digit year the ID was reserved, followed by a sequence number with a minimum of four digits but no maximum. Older entries used four digits (CVE-2014-0160 was Heartbleed), but as the disclosure rate has grown the sequence number has expanded as needed. The year reflects when the ID was assigned, not necessarily when the vulnerability was discovered or disclosed publicly — a vulnerability disclosed in 2026 may carry a 2025 ID if the reservation was made the prior year.
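The format rules above are simple enough to check mechanically. A minimal sketch (the function name and error handling are illustrative, not any official validator):

```python
import re

# CVE-YYYY-NNNNN: literal prefix, 4-digit year, sequence of 4 or more digits.
CVE_ID = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve_id(cve_id: str) -> tuple[int, int]:
    """Return (year, sequence) for a well-formed CVE ID, or raise ValueError."""
    match = CVE_ID.match(cve_id.strip())
    if match is None:
        raise ValueError(f"not a valid CVE ID: {cve_id!r}")
    return int(match.group(1)), int(match.group(2))

print(parse_cve_id("CVE-2014-0160"))    # Heartbleed: minimum 4-digit sequence
print(parse_cve_id("CVE-2021-3156789")) # longer sequence numbers are legal
```

Note that the sequence number is parsed as an integer, so leading zeros (as in CVE-2014-0160) are not significant for comparison purposes, although the canonical string form keeps them.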

IDs are issued by CNAs (CVE Numbering Authorities). CNAs are organizations authorized by MITRE to assign CVE IDs within a defined scope — typically their own products (Microsoft, Apple, Cisco, Red Hat), an open-source ecosystem (GitHub for npm and other ecosystem advisories, Linux distributions for their packages), or a research scope (HackerOne, Bugcrowd, certain CERTs). The CNA model distributes the assignment workload across hundreds of organizations rather than funneling every request through MITRE itself. Anyone can request a CVE for a vulnerability, but the request is routed to the appropriate CNA based on the affected product or technology.

The authoritative registry for CVE records lives at cve.org, operated by MITRE on behalf of the CVE Program. The site publishes the CVE List itself, the schema for CVE records, the directory of CNAs, and the program's governance documents. CVE records are deliberately minimal — they capture the identifier, a description, references, and the assigning CNA. They do not include severity scores, product version mappings, or remediation guidance; those enrichment fields are added downstream by databases like the NVD. The split is intentional: CVE is responsible for identity, not analysis.

What CVSS Is

CVSS stands for Common Vulnerability Scoring System. It is the open standard for assigning a numeric severity score to a vulnerability so that organizations can rank findings consistently. CVSS is governed by FIRST.org — the Forum of Incident Response and Security Teams — which publishes the specification, the calculator, and the user guide. The scoring system is deliberately framework-driven rather than opinion-driven: a CVSS score is the output of plugging defined metric values into a defined formula, so two analysts evaluating the same vulnerability with the same context should arrive at the same number.

CVSS scores fall into three groups. The Base score captures the intrinsic characteristics of a vulnerability that do not change over time or across environments — attack vector, attack complexity, privileges required, user interaction, scope, and the impact on confidentiality, integrity, and availability. The Base score is what most public databases publish. The Temporal score adjusts the Base score for factors that change over time, such as whether exploit code exists, whether a patch is available, and the confidence level of the report. The Environmental score further adjusts for the specific deployment context — how critical the affected asset is to the organization, what compensating controls exist, what the actual attack surface looks like in that environment. Most published scores are Base only; the Temporal and Environmental refinements are the responsibility of the consuming organization.
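The "defined metric values into a defined formula" claim is concrete: the CVSS v3.1 Base score is pure arithmetic over published metric weights. A sketch of that calculation (weights and equations follow the FIRST.org v3.1 specification; the function signature is illustrative):

```python
import math

# CVSS v3.1 Base metric weights from the FIRST.org specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}     # Privileges Required
PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.50}
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(x: float) -> float:
    """Spec-defined round-up to one decimal, avoiding float artifacts."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, scope, c, i, a) -> float:
    changed = scope == "C"
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15) if changed \
        else 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] \
        * (PR_CHANGED if changed else PR_UNCHANGED)[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    raw = 1.08 * (impact + exploitability) if changed else impact + exploitability
    return roundup(min(raw, 10.0))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> the classic 9.8 Critical
print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))
# Heartbleed's vector, AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N -> 7.5
print(base_score("N", "L", "N", "N", "U", "H", "N", "N"))
```

This is also why the same vector string always yields the same score regardless of who computes it: the only judgment calls are the metric value selections, not the arithmetic.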

The current generation in active enterprise use is CVSS v3.1, published in 2019 as a clarification update to v3.0. CVSS v4.0 was released by FIRST.org in November 2023 and introduces meaningful changes: more granular threat and environmental metrics, explicit handling of attacks that require multiple steps, supplemental metrics like Safety and Automatable, and a more nuanced treatment of vulnerabilities in systems beyond traditional IT (industrial control, medical devices). Adoption of v4.0 across scanners and databases is in progress through 2025 and 2026; many tools still publish v3.1 vectors as the primary, with v4.0 added as a secondary field where supported.

The numeric range and severity bands are the part most practitioners memorize. CVSS scores run from 0.0 to 10.0 on a single decimal scale, with severity bands defined as None (0.0), Low (0.1 to 3.9), Medium (4.0 to 6.9), High (7.0 to 8.9), and Critical (9.0 to 10.0). The bands are advisory — organizations are free to define their own thresholds for SLA purposes — but they are the de facto language used in vendor advisories, regulatory frameworks, and security reporting. A vulnerability scoring 9.8 on the CVSS scale is universally understood to mean Critical-severity remote code execution or equivalent.
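The band boundaries translate directly into a lookup that most tooling implements somewhere. A minimal version:

```python
def severity_band(score: float) -> str:
    """Map a CVSS score to its named severity band per the published cutoffs."""
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS scores run 0.0 to 10.0, got {score}")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(severity_band(9.8))  # Critical
print(severity_band(6.9))  # Medium: band edges are inclusive of the upper bound
```

Organizations that define their own SLA thresholds typically swap the cutoffs here while keeping the band names, which preserves the shared vocabulary in reporting.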

What NVD Is

Practitioners refer to it interchangeably as "the NVD" or "the national vulnerability database"; the formal product name is the National Vulnerability Database. Under either name it is the de facto reference feed for CVE scanning workflows in U.S. federal procurement and most enterprise vulnerability programs.

The National Vulnerability Database (NVD) is the United States government's repository of CVE records enriched with additional metadata. It is operated by the National Institute of Standards and Technology (NIST) and is funded by the U.S. Department of Homeland Security. The NVD does not assign CVE IDs itself; that is MITRE's role. The NVD's job begins after a CVE is published: analysts review the record, assign a CVSS Base score and vector, attach CPE (Common Platform Enumeration) identifiers that map the vulnerability to specific affected products and versions, and add CWE classification and reference links. The result is a queryable, machine-readable record that scanners and vulnerability management platforms consume as a primary feed.

The NVD has historically been treated as the default authoritative source for vulnerability metadata in U.S. federal procurement, in many compliance frameworks, and in countless vendor scanners. That position has been complicated by a well-documented backlog problem. In February 2024, NIST publicly acknowledged that NVD enrichment had slowed dramatically — a large portion of newly published CVEs were stuck in an "Awaiting Analysis" state, meaning the CVE record existed but had not yet been assigned a CVSS score or CPE coordinates. NIST has published multiple updates on the situation through 2024 and into 2025 as it has worked to resume normal throughput, but the incident permanently changed how serious vulnerability programs source data: relying on a single feed is now widely understood as a single point of failure.

The practical response has been the rise of complementary databases. The GitHub Advisory Database publishes ecosystem-specific advisories (npm, Maven, pip, RubyGems, Go, NuGet, Composer, Rust, Erlang) often faster than NVD enrichment, with package coordinates already attached. OSV.dev, an open vulnerability schema and aggregator originally from Google, normalizes data from GHSA, PyPI, RustSec, Go vulnerabilities, and many other ecosystem feeds into a single queryable format. Commercial research databases from SecurityScorecard, Snyk, Sonatype, and others add proprietary research and reachability data on top of the public feeds. Mature SCA and vulnerability management platforms now aggregate several of these feeds rather than relying on the NVD alone. For a deeper look at how this affects dependency scanning specifically, see SCA scanning explained.
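Querying one of these complementary feeds is straightforward. OSV.dev exposes a public query endpoint that takes a package coordinate and version; a sketch of building that request (the endpoint and JSON shape follow OSV.dev's documented API, but treat the details as an assumption and check the current docs before relying on them):

```python
import json

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV.dev query endpoint

def osv_query_body(ecosystem: str, name: str, version: str) -> str:
    """Build the JSON body for an OSV.dev /v1/query lookup of one package version."""
    return json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    })

body = osv_query_body("PyPI", "requests", "2.19.0")
# POST `body` to OSV_QUERY_URL with Content-Type: application/json; the
# response lists advisories (GHSA, PYSEC, ...) affecting that exact version,
# with package coordinates already attached -- no CPE matching required.
print(body)
```

The contrast with the NVD model is the point: OSV-style feeds key advisories by package coordinates natively, whereas NVD consumers must join CVE records to their inventory through CPE identifiers.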

CVE vs CVSS vs CWE vs CAPEC

Four acronyms get conflated regularly because they all sound alike and all relate to vulnerabilities. They describe different things and live in different layers of the model.

| Acronym | Stands For | What It Is | Maintained By |
| --- | --- | --- | --- |
| CVE | Common Vulnerabilities and Exposures | An identifier for a specific vulnerability instance | MITRE |
| CVSS | Common Vulnerability Scoring System | A 0.0 to 10.0 severity score for a vulnerability | FIRST.org |
| CWE | Common Weakness Enumeration | A category of weakness (e.g. CWE-79 = XSS) | MITRE |
| CAPEC | Common Attack Pattern Enumeration and Classification | A reusable attack technique (e.g. SQL injection via tampered form field) | MITRE |

The relationships are hierarchical. A specific bug in a specific product is a CVE. The bug is an instance of a general weakness category, which is a CWE — Heartbleed (CVE-2014-0160) is an instance of CWE-126, Buffer Over-Read. The severity of that specific instance is scored using CVSS. The technique an attacker would use to exploit it is a CAPEC entry. So a single Heartbleed scanner finding might carry CVE-2014-0160, CVSS 7.5 (Base), CWE-126, and CAPEC-540 (Overread Buffers). Practitioners usually only encounter CVE and CVSS in day-to-day work; CWE matters when looking at trend analysis ("what categories of weakness are we shipping?") and CAPEC matters in threat modeling and red team work.
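A normalized finding record carries all four layers side by side. A sketch (the Finding shape is illustrative, not any platform's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One scanner finding annotated with each layer of the model."""
    cve: str      # which specific vulnerability instance (identity)
    cvss: float   # how severe that instance is (score)
    cwe: str      # what general category of weakness it belongs to
    capec: str    # what attack technique would exploit it

heartbleed = Finding(
    cve="CVE-2014-0160", cvss=7.5, cwe="CWE-126", capec="CAPEC-540"
)
print(heartbleed)
```

Trend analysis then groups on the `cwe` field across many findings, while day-to-day triage sorts and joins on `cve` and `cvss`.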

The Vulnerability Management Lifecycle

Vulnerability management is best modeled as a six-phase loop: Discover, Classify, Prioritize, Remediate, Verify, Report. The loop is continuous — every new scan, every new advisory, every new asset added to the estate restarts it. Each phase has a different toolset and a different ownership model.

Discover. The discovery phase enumerates assets and finds vulnerabilities present on them. This is where the scanner inventory lives: SAST scanners for source code, SCA scanners for dependencies, network and host scanners (Tenable, Qualys, Rapid7, OpenVAS) for infrastructure, container image scanners for built artifacts, IaC scanners for cloud configuration, and endpoint agents for laptops and servers. Discovery completeness is bounded by asset inventory accuracy — a scanner only finds vulnerabilities on assets it knows about, so a stale CMDB silently degrades the entire program.

Classify. Raw findings are normalized: deduplicated across overlapping scanners, mapped to assets and asset owners, tagged with environment metadata (production vs staging, internet-facing vs internal, regulated vs general), and enriched with CVSS, CWE, and threat intelligence. Classification is where ASPM, Vulcan Cyber, Brinqa, and similar aggregation platforms add value over raw scanner output.
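The deduplication step usually reduces to choosing a stable key so that the same issue reported by two overlapping scanners collapses into one record. A sketch (field names and the key choice are illustrative assumptions, not a standard):

```python
def dedup_key(finding: dict) -> tuple:
    """Key for collapsing duplicate reports of the same issue on the same asset.
    Public vulnerabilities key on CVE; internal SAST findings fall back to rule ID."""
    return (
        finding.get("cve") or finding.get("rule_id"),
        finding["asset"],
        finding.get("location", ""),
    )

raw = [
    {"cve": "CVE-2021-44228", "asset": "payments-api", "scanner": "sca-tool"},
    {"cve": "CVE-2021-44228", "asset": "payments-api", "scanner": "image-scan"},
]
unique = {dedup_key(f): f for f in raw}
print(len(unique))  # the two scanner reports collapse to one finding
```

Real platforms add fuzzier matching (version ranges, path normalization), but the principle is the same: one logical vulnerability instance, one tracked record, regardless of how many scanners observed it.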

Prioritize. Severity alone is not enough to decide order of work — see the dedicated section below. Prioritization combines CVSS, exploit availability (EPSS, CISA KEV), reachability, asset criticality, and business context to produce a ranked queue. The output is a defensible answer to "of the 4,000 open findings, which 50 should engineering work on first?"

Remediate. The actual fix work — patch installation, dependency upgrade, code change, configuration update, compensating control, or accepted-risk waiver. Remediation is where the program either compounds value or collapses; without ownership and SLA enforcement, the queue grows monotonically and nothing closes.

Verify. Re-scan to confirm the fix actually landed and did not regress. This phase is often skipped, which is how teams end up reopening the same finding three quarters in a row. Verification needs to be automated as part of the deployment pipeline, not a manual quarterly exercise.

Report. Roll-up metrics for engineering leadership, security leadership, and the board. Mean time to remediate (MTTR) by severity, open finding count by age, SLA compliance percentage, and exploit-trend exposure are the standard set. The reporting phase closes the loop by creating accountability for the rest of it. For application-specific findings flowing through this lifecycle, see the audit and triage workflow at audit triage documentation.
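The MTTR metric above is a simple aggregation over closed findings. A sketch (the finding dict shape is an illustrative assumption):

```python
from collections import defaultdict
from datetime import date
from statistics import mean

def mttr_days_by_severity(findings: list[dict]) -> dict[str, float]:
    """Mean time to remediate, in days, grouped by severity.
    Only closed findings count; open findings belong in the age-bucket view."""
    durations = defaultdict(list)
    for f in findings:
        if f.get("closed"):
            durations[f["severity"]].append((f["closed"] - f["opened"]).days)
    return {sev: mean(days) for sev, days in durations.items()}

findings = [
    {"severity": "Critical", "opened": date(2025, 1, 1), "closed": date(2025, 1, 8)},
    {"severity": "Critical", "opened": date(2025, 1, 1), "closed": date(2025, 1, 4)},
    {"severity": "High", "opened": date(2025, 1, 1), "closed": None},  # still open
]
print(mttr_days_by_severity(findings))  # {'Critical': 5}
```

Excluding open findings from MTTR is deliberate: they distort the mean downward early in their life, which is exactly why the open-count-by-age metric exists alongside it.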

Application Vulnerability Management — How AppSec Findings Feed In

Application vulnerability management is the slice of the broader discipline that deals specifically with vulnerabilities in software applications — the source code your team writes plus the open-source dependencies it pulls in. The scanner inputs are familiar: SAST for the in-house code, SCA for the third-party components. The findings produced by those scanners flow into the same six-phase lifecycle as network and infrastructure findings, but they map differently to identifiers.

GraphNode SAST findings typically carry CWE classifications rather than CVE IDs. A SAST finding is a discovered weakness in code your team owns — say, an SQL injection sink reached by an untrusted query parameter — so the vulnerability is brand new, has no public identifier, and is described by its weakness category (CWE-89, SQL Injection) and a CVSS-like internal severity. When the same code is later disclosed publicly and a CVE is assigned, the SAST finding can be linked back, but in the normal pre-disclosure case it lives as a CWE-classified internal finding.

GraphNode SCA findings, by contrast, almost always carry CVE IDs. SCA matches the components in your dependency tree against vulnerability advisories, so every finding corresponds to a publicly disclosed vulnerability with an existing CVE, an NVD record, and a CVSS score. SCA findings are the natural input format for a vulnerability management platform: the CVE becomes the join key, and the platform aggregates instances of the same CVE across every application in the estate. For a deeper walkthrough of how SCA scans actually generate these findings, see SCA scanning explained; for the full pillar context on how SAST and SCA fit into a complete program, see the application security pillar guide.
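The "CVE as join key" pattern is a one-line grouping in practice. A sketch (field names are illustrative assumptions):

```python
from collections import defaultdict

def group_by_cve(sca_findings: list[dict]) -> dict[str, list[str]]:
    """Aggregate SCA findings across applications, keyed on the shared CVE ID."""
    by_cve = defaultdict(list)
    for f in sca_findings:
        by_cve[f["cve"]].append(f["application"])
    return dict(by_cve)

findings = [
    {"cve": "CVE-2021-44228", "application": "payments-api"},
    {"cve": "CVE-2021-44228", "application": "billing-web"},
    {"cve": "CVE-2022-22965", "application": "payments-api"},
]
print(group_by_cve(findings))
```

This is the view that makes estate-wide response possible: one advisory, one worklist of every application that carries the affected component.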

The architectural pattern that works is to keep the scanners specialized and let an aggregation layer normalize their output. SAST and SCA each have their own deep concerns — data flow analysis depth, transitive resolution, advisory feed coverage — and trying to make a vulnerability management platform also be a deep code analyzer dilutes both. The cleanest stack is best-of-breed scanners feeding a vulnerability management or ASPM platform that handles deduplication, prioritization, and SLA tracking on top.

A practical word on terminology: a "cve scanner" is shorthand for any scanner whose primary job is to match installed components or configurations against CVE records. SCA tools are cve scanners against application dependencies; container image scanners are cve scanners against OS packages baked into images; network and host scanners are cve scanners against deployed services. Any meaningful program of automated vulnerability management combines several of these scanners feeding a single workflow that owns deduplication, severity calibration, SLA enforcement, and verification — the system of record beats any single scanner every time.

Prioritization Beyond CVSS

The biggest practical critique of CVSS is that, taken alone, it overstates urgency: severity is not the same thing as risk. A naive program that says "fix every Critical immediately" will burn engineering time on vulnerabilities that score 9.0+ but have no public exploit, are not reachable in the deployed configuration, or live on assets that do not handle sensitive data. Meanwhile, a CVSS 6.5 issue that has an active exploit campaign and sits on an internet-facing asset processing payment data is a much higher real risk. Modern prioritization layers additional signals on top of the Base score.

EPSS — the Exploit Prediction Scoring System, also maintained by FIRST.org — is a daily-updated probability score for the likelihood that a given CVE will be exploited in the wild within the next 30 days. EPSS uses statistical models trained on observed exploitation data; it complements CVSS by answering "how likely is this to be attacked?" rather than "how bad would it be if it were?". A CVSS 9.8 with an EPSS of 0.001 is much less urgent than a CVSS 7.0 with an EPSS of 0.7.

CISA KEV — the U.S. Cybersecurity and Infrastructure Security Agency's Known Exploited Vulnerabilities catalog — is a curated list of CVEs that have been observed in active exploitation. KEV is binary (a CVE is either on the list or not) and is updated as new evidence becomes available. Inclusion in KEV is a strong signal that the vulnerability matters in the real world; U.S. federal civilian agencies are required to remediate KEV entries within defined deadlines under Binding Operational Directive 22-01.

Reachability analysis — covered in detail in the SCA guide — asks whether a vulnerable function is actually invoked through the application's call graph. A vulnerable library function that is shipped in the artifact but never called by any code path is present-but-unreachable, which generally drops priority sharply. Business context — the criticality of the affected asset, regulatory scope, internet exposure, data sensitivity — is the final layer. A program that combines CVSS Base score, EPSS probability, KEV membership, reachability, and business context produces a prioritized queue that bears almost no resemblance to a raw "sort by CVSS descending" list.
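One way to make that combination concrete is a composite sort key over the signals just described. A sketch only: the ordering (KEV first, then reachability, then likelihood-times-severity) and the field names are illustrative assumptions, not a standard formula.

```python
def priority_rank(f: dict) -> tuple:
    """Sort key combining KEV membership, reachability, EPSS, and CVSS.
    Python sorts tuples left to right; False sorts before True."""
    return (
        not f.get("kev", False),            # KEV-listed findings first
        not f.get("reachable", True),       # reachable before present-but-unreachable
        -(f.get("epss", 0.0) * f["cvss"]),  # likelihood x severity, descending
    )

queue = [
    {"id": "A", "cvss": 9.8, "epss": 0.001, "kev": False, "reachable": True},
    {"id": "B", "cvss": 7.0, "epss": 0.70,  "kev": False, "reachable": True},
    {"id": "C", "cvss": 6.5, "epss": 0.40,  "kev": True,  "reachable": True},
]
ranked = sorted(queue, key=priority_rank)
print([f["id"] for f in ranked])  # ['C', 'B', 'A']
```

Note the inversion of the naive order: the CVSS 9.8 finding lands last because nothing suggests anyone is exploiting it, while the KEV-listed 6.5 leads the queue. Business-context weighting would add further terms to the tuple in the same pattern.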

The Enterprise Vulnerability Management Platform Landscape

The vulnerability management platform market splits into two broad camps. The first is platforms with their own scanning engines — typically network and host scanners, often expanded into web application and cloud scanning. The second is aggregation platforms (sometimes called risk-based vulnerability management, or RBVM) that ingest findings from third-party scanners and add prioritization and workflow on top. Several vendors blur the line. The table below is a vendor-neutral overview based on each vendor's publicly documented product positioning.

| Vendor | Product | Primary Positioning |
| --- | --- | --- |
| Tenable | Tenable Vulnerability Management | Network/host scanning roots, expanded to cloud, web, and exposure management |
| Qualys | Qualys VMDR | Cloud-native vulnerability, detection, and response across IT, cloud, web, and containers |
| Rapid7 | InsightVM | Integrated vulnerability management with live monitoring and remediation tracking |
| CrowdStrike | Falcon Spotlight | Endpoint-agent-based vulnerability assessment integrated with the Falcon platform |
| Vulcan Cyber | Vulcan Cyber Platform | Risk-based aggregation across third-party scanners with remediation orchestration |
| Cisco | Kenna Security (Cisco Vulnerability Management) | Risk-based prioritization platform consuming third-party scanner data |
| Brinqa | Brinqa Platform | Cyber risk graph aggregating findings, assets, and business context |
| NopSec | Unified VRM | Risk-based vulnerability management with prioritization and remediation workflow |

Selection criteria depend on what the organization already owns. Teams with a heavy network and host scanning footprint often standardize on the platform that owns those scanners (Tenable, Qualys, Rapid7). Teams with strong best-of-breed scanners across categories — say, GraphNode for AppSec, a separate cloud security platform, and an endpoint vendor — often layer a pure aggregation platform (Vulcan, Kenna, Brinqa) on top to unify the view. There is no single right answer; the wrong answer is to buy a platform without first defining what "fixed" means and how the queue will actually be worked.

Building an Application Vulnerability Management Program

A practical rollout sequence for application vulnerability management looks like this. Step one: turn on SAST and SCA in CI on every active repository, with severity policies defined (critical fails the build, high requires triage within five business days, medium within thirty). The scanners produce the raw finding stream; without consistent CI coverage the rest of the program is missing inputs. Step two: stand up a centralized findings store — either an ASPM platform, a vulnerability management platform with AppSec ingestion, or a purpose-built internal data layer — that normalizes SAST and SCA output and joins them to application and owner metadata.

Step three: define and enforce remediation SLAs. Every finding above a defined severity gets an auto-created ticket assigned to the application owner, with a target fix date based on severity. SLA compliance percentage becomes a tracked metric. Step four: layer prioritization signals beyond CVSS — EPSS, KEV membership, reachability where supported, and business context tags — so engineering does not waste effort on low-real-risk items. Step five: build an executive dashboard with mean time to remediate (MTTR) by severity, open finding count by age bucket, and SLA compliance by application. The dashboard is what creates accountability; without it, the program drifts.
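The SLA enforcement in step three comes down to computing a due date per finding at creation time. A sketch using the example windows above (the specific windows, the business-day convention for High, and the 90-day fallback for lower severities are illustrative policy choices, not a standard):

```python
from datetime import date, timedelta

# Example policy: High within 5 business days, Medium within 30 calendar days.
SLA_DAYS = {"Critical": 0, "High": 5, "Medium": 30}

def add_business_days(start: date, n: int) -> date:
    """Advance n weekdays past start, skipping Saturday and Sunday."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            n -= 1
    return d

def sla_due_date(severity: str, opened: date) -> date:
    if severity == "High":
        return add_business_days(opened, SLA_DAYS["High"])
    # Critical fails the build (same-day); others get calendar-day windows,
    # with an assumed 90-day fallback for severities not in the policy table.
    return opened + timedelta(days=SLA_DAYS.get(severity, 90))

print(sla_due_date("High", date(2026, 1, 2)))    # a Friday -> the following Friday
print(sla_due_date("Medium", date(2026, 1, 2)))
```

The due date gets stamped onto the auto-created ticket, and SLA compliance percentage is then just the share of closed findings whose close date beat their due date.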

The triage workflow for individual findings is its own discipline — see audit and triage for the operational pattern. The program-level lesson is that scanners are commodity inputs; the differentiator is the workflow that consumes them. A program with mediocre scanners and rigorous SLA enforcement consistently outperforms a program with best-in-class scanners and no operating cadence.

Frequently Asked Questions

What is the difference between CVE and CVSS?

CVE and CVSS describe different things. CVE (Common Vulnerabilities and Exposures) is an identifier system run by MITRE that gives every publicly disclosed vulnerability a globally unique ID in the format CVE-YYYY-NNNNN. CVSS (Common Vulnerability Scoring System) is a scoring framework run by FIRST.org that produces a 0.0 to 10.0 severity score for a vulnerability. A single finding typically carries both: the CVE identifies which vulnerability it is, and the CVSS score communicates how severe it is. CVE answers "what?", CVSS answers "how bad?".

What is a CVSS score range?

CVSS scores run on a single decimal scale from 0.0 to 10.0, with severity bands defined as None (0.0), Low (0.1 to 3.9), Medium (4.0 to 6.9), High (7.0 to 8.9), and Critical (9.0 to 10.0). The bands are advisory — organizations are free to define their own SLA thresholds — but they are the de facto language used in vendor advisories and regulatory frameworks. The current widely-deployed specification is CVSS v3.1; CVSS v4.0 was released by FIRST.org in November 2023 and is being adopted across scanners and databases through 2025 and 2026.

Is the NVD reliable?

The NVD is operated by NIST and remains the default authoritative source for vulnerability metadata in U.S. federal procurement and many compliance frameworks, but it is no longer treated as a single point of truth by serious vulnerability programs. In February 2024, NIST publicly acknowledged a significant slowdown in NVD enrichment, with a large portion of newly published CVEs stuck in "Awaiting Analysis" without CVSS scores or CPE coordinates. NIST has continued to work through the backlog, but the incident reset best practice: mature programs now aggregate NVD with the GitHub Advisory Database, OSV.dev, and commercial research feeds rather than relying on the NVD alone.

What is application vulnerability management?

Application vulnerability management is the slice of broader vulnerability management that deals with vulnerabilities in software applications — the in-house source code your team writes plus the open-source dependencies it pulls in. The scanner inputs are SAST (for code) and SCA (for dependencies); the findings flow into the same Discover, Classify, Prioritize, Remediate, Verify, Report lifecycle as network and infrastructure vulnerabilities. SCA findings carry CVE IDs because they correspond to publicly disclosed dependency vulnerabilities; SAST findings typically carry CWE classifications because they describe weakness categories in code your team owns.

Do I need a separate vulnerability management platform if I have SAST and SCA?

It depends on scope. If your security program is application-only and your scanners include their own triage, ticketing, and SLA workflow, a separate vulnerability management platform may not be required. If the program also covers network, host, cloud, container, and endpoint vulnerabilities — and most enterprise programs do — then a vulnerability management or ASPM platform usually pays for itself by aggregating findings across the full estate, deduplicating across overlapping scanners, applying consistent prioritization, and producing a single executive view. SAST and SCA are best-of-breed scanners that sit upstream of that aggregation layer; they feed the platform, they do not replace it. GraphNode positions itself in the scanner role for AppSec rather than as an enterprise-wide vulnerability management platform.

Get the SAST + SCA Findings Your Vulnerability Management Program Needs

GraphNode is the AppSec scanner layer upstream of your vulnerability management platform. Deep data flow SAST across 13+ languages and SCA for the full transitive dependency tree, in one engine, ready to feed any VM or ASPM workflow.

Request Demo