
Why Sensitive Data Should Never Be in a URL

Web Security · March 11, 2026 · 9 min read

Tokens in query strings. Session IDs in URL paths. API keys as parameters. It happens more often than it should, and the consequences go beyond what most developers expect. Once sensitive data enters a URL, it escapes through channels you don't control and often can't even see. The Referer header gets most of the attention, but it is only one of many leakage channels, and not even the most dangerous one.

This post covers exactly how that leakage happens, where the data ends up, and what to do instead.

The Referer Header: A Built-In Leak

When you click a link that takes you from one page to another, your browser includes a Referer header in the HTTP request to the destination. The value of this header is the URL of the page you just left.

Yes, "Referer" is a typo. It was misspelled in the original HTTP specification in 1996, and the typo stuck. The entire internet standardized around a spelling mistake. The newer Referrer-Policy header spells it correctly, which makes searching for documentation extra fun.

Here's what happens in practice:

  1. A user is on https://example.com/reset?token=a1b2c3d4
  2. They click a link to https://external-site.com/page
  3. The browser sends: Referer: https://example.com/reset?token=a1b2c3d4

The destination site now has the password reset token. No exploitation required. The browser handed it over as part of normal HTTP behavior.
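To make that concrete, here is a minimal sketch of all the receiving server has to do with the header value. No exploit, just standard URL parsing (pure Python stdlib; the URL is the example from above):

```python
from urllib.parse import urlsplit, parse_qs

def tokens_from_referer(referer: str) -> dict:
    """Parse a Referer header value and return its query parameters.

    This is all a destination site needs to harvest secrets that
    arrive in the Referer header.
    """
    query = urlsplit(referer).query
    return parse_qs(query)

# The header value from the navigation above:
leaked = tokens_from_referer("https://example.com/reset?token=a1b2c3d4")
print(leaked)  # {'token': ['a1b2c3d4']}
```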

Referrer-Policy: What Modern Browsers Actually Do

The good news: modern browsers no longer send the full URL by default on cross-origin requests.

Starting with Chrome 85 (August 2020) and Firefox 87 (March 2021), the default Referrer-Policy is strict-origin-when-cross-origin. Here's what that means:

  • Same-origin requests: Full URL is sent (path + query string included).
  • Cross-origin requests (same security level): The browser strips everything after the domain and sends only the origin, e.g. https://example.com/.
  • Security downgrade (HTTPS to HTTP): No referrer sent at all.

So in the example above, with the default policy in a modern browser, external-site.com would only receive https://example.com/. The token is stripped. This is a meaningful improvement.
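The three rules can be sketched as a small function. This is my own illustration of the policy's behavior, not browser source code, and it skips edge cases like ports, fragments, and invalid URLs:

```python
from typing import Optional
from urllib.parse import urlsplit

def referer_value(from_url: str, to_url: str) -> Optional[str]:
    """Approximate what strict-origin-when-cross-origin sends.

    Returns the Referer header value, or None when no referrer is sent.
    """
    src, dst = urlsplit(from_url), urlsplit(to_url)
    # HTTPS -> HTTP is a security downgrade: send nothing.
    if src.scheme == "https" and dst.scheme == "http":
        return None
    # Same origin: the full URL (minus any fragment).
    if (src.scheme, src.netloc) == (dst.scheme, dst.netloc):
        return from_url.split("#")[0]
    # Cross-origin at the same security level: origin only.
    return f"{src.scheme}://{src.netloc}/"

print(referer_value("https://example.com/reset?token=a1b2c3d4",
                    "https://external-site.com/page"))
# https://example.com/
```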

But here's the catch. If the source site explicitly sets:

<meta name="referrer" content="unsafe-url">

or sends the HTTP response header:

Referrer-Policy: unsafe-url

then the browser sends the complete URL to every destination on every navigation. Path, query parameters, everything. Every third-party resource loaded by the page gets it too.

Full URL leakage via the Referer header requires the source site to actively opt in by overriding the browser default. In theory, sites that don't touch this setting are protected. In practice, I've seen unsafe-url quietly set by CMS plugins, marketing tools, and copy-pasted Stack Overflow snippets that nobody ever questioned.

A Correction: Element-Level referrerpolicy Does Override Document Policy

While researching this, I initially believed that a referrerpolicy="unsafe-url" attribute on an individual <a> tag would be insufficient to override a stricter document-level policy set via HTTP header. After digging into the W3C Referrer Policy specification, that turns out to be wrong.

The actual priority order is:

  1. Element-level referrerpolicy attribute, which takes the highest priority and applies only to requests from that specific element
  2. Document-level <meta name="referrer"> tag
  3. HTTP response header Referrer-Policy
  4. Browser default (strict-origin-when-cross-origin)

This means if a page has Referrer-Policy: strict-origin-when-cross-origin set via the server, but a specific <a> tag has referrerpolicy="unsafe-url", the browser will send the full URL for that specific navigation. The element-level attribute wins.

The practical implication: a single careless anchor tag can leak a full URL even on an otherwise well-configured page. Look for these during code review.
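The precedence can be expressed as a small resolver. This is my own sketch of the ordering described above, not browser code; real browsers also validate policy strings and handle multiple or invalid values:

```python
from typing import Optional

def effective_policy(element_attr: Optional[str] = None,
                     meta_policy: Optional[str] = None,
                     header_policy: Optional[str] = None) -> str:
    """Resolve the referrer policy for a request, highest priority first."""
    for policy in (element_attr, meta_policy, header_policy):
        if policy:
            return policy
    return "strict-origin-when-cross-origin"  # browser default

# The scenario from the text: strict server header, careless anchor tag.
print(effective_policy(element_attr="unsafe-url",
                       header_policy="strict-origin-when-cross-origin"))
# unsafe-url
```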

A Note on file:// URLs

I also tested this from local file:// pages. No referrer is sent, but not because of strict-origin-when-cross-origin. Browsers treat the local filesystem as a special privacy context and suppress the referrer entirely for file:// origins, regardless of any policy you set.

The Referer Header Is Not the Only Problem

Here's what most discussions of this topic miss. The Referer header is one leakage channel. It's arguably not even the most dangerous one.

Third-Party Scripts: Direct URL Access

Every piece of JavaScript running on the page can read window.location.href directly. It does not matter if the script is yours or comes from a third party. Referrer-Policy does nothing to prevent this.

This is where the real-world impact becomes concrete. Take tawk.to, a widely used free live chat widget. tawk.to does exactly what it should: it shows the site owner which page each visitor is currently browsing. The full URL of the page appears in the site owner's dashboard.

Now imagine a visitor lands on:

https://example.com/reset-password?token=a1b2c3d4e5f6

If tawk.to's widget is loaded on that page, the script reads window.location.href and sends it to tawk.to's servers. The site owner opens their dashboard and sees the complete URL with the reset token right there. tawk.to now has that token in their server logs.

This is not a vulnerability in tawk.to. It's working exactly as designed. The vulnerability is putting the token in the URL in the first place.

And it's not just tawk.to. Every third-party script loaded on the page has the same access:

  • Google Analytics / Google Tag Manager
  • Facebook Pixel
  • Hotjar, Mixpanel, Segment, Amplitude
  • Ad network tags
  • Any CDN-hosted library
  • Any embedded widget, chat tool, or support widget

Each one of these reads and transmits the URL to its own servers as part of standard operation. Your reset token, session ID, or API key is now sitting in a dozen different third-party databases, each with their own retention policy, access controls, and breach risk.

Server and Infrastructure Logs

Every component in the request path logs the full URL by default:

  • Web server access logs: Apache, Nginx, IIS all log the full request URI including query strings.
  • CDN logs: Cloudflare, AWS CloudFront, Fastly. If you use a CDN, there's another copy of every URL.
  • Reverse proxy logs: HAProxy, Envoy, Traefik. Every layer adds another log.
  • Load balancer logs: AWS ALB, GCP Load Balancer.
  • WAF logs: In most configurations, web application firewalls log full request details including URLs. Some providers like Cloudflare may redact or omit query strings depending on the tier and settings, but you should not assume that by default.

That's potentially five or six copies of every token that ever traveled in a URL, spread across different systems with different access controls, different retention periods, and different teams who can read them.
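One partial mitigation on the infrastructure side is to redact query strings before they reach the log file. A minimal sketch (the function and parameter list are my own, and this only helps for logs you control — the real fix is keeping secrets out of the URL entirely):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameter names that should never appear in plaintext in a log line.
SENSITIVE_PARAMS = {"token", "api_key", "code", "state", "session"}

def redact_url(url: str) -> str:
    """Replace the values of sensitive query parameters with REDACTED."""
    parts = urlsplit(url)
    pairs = [(k, "REDACTED" if k.lower() in SENSITIVE_PARAMS else v)
             for k, v in parse_qsl(parts.query, keep_blank_values=True)]
    return urlunsplit(parts._replace(query=urlencode(pairs)))

print(redact_url("https://example.com/reset?token=a1b2c3d4&lang=en"))
# https://example.com/reset?token=REDACTED&lang=en
```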

Browser-Side Leakage

  • Browser history: The URL is saved in the local history database. If the user has Chrome Sync or Firefox Sync enabled, it's uploaded to Google's or Mozilla's servers and synced across every device on that account.
  • Autocomplete / address bar suggestions: The URL will appear as a suggestion when the user starts typing in the address bar.
  • Bookmarks: If a user bookmarks the page, the full URL is stored and potentially synced.
  • Shared links: Users copy URLs from the address bar to share them. They won't notice (or care about) a token in the query string.
  • Screenshots and screen recordings: The address bar is visible in screen shares, presentations, and support tickets.
  • Browser extensions: Any extension with the right permissions can read window.location.href on every page you visit, exactly the same way a third-party script can. The difference is that extensions run across all sites, not just the ones that loaded them.

Network-Level Leakage

  • Corporate proxies: Many organizations route all HTTP(S) traffic through a proxy that performs TLS interception and logs full URLs.
  • ISP transparent proxies: Less common with HTTPS, but they still exist in some regions.
  • Firewall audit logs: Enterprise firewalls routinely log full request URLs for compliance.

Real-World Vulnerability Classes

This isn't theoretical. Sensitive data in URLs is a well-documented vulnerability class that shows up in production applications constantly.

Password Reset Tokens

The most common case. Application sends a reset link via email:

https://example.com/reset?token=a1b2c3d4e5f6

The user clicks the link. The reset page loads. So does Google Analytics, a chat widget, and a retargeting pixel. All three now have the token. If the user clicks any external link before submitting the form, the token may also travel in the Referer header, depending on the site's policy.

OAuth Authorization Codes and State Parameters

OAuth flows pass tokens in the URL by design:

https://example.com/callback?code=AUTH_CODE&state=STATE_TOKEN

The authorization code can be exchanged for an access token. If it leaks before the exchange, an attacker can complete the OAuth flow and gain access to the user's account. The state parameter is meant to prevent CSRF. If it leaks, that protection is gone.

PKCE (Proof Key for Code Exchange) mitigates the authorization code leakage specifically. The client generates a random code_verifier before the flow starts and sends a hash of it in the authorization request. When exchanging the code for a token, the client must present the original verifier. An attacker who intercepts the code but does not have the verifier cannot complete the exchange. The OAuth 2.1 draft makes PKCE mandatory for all clients, not just public ones. If your OAuth implementation does not use PKCE, the authorization code in the URL is a live credential that anyone in the leakage chain can use.
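The verifier/challenge mechanics fit in a few lines. This sketch follows the S256 method from RFC 7636 (the challenge is the base64url-encoded, unpadded SHA-256 hash of the verifier); function names are my own:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple:
    """Generate a PKCE code_verifier and its S256 code_challenge."""
    verifier = secrets.token_urlsafe(32)  # unguessable URL-safe string
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

def verify_pkce(verifier: str, challenge: str) -> bool:
    """What the authorization server checks at token-exchange time."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii") == challenge

verifier, challenge = make_pkce_pair()
# The challenge travels in the (leaky) URL; the verifier never does.
assert verify_pkce(verifier, challenge)
```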

API Keys in Query Strings

Some APIs accept authentication via query parameters:

https://api.example.com/data?api_key=sk_live_abc123

Every proxy, CDN, log aggregation service, and monitoring tool in the request path now has the API key. If the API doesn't restrict by IP or have usage limits, that key can be used by anyone who finds it in any of those logs.

OWASP Recognition

OWASP explicitly warns against this. The Web Security Testing Guide (WSTG) includes testing for sensitive information in URLs as part of its information leakage testing procedures. In the OWASP Top 10 2021, this falls under A04:2021 Insecure Design, which covers sensitive data exposure through poor architectural decisions. It also touches A02:2021 Cryptographic Failures, specifically the failure to protect sensitive data in transit from unintended exposure.

If your application puts sensitive tokens in URLs, it will get flagged on any competent security assessment. This is not an edge case. It is a well-known, well-documented vulnerability class.

What to Do Instead

1. Use POST Bodies for Token Submission

Password reset tokens, authentication codes, and sensitive parameters should be submitted in the request body, not the URL:

POST /reset-password HTTP/1.1
Content-Type: application/json

{
  "token": "a1b2c3d4e5f6",
  "new_password": "..."
}

POST request bodies are not logged by web servers in their default configuration, not sent in the Referer header, and not stored in browser history.

2. Use the Authorization Header for API Authentication

For API authentication, use the Authorization header:

GET /api/data HTTP/1.1
Authorization: Bearer sk_live_abc123

HTTP headers are not part of the URL. They're not logged by default in most server configurations, not leaked through the Referer header, and not stored in browser history or bookmarks.
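In Python's standard library, for example, the key stays in a header rather than in the URL that every log will record (sk_live_abc123 is the placeholder key from above; no request is actually sent here):

```python
import urllib.request

API_KEY = "sk_live_abc123"  # placeholder key from the example above

# The key travels in a header, not in the URL.
req = urllib.request.Request(
    "https://api.example.com/data",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

print(req.full_url)                     # no key in the URL
print(req.get_header("Authorization"))  # Bearer sk_live_abc123
```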

3. If GET Must Be Used: Short-Lived and Single-Use

Sometimes a GET request with a token in the URL is unavoidable. Email password reset links need to be clickable, and email clients don't support POST requests. When you can't avoid it:

  • Make tokens short-lived. A password reset token should expire in 15 to 30 minutes, not 24 hours.
  • Make tokens single-use. Invalidate the token immediately on first use. Even if it leaks after that, it's worthless.
  • Redirect immediately. When the server receives the GET request with the token, validate it, establish a session, and 302 redirect to a clean URL (/reset-password with no query parameters). Don't render a page with third-party scripts while the token is still in the address bar.
  • Set a restrictive Referrer-Policy on sensitive pages. Any page that handles tokens should send Referrer-Policy: no-referrer explicitly instead of relying on the browser default.
  • Minimize third-party scripts. A password reset page should not load analytics, chat widgets, or marketing pixels. Strip everything that isn't essential to the reset flow.
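The first two points can be sketched as a small in-memory token store. Names and the 15-minute TTL are my own choices; a real implementation would persist hashed tokens in a database:

```python
import secrets
import time
from typing import Optional

TOKEN_TTL_SECONDS = 15 * 60  # short-lived: 15 minutes

# token -> (user_id, issued_at); in-memory stand-in for a real store
_tokens: dict = {}

def issue_reset_token(user_id: str) -> str:
    """Create an unguessable, short-lived, single-use reset token."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = (user_id, time.monotonic())
    return token

def redeem_reset_token(token: str) -> Optional[str]:
    """Validate a token and burn it. Returns the user id, or None.

    pop() makes the token single-use: even if it leaked through a log
    or a Referer header, it is worthless after the first redemption.
    """
    entry = _tokens.pop(token, None)
    if entry is None:
        return None
    user_id, issued_at = entry
    if time.monotonic() - issued_at > TOKEN_TTL_SECONDS:
        return None  # expired
    return user_id

t = issue_reset_token("user-42")
assert redeem_reset_token(t) == "user-42"  # first use succeeds
assert redeem_reset_token(t) is None       # second use fails
```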

4. Audit Your Pages

Open your browser's DevTools, go to the Network tab, and load your password reset page. Count the third-party domains. Every one of them can read window.location.href. Ask yourself whether each of them needs to be there.

Then check your server logs. Search for URLs containing tokens. You will almost certainly find them, and when you do, you have a retention and access control problem to address on top of the application fix.
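A quick way to start that log search is a regex over suspect parameter names. A rough sketch (the name list is my own and deliberately incomplete; extend it for your application):

```python
import re

# Query parameter names that commonly carry secrets.
SUSPECT = re.compile(r"[?&](token|api_key|code|session|key)=([^&\s\"]+)",
                     re.IGNORECASE)

def find_leaked_tokens(log_lines):
    """Yield (line_number, param, value) for token-like query params."""
    for lineno, line in enumerate(log_lines, start=1):
        for match in SUSPECT.finditer(line):
            yield lineno, match.group(1), match.group(2)

log = [
    '10.0.0.1 - - "GET /reset?token=a1b2c3d4 HTTP/1.1" 200',
    '10.0.0.2 - - "GET /index.html HTTP/1.1" 200',
]
print(list(find_leaked_tokens(log)))
# [(1, 'token', 'a1b2c3d4')]
```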

The Bottom Line

Sensitive data in URLs is one of those issues that feels like it should be obvious, but it keeps showing up in production. The leakage surface is wide: Referer headers, third-party JavaScript, server logs, CDN logs, browser history, proxy logs, screen recordings, shared links. Once a token enters a URL, you've lost control over where it ends up.

The modern strict-origin-when-cross-origin default only protects against Referer header leakage on cross-origin navigation. It does nothing for all the other channels listed above.

The fix is straightforward: keep secrets out of URLs. Use POST bodies, use Authorization headers, and when you can't avoid URL tokens, make them expire fast, die on first use, and redirect the user to a clean URL before anything else loads.

Not sure if your application is leaking sensitive data through URLs? I will review your Referrer-Policy configuration and check for exposed tokens across your public-facing pages at no cost. If you want a deeper look at your information leakage surface, you can book a short call and we will figure out what makes sense for your setup. Reach out on LinkedIn or through the contact form.