Security Scanner Reference
Complete technical documentation for every scanner SurfaceGuard runs — what each check does, why it matters, how to read the output, and how to fix the finding. Written for engineers.
TLS & Certificate Security
Validates the X.509 certificate chain, protocol version support, and cipher suite strength on every HTTPS endpoint. A broken TLS stack is a hard blocker for trust: browsers, API clients, and monitoring systems all refuse connections on cert errors.
tls_security
TLS Certificate Validity
What is it?
A TLS certificate is a digitally signed document issued by a Certificate Authority (CA) that proves a server's identity and enables encrypted HTTPS connections. Without it, there is no way for a client to verify it is talking to the real server and not an impersonator.
Every certificate has two timestamps: NotBefore and NotAfter. When the current time is past NotAfter, the certificate is considered expired. Every modern TLS stack — browsers, curl, OpenSSL, Go's tls package, Python's requests — terminates the handshake with a fatal alert and refuses to connect.
SurfaceGuard also flags near-expiry (less than 30 days remaining) as High to give lead time before the domain goes unreachable.
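The expiry rule above can be sketched in a few lines of Python. This is an illustrative approximation of the described severity logic, not SurfaceGuard's actual implementation:

```python
from datetime import datetime, timezone

def expiry_severity(not_after, now=None):
    """Severity per the rule above: Critical once notAfter has passed,
    High with fewer than 30 days remaining, otherwise no finding."""
    now = now or datetime.now(timezone.utc)
    days_left = (not_after - now).days
    if days_left < 0:
        return "CRITICAL"  # already expired: every handshake fails
    if days_left < 30:
        return "HIGH"      # near-expiry warning window
    return "OK"
```
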
Why we check it
An expired certificate makes a domain completely inaccessible over HTTPS. Unlike most security findings that degrade security posture silently, a cert expiry is an outage: users see a hard browser error, API calls fail with certificate-verification errors such as SEC_ERROR_EXPIRED_CERTIFICATE or "certificate has expired", and monitoring integrations break. If background monitoring is enabled in SurfaceGuard, expiries trigger an alert before they cause downtime.
What SurfaceGuard checks
- Opens a TCP connection to port 443 and completes a TLS handshake using OpenSSL
- Extracts the leaf certificate from the returned chain
- Reads notAfter and computes days remaining
- Flags Critical if expired, High if expiring within 30 days
- Records the issuer CN, serial number, and full expiry timestamp as evidence
Certificate valid
Expires: 2026-11-22 (222 days remaining)
Issuer: Let's Encrypt Authority X3
Serial: 04:A1:B2:...
Certificate EXPIRED
Expiry: 2026-04-10 (3 days ago)
Issuer: DigiCert Inc
Issue: notAfter exceeded
How to fix it
Renew via your CA. For Let's Encrypt with Certbot:
certbot renew --force-renewal
# Verify the new cert is deployed:
openssl s_client -connect yourdomain.com:443 -servername yourdomain.com </dev/null 2>/dev/null | openssl x509 -noout -dates
Enable auto-renewal: certbot installs a systemd timer or cron job by
default.
Verify it is active with systemctl status certbot.timer. Ensure
port 80 is reachable for ACME HTTP-01 challenges (or configure DNS-01 if behind
a firewall). For cloud-managed certs (ACM, GCP Managed SSL), auto-renewal is
automatic — check that the cert is attached to the correct load balancer.
Weak TLS Cipher Suites
What is it?
A TLS cipher suite is a named combination of three algorithms: the key exchange (how session keys are negotiated — e.g., ECDHE, RSA), the bulk cipher (how data is encrypted — e.g., AES-256-GCM, 3DES), and the MAC/hash (integrity — e.g., SHA-256). Weak suites use broken or deprecated algorithms that allow passive decryption or active downgrade attacks.
The critical property is forward secrecy (PFS): ephemeral key exchange (ECDHE/DHE) means each session uses a fresh key. Without it (RSA key exchange), anyone who later obtains the server's private key can decrypt all previously recorded traffic.
Why we check it
RC4 is cryptographically broken (NIST deprecated 2015, RFC 7465 prohibits it). 3DES is vulnerable to SWEET32 (CVE-2016-2183): a birthday attack on the 64-bit block cipher allows an attacker on the same network to recover plaintext after ~785 GB of traffic. NULL ciphers provide zero encryption. EXPORT ciphers (40/56-bit keys) are the basis of FREAK and Logjam — both allow real-time decryption of otherwise "secure" sessions if the server accepts them. Anonymous DH ciphers have no authentication, enabling trivial MITM.
What SurfaceGuard checks
- Sends TLS ClientHello messages advertising one weak cipher at a time
- If the server responds with ServerHello, the cipher is accepted and flagged
- Checks: RC4, DES, 3DES (SWEET32), NULL, EXPORT-grade, anonymous DH, CBC-mode without PFS, MD5 MACs
- Separately tests for TLS 1.0 / 1.1 acceptance (deprecated by RFC 8996)
- Records the exact negotiated cipher string as evidence (e.g., TLS_RSA_WITH_3DES_EDE_CBC_SHA)
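The classification of a negotiated cipher string can be sketched as follows. The marker list mirrors the documented checks; it is an illustrative approximation, not SurfaceGuard's source:

```python
def classify_cipher(name):
    """Flag the weak constructs described above in a cipher suite name."""
    upper = name.upper()
    issues = []
    if "RC4" in upper:
        issues.append("RC4 is cryptographically broken (RFC 7465)")
    if "3DES" in upper:
        issues.append("3DES is susceptible to SWEET32")
    if "NULL" in upper:
        issues.append("NULL cipher provides no encryption")
    if "EXPORT" in upper or "EXP_" in upper:
        issues.append("EXPORT-grade key sizes (FREAK/Logjam)")
    if "MD5" in upper:
        issues.append("MD5 MAC is collision-broken")
    # TLS_RSA_* means static RSA key exchange: no forward secrecy
    if upper.startswith("TLS_RSA_"):
        issues.append("no forward secrecy (static RSA key exchange)")
    return issues
```

A modern AEAD suite such as TLS_AES_256_GCM_SHA384 produces no issues; the flagged example from the evidence panel below produces both the SWEET32 and the forward-secrecy findings.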
No weak ciphers accepted
Supported: TLS_AES_256_GCM_SHA384
TLS_CHACHA20_POLY1305_SHA256
Forward secrecy: yes (ECDHE)
Minimum protocol: TLS 1.2
Weak cipher accepted: TLS_RSA_WITH_3DES_EDE_CBC_SHA
No forward secrecy (RSA key exchange)
3DES susceptible to SWEET32
Protocol TLS 1.0 also accepted
How to fix it
Restrict your server to ECDHE/DHE key exchange with AEAD ciphers (GCM or CHACHA20). TLS 1.3 only supports forward-secret AEAD cipher suites by design — enabling it is the cleanest fix.
# nginx — restrict to modern ciphers
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:!aNULL:!eNULL:!RC4:!3DES:!EXPORT;
ssl_prefer_server_ciphers on;
# Apache
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite HIGH:!aNULL:!MD5:!3DES:!RC4:!EXPORT
SSLHonorCipherOrder on
Validate after applying:
openssl s_client -connect host:443 -cipher 'RC4'
should return no ciphers available. Use testssl.sh for a
comprehensive check.
Certificate Anomaly
What is it?
A certificate anomaly is any condition where the certificate cannot be trusted as presented, even if it has not expired. The four most common cases:
- Self-signed — the certificate's issuer and subject are the same entity; no CA has vouched for it
- Hostname mismatch — the Subject Alternative Names (SANs) or CN do not match the domain being scanned (RFC 6125)
- Untrusted root — the issuer chain terminates at a root CA not present in public trust stores (Mozilla, Microsoft, Apple)
- Incomplete chain — an intermediate certificate is missing, causing chain validation to fail on most clients even if the root is trusted
Why we check it
The CA trust model works because your OS and browser ship with a pre-approved list of trusted root CAs. When a server presents a certificate, the client walks the chain — leaf → intermediate → root — and verifies it terminates at one of those trusted roots. If it does, the server's identity is verified. If it doesn't, the connection is rejected.
A valid certificate proves two things: the data is encrypted and the server is who it claims to be. Without CA-signed identity validation, the encryption provides no real protection — an attacker on any network path (ISP, coffee shop, corporate proxy) can intercept traffic by presenting their own self-signed certificate. The client has no way to distinguish it from the real server. Hostname mismatches indicate a misconfigured deployment — commonly caused by deploying a wildcard cert on the wrong subdomain level or forgetting to add a SAN entry.
What SurfaceGuard checks
- Validates the full certificate chain (root → intermediate → leaf) against the Mozilla trust store
- Checks Subject Alternative Names and CN against the scanned hostname using wildcard matching per RFC 6125
- Verifies the root certificate is publicly trusted (not just self-signed at the root)
- Detects missing intermediate certificates (incomplete chain)
- Checks Certificate Transparency log compliance for certs issued after 2018 (SCT presence)
Chain valid (depth: 3)
Subject: CN=example.com
SANs: example.com, www.example.com ✓
Root: DigiCert Global Root CA (trusted)
CT: SCT present (2 logs)
Self-signed certificate
Issuer = Subject: CN=localhost
Hostname mismatch: scanned api.example.com not in SANs [localhost]
No trusted CA in chain
How to fix it
Replace self-signed certificates with CA-issued certificates. Let's Encrypt is free and automated for any publicly accessible domain. For internal services not reachable from the internet, use a private CA and push its root to all client trust stores via MDM/GPO.
# Verify SANs on an existing cert before deploying:
openssl x509 -in cert.pem -noout -text | grep -A1 "Subject Alternative Name"
# Request a cert with multiple SANs (Certbot):
certbot certonly --webroot -d example.com -d www.example.com -d api.example.com
# Check chain completeness (should print 3+ certs):
openssl s_client -connect example.com:443 -showcerts 2>/dev/null | grep -c "BEGIN CERTIFICATE"
For hostname mismatches: reissue the certificate with all required SANs
listed explicitly. Wildcard certs (*.example.com) do not cover
the apex domain (example.com) or second-level subdomains
(a.b.example.com).
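The wildcard coverage rules above follow from RFC 6125: a wildcard may only stand in for the entire left-most label and never crosses label boundaries. A minimal sketch of that matching logic (simplified, not SurfaceGuard's implementation):

```python
def hostname_matches(pattern, hostname):
    """Simplified RFC 6125 matching: '*' replaces exactly one
    left-most label; all remaining labels must match exactly."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # wildcard never spans extra label depth
    if p_labels[0] == "*":
        return p_labels[1:] == h_labels[1:]
    return p_labels == h_labels
```

Hence *.example.com matches www.example.com but neither example.com (fewer labels) nor a.b.example.com (more labels).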
DNS Security
Checks email authentication records (SPF, DKIM, DMARC), DNSSEC signing, zone transfer exposure, CAA records, and MTA-STS/TLS-RPT policies. DNS misconfigurations are the primary vector for email spoofing and domain hijacking attacks.
dns_security
SPF Record
What is it?
Sender Policy Framework (SPF) is a DNS TXT record published at your domain root that declares which mail servers are authorised to send email on your behalf (RFC 7208). The record is a space-separated list of mechanisms — IP ranges, include: references to third-party senders, and an all qualifier — that receiving mail servers evaluate against the envelope sender IP during SMTP delivery.
Three distinct issues are detected: DNS_SPF_WEAK — record exists but ends with ~all (softfail) or ?all (neutral), so unauthorised senders are not rejected. DNS_SPF_PERMISSIVE — uses +all or excessively broad ranges like ip4:0.0.0.0/0. SPF_TOO_MANY_LOOKUPS — requires more than 10 DNS lookups to fully resolve; RFC 7208 mandates a hard limit of 10, so exceeding it causes a permerror on many receivers, silently breaking SPF entirely.
Why we check it
Without a strict -all policy, anyone on the internet can send email appearing to come from your domain — the primary mechanism behind business email compromise (BEC) and brand-impersonation phishing. ~all (softfail) is widely misunderstood as secure: it marks failing mail as suspicious but still delivers it. Only -all (hardfail) causes receivers to reject unauthorised mail. SPF is also a prerequisite for DMARC: without a passing SPF or DKIM check, DMARC has nothing to align against.
What SurfaceGuard checks
- Queries the domain root for TXT records matching v=spf1
- Parses the all qualifier: ~all / ?all → DNS_SPF_WEAK; +all → DNS_SPF_PERMISSIVE
- Recursively counts lookup mechanisms (include:, a:, mx:, ptr:, exists:); flags SPF_TOO_MANY_LOOKUPS if > 10
- Records the full raw SPF string as evidence
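Counting the lookup-triggering mechanisms in a single SPF string can be sketched as below. Note the hedge: a real evaluator must also resolve each include: target and count the mechanisms found there; this illustrative sketch stops at one level:

```python
def count_spf_lookups(record):
    """Count DNS-lookup-triggering terms in one SPF record (RFC 7208
    caps the fully resolved total at 10)."""
    count = 0
    for term in record.split():
        term = term.lstrip("+-~?")  # strip the optional qualifier
        if term.startswith(("include:", "exists:", "redirect=")):
            count += 1
        elif term == "a" or term.startswith(("a:", "a/")):
            count += 1
        elif term == "mx" or term.startswith(("mx:", "mx/")):
            count += 1
        elif term == "ptr" or term.startswith("ptr:"):
            count += 1
    return count
```
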
v=spf1 include:_spf.google.com
include:mailgun.org -all
Policy: -all (hardfail) ✓
DNS lookups: 4 / 10 limit
DNS_SPF_WEAK — softfail policy
v=spf1 include:_spf.google.com ~all
Policy: ~all — unauthorised mail still delivered to recipients
How to fix it
Change ~all to -all. Before deploying, verify every legitimate sending service is listed — any unlisted sender will be rejected. For too-many-lookups, use SPF flattening to inline IPs instead of include: chains.
# Check current SPF record
dig TXT yourdomain.com | grep spf1
# Correct record (hardfail)
v=spf1 include:_spf.google.com include:mailgun.org -all
# Flattened form (avoid lookup limit)
v=spf1 ip4:209.85.128.0/17 ip4:198.61.254.0/23 -all
DKIM Record
What is it?
DomainKeys Identified Mail (DKIM, RFC 6376) adds a cryptographic signature to outgoing email. The sending mail server signs a defined set of headers and the message body using a private key. The corresponding public key is published in DNS at selector._domainkey.yourdomain.com. Receiving servers retrieve the public key and verify the signature — proving the message was sent by a server with access to the private key, and that neither headers nor body were modified in transit.
The selector is an arbitrary label (e.g., google, selector1) that allows multiple keys to coexist, enabling key rotation or separate keys per sending service.
Why we check it
Without DKIM, emails have no cryptographic proof of origin or integrity. Critically, DKIM is required for DMARC alignment when SPF cannot align — for example, when email is forwarded: forwarding rewrites the envelope sender (breaking SPF), but leaves DKIM signatures intact. Without DKIM, your DMARC policy is entirely dependent on SPF, which fails for legitimate forwarded mail. Most enterprise providers (Google Workspace, Microsoft 365) generate DKIM keys automatically, but signing only begins after the DNS TXT record is published — a step that is often missed.
What SurfaceGuard checks
- Probes common DKIM selectors: default, google, mail, k1, s1, s2, selector1, selector2, dkim, email
- Queries {selector}._domainkey.{domain} for TXT records containing v=DKIM1
- Checks key type (k=rsa or k=ed25519) and flags RSA keys shorter than 1024 bits as deprecated
- Records the selector name and key hash as evidence
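A DKIM TXT record is a semicolon-separated list of tag=value pairs, so the record-level checks above reduce to a small parser. A hedged sketch (the issue strings are illustrative, not SurfaceGuard's finding codes):

```python
def parse_dkim_record(txt):
    """Split a DKIM TXT record into its tag=value pairs (v, k, p, ...)."""
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

def dkim_issues(txt):
    tags = parse_dkim_record(txt)
    issues = []
    if tags.get("v") != "DKIM1":
        issues.append("missing or wrong v= tag")
    if tags.get("k", "rsa") not in ("rsa", "ed25519"):  # k defaults to rsa
        issues.append("unknown key type")
    if not tags.get("p"):
        issues.append("empty p= (key revoked or record broken)")
    return issues
```
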
DKIM record found
Selector: google._domainkey.yourdomain.com
v=DKIM1; k=rsa; p=MIGfMA0GCS...
Key: RSA-2048 ✓
No DKIM records found
Checked: default, google, mail, k1, s1, s2, selector1, selector2
No v=DKIM1 record at any selector
How to fix it
Enable DKIM signing in your email provider — this generates the private key on their servers. Publish the provided public key TXT record in DNS. Minimum key size: RSA-2048.
# Google Workspace: Admin Console → Apps → Google Workspace → Gmail → Authenticate email
# Microsoft 365: Exchange Admin Center → Protection → DKIM
# Verify DNS record after publishing:
dig TXT selector._domainkey.yourdomain.com
# Expected: v=DKIM1; k=rsa; p=
# Test real signing — send to this address and get a report back:
# check-auth2@verifier.port25.com
DMARC Policy
What is it?
Domain-based Message Authentication, Reporting and Conformance (DMARC, RFC 7489) is a DNS TXT record at _dmarc.yourdomain.com that ties SPF and DKIM together with an enforcement policy. It tells receiving mail servers what to do when an email fails both authentication checks — and critically enforces alignment: the authenticated domain must match the From: header that end users actually see.
Three policy options: p=none — monitor only, do nothing with failing mail; p=quarantine — send to spam; p=reject — reject at SMTP level, the only option that fully prevents impersonation. DMARC also enables aggregate (rua=) and forensic (ruf=) reporting from major receivers including Gmail, Yahoo, and Microsoft.
DNS_DMARC_MONITOR_ONLY is raised when p=none is set — this provides visibility but zero enforcement and is only appropriate as a temporary state during initial DMARC rollout.
Why we check it
SPF and DKIM each cover a partial view of authentication. An attacker can pass both while forging your From: header (indirect spoofing) — DMARC's alignment requirement closes this gap. Without DMARC at p=reject, your domain can be impersonated in phishing emails delivered to recipients at Gmail, Microsoft, and Yahoo, all of which enforce DMARC policies. The reporting mechanism is equally important: without rua=, you cannot see which services are sending on your behalf or whether legitimate mail is failing authentication.
What SurfaceGuard checks
- Queries _dmarc.{domain} for a TXT record containing v=DMARC1
- Flags DNS_DMARC_MISSING if no record exists
- Flags DNS_DMARC_MONITOR_ONLY if p=none
- Parses pct=, sp= (subdomain policy), adkim= / aspf= (alignment mode strictness)
- Records the full raw DMARC string as evidence
v=DMARC1; p=reject; pct=100
rua=mailto:dmarc@yourdomain.com
Policy: reject ✓ — full enforcement
Subdomain policy: reject ✓
DNS_DMARC_MONITOR_ONLY
v=DMARC1; p=none
rua=mailto:dmarc@yourdomain.com
No enforcement — failing mail delivered unchanged
How to fix it
Start with p=none + reporting to identify all legitimate senders. After reviewing reports for 2–4 weeks, progress through quarantine to reject.
# Step 1 — monitor (initial deployment, collect reports)
_dmarc.yourdomain.com TXT "v=DMARC1; p=none; rua=mailto:dmarc@yourdomain.com; adkim=s; aspf=s"
# Step 2 — quarantine (gradual enforcement)
_dmarc.yourdomain.com TXT "v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc@yourdomain.com"
# Step 3 — reject (target state — full enforcement)
_dmarc.yourdomain.com TXT "v=DMARC1; p=reject; pct=100; sp=reject; rua=mailto:dmarc@yourdomain.com"
# Verify:
dig TXT _dmarc.yourdomain.com
DNSSEC
What is it?
DNS Security Extensions (DNSSEC, RFC 4033) adds cryptographic signatures to DNS records. The standard DNS protocol returns responses with no authentication — a resolver has no way to verify that the answer came from an authoritative server and was not modified in transit. DNSSEC solves this by publishing RRSIG (signature) records alongside each DNS record set, signed with a Zone Signing Key (ZSK). The ZSK's public key is published as a DNSKEY record, and a Delegation Signer (DS) record at the parent zone creates a chain of trust all the way up to the DNS root, which is signed by ICANN.
Why we check it
Without DNSSEC, DNS cache poisoning attacks (Kaminsky attack, CVE-2008-1447) allow an attacker who can send forged UDP responses faster than the real nameserver to inject false records into a resolver's cache — redirecting all users of that resolver to attacker-controlled infrastructure. This affects A, MX, and NS records: a poisoned A record redirects TCP connections before the TLS handshake begins. DNSSEC is also a prerequisite for DANE (RFC 6698), which allows publishing TLS certificate fingerprints directly in DNS.
What SurfaceGuard checks
- Queries DNSKEY records to check if key material is published (KSK + ZSK)
- Checks for a DS record at the parent zone (anchors the chain of trust)
- Verifies RRSIG records are present and not expired on the SOA and A records
- Records whether the full chain root → TLD → domain is intact
DNSSEC enabled
DNSKEY: present (KSK + ZSK)
DS record at parent zone: present
RRSIG valid, expires: 2026-05-14
Chain of trust: complete ✓
DNSSEC not configured
No DNSKEY at yourdomain.com
No DS record at parent zone (.com)
DNS responses are unauthenticated
How to fix it
Enable DNSSEC in your DNS provider's control panel. Cloudflare, Route 53, and Google Cloud DNS handle key rotation automatically. If your DNS provider and registrar are different, manually submit the DS record to your registrar after enabling DNSSEC.
# Verify DNSSEC is active:
dig DNSKEY yourdomain.com +dnssec +short
# Check DS record at parent zone:
dig DS yourdomain.com @a.gtld-servers.net +short
# Full chain validation (BIND's delv):
delv @8.8.8.8 yourdomain.com A +rtrace
# Should show: fully validated
DNS Zone Transfer
What is it?
A DNS zone transfer (AXFR query, RFC 5936) is the mechanism used by secondary nameservers to replicate a full copy of a DNS zone from the primary. When a nameserver responds to an AXFR query from any source — rather than only authorised secondary IPs — it returns every DNS record for the domain in a single response: all subdomains, internal hostnames, IP addresses, mail servers, and service records. This hands an attacker a complete, authoritative infrastructure map in one query.
Why we check it
An unrestricted zone transfer is the most efficient passive reconnaissance technique available. Instead of brute-forcing subdomains one-by-one (which takes hours and only finds guessable names), an attacker gets the full authoritative list in under a second — including internal hostnames never meant to be public, such as db-primary.internal.example.com, vpn-gateway.example.com, or jenkins.internal. These names reveal infrastructure topology, technology stack, and attack surface that would otherwise take significant effort to discover.
What SurfaceGuard checks
- Resolves all authoritative nameservers via NS record query
- Sends an AXFR query to each nameserver from a public IP
- If any nameserver responds with zone records rather than REFUSED or NOTAUTH, the finding is raised
- Records the number of records leaked and a sample of returned hostnames as evidence
Zone transfer not permitted
ns1.yourdomain.com → REFUSED ✓
ns2.yourdomain.com → REFUSED ✓
All nameservers correctly restricted
ZONE TRANSFER PERMITTED
ns1.yourdomain.com → AXFR succeeded
Records leaked: 47
Sample: db-primary.internal
vpn-gw, staging.api, dev.admin
How to fix it
Restrict AXFR to authorised secondary nameserver IPs. Use TSIG (Transaction Signature) keys for authenticated replication rather than relying on IP allowlists alone. Managed DNS providers (Cloudflare, Route 53, Google Cloud DNS) disable zone transfers entirely by default — this finding only appears on self-hosted BIND/PowerDNS.
# BIND named.conf — restrict zone transfer to secondary only
zone "yourdomain.com" {
type primary;
file "zones/yourdomain.com";
allow-transfer { 192.0.2.53; }; # secondary NS IP
also-notify { 192.0.2.53; };
};
# Verify the fix:
dig AXFR yourdomain.com @ns1.yourdomain.com
# Expected: Transfer failed.
MTA-STS & SMTP TLS Reporting
What is it?
Standard SMTP uses opportunistic TLS — the sending server tries TLS but silently falls back to plaintext if negotiation fails. This makes inbound email delivery vulnerable to downgrade attacks: an attacker positioned between two mail servers can signal TLS is unavailable, causing mail to be delivered unencrypted.
MTA-STS (RFC 8461) closes this gap. It works in two parts: a DNS TXT record at _mta-sts.yourdomain.com signals a policy exists, and a policy file hosted at https://mta-sts.yourdomain.com/.well-known/mta-sts.txt specifies which MX servers are authorised and whether TLS is required (mode: enforce). Conforming senders (Gmail, Microsoft 365) will refuse to deliver mail over plaintext to MTA-STS-protected domains.
TLS-RPT (RFC 8460) is the companion reporting mechanism — a DNS TXT record at _smtp._tls.yourdomain.com that tells senders where to send JSON reports on TLS negotiation failures. Without it, SMTP TLS failures are completely silent.
Why we check it
Email in transit is only as secure as the weakest link in the SMTP relay chain. Without MTA-STS, even domains with perfect SPF, DKIM, and DMARC can have inbound email intercepted in plaintext at the network layer. TLS-RPT is the observability layer: without it, you will never know whether legitimate mail is failing TLS negotiation due to certificate mismatches or intermediary stripping.
What SurfaceGuard checks
- Queries _smtp._tls.{domain} for a TXT record containing v=TLSRPTv1; flags DNS_TLS_RPT_MISSING if absent
- Checks for the MTA-STS policy file at https://mta-sts.{domain}/.well-known/mta-sts.txt
- Verifies MTA-STS mode: testing vs enforce
- Records the rua= reporting endpoint if present
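The mta-sts.txt policy file is plain key: value lines, with mx allowed to repeat once per authorised mail host, so the parsing step above is small. An illustrative sketch:

```python
def parse_mta_sts_policy(text):
    """Parse an mta-sts.txt policy body; 'mx' may appear multiple times."""
    policy = {"mx": []}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue  # skip blanks and malformed lines
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "mx":
            policy["mx"].append(value)
        else:
            policy[key] = value
    return policy
```

Running it on the example policy shown in the fix section below yields mode "enforce" and a single mx entry.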
TLS-RPT: v=TLSRPTv1
rua=mailto:tls@yourdomain.com ✓
MTA-STS policy: enforce
mx: mail.yourdomain.com
max_age: 86400 ✓
TLS-RPT record missing
No TXT at _smtp._tls.yourdomain.com
MTA-STS: not configured
SMTP TLS failures are silent
How to fix it
Deploy TLS-RPT first (one DNS record). Then deploy MTA-STS in testing mode to collect data before switching to enforce.
# 1. TLS-RPT DNS record
_smtp._tls.yourdomain.com TXT "v=TLSRPTv1; rua=mailto:tls-reports@yourdomain.com"
# 2. MTA-STS pointer DNS record
_mta-sts.yourdomain.com TXT "v=STSv1; id=20260414000000Z"
# 3. Policy file at https://mta-sts.yourdomain.com/.well-known/mta-sts.txt
# (requires valid TLS cert on mta-sts subdomain)
version: STSv1
mode: enforce
mx: mail.yourdomain.com
max_age: 86400
# Verify:
dig TXT _smtp._tls.yourdomain.com
HTTP Security Headers
Scans every HTTP response for the presence and correctness of security headers: CSP, HSTS, CORS policy, cookie flags, X-Frame-Options, Referrer-Policy, Permissions-Policy, and more. Missing or misconfigured headers are the most common class of web application security findings.
http_security
Content Security Policy (CSP)
What is it?
Content Security Policy is an HTTP response header that instructs the browser which content sources are trusted for the current page. It is expressed as a series of directives: script-src controls where JavaScript may be loaded from; default-src is the fallback for any directive not explicitly set; object-src 'none' blocks plugins; frame-ancestors controls who can embed the page in a frame. A strict CSP stops the browser from executing injected scripts even if an XSS vulnerability exists in the application.
Key unsafe directives that weaken a CSP: 'unsafe-inline' (allows inline <script> tags, defeating most XSS protection), 'unsafe-eval' (allows eval() and similar), and wildcard * in script-src (allows scripts from any origin).
Why we check it
XSS is the most prevalent web vulnerability class. Without CSP, a single injection point — a reflected parameter, a stored comment, a DOM sink — allows complete client-side compromise: session token theft via document.cookie, credential harvesting by rewriting login forms, keylogging, or redirecting the user to a phishing page. CSP is a browser-enforced safety net that contains the blast radius of an XSS finding: even if an attacker locates an injection point, a strict script-src 'nonce-...' policy prevents the injected code from executing or exfiltrating data.
What SurfaceGuard checks
- Fetches the root page and checks for the Content-Security-Policy response header
- Flags absent CSP as HEADER_CSP_MISSING
- If CSP is present, checks for weakening directives: 'unsafe-inline', 'unsafe-eval', wildcard * in script-src
- Checks whether Content-Security-Policy-Report-Only is used instead (report-only provides no enforcement)
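Detecting the weakening directives reduces to parsing the semicolon-separated directive list and inspecting the effective script-src (falling back to default-src, as browsers do). A hedged sketch of that check, not SurfaceGuard's implementation:

```python
def csp_weaknesses(csp):
    """Flag 'unsafe-inline', 'unsafe-eval', and wildcard sources in the
    effective script-src of a CSP header value."""
    directives = {}
    for directive in csp.split(";"):
        parts = directive.split()
        if parts:
            directives[parts[0].lower()] = parts[1:]
    # script-src falls back to default-src when not set explicitly
    script_src = directives.get("script-src", directives.get("default-src", []))
    issues = []
    if "'unsafe-inline'" in script_src:
        issues.append("'unsafe-inline' in script-src")
    if "'unsafe-eval'" in script_src:
        issues.append("'unsafe-eval' in script-src")
    if "*" in script_src:
        issues.append("wildcard * in script-src")
    return issues
```
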
Content-Security-Policy: default-src 'none'; script-src 'self' 'nonce-r4nd0m'; style-src 'self'; object-src 'none'; frame-ancestors 'none' ✓
Content-Security-Policy header absent
No CSP enforcement on this page
XSS findings have unrestricted impact — no browser containment
How to fix it
Start with Content-Security-Policy-Report-Only to collect violation reports without breaking functionality. Once violations are resolved, switch to enforcing mode. Use nonces or hashes instead of 'unsafe-inline'.
# Nginx — add to server block
add_header Content-Security-Policy "default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self' data:; font-src 'self'; connect-src 'self'; object-src 'none'; frame-ancestors 'none'; base-uri 'self';" always;
# Report-only mode first (for testing — won't break anything):
add_header Content-Security-Policy-Report-Only "default-src 'self'; report-uri /csp-report;" always;
# Python/FastAPI — add middleware:
response.headers["Content-Security-Policy"] = "default-src 'none'; script-src 'self'; object-src 'none';"
HTTP Strict Transport Security (HSTS)
What is it?
HTTP Strict Transport Security (RFC 6797) is a response header that instructs browsers to only connect to a domain over HTTPS for a specified period (max-age). Once a browser has seen the HSTS header, it refuses to make plain HTTP connections to that domain for the duration of the policy — even if the user types http:// — upgrading all requests to HTTPS before they leave the browser.
HSTS_NOT_PRELOADED is a lower-severity variant: HSTS is present but the domain is not on the browser HSTS preload list (hstspreload.org). Preloading hardcodes the policy into Chrome, Firefox, and Safari's source code, protecting first-time visitors who have never received the header before.
Why we check it
Without HSTS, SSL stripping attacks (sslstrip) intercept the first HTTP request before the browser follows the 301 → https:// redirect. The attacker acts as a transparent proxy: the user sees http:// in the address bar, the attacker sees plaintext, the origin server sees what appears to be a normal HTTPS connection. The attack is completely invisible to the user. HSTS closes this gap by preventing the initial HTTP request from ever leaving the browser. Preloading ensures even first-time visitors — who have never received the header — are protected.
What SurfaceGuard checks
- Checks for the Strict-Transport-Security header on HTTPS responses
- Validates max-age value (minimum recommended: 31,536,000 seconds / 1 year)
- Checks for includeSubDomains directive (required for preload eligibility)
- Checks the hstspreload.org API for preload list inclusion; raises HSTS_NOT_PRELOADED if absent
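The header-validation steps above can be sketched as a small parser; the issue strings are illustrative:

```python
ONE_YEAR = 31536000  # recommended minimum max-age in seconds

def hsts_issues(header):
    """Validate a Strict-Transport-Security header value."""
    directives = [d.strip().lower() for d in header.split(";")]
    max_age = None
    for d in directives:
        if d.startswith("max-age="):
            try:
                max_age = int(d.split("=", 1)[1])
            except ValueError:
                pass  # malformed value: treated as missing
    issues = []
    if max_age is None:
        issues.append("missing max-age")
    elif max_age < ONE_YEAR:
        issues.append("max-age below recommended one year")
    if "includesubdomains" not in directives:
        issues.append("missing includeSubDomains (required for preload)")
    return issues
```
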
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload ✓
Preload list status: included ✓
Strict-Transport-Security header absent
First HTTP request is unprotected
SSL stripping attack is possible
How to fix it
Add the header with a minimum max-age of 1 year. Before adding includeSubDomains, verify all subdomains serve valid HTTPS — any HTTP-only subdomain becomes inaccessible once includeSubDomains is enforced. Submit to the preload list only after all subdomains are HTTPS-ready.
# Nginx
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# Apache
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
# Verify current status:
curl -sI https://yourdomain.com | grep -i strict
# Submit for preloading: https://hstspreload.org
CORS Misconfiguration
What is it?
Cross-Origin Resource Sharing (CORS) is a browser mechanism that extends the Same-Origin Policy to allow controlled cross-origin HTTP requests. Servers declare which origins are permitted by responding with Access-Control-Allow-Origin headers. A CORS misconfiguration occurs when the server dynamically reflects the caller's Origin header value back without validation, or uses Access-Control-Allow-Origin: * combined with Access-Control-Allow-Credentials: true — granting any website the ability to make authenticated cross-origin requests and read the response.
Why we check it
The Same-Origin Policy is the browser's fundamental isolation boundary — it prevents scripts on evil.com from reading responses from bank.com. A misconfigured CORS policy tears down this boundary. An attacker who tricks a logged-in user into visiting a malicious site can use JavaScript on that site to make authenticated API requests to the vulnerable origin, read the full response body, and exfiltrate account data, CSRF tokens, or session information. This is a server-side misconfiguration, not a browser bug — the server explicitly grants cross-origin access it should not.
What SurfaceGuard checks
- Sends requests with Origin: https://attacker.surfaceguard-probe.com to all discovered API endpoints
- Checks if the response reflects the injected origin in Access-Control-Allow-Origin
- Tests null origin: Origin: null (exploitable from sandboxed iframes)
- Checks for Access-Control-Allow-Credentials: true combined with a reflected or wildcard origin
- Records the exact reflected header value as evidence
Access-Control-Allow-Origin: https://app.yourdomain.com ✓
Probe origin not reflected
Credentials: not allowed
CORS origin reflected verbatim:
Access-Control-Allow-Origin:
https://attacker.surfaceguard-probe.com
Access-Control-Allow-Credentials: true
Any origin can read authenticated responses
How to fix it
Maintain an explicit allowlist of trusted origins. Never reflect the Origin header value without validating it against the allowlist. Never combine Access-Control-Allow-Credentials: true with Access-Control-Allow-Origin: * — browsers block this combination, but a reflected dynamic origin achieves the same effect.
# Python/FastAPI example — strict allowlist
from fastapi import FastAPI, Request
app = FastAPI()
ALLOWED_ORIGINS = {"https://app.yourdomain.com", "https://admin.yourdomain.com"}
@app.middleware("http")
async def cors_middleware(request: Request, call_next):
    origin = request.headers.get("origin", "")
    response = await call_next(request)
    if origin in ALLOWED_ORIGINS:
        response.headers["Access-Control-Allow-Origin"] = origin
        response.headers["Vary"] = "Origin"  # required for caching correctness
    return response
# Nginx — static allowlist via map
map $http_origin $cors_origin {
"https://app.yourdomain.com" $http_origin;
default "";
}
add_header Access-Control-Allow-Origin $cors_origin always;
Cookie Security Flags
What is it?
Session and authentication cookies should carry three security attributes. Secure: the cookie is only transmitted over HTTPS connections — without it, the cookie is sent in plaintext on any HTTP request. HttpOnly: the cookie is not accessible via JavaScript (document.cookie) — without it, any XSS payload can steal the session token. SameSite=Strict or SameSite=Lax: the cookie is not sent on cross-site requests — without it, CSRF attacks can trigger authenticated actions.
SurfaceGuard identifies cookies whose names match session and auth patterns (session, auth, token, jwt, sid, user) and verifies all three flags are present.
Why we check it
Each missing flag is an independent attack vector. Without Secure: any HTTP sub-request on the same domain transmits the session cookie in plaintext to any on-path observer. Without HttpOnly: a single XSS vulnerability anywhere on the domain immediately yields session takeover — the attacker exfiltrates document.cookie and replays it. Without SameSite: a CSRF attack can perform any authenticated action the victim can perform (funds transfer, email change, account deletion) by embedding a forged request in a cross-origin page.
What SurfaceGuard checks
- Inspects all Set-Cookie headers from the root page and auth endpoints
- Identifies cookies matching session/auth name patterns
- Checks for presence of Secure, HttpOnly, SameSite attributes on each matched cookie
- Also checks for modern cookie prefixes: __Host- (implies Secure + Path=/ + no Domain) and __Secure- (implies Secure)
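The per-cookie check amounts to parsing the Set-Cookie attribute list and reporting what is absent. A hypothetical helper mirroring the steps above (name patterns taken from the list in this section):

```python
import re

# Name patterns from the session/auth matching described above
SESSION_NAME_RE = re.compile(r"session|auth|token|jwt|sid|user", re.I)

def missing_cookie_flags(set_cookie: str) -> list:
    """Return the security attributes absent from one Set-Cookie header value."""
    name = set_cookie.split("=", 1)[0].strip()
    if not SESSION_NAME_RE.search(name):
        return []  # not a session/auth cookie — out of scope
    attrs = {part.strip().split("=", 1)[0].lower()
             for part in set_cookie.split(";")[1:]}
    # __Host- and __Secure- prefixes imply Secure by definition
    implied_secure = name.startswith(("__Host-", "__Secure-"))
    missing = []
    if "secure" not in attrs and not implied_secure:
        missing.append("Secure")
    if "httponly" not in attrs:
        missing.append("HttpOnly")
    if "samesite" not in attrs:
        missing.append("SameSite")
    return missing
```

The bare cookie from the failing example below reports all three flags missing; the prefixed, fully-flagged cookie reports none.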
Set-Cookie: __Host-session=abc123; Path=/; Secure; HttpOnly; SameSite=Strict ✓ All three security flags present
Set-Cookie: session=abc123; Path=/ Missing: Secure, HttpOnly, SameSite Cookie accessible to JS and HTTP CSRF and XSS session theft possible
How to fix it
# FastAPI / Starlette
response.set_cookie(
    key="session",
    value=session_token,
    httponly=True,
    secure=True,
    samesite="strict",
    max_age=3600,
    path="/",
)
# Django settings.py
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_SAMESITE = "Strict"
CSRF_COOKIE_SECURE = True
# Nginx fallback — stamp flags on Set-Cookie responses from upstream
proxy_cookie_flags ~ secure httponly samesite=strict;
Open Redirect
What is it?
An open redirect occurs when a web application accepts a URL as a parameter (e.g., ?next=, ?redirect=, ?return_to=, ?url=) and issues a redirect response to that URL without validating it against an allowlist. An attacker crafts a link like https://trustedsite.com/login?next=https://evil.com — the victim sees a trusted domain in the link preview, clicks it, and is silently redirected to the attacker's site after the server processes the request.
OPEN_REDIRECT_CANDIDATE is raised when a redirect to an external URL is observed but requires authentication or specific conditions to exploit — it needs manual verification before confirming exploitability.
Why we check it
Open redirects are a phishing force-multiplier. Attackers send links that pass email and SMS reputation filters because they originate from a legitimate, trusted domain. In OAuth 2.0 flows, an open redirect on an allowlisted redirect_uri domain can be chained to leak authorisation codes or access tokens to an attacker-controlled endpoint. SAML and SSO integrations that redirect users after authentication are particularly high-risk targets.
What SurfaceGuard checks
- Probes common redirect parameters (next, redirect, url, return, goto, dest) with an external probe URL
- Checks the Location header in 3xx responses for the probe URL value
- Tests both full URL and protocol-relative (//evil.com) redirect values
- Raises OPEN_REDIRECT on confirmed redirect; OPEN_REDIRECT_CANDIDATE on partial match or auth-gated redirects
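The verdict logic for one probed parameter can be sketched as a small decision over the response status and Location header. The function name and probe host below are illustrative, not SurfaceGuard's actual code:

```python
from urllib.parse import urlsplit

PROBE_NETLOC = "probe.example.com"  # illustrative probe host

def classify_redirect(status: int, location: str) -> str:
    """Classify one redirect-parameter probe from its response."""
    if not (300 <= status < 400) or not location:
        return "OK"
    netloc = urlsplit(location).netloc  # also catches //probe.example.com
    if netloc == PROBE_NETLOC:
        return "OPEN_REDIRECT"            # probe URL followed verbatim
    if netloc:
        return "OPEN_REDIRECT_CANDIDATE"  # external redirect, needs manual check
    return "OK"                           # relative redirect only
```

Note that `urlsplit` treats a protocol-relative value such as `//probe.example.com` as having a netloc, so both full and protocol-relative reflections are caught by the same comparison.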
Redirect parameters probed External URL injected — not followed Location: /dashboard (relative only) No open redirect confirmed ✓
OPEN_REDIRECT confirmed GET /login?next=https://probe.example.com → HTTP 302 Location: https://probe.example.com Arbitrary external redirect allowed
How to fix it
Validate redirect destinations against a strict allowlist of your own paths. Never redirect to an absolute URL provided by the user — only relative paths or allowlisted absolute domains.
# Python — allowlist-based redirect validation
from urllib.parse import urlparse

ALLOWED_REDIRECT_HOSTS = {"yourdomain.com", "app.yourdomain.com"}

def safe_redirect(next_url: str, default="/dashboard") -> str:
    parsed = urlparse(next_url)
    # Reject non-HTTP schemes (javascript:, data:)
    if parsed.scheme and parsed.scheme not in ("http", "https"):
        return default
    # Reject if it has a netloc (absolute URL) not in allowlist
    if parsed.netloc and parsed.netloc not in ALLOWED_REDIRECT_HOSTS:
        return default
    # Reject protocol-relative (//evil.com) and backslash (/\evil.com) variants
    if next_url.startswith(("//", "/\\")):
        return default
    return next_url or default
Host Header Injection
What is it?
HTTP/1.1 requires a Host header identifying the target domain. Many web frameworks use the Host header to dynamically construct URLs for password reset emails, activation links, redirects, and canonical links. If the server trusts and reflects the Host header without validation, an attacker can supply a forged host value and have the application generate links pointing to the attacker's infrastructure.
The primary exploit is password reset poisoning: the victim requests a password reset, the attacker intercepts the request and injects Host: attacker.com, the server sends a legitimate-looking reset email containing a link like https://attacker.com/reset?token=..., the victim clicks it, and the reset token is delivered to the attacker.
HOST_HEADER_REFLECTION_CANDIDATE is raised when the injected Host value appears in the response but full exploitation (e.g., via email delivery) cannot be confirmed from an external scan.
Why we check it
Password reset poisoning is a reliable account takeover technique that requires no interaction beyond the victim clicking a legitimate email from the real domain. The attacker doesn't need to intercept traffic or exploit a browser vulnerability — the server generates the malicious link itself. The same reflection also enables web cache poisoning: if the reflected Host value ends up in a cached page served to other users, a single attacker-controlled request can poison the cache for thousands of users.
What SurfaceGuard checks
- Sends requests with Host: probe.surfaceguard-test.com and checks if the injected value appears in the response body or Location headers
- Also tests X-Forwarded-Host and X-Host header injection (used by some reverse proxy setups)
- Checks for reflected host in HTML href/src attributes, meta refresh tags, and JSON response fields
- Raises HOST_HEADER_INJECTION on confirmed reflection; HOST_HEADER_REFLECTION_CANDIDATE on partial match
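The confirmed/candidate split described above hinges on where the injected value appears. A minimal sketch of that decision, assuming the probe value from this section (the function and sink checks are illustrative):

```python
PROBE_HOST = "probe.surfaceguard-test.com"

def host_reflection_verdict(body: str, location: str = "") -> str:
    """Classify a response after injecting PROBE_HOST as the Host header."""
    # Exploitable sinks: generated links and redirect targets
    in_link = (f'href="https://{PROBE_HOST}' in body
               or f'src="https://{PROBE_HOST}' in body)
    in_location = PROBE_HOST in location
    if in_link or in_location:
        return "HOST_HEADER_INJECTION"
    if PROBE_HOST in body:
        # Reflected, but not in a directly exploitable sink
        return "HOST_HEADER_REFLECTION_CANDIDATE"
    return "OK"
```

A reflection inside an href (as in the failing example below) confirms the finding; a bare text reflection only raises the candidate code for manual review.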
Host header injected: probe.surfaceguard-test.com Injected value not reflected in response Response uses hardcoded domain ✓
HOST_HEADER_INJECTION Injected: probe.surfaceguard-test.com Reflected in response body: href="https://probe.surfaceguard-test.com/reset" Password reset poisoning possible
How to fix it
Never use the Host header or request.host to construct URLs. Use a hardcoded BASE_URL environment variable instead. If you must validate the host, use a strict allowlist.
# BAD — trusts the Host header
reset_url = f"https://{request.headers['host']}/reset?token={token}"
# GOOD — hardcoded base URL from config
import os
BASE_URL = os.environ["BASE_URL"] # e.g., "https://yourdomain.com"
reset_url = f"{BASE_URL}/reset?token={token}"
# Django: set ALLOWED_HOSTS strictly
ALLOWED_HOSTS = ["yourdomain.com", "www.yourdomain.com"]
# Never use ["*"] in production
Mixed Content
What is it?
Mixed content occurs when an HTTPS page loads resources over plain HTTP. Browsers classify it into two categories: passive mixed content (images, audio, video) — loaded but flagged with a warning; active mixed content (scripts, stylesheets, iframes, XMLHttpRequest) — blocked by default in modern browsers because these resources can read and modify page content.
Even passive mixed content is a security risk: an on-path attacker can intercept the HTTP sub-request and replace the resource — substituting an image with one containing an exploit, or injecting content into the page. SurfaceGuard detects mixed content in HTML source, inline CSS url() references, and JavaScript-loaded resources.
Why we check it
HTTPS is not end-to-end unless all resources load over HTTPS. A single <script src="http://..."> tag gives any on-path attacker (ISP, corporate proxy, public Wi-Fi) the ability to inject arbitrary JavaScript into an otherwise HTTPS-protected page. This is common in legacy sites that migrated to HTTPS without updating hardcoded http:// asset URLs in their CMS or templates. Active mixed content being blocked by browsers does not resolve the security issue — it just means the page is broken rather than compromised.
What SurfaceGuard checks
- Fetches the HTTPS page and parses HTML for src, href, action attributes pointing to http:// URLs
- Checks inline <style> blocks and external CSS for url(http://...) references
- Classifies findings as active (scripts, stylesheets) or passive (images) mixed content
- Records the specific http:// URLs found as evidence
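The HTML side of this check can be sketched with a regex scan that splits findings into active and passive. A real scanner would use a proper HTML parser; this is an illustrative simplification:

```python
import re

# Tags whose http:// resources count as active vs passive mixed content
HTTP_REF = re.compile(
    r'<(script|link|img|iframe|audio|video|form)\b[^>]*?'
    r'(?:src|href|action)=["\'](http://[^"\']+)', re.I)
ACTIVE_TAGS = {"script", "link", "iframe", "form"}

def find_mixed_content(html: str):
    """Return (active, passive) lists of http:// resource URLs in an HTTPS page."""
    active, passive = [], []
    for tag, url in HTTP_REF.findall(html):
        (active if tag.lower() in ACTIVE_TAGS else passive).append(url)
    return active, passive
```

Because the pattern requires the literal `http://`, resources already loaded over `https://` are never flagged.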
No mixed content detected All resources loaded over HTTPS ✓ 0 http:// src/href references found
Mixed content detected (active) <script src="http://cdn.example.com/lib.js"> Blocked by browser — page broken On-path injection possible if loaded
How to fix it
Replace all http:// resource URLs with https:// or protocol-relative //. Add the upgrade-insecure-requests CSP directive as a safety net — it instructs the browser to upgrade HTTP sub-requests to HTTPS before blocking them.
# CSP directive to auto-upgrade HTTP sub-requests
Content-Security-Policy: upgrade-insecure-requests;
# Find all http:// references in your codebase:
grep -r "src=[\"']http://" templates/ static/
grep -r "url(http://" static/css/
# Django — redirect all HTTP requests to HTTPS
# settings.py
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SECURE_SSL_REDIRECT = True
Subresource Integrity (SRI)
What is it?
Subresource Integrity (SRI) is a browser security feature that allows you to specify a cryptographic hash alongside <script> and <link> tags loading external resources. The browser computes the hash of the fetched resource and compares it against the value in the integrity attribute — if they don't match, the resource is blocked and not executed.
<script src="https://cdn.example.com/jquery-3.6.0.min.js"
integrity="sha384-KyZXEAg3QhqLMpG8r+8fhAXLRk2vvoC2f3B09zVXn8CA4egDlOm65b7e0MgFyc9"
crossorigin="anonymous"></script>
Without the integrity attribute, the browser executes whatever the CDN serves — no verification.
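The integrity value is just a base64-encoded digest of the exact resource bytes, prefixed with the hash algorithm. A small Python equivalent of the openssl pipeline shown later in this section:

```python
import base64
import hashlib

def sri_hash(resource_bytes: bytes, alg: str = "sha384") -> str:
    """Compute a Subresource Integrity value (e.g. "sha384-...") for a resource body."""
    digest = hashlib.new(alg, resource_bytes).digest()
    return f"{alg}-{base64.b64encode(digest).decode()}"
```

The browser performs the same computation on the fetched bytes and refuses to execute the resource if the result differs from the integrity attribute.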
Why we check it
CDN supply chain attacks are a real threat vector. In 2024, the Polyfill.io CDN was compromised after its domain was acquired by a threat actor — every website loading https://polyfill.io/v3/polyfill.min.js without SRI was serving malicious JavaScript to their users. SRI is the only client-side control that protects against CDN compromise: even if the CDN serves a modified file, the browser refuses to execute it. Without SRI, your security posture is entirely dependent on the CDN operator's security practices.
What SurfaceGuard checks
- Parses the page HTML for
<script src>and<link rel="stylesheet" href>tags loading from external origins (different from the page's origin) - Checks each external resource tag for a valid
integrityattribute containing a hash (sha256-,sha384-,sha512-) - Flags each external script/stylesheet missing an
integrityattribute - Records the specific external URLs as evidence
3 external scripts found All have integrity= attribute ✓ Hashes: sha384-... (verified)
SRI missing on external script: https://cdn.example.com/analytics.js No integrity= attribute CDN compromise = immediate XSS
How to fix it
Generate SRI hashes for all external scripts and stylesheets. For scripts that update frequently (analytics, tag managers), consider self-hosting the asset instead — SRI is incompatible with dynamically-versioned resources.
# Generate SRI hash for an external resource:
curl -s https://cdn.example.com/lib.js | openssl dgst -sha384 -binary | openssl base64 -A
# Prepend "sha384-" to the base64 output when writing the integrity attribute
# Use srihash.org for a web UI alternative
# Add to your HTML:
<script src="https://cdn.example.com/lib.js"
integrity="sha384-<hash>"
crossorigin="anonymous"></script>
# For resources you control: self-host them instead
# so SRI hash stays stable across versions
Header Info Leak & HTTP Misconfiguration
What is it?
HEADER_INFO_LEAK: The server exposes technology and version information in response headers that are unnecessary for browser functionality. Common examples: Server: Apache/2.4.41 (Ubuntu), X-Powered-By: PHP/7.4.3, X-AspNet-Version: 4.0.30319. Attackers cross-reference exact version strings against CVE databases to identify unpatched vulnerabilities with minimal effort.
HEADER_OTHER_MISSING: Security headers not covered by dedicated checks are absent. Key ones: X-Content-Type-Options: nosniff — prevents browsers from MIME-sniffing responses, blocking drive-by downloads disguised as images; X-Frame-Options: DENY or SAMEORIGIN — prevents clickjacking by blocking the page from being embedded in an iframe; Referrer-Policy — controls how much URL information is sent in the Referer header to third parties; Permissions-Policy — restricts access to browser features (camera, microphone, geolocation).
HTTP_MISCONFIGURATION: General server configuration issues — TRACE method enabled (Cross-Site Tracing: can be used to read HttpOnly cookies via XSS), OPTIONS response exposing sensitive endpoints, or missing Cache-Control headers on authenticated responses.
Why we check it
Version disclosure in headers gives attackers a precise fingerprint for targeted exploitation — instead of blind vulnerability scanning, they know exactly which CVE to attempt. Clickjacking (missing X-Frame-Options) allows an attacker to overlay transparent iframes over legitimate UI elements, tricking users into performing authenticated actions (button clicks, form submissions) without their knowledge. MIME sniffing attacks (missing nosniff) allow an attacker to upload a file with a benign MIME type that the browser re-interprets as HTML or JavaScript.
What SurfaceGuard checks
- HEADER_INFO_LEAK: Checks for Server, X-Powered-By, X-AspNet-Version, X-Generator headers containing version strings
- HEADER_OTHER_MISSING: Checks for absence of X-Content-Type-Options, X-Frame-Options, Referrer-Policy, Permissions-Policy
- HTTP_MISCONFIGURATION: Sends TRACE and OPTIONS requests; checks Cache-Control on authenticated pages
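The two header checks reduce to set membership plus a version-string match on the response headers. A hypothetical helper (header lists taken from this section; the version heuristic is illustrative):

```python
import re

REQUIRED = ["X-Content-Type-Options", "X-Frame-Options",
            "Referrer-Policy", "Permissions-Policy"]
LEAKY = ["Server", "X-Powered-By", "X-AspNet-Version", "X-Generator"]

def audit_headers(headers: dict):
    """Return (missing security headers, version-leaking headers)."""
    h = {k.lower(): v for k, v in headers.items()}  # header names are case-insensitive
    missing = [name for name in REQUIRED if name.lower() not in h]
    # A bare product token ("nginx") is tolerable; a version ("nginx/1.18.0") leaks
    leaks = [name for name in LEAKY
             if re.search(r"\d+\.\d+", h.get(name.lower(), ""))]
    return missing, leaks
```

Run against the failing example below, `Server: nginx/1.18.0 (Ubuntu)` trips the version heuristic while all four defensive headers land in the missing list.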
X-Content-Type-Options: nosniff ✓ X-Frame-Options: DENY ✓ Referrer-Policy: no-referrer ✓ Server: (omitted) ✓ TRACE: disabled ✓
Server: nginx/1.18.0 (Ubuntu) X-Powered-By: PHP/8.1.2 X-Frame-Options: absent (clickjacking risk) X-Content-Type-Options: absent TRACE method: enabled
How to fix it
# Nginx — suppress version info and add security headers
server_tokens off; # removes version from Server header
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "DENY" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
# Disable TRACE method
if ($request_method = TRACE) { return 405; }
# PHP — suppress X-Powered-By
# php.ini: expose_php = Off
# Cache-Control on authenticated pages
add_header Cache-Control "no-store, no-cache, must-revalidate, private" always;
Email Security
Evaluates the overall email authentication posture: SPF strictness, DKIM key presence, DMARC enforcement level, and aggregate spoofing risk score. A domain without enforced email authentication can be impersonated in phishing campaigns with no technical barrier.
email_security
Email Spoofing Risk
What is it?
Email spoofing is the ability to send mail that appears to originate from a domain you do not control. SurfaceGuard evaluates the combined authentication posture — SPF policy strictness, DKIM key presence, and DMARC enforcement level — and raises EMAIL_SPOOFING_HIGH when the aggregate posture allows unauthenticated senders to reach recipients with minimal friction. This is distinct from the individual SPF/DKIM/DMARC findings in the DNS section: this check represents the overall spoofability verdict for the domain.
The three individual checks (SPF, DKIM, DMARC) each cover a separate authentication layer. EMAIL_SPOOFING_HIGH is raised when the combination leaves a practical, exploitable spoofing path — for example, SPF softfail with no DMARC enforcement, or DMARC at p=none with a missing DKIM key.
Why we check it
Business email compromise (BEC) is consistently the highest-dollar cybercrime category — FBI IC3 reports show BEC losses exceeding $2.9 billion annually. The attack requires no malware: an attacker spoofs your domain in an email to an employee, partner, or customer, instructs a wire transfer or credential submission, and the victim complies because the sender appears legitimate. Domains without enforced email authentication can be impersonated by anyone with an SMTP server in under five minutes. Major receiving providers (Gmail, Outlook, Yahoo) enforce DMARC policies — a p=reject record means spoofed mail is silently dropped before delivery.
What SurfaceGuard checks
- Evaluates SPF policy strength: -all (hardfail) vs ~all (softfail) vs missing
- Checks DKIM key presence across common selectors
- Evaluates DMARC enforcement: p=reject (protected) vs p=quarantine vs p=none / missing (unprotected)
- Raises EMAIL_SPOOFING_HIGH when the combined posture leaves a practical spoofing path open
- Records which specific gaps contribute to the high-risk verdict as evidence
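The aggregation of the three layers into a single verdict might be sketched as below. The tiering is illustrative, not SurfaceGuard's exact scoring:

```python
def spoofing_verdict(spf: str, dkim_present: bool, dmarc_policy: str) -> str:
    """Combine SPF policy, DKIM presence, and DMARC policy into one verdict.

    spf:          "-all", "~all", or "" (missing)
    dmarc_policy: "reject", "quarantine", or "none" / "" (missing)
    """
    if dmarc_policy == "reject" and spf == "-all" and dkim_present:
        return "PROTECTED"
    # Enforced DMARC still blocks most spoofing even with a weaker SPF
    if dmarc_policy in ("reject", "quarantine"):
        return "PARTIAL"
    # No enforcement: softfail SPF or a missing DKIM key leaves a practical path
    return "EMAIL_SPOOFING_HIGH"
```

The two panes above map to the two extremes: `("-all", True, "reject")` is protected, while `("~all", False, "none")` is freely spoofable.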
Email authentication: protected SPF: -all (hardfail) ✓ DKIM: present (RSA-2048) ✓ DMARC: p=reject; pct=100 ✓ Spoofing path: closed
EMAIL_SPOOFING_HIGH SPF: ~all (softfail) ✗ DKIM: missing ✗ DMARC: p=none (monitor only) ✗ Domain can be spoofed freely
How to fix it
Fix the underlying individual findings in the DNS Security section: strengthen SPF to -all, publish DKIM keys for all sending services, and progress DMARC from p=none → p=quarantine → p=reject. This finding resolves automatically once all three layers are correctly configured.
# Target state for all three records:
yourdomain.com TXT "v=spf1 include:_spf.google.com -all"
selector._domainkey TXT "v=DKIM1; k=rsa; p="
_dmarc.yourdomain.com TXT "v=DMARC1; p=reject; pct=100; rua=mailto:dmarc@yourdomain.com"
Subdomain Exposure
Enumerates subdomains via certificate transparency logs, DNS brute-force, and passive DNS. Checks each discovered subdomain for takeover vulnerability (dangling CNAME to a deprovisioned service) and enforced HTTPS.
subdomain_exposure
Subdomain Takeover
What is it?
A subdomain takeover occurs when a DNS record points to a third-party service that has been deprovisioned, but the DNS record itself was never removed. The most common pattern: a CNAME record points to a service like myapp.azurewebsites.net, yourbrand.github.io, or mysite.s3-website.amazonaws.com, but the resource at that endpoint no longer exists. Because the CNAME still resolves, anyone can register the same resource name on the third-party platform and gain control of content served at your subdomain.
The attacker doesn't need access to your DNS — they just claim the dangling resource on the third-party platform. Once claimed, they serve arbitrary content under your domain: phishing pages, malware, or a convincing replica of your login page.
Why we check it
A taken-over subdomain operates under your domain's trust context: it shares cookies scoped to the parent domain (depending on Domain= attribute), benefits from your domain's reputation for email delivery and browser trust, and appears legitimate to users. An attacker serving a phishing page at login.yourdomain.com — a subdomain you previously used for a deprovisioned product — has an immediate and credible impersonation vector. GitHub Pages, Heroku, Fastly, AWS S3, Azure, and Zendesk are the most commonly exploited platforms.
What SurfaceGuard checks
- Enumerates discovered subdomains (via CT logs, DNS brute-force, and passive DNS)
- Follows each subdomain's CNAME chain to the final target
- Checks if the CNAME destination returns a "no such bucket", "project not found", "404 not found" response characteristic of a deprovisioned resource on known platforms (GitHub Pages, Heroku, AWS S3, Azure, Fastly, Zendesk, etc.)
- Records the dangling CNAME chain and the platform fingerprint as evidence
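The fingerprint match in the last two steps is essentially a lookup of the CNAME target's domain against known "resource gone" markers. A sketch with an abbreviated, illustrative fingerprint table:

```python
# Platform fingerprints for deprovisioned resources (abbreviated, illustrative)
TAKEOVER_FINGERPRINTS = {
    "github.io": "There isn't a GitHub Pages site here",
    "amazonaws.com": "NoSuchBucket",
    "herokuapp.com": "No such app",
}

def takeover_candidate(cname_target: str, response_body: str) -> bool:
    """True if the CNAME target looks like an unclaimed third-party resource."""
    for platform_domain, marker in TAKEOVER_FINGERPRINTS.items():
        if cname_target.endswith(platform_domain) and marker in response_body:
            return True
    return False
```

Both conditions must hold: a live site on the platform (marker absent) or an unknown platform never triggers the finding, which keeps false positives down.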
api.yourdomain.com CNAME → yourbrand.azurewebsites.net Azure resource: exists (HTTP 200) ✓ No dangling CNAMEs found
SUBDOMAIN_TAKEOVER staging.yourdomain.com CNAME → yourbrand.github.io GitHub Pages: "There isn't a GitHub Pages site here" — resource unclaimed
How to fix it
Either re-provision the resource on the third-party platform (if still needed) or delete the dangling DNS record immediately. Audit all CNAME records periodically — subdomain takeover vulnerabilities accumulate silently as teams deprovision services without cleaning up DNS.
# Find all CNAME records in your zone:
dig AXFR yourdomain.com @ns1.yourdomain.com | grep CNAME
# For each CNAME, verify the target still resolves to a live resource:
curl -sI https://target.platform.com | head -5
# If the resource is gone — delete the DNS record:
# AWS Route 53: aws route53 change-resource-record-sets (DELETE)
# Cloudflare: dashboard → DNS → delete the CNAME record
# Automate ongoing monitoring: rerun SurfaceGuard on a schedule
# to catch new dangling CNAMEs as services are deprovisioned
Subdomain Serving Insecure HTTP
What is it?
A discovered subdomain responds to HTTP requests but does not redirect to HTTPS, or does not serve HTTPS at all. The subdomain may be serving a staging environment, internal tool, API endpoint, or legacy service over plain HTTP — all traffic to and from it is transmitted in cleartext with no encryption or integrity protection.
Why we check it
Even if your primary domain enforces HTTPS, a single HTTP-only subdomain undermines the security posture. Any data exchanged with that subdomain — session cookies (if scoped to the parent domain), form submissions, API responses — is visible to any on-path observer. Additionally, if HSTS is configured with includeSubDomains on the root domain, browsers will refuse HTTP connections to subdomains, causing functionality to break. HTTP-only subdomains often indicate forgotten internal tooling or staging environments that were not subject to the same security hardening as production.
What SurfaceGuard checks
- For each discovered subdomain, attempts an HTTPS connection and an HTTP connection
- Flags if HTTP is served without a redirect to HTTPS (301/302 to https://)
- Flags if port 443 is closed or returns a TLS error while port 80 is open and serving content
- Records the subdomain, HTTP status, and response server header as evidence
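The per-subdomain verdict from those probes is a small decision table. An illustrative sketch (parameter shapes and the finding name are assumptions, not SurfaceGuard's schema):

```python
def http_exposure_verdict(http_status, http_location, https_ok):
    """Classify one subdomain from its HTTP and HTTPS probe results.

    http_status:   status code on port 80, or None if closed
    http_location: Location header of the HTTP response, if any
    https_ok:      True if port 443 completed a valid TLS handshake
    """
    redirects = (http_status in (301, 302, 307, 308)
                 and (http_location or "").startswith("https://"))
    if redirects and https_ok:
        return "OK"
    if http_status and not redirects:
        return "SUBDOMAIN_INSECURE_HTTP"  # serving content over plain HTTP
    if not https_ok:
        return "SUBDOMAIN_INSECURE_HTTP"  # redirect exists but HTTPS is unusable
    return "OK"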
staging.yourdomain.com HTTP → 301 → https://staging.yourdomain.com HTTPS: valid cert, active ✓ All subdomains enforce HTTPS
dev.yourdomain.com HTTP: 200 OK — no redirect to HTTPS Port 443: closed All traffic in cleartext
How to fix it
Provision a TLS certificate and enforce HTTPS on all subdomains. If the subdomain is an internal tool that should not be publicly accessible, restrict it at the network layer (firewall / VPC) rather than serving it over HTTP.
# Nginx — redirect HTTP to HTTPS on the subdomain
server {
    listen 80;
    server_name dev.yourdomain.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name dev.yourdomain.com;
    ssl_certificate /etc/letsencrypt/live/dev.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/dev.yourdomain.com/privkey.pem;
    # ... rest of config
}
# Let's Encrypt cert for the subdomain:
certbot certonly --webroot -w /var/www/html -d dev.yourdomain.com  # adjust -w to your webroot
Port & Service Exposure
Scans for internet-reachable ports serving databases (Postgres, MySQL, Redis, MongoDB), remote access (RDP, SSH, VNC), admin panels, and other services that should never be directly reachable from the public internet.
port_exposure
Exposed Database Port
What is it?
A database server (PostgreSQL on 5432, MySQL/MariaDB on 3306, MongoDB on 27017, Redis on 6379, Elasticsearch on 9200, Cassandra on 9042, CouchDB on 5984, MSSQL on 1433) is accepting TCP connections from the public internet. Database ports should never be directly reachable from outside your private network — they should only be accessible by application servers within your VPC or internal network.
Why we check it
A database exposed to the internet is one credential brute-force or default-credential attempt away from a full data breach. Redis has no authentication by default — anyone who reaches port 6379 has immediate read/write access to all data and can use the CONFIG SET dir technique to write arbitrary files to the filesystem (leading to RCE). MongoDB historically defaulted to no authentication. Elasticsearch exposes a full REST API with no authentication by default. Even databases with authentication enabled are subject to brute-force, credential stuffing, and exploitation of unpatched CVEs (e.g., PostgreSQL privilege escalation chains).
What SurfaceGuard checks
- Scans well-known database ports: 1433, 3306, 5432, 5984, 6379, 9042, 9200, 27017, 27018, 28017
- Attempts a TCP connection to each port; if it accepts, records the port as open
- Sends a protocol-appropriate banner probe to confirm the service type (e.g., MySQL handshake, Redis PING, Elasticsearch GET /)
- Records the service type, version banner, and whether authentication is required as evidence
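The first step, the TCP connect check, is a few lines of standard library code. A minimal sketch (port list from this section; the helper name is illustrative):

```python
import socket

DB_PORTS = [1433, 3306, 5432, 5984, 6379, 9042, 9200, 27017, 27018, 28017]

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage: open_ports = [p for p in DB_PORTS if is_port_open("yourdomain.com", p)]
```

A successful connect only proves the port is open; the banner probe that follows is what confirms the service type and whether it demands authentication.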
Database ports scanned 5432 (PostgreSQL): closed ✓ 3306 (MySQL): closed ✓ 6379 (Redis): closed ✓ 27017 (MongoDB): closed ✓
PORT_DATABASE_EXPOSED 6379 (Redis): OPEN — accepting connections Banner: +PONG (no auth required) Full read/write access from internet
How to fix it
Never bind database services to a public interface. Bind to 127.0.0.1 or your private VPC CIDR only. Use security groups / firewall rules to deny all inbound traffic on database ports from 0.0.0.0/0. Access databases from application servers over the private network, or use a bastion host / VPN for administrative access.
# Redis — bind to localhost only (redis.conf)
bind 127.0.0.1
requirepass
protected-mode yes
# PostgreSQL — bind to private IP only (postgresql.conf)
listen_addresses = '10.0.0.5' # private VPC IP
# AWS Security Group — deny public database access
aws ec2 revoke-security-group-ingress \
--group-id sg-xxxx \
--protocol tcp --port 6379 --cidr 0.0.0.0/0
# Verify: nmap should show port as filtered from outside
nmap -p 6379 yourdomain.com
Exposed Internal Service
What is it?
An internal service that should not be directly reachable from the public internet is accepting connections. This category covers infrastructure management tools, monitoring systems, and internal APIs: Elasticsearch (9200), Jenkins CI (8080), Prometheus metrics (9090), Docker daemon API (2375/2376), Kubernetes API server (6443/8443), HashiCorp Vault (8200), etcd (2379), Kubelet API (10250), and similar services.
EXPOSED_SERVICE is raised when the service is confirmed open and returning a recognisable banner or API response. EXPOSED_SERVICE_CANDIDATE is raised when the port is open but the service type cannot be definitively confirmed — it requires manual investigation.
Why we check it
Each of these services has a well-documented exploitation path when internet-exposed. An unauthenticated Elasticsearch endpoint exposes all indexed data to anyone who sends a GET /_search request. An exposed Docker daemon API (POST /containers/create) allows an attacker to spawn privileged containers and escape to the host filesystem. An exposed Kubernetes API server may allow anonymous access to list pods, secrets, and service accounts. A Jenkins instance with default credentials gives full CI/CD pipeline control — an attacker can inject malicious build steps that exfiltrate source code or deploy backdoored artifacts.
What SurfaceGuard checks
- Probes ports associated with known infrastructure services
- Sends service-specific requests (e.g., GET / HTTP/1.0 to Elasticsearch, GET /api/v1/namespaces to Kubernetes) and checks for identifying responses
- Aliased codes (ELASTICSEARCH_UNAUTH, JENKINS_UNAUTH, PROMETHEUS_UNAUTH, DOCKER_API_UNAUTH, KUBERNETES_API_UNAUTH, VAULT_API_EXPOSED, ETCD_UNAUTH, KUBELET_UNAUTH) all map to this finding
- Records the service name, version, and whether authentication was required as evidence
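Identifying the service from its probe response is a marker match that resolves to one of the aliased codes, falling back to the candidate code when nothing matches. The marker table below is abbreviated and illustrative:

```python
# Response markers that identify internal services (abbreviated, illustrative)
SERVICE_FINGERPRINTS = {
    "Elasticsearch": ('"cluster_name"', "ELASTICSEARCH_UNAUTH"),
    "Jenkins": ("X-Jenkins", "JENKINS_UNAUTH"),
    "Prometheus": ("Prometheus Time Series", "PROMETHEUS_UNAUTH"),
}

def fingerprint_service(response_text: str):
    """Map an HTTP probe response to (service, finding code)."""
    for service, (marker, code) in SERVICE_FINGERPRINTS.items():
        if marker in response_text:
            return service, code
    # Port open but service unconfirmed — needs manual investigation
    return None, "EXPOSED_SERVICE_CANDIDATE"
```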
Infrastructure service ports scanned 8080 (Jenkins): closed ✓ 9090 (Prometheus): closed ✓ 9200 (Elasticsearch): closed ✓ 2375 (Docker API): closed ✓
EXPOSED_SERVICE: Elasticsearch Port 9200 open — no authentication GET /_cat/indices → HTTP 200 All indices readable from internet
How to fix it
Place all internal services behind a firewall or VPC security group that denies public ingress. Use a VPN or bastion host for administrative access. Enable authentication on every service regardless of network placement — defence in depth.
# Elasticsearch — enable security (elasticsearch.yml)
xpack.security.enabled: true
network.host: 127.0.0.1 # bind to localhost only
# Docker daemon — use TLS, never expose on TCP without auth
# dockerd should bind to unix socket only (default)
# If TCP is needed: use --tlsverify with client certificates
# Kubernetes API server — disable anonymous auth
kube-apiserver --anonymous-auth=false
# Prometheus — bind to localhost, use reverse proxy with auth
prometheus --web.listen-address="127.0.0.1:9090"
Remote Access Service Exposed
What is it?
A remote access service — SSH (22), RDP (3389), VNC (5900/5901), Telnet (23), or WinRM (5985/5986) — is accepting connections from the public internet. While SSH and RDP have legitimate uses, direct internet exposure dramatically expands the attack surface: every bot on the internet continuously scans for open SSH and RDP ports to attempt credential brute-force, password spraying, and exploitation of unpatched vulnerabilities.
Why we check it
RDP exposed to the internet has been the initial access vector in the majority of ransomware deployments over the past five years. Attackers scan for port 3389, brute-force weak passwords or use credentials from breach databases, and gain interactive desktop access. SSH on port 22 receives millions of brute-force attempts per day on any internet-connected server. Telnet transmits credentials in cleartext and should be considered a critical finding wherever it appears. Even with strong passwords, direct internet exposure means unpatched vulnerabilities in the remote access service itself (e.g., BlueKeep CVE-2019-0708 for RDP) are directly exploitable.
What SurfaceGuard checks
- Probes ports: 22 (SSH), 23 (Telnet), 3389 (RDP), 5900–5901 (VNC), 5985–5986 (WinRM)
- Attempts a TCP handshake and reads the service banner to confirm service type
- Records the port, service banner, and any version information as evidence
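Banner-based confirmation is straightforward for text-banner protocols: SSH servers open with a line starting `SSH-`, VNC servers with `RFB `. A sketch combining banner prefixes with a port fallback (the mapping and function are illustrative):

```python
REMOTE_ACCESS_PORTS = {22: "SSH", 23: "Telnet", 3389: "RDP",
                       5900: "VNC", 5901: "VNC", 5985: "WinRM", 5986: "WinRM"}

def classify_banner(port: int, banner: bytes) -> str:
    """Name the remote-access service from the port and the bytes it sent first."""
    if banner.startswith(b"SSH-"):   # e.g. b"SSH-2.0-OpenSSH_8.9"
        return "SSH"
    if banner.startswith(b"RFB "):   # VNC protocol version handshake
        return "VNC"
    # Binary protocols (RDP, WinRM) are confirmed by port + protocol probe instead
    return REMOTE_ACCESS_PORTS.get(port, "unknown")
```

Banner prefixes take precedence over the port number, so SSH moved to a non-standard port is still identified as SSH.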
Remote access ports scanned 22 (SSH): closed (VPN-gated) ✓ 3389 (RDP): closed ✓ 5900 (VNC): closed ✓
REMOTE_ACCESS_EXPOSED 3389 (RDP): OPEN — internet-reachable Banner: accepted connection Brute-force and BlueKeep exposure
How to fix it
Move all remote access behind a VPN or use IP allowlisting to restrict to known corporate IPs. For SSH: disable password authentication, use key-based auth only, and consider a non-standard port as obscurity-in-depth. For RDP: use Network Level Authentication (NLA), restrict via firewall, and consider RD Gateway or a VPN.
# SSH hardening (/etc/ssh/sshd_config)
PasswordAuthentication no
PermitRootLogin no
Port 2222 # non-standard port (obscurity only — not a fix)
AllowUsers deploy # restrict to specific users
# Firewall — allow SSH only from known IPs
ufw allow from 203.0.113.0/24 to any port 22
ufw deny 22
# RDP — restrict to VPN subnet via Windows Firewall
netsh advfirewall firewall add rule name="RDP VPN Only" ^
protocol=TCP dir=in localport=3389 ^
remoteip=10.0.0.0/8 action=allow
Admin Panel & Login Surface
What is it?
ADMIN_PANEL_EXPOSED: An administrative interface is directly reachable from the public internet — common paths like /admin, /wp-admin, /administrator, /phpmyadmin, /django/admin, /manage, or vendor-specific admin consoles. Admin panels have elevated privilege levels, are high-value targets for credential attacks, and often have wider attack surface than the main application.
LOGIN_SURFACE_EXPOSED: A login form or authentication endpoint is publicly reachable. This is informational context rather than a standalone vulnerability — it maps the credential attack surface. LOGIN_FORM_ON_HOMEPAGE is an alias raised when a login form is detected on the root page itself.
Why we check it
Admin panels exposed to the internet are primary targets for credential stuffing, brute-force, and exploitation of admin-specific vulnerabilities. WordPress /wp-admin, phpMyAdmin, and Django admin are scanned continuously by automated bots. Compromise of an admin account typically yields full application control: data exfiltration, content modification, code execution via plugin/template upload, or database access. Login surfaces map where automated credential attacks will focus — understanding exposure helps prioritise MFA and rate-limiting investments.
What SurfaceGuard checks
- Probes a wordlist of common admin and login paths across the domain and discovered subdomains
- Checks HTTP response codes and page content for admin panel indicators (form fields, CMS fingerprints, title keywords)
- Detects login forms by parsing HTML for <input type="password"> elements
- Records the exact URL and page title as evidence
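The password-field detection can be illustrated with the standard-library HTML parser. A sketch only; SurfaceGuard's actual parser and fingerprint set are not shown here:

```python
from html.parser import HTMLParser

class LoginFormDetector(HTMLParser):
    """Flags documents containing an <input type="password"> element."""
    def __init__(self):
        super().__init__()
        self.has_password_field = False

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us
        if tag == "input" and dict(attrs).get("type", "").lower() == "password":
            self.has_password_field = True

def looks_like_login_page(body: str) -> bool:
    detector = LoginFormDetector()
    detector.feed(body)
    return detector.has_password_field

print(looks_like_login_page('<form><input type="password" name="pw"></form>'))  # True
```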
Admin path probe: no panels found /admin → 404 /wp-admin → 404 /phpmyadmin → 404 Common admin paths not exposed ✓
ADMIN_PANEL_EXPOSED /admin → HTTP 200 Title: "Django administration" Publicly accessible, no IP restriction
How to fix it
Restrict admin panels to specific IP ranges or move them behind a VPN. Enforce MFA on all admin accounts. Consider relocating the admin path from the default URL. For login surfaces generally: implement rate limiting and account lockout policies.
# Nginx — restrict admin path to office/VPN IPs
location /admin {
allow 203.0.113.0/24; # office IP range
allow 10.0.0.0/8; # VPN range
deny all;
proxy_pass http://app;
}
# Django — restrict admin by network (INTERNAL_IPS is exact-match only; use ipaddress for CIDR)
import ipaddress
from django.core.exceptions import PermissionDenied
ALLOWED_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "203.0.113.0/24")]
def restrict_admin(get_response):
    def middleware(request):
        ip = ipaddress.ip_address(request.META["REMOTE_ADDR"])
        if request.path.startswith("/admin") and not any(ip in n for n in ALLOWED_NETS):
            raise PermissionDenied
        return get_response(request)
    return middleware
# Rate limit login attempts: max 5 failed attempts per IP per 15 minutes (fail2ban or app-level)
Secrets & Credential Exposure
Checks for credential breaches (HaveIBeenPwned), leaked API keys and secrets
in public GitHub repositories and Docker Hub images, exposed cloud storage
buckets (S3, GCP, Azure, Firebase), publicly accessible
.env / .git / backup files, and domain reputation.
exposure_monitoring
Credential Breach Detection
High · CREDENTIAL_BREACH_FOUND
What it is
SurfaceGuard queries the HaveIBeenPwned (HIBP) API to identify email addresses associated with your domain that have appeared in publicly known data breaches. A breach record means those credentials were exposed in incidents like database dumps, phishing kit leaks, or third-party compromises, and are likely circulating in criminal marketplaces.
Why it's checked
Credential stuffing is one of the highest-volume attack vectors today. Attackers purchase breach dumps and automate login attempts across every service the victim uses the same password on. A single breached employee account can be the entry point for an internal network compromise, BEC fraud, or SaaS account takeover. Early detection allows forced resets before accounts are exploited.
What SurfaceGuard checks
The scanner identifies email addresses in the @yourdomain.com namespace
and checks each against HIBP's breach database. It records the breach name, the
date of the breach, and the data classes exposed (passwords, email addresses,
phone numbers). Results are deduplicated and ranked by most recent breach date.
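The dedupe-and-rank step can be paired with a minimal HIBP v3 query. The endpoint, `hibp-api-key` header, and `truncateResponse` parameter follow HIBP's published v3 API, but treat the exact response handling here as an assumption and check the API docs before relying on it:

```python
import json
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def fetch_breaches(email: str, api_key: str) -> list:
    """Query HIBP v3 for breaches affecting one account; 404 means 'not found'."""
    url = f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}?truncateResponse=false"
    req = Request(url, headers={"hibp-api-key": api_key, "user-agent": "sg-docs-example"})
    try:
        with urlopen(req, timeout=10) as resp:
            return json.load(resp)
    except HTTPError as e:
        if e.code == 404:
            return []  # account not present in any known breach
        raise

def rank_breaches(records: list) -> list:
    """Deduplicate by breach name, then rank by most recent breach date."""
    unique = {r["Name"]: r for r in records}
    return sorted(unique.values(), key=lambda r: r["BreachDate"], reverse=True)

# Ranking logic on records shaped like HIBP responses:
sample = [
    {"Name": "Collection1", "BreachDate": "2019-01-07"},
    {"Name": "OldLeak", "BreachDate": "2015-05-01"},
    {"Name": "Collection1", "BreachDate": "2019-01-07"},
]
print([r["Name"] for r in rank_breaches(sample)])  # ['Collection1', 'OldLeak']
```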
No email addresses associated with this domain appear in any known data breaches. Credential posture is clean.
3 email addresses breached — found in "Collection #1" (2019-01-07). Exposed data classes: passwords, email addresses.
How to fix
Force an immediate password reset for all identified accounts. Enable multi-factor authentication (TOTP or hardware key) for all accounts — this neutralises credential stuffing even when passwords are known. Integrate HIBP's API into your authentication flow to prevent re-use of known breached passwords at sign-up and password change. Consider subscribing to HIBP domain monitoring for real-time alerts on future breaches.
Exposed Secrets in Public Repositories
Critical · SECRET_LEAK_FOUND
SECRET_MATCH_NEEDS_VALIDATION
What it is
This scanner searches public GitHub repositories for API keys, private keys,
connection strings, and tokens associated with your domain. Developers
accidentally commit secrets to version control, and even after deletion, the
credential remains in git history. SECRET_LEAK_FOUND indicates a
confirmed, high-confidence match; SECRET_MATCH_NEEDS_VALIDATION
indicates a pattern that resembles a secret but requires manual review to
confirm it's not a test value or placeholder.
Why it's checked
A single exposed AWS access key or database connection string provides immediate, direct access to production systems — bypassing every other security control. Automated scanners ("secret snipers") continuously scrape GitHub within minutes of a push. The average time from secret push to exploitation is under 4 minutes according to GitGuardian research. Even "deleted" commits remain in git history unless the repo is completely re-created or history-rewritten.
What SurfaceGuard checks
The scanner queries the GitHub Code Search API for patterns including:
AKIA (AWS key prefix), private key headers, Bearer tokens,
database URLs containing the domain name, and generic high-entropy string
patterns. It checks both active code and commit history where accessible.
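Pattern matching of this kind can be sketched with a few regexes. These rules are illustrative, not SurfaceGuard's actual pattern set, and the generic high-entropy detection mentioned above is omitted:

```python
import re

# Illustrative detection rules — not SurfaceGuard's actual pattern library
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9\-._~+/]{20,}"),
}

def scan_text(text: str) -> list:
    """Return (rule_name, matched_text) pairs for every pattern hit."""
    return [(name, m.group(0))
            for name, pat in SECRET_PATTERNS.items()
            for m in pat.finditer(text)]

print(scan_text("key = 'AKIAIOSFODNN7EXAMPLE'"))
# [('aws_access_key', 'AKIAIOSFODNN7EXAMPLE')]
```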
No secrets, API keys, or credentials associated with this domain found in public GitHub repositories.
AWS access key found in acme-corp/deploy-scripts
— key pattern AKIAIOSFODNN7EXAMPLE with adjacent secret key
committed 2024-03-12.
How to fix
1. Rotate the exposed credential immediately — assume it has already been accessed.
2. Revoke the old key in your cloud console / service dashboard before removing it from code.
3. Remove it from git history using git filter-repo --path secrets.txt --invert-paths (preferred over git filter-branch) and force-push, then contact GitHub support to purge cached views.
4. Add pre-commit hooks: install detect-secrets or gitleaks as a pre-commit hook so secrets are caught before they reach the remote.

Store all secrets in a secrets manager (AWS Secrets Manager, HashiCorp Vault, Doppler) and inject at runtime via environment variables.
Cloud Storage Bucket Exposure
Critical · CLOUD_BUCKET_EXPOSED
What it is
Cloud object storage (AWS S3, Google Cloud Storage, Azure Blob Storage) defaults to private access, but a single misconfigured ACL or bucket policy can make all contents world-readable — or worse, world-writable. This scanner probes common bucket naming conventions derived from the target domain to find publicly accessible storage that contains sensitive files, customer data, or backups.
Why it's checked
Open S3 buckets have been responsible for some of the largest data breaches in history, exposing hundreds of millions of records. Cloud providers make it technically easy to open a bucket (one checkbox or a single API call), and these misconfigurations often persist undetected because the data is accessible but no authentication error is logged. Listing a bucket's contents takes seconds with a simple HTTP GET request.
What SurfaceGuard checks
The scanner derives candidate bucket names from the domain
(e.g., acme, acme-corp, acme-prod,
acme-backups, acme-assets) and probes:
https://<bucket>.s3.amazonaws.com/ (AWS),
https://storage.googleapis.com/<bucket>/ (GCP), and
https://<account>.blob.core.windows.net/<container>
(Azure). A 200 response with an XML listing confirms public read access.
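The name-derivation and S3-listing check can be sketched as below. The suffix list mirrors the examples above; treat the probe details (user-agent, read size) as illustrative assumptions:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def derive_bucket_names(domain: str) -> list:
    """Candidate bucket names from a domain, mirroring the variants above."""
    base = domain.split(".")[0]  # "acme" from "acme.com"
    return [base + s for s in ("", "-corp", "-prod", "-backups", "-assets")]

def s3_listing_public(bucket: str) -> bool:
    """True if an unauthenticated GET returns an XML bucket listing."""
    url = f"https://{bucket}.s3.amazonaws.com/"
    try:
        with urlopen(Request(url, headers={"user-agent": "sg-docs"}), timeout=10) as r:
            # a public listing starts with a <ListBucketResult> XML document
            return b"<ListBucketResult" in r.read(512)
    except (HTTPError, URLError):
        return False  # 403 AccessDenied / 404 NoSuchBucket / DNS failure

print(derive_bucket_names("acme.com"))
# ['acme', 'acme-corp', 'acme-prod', 'acme-backups', 'acme-assets']
```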
No publicly accessible storage buckets found for derived domain name variants across AWS S3, GCP, and Azure.
Public S3 bucket found:
acme-corp-backups.s3.amazonaws.com — bucket listing
accessible, 47 objects including db_dump_2024.sql.gz.
How to fix
AWS S3: Enable "Block All Public Access" at the account level
in the S3 console — this overrides any individual bucket ACL. Review all
bucket policies for "Principal": "*" statements and remove them.
Use presigned URLs for legitimate temporary public access instead of making
buckets public. GCP: Remove allUsers from
bucket IAM bindings. Azure: Set blob containers to
Private access level. Audit regularly with
aws s3api list-buckets + get-bucket-acl or
Cloud Security Posture Management (CSPM) tools.
Firebase Open Database
Critical · FIREBASE_OPEN_DATABASE
FIREBASE_OPEN_DATABASE_CANDIDATE
What it is
Firebase Realtime Database and Firestore are Google-managed NoSQL databases
commonly used in mobile and web apps. When security rules are set to
".read": true (Firebase's default in legacy projects), the entire
database is readable by anyone without authentication — just a direct HTTP GET
to https://<project>.firebaseio.com/.json.
FIREBASE_OPEN_DATABASE is a confirmed read. FIREBASE_OPEN_DATABASE_CANDIDATE
indicates the endpoint responded but may be empty or partially secured.
Why it's checked
Firebase databases have exposed millions of user records because developers
use the default permissive rules during development and never lock them down
before shipping. The URL is embedded in the mobile app's
google-services.json / GoogleService-Info.plist, which
is trivially extracted from any published APK or IPA file. Attackers routinely
scrape app stores, extract Firebase URLs, and probe them automatically.
What SurfaceGuard checks
The scanner derives the Firebase project ID from the domain name
(e.g., acme-corp-default-rtdb.firebaseio.com) and sends an
unauthenticated GET to the /.json endpoint. If the response
returns JSON data (non-null), the database is publicly readable. It also
checks /.json?shallow=true to test for partial exposure.
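Separating the response classification from the HTTP call makes the decision logic easy to test. The status codes and body shapes below are assumptions based on the behaviour described above, not a spec:

```python
import json
from urllib.request import urlopen
from urllib.error import HTTPError

def classify_firebase(status: int, data) -> str:
    """Map an unauthenticated /.json response to a finding."""
    if status == 404:
        return "not_found"
    if status in (401, 403) or (isinstance(data, dict) and "error" in data):
        return "secured"                           # rules block public reads
    if data is None:
        return "FIREBASE_OPEN_DATABASE_CANDIDATE"  # reachable but empty/partial
    return "FIREBASE_OPEN_DATABASE"                # non-null JSON: world-readable

def probe_firebase(project_id: str) -> str:
    url = f"https://{project_id}.firebaseio.com/.json?shallow=true"
    try:
        with urlopen(url, timeout=10) as r:
            return classify_firebase(r.status, json.load(r))
    except HTTPError as e:
        return classify_firebase(e.code, None)

print(classify_firebase(200, {"users": {"u1": "..."}}))  # FIREBASE_OPEN_DATABASE
```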
Firebase database returns null or
{"error": "Permission denied"} — unauthenticated
access is blocked.
Open Firebase database:
acme-prod.firebaseio.com/.json returns 120 MB of user
records including emails, profile data, and session tokens.
How to fix
Update Firebase Realtime Database security rules to require authentication:
{
"rules": {
".read": "auth.uid != null",
".write": "auth.uid != null"
}
}
For per-user data isolation, scope rules to the authenticated user's UID:
"users/$uid": { ".read": "$uid === auth.uid" }. For Firestore,
set rules to allow read: if request.auth != null. Use Firebase
Security Rules Simulator in the console to validate before deploying.
Enable Firebase App Check to restrict API access to your verified apps only.
Sensitive File Exposure
Critical · SENSITIVE_FILE_EXPOSED_GIT
SENSITIVE_FILE_EXPOSED_ENV
SENSITIVE_FILE_EXPOSED_BACKUP
SENSITIVE_FILE_EXPOSED_DEBUG
SENSITIVE_FILE_CANDIDATE
What it is
Web servers sometimes expose files that were never intended to be publicly
accessible — git metadata, environment config, database dumps, or debug pages.
SurfaceGuard probes a curated list of high-value paths and classifies findings
by sensitivity: _GIT (git repository metadata leaking source
code or commit history), _ENV (environment files containing
credentials), _BACKUP (database/archive dumps), _DEBUG
(PHP info pages, debug endpoints disclosing server details), and
SENSITIVE_FILE_CANDIDATE for lower-confidence matches.
Why it's checked
A publicly accessible /.env file is one of the most severe
exposures possible — it typically contains database passwords, API keys,
and encryption secrets in plaintext. A reachable /.git directory
exposes the entire application source code including its git history, enabling
attackers to find vulnerabilities, hardcoded secrets, and business logic flaws.
These exposures are a direct result of deployment process errors and misconfigured
web server rules.
What SurfaceGuard checks
The scanner probes paths including: /.git/config,
/.git/HEAD, /.env, /.env.production,
/.env.local, /backup.zip, /backup.sql,
/dump.sql, /db.sql.gz, /phpinfo.php,
/info.php, /.DS_Store, /WEB-INF/web.xml,
and dozens more. HTTP response codes and content-type fingerprinting
distinguish real files from catch-all 200 responses.
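The content fingerprinting step — telling a real hit from a catch-all 200 page — can be sketched as per-path validators. The rules below are illustrative, not SurfaceGuard's actual fingerprint library:

```python
# Per-path validators: a 200 response only counts when the body looks like
# the real artifact, not a catch-all HTML error page (illustrative rules).
FINGERPRINTS = {
    "/.git/HEAD": lambda body: body.startswith(b"ref: refs/"),
    "/.git/config": lambda body: b"[core]" in body,
    "/.env": lambda body: b"=" in body and not body.lstrip().startswith(b"<"),
    "/phpinfo.php": lambda body: b"PHP Version" in body,
}

def confirm_finding(path: str, status: int, body: bytes) -> bool:
    """A finding is confirmed only when status is 200 and the fingerprint matches."""
    check = FINGERPRINTS.get(path)
    return status == 200 and check is not None and check(body)

print(confirm_finding("/.git/HEAD", 200, b"ref: refs/heads/main\n"))  # True
print(confirm_finding("/.git/HEAD", 200, b"<html>Not found</html>"))  # False
```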
No sensitive files found at any probed paths. All common exposure vectors return 404 or are blocked by the server configuration.
/.env accessible (200 OK) — response contains
DB_PASSWORD=, SECRET_KEY=, and
AWS_SECRET_ACCESS_KEY= in plaintext.
How to fix
Nginx: Add deny rules for sensitive paths in your server block:
location ~ /\.(env|git|htaccess|DS_Store) {
deny all;
return 404;
}
location ~* \.(sql|bak|zip|tar\.gz|dump)$ {
deny all;
return 404;
}
Apache: Add to .htaccess:
RedirectMatch 404 /\.git. Ensure .env files
are outside the document root — place them one directory above
public_html/ or webroot/. Never commit
.env to version control; use .env.example with
placeholder values instead. For debug pages, set
APP_DEBUG=false (or your framework's equivalent
production flag), and remove phpinfo() calls entirely.
Hosting IP Blacklist Reputation
Medium · HOSTING_IP_BLACKLISTED
What it is
Threat intelligence services maintain real-time blacklists of IP addresses associated with spam sending, malware distribution, command-and-control (C2) infrastructure, or active exploitation. This scanner checks the IP address hosting your domain against multiple reputation feeds. A blacklisted IP means email from your domain may be blocked by recipients, your site may trigger browser warnings, and security-aware visitors may be automatically blocked before they reach your login page.
Why it's checked
An IP being blacklisted is a strong signal that the hosting environment has been compromised, is shared with malicious actors (bad-neighbourhood effect on shared hosting), or that the server itself is participating in attacks. Blacklisting has real operational consequences: email deliverability drops to near zero, CDN and WAF services may refuse to serve the origin, and enterprise firewalls often block the IP entirely.
What SurfaceGuard checks
The scanner resolves the domain to its hosting IP and queries multiple reputation sources including Spamhaus (SBL, XBL, PBL), Barracuda, SORBS, AbuseIPDB, and MX Toolbox's blacklist aggregator. It distinguishes between email-focused blacklists (affect mail delivery only) and general threat intelligence lists (affect web and mail).
Hosting IP (203.0.113.42) is not present on any checked
blacklists. Reputation is clean.
IP 198.51.100.8 found on Spamhaus SBL and AbuseIPDB
— listed for spam activity (confidence score 87). Email deliverability
impacted.
How to fix
First, investigate the cause: check server logs for unexpected outbound traffic (spam bots, crypto miners, C2 beacons). Run a malware scan on the server with ClamAV or a commercial tool. If the server is compromised, rebuild from a clean snapshot. Once clean, submit delisting requests to each blacklist operator — Spamhaus has a self-service lookup-and-request form. If on shared hosting and the IP is shared with other tenants, request a dedicated IP or migrate to a cleaner host. For email specifically, use a dedicated email sending service (SendGrid, Postmark, SES) with its own IP pool rather than sending from the web server IP.
Phishing & Brand Impersonation
High · PHISHING_DOMAIN_VERIFIED
PHISHING_DOMAIN_DETECTED
BRAND_DOMAIN_DETECTED
What it is
Attackers register domains that visually or phonetically resemble your brand
to deceive customers, partners, and employees. Techniques include typosquatting
(acm3.com), homoglyph attacks (replacing o with
0), TLD swaps (acme.net instead of acme.com),
and subdomain abuse (acme.com.phisher.xyz).
PHISHING_DOMAIN_VERIFIED means the domain is actively hosting
a phishing page or is on a threat intelligence feed. PHISHING_DOMAIN_DETECTED
indicates suspicious registration without confirmed phishing content.
BRAND_DOMAIN_DETECTED flags lookalike registrations for awareness
without severity scoring.
Why it's checked
Brand impersonation attacks directly harm your customers and damage your reputation — even though the attacker controls the infrastructure, victims blame the brand. BEC (Business Email Compromise) using lookalike domains cost organisations $2.9 billion in 2023 according to the FBI IC3 report. Proactive monitoring lets you detect and act on phishing infrastructure before it claims victims.
What SurfaceGuard checks
The scanner generates lookalike domain variations (transpositions, insertions, deletions, homoglyphs, common TLD swaps) and checks which are registered, resolving, or hosting web content. Registered domains are cross-referenced against Google Safe Browsing, OpenPhish, PhishTank, and URLhaus for active phishing classification. DNS MX records are also checked — a phishing domain set up for BEC will have MX records configured.
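A small subset of the variant generation — transpositions, deletions, homoglyph swaps — can be sketched as below. The homoglyph table is a tiny illustrative sample, not the full confusables set:

```python
def lookalike_variants(name: str) -> list:
    """Transpositions, single-character deletions, and homoglyph swaps
    for a brand name (a subset of the variant classes described above)."""
    homoglyphs = {"o": "0", "l": "1", "i": "1", "e": "3", "a": "4", "s": "5"}
    variants = set()
    for i in range(len(name) - 1):                       # adjacent transpositions
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    for i in range(len(name)):                           # single-character deletions
        variants.add(name[:i] + name[i + 1:])
    for ch, sub in homoglyphs.items():                   # homoglyph substitutions
        if ch in name:
            variants.add(name.replace(ch, sub))
    variants.discard(name)
    return sorted(variants)

print(lookalike_variants("acme"))
```

Each candidate would then be expanded across TLDs and checked for DNS resolution, web content, and MX records, as described above.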
No active phishing domains or brand impersonation sites detected for this domain's brand variants.
Active phishing page: acme-corp-login.net
— resolves to 185.220.x.x, hosts a replica of your login page,
listed on PhishTank (ID #8432791).
How to fix
For verified phishing domains: submit takedown requests to the hosting
provider (abuse contact in WHOIS), the registrar, and report to Google
Safe Browsing (https://safebrowsing.google.com/safebrowsing/report_phish/),
PhishTank, and APWG. For persistent offenders, pursue UDRP (Uniform Domain
Name Dispute Resolution Policy) through ICANN — straightforward for clear-cut
brand hijacking. Defensively, register common TLD variants and typosquats of
your own domain to take them off the market. Implement DMARC
p=reject to prevent lookalike domains from spoofing your
exact domain in email headers. Consider brand monitoring services for
real-time registration alerts.
Dependency & Supply Chain Exposure
Medium · DEPENDENCY_MANIFEST_EXPOSED
PACKAGE_MENTION_NPM
PACKAGE_MENTION_PYPI
DOCKER_IMAGE_MENTION
What it is
This scanner identifies two related supply chain exposure vectors.
First, DEPENDENCY_MANIFEST_EXPOSED detects publicly accessible
package manifest files (package.json, requirements.txt,
composer.json, go.mod, Gemfile.lock)
that enumerate every library and version your application depends on.
Second, PACKAGE_MENTION_NPM, PACKAGE_MENTION_PYPI,
and DOCKER_IMAGE_MENTION flag when your domain/brand is
associated with public packages that could be targets for
dependency confusion or typosquatting attacks.
Why it's checked
An exposed package.json is a complete attack roadmap: it tells
an attacker exactly which vulnerable library versions to look for CVEs in.
Supply chain attacks (SolarWinds, XZ Utils, Polyfill.io) have demonstrated
that a single compromised dependency can propagate to thousands of downstream
applications. Dependency confusion attacks exploit the difference between
private internal package names and public registry namespaces — knowing your
internal package names enables this attack.
What SurfaceGuard checks
The scanner probes common manifest paths (/package.json,
/requirements.txt, /Pipfile.lock,
/yarn.lock, /composer.lock) for HTTP 200 responses.
It also searches npm, PyPI, and DockerHub for packages/images mentioning the
target domain, helping identify your public software footprint and potential
impersonation vectors.
No dependency manifest files exposed at common web paths. No associated public packages found on registries.
/package.json accessible (200 OK) — lists 247
dependencies including lodash@4.17.15 (CVE-2021-23337)
and axios@0.21.1 (CVE-2021-3749).
How to fix
Block package manifest paths in your web server config:
location ~* ^/(package\.json|package-lock\.json|yarn\.lock|requirements\.txt|composer\.(json|lock)|Pipfile(\.lock)?|go\.(mod|sum)|Gemfile(\.lock)?)$ {
deny all;
return 404;
}
Better yet, serve only the files your application explicitly exposes —
ensure your deployment process copies only built artifacts to the document
root, not the entire source tree. For supply chain risk management, use
npm audit / pip-audit / Dependabot in CI to
keep dependencies patched. Register your internal package names on
npm/PyPI (even as empty placeholder packages) to prevent dependency
confusion attacks.
Active DAST
Dynamic Application Security Testing: sends controlled probe requests to detect error page information disclosure (stack traces, SQL errors, framework names), reflected XSS markers, and SQL error responses. All probes are read-only and non-destructive. Requires Sentinel or Fortress plan.
dast · plan-gated
Error Page Information Disclosure
Medium · DAST_ERROR_DISCLOSURE
What it is
When an application encounters an unhandled exception or routing error in
development mode, it renders a debug page containing the full stack trace,
source file paths, framework version, environment variables, and sometimes
database schema details. This scanner actively triggers error conditions
to detect whether the application leaks this internal information in
production responses. Common culprits are Django's yellow debug page,
Flask's Werkzeug debugger, Rails' error pages, and PHP's default
display_errors = On setting.
Why it's checked
Stack traces are a reconnaissance gift: they reveal exact framework versions (enabling targeted CVE lookups), internal file paths (helping path traversal attacks), database queries (aiding SQL injection), and class/method names (reversing application logic). Disclosed framework names let attackers look up all known vulnerabilities for that specific version within minutes. This is classified under CWE-209: Generation of Error Message Containing Sensitive Information and OWASP A05:2021.
What SurfaceGuard checks
The scanner sends crafted HTTP requests designed to trigger server-side errors: invalid URL path segments, malformed query strings, oversized headers, and invalid content-type payloads. Response bodies are inspected for patterns matching stack trace signatures (Python tracebacks, Java exception formats, PHP error messages, Rails backtrace formatting), framework name disclosures, and SQL error strings containing table or column names.
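The signature-matching side of this can be sketched with a small pattern table. These four signatures are illustrative; the real library covers many more frameworks:

```python
import re

# Illustrative signature set — the production pattern library is larger
TRACE_SIGNATURES = {
    "python": re.compile(r"Traceback \(most recent call last\):"),
    "java": re.compile(r"^\s+at [\w.$]+\(.+?:\d+\)", re.M),
    "php": re.compile(r"(?:Fatal error|Warning):.+ on line \d+"),
    "rails": re.compile(r"app/(?:controllers|models)/\S+\.rb:\d+"),
}

def detect_disclosure(body: str) -> list:
    """Names of frameworks whose stack-trace signatures appear in a response body."""
    return [name for name, pat in TRACE_SIGNATURES.items() if pat.search(body)]

sample = 'Traceback (most recent call last):\n  File "views.py", line 12'
print(detect_disclosure(sample))  # ['python']
```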
Error conditions return generic responses (e.g., "Something went wrong" or a custom 500 page) with no framework details, file paths, or stack traces in the response body.
Django debug page active: Request to
/api/user/<invalid> returns 500 with full Django
traceback including Python version, installed apps list,
and database query.
How to fix
The single most important change is ensuring debug mode is disabled in production:
# Django
DEBUG = False # in settings.py
# Flask
app = Flask(__name__)
app.config['DEBUG'] = False # or FLASK_ENV=production
# PHP
display_errors = Off # in php.ini
log_errors = On
# Node/Express
app.use((err, req, res, next) => {
res.status(500).json({ error: 'Internal Server Error' });
});
Implement centralised error handling that logs the full exception to your logging service (Sentry, Datadog, CloudWatch) while returning only a generic error message and a correlation ID to the client. Create custom error templates for 400, 404, 500 status codes that match your brand but reveal nothing about the underlying technology.
Reflected XSS Probe
Medium · Needs Validation · DAST_RAW_HTML_REFLECTION_CANDIDATE
What it is
Reflected XSS (Cross-Site Scripting) occurs when a web application echoes
user-supplied input back into an HTML response without proper encoding, allowing
an attacker to craft a URL that injects and executes arbitrary JavaScript in a
victim's browser. SurfaceGuard's probe is deliberately non-destructive: it
injects a safe, alphanumeric marker string (no script tags, no event handlers)
and checks whether it appears unencoded in the HTML response body. A
_CANDIDATE suffix indicates the reflection was detected but
requires manual verification to confirm actual script execution is possible.
Why it's checked
XSS is consistently in the OWASP Top 10 (A03:2021 Injection) and the most prevalent client-side vulnerability. Exploited XSS enables session hijacking (stealing cookies), credential harvesting (injecting fake login forms), browser-based exploitation, and bypassing CSRF protections. Reflected XSS is particularly dangerous when combined with phishing links, as the payload executes in the context of the legitimate domain, bypassing same-origin policy checks.
What SurfaceGuard checks
The scanner identifies URL parameters and form fields on discovered pages,
then injects a unique alphanumeric marker (e.g., sgxss7f3a2b)
into each parameter. The response body is parsed to determine if the marker
appears: (1) as raw text in HTML context (confirming unencoded reflection),
(2) inside a JavaScript context, or (3) inside an HTML attribute. Context
determines exploitability. Only raw unencoded reflection in HTML body or
attribute context is flagged.
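The marker generation and context classification can be sketched as below. The regexes are deliberately rough — real context detection needs an HTML parser — and the marker format simply mirrors the example above:

```python
import re
import secrets

def make_marker() -> str:
    """Unique alphanumeric probe value, e.g. 'sgxss7f3a2b'."""
    return "sgxss" + secrets.token_hex(3)

def reflection_context(marker: str, body: str) -> str:
    """Rough classification of where a reflected marker landed (sketch only)."""
    if marker not in body:
        return "absent"
    if re.search(r"<script[^>]*>[^<]*" + re.escape(marker), body, re.S):
        return "js"              # landed inside a <script> block
    if re.search(r"<[^>]*=[\"'][^\"'>]*" + re.escape(marker), body):
        return "attribute"       # landed inside an HTML attribute value
    return "html_body"           # raw text in HTML context

body = '<div class="results">sgxss7f3a2b</div>'
print(reflection_context("sgxss7f3a2b", body))  # html_body
```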
All injected markers were either absent from responses or appeared
HTML-encoded (e.g., &lt;, &gt;).
No unencoded reflection detected.
Unencoded reflection in
GET /search?q=sgxss7f3a2b — marker appeared verbatim
in <div class="results">sgxss7f3a2b</div>.
XSS candidate — manual verification required.
How to fix
Output encoding is the primary control — always encode user input before rendering in HTML context. Modern templating engines do this automatically when used correctly:
# Jinja2 (Python) — auto-escaped by default
{{ user_input }}       # safe — auto-escaped
{{ user_input|safe }}  # UNSAFE — bypasses escaping
# React — safe by default
<div>{userInput}</div>  # safe
dangerouslySetInnerHTML  # UNSAFE
# Go templates — use html/template, not text/template
Complement output encoding with a strong Content Security Policy
that uses nonces to allow only trusted scripts
(script-src 'nonce-{random}') — this prevents execution even
if a reflection exists. Use DOMPurify for any rich-text
user content that genuinely needs HTML rendering. Never use
innerHTML with user-supplied strings.
SQL Error Response Detection
Medium · Needs Validation · DAST_SQL_ERROR_RESPONSE_CANDIDATE
What it is
SQL injection (SQLi) allows an attacker to interfere with database queries
by injecting SQL syntax into user-controlled input. SurfaceGuard's probe
is error-based detection only: it appends a single apostrophe
(') to URL parameters and checks whether the response contains
a database error message pattern. This is non-destructive read-only
testing — no data extraction, no modification queries. The
_CANDIDATE suffix means an error pattern was detected but
the injection path and database engine must be confirmed manually.
Why it's checked
SQL injection remains in the OWASP Top 10 (A03:2021) and is consistently
one of the most severe web vulnerabilities. A confirmed SQLi finding in
a login or search parameter can mean full database read access, authentication
bypass, and in some database configurations, remote code execution via
xp_cmdshell (MSSQL) or LOAD_FILE/INTO OUTFILE
(MySQL). Error-based responses additionally reveal database engine,
version, and query structure — enabling targeted exploitation.
What SurfaceGuard checks
The scanner appends single-quote and comment sequences to GET parameters
(e.g., ?id=1', ?q=test'--) and inspects
responses for error patterns including:
MySQL — You have an error in your SQL syntax;
PostgreSQL — ERROR: unterminated quoted string;
MSSQL — Unclosed quotation mark after the character string;
Oracle — ORA-01756;
SQLite — unrecognized token.
HTTP status, response time differences, and content-length anomalies
are secondary signals.
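The error-pattern side of the probe maps directly onto the signatures listed above. A minimal sketch:

```python
import re

# Engine signatures from the list above (illustrative subset)
SQL_ERROR_PATTERNS = {
    "mysql": re.compile(r"You have an error in your SQL syntax"),
    "postgresql": re.compile(r"ERROR:\s+unterminated quoted string"),
    "mssql": re.compile(r"Unclosed quotation mark after the character string"),
    "oracle": re.compile(r"ORA-01756"),
    "sqlite": re.compile(r"unrecognized token"),
}

def detect_sql_error(body: str):
    """Return the database engine whose error signature appears, if any."""
    for engine, pat in SQL_ERROR_PATTERNS.items():
        if pat.search(body):
            return engine
    return None

print(detect_sql_error("You have an error in your SQL syntax; check the manual"))  # mysql
```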
No SQL error messages triggered. Responses to injected payloads are generic (404 or application-level validation errors) with no database engine information.
MySQL error in response to
GET /products?category=1':
"You have an error in your SQL syntax; check the manual that
corresponds to your MySQL 8.0.32 server" — SQLi candidate.
How to fix
Parameterized queries (prepared statements) are the complete fix — they are architecturally immune to SQL injection because user input is never interpreted as SQL syntax:
# Python (psycopg2)
cursor.execute(
"SELECT * FROM products WHERE id = %s",
(user_id,) # always pass as parameter, never f-string
)
# Node (pg)
const result = await client.query(
'SELECT * FROM products WHERE id = $1',
[userId]
);
# Java (JDBC)
PreparedStatement ps = conn.prepareStatement(
"SELECT * FROM products WHERE id = ?"
);
ps.setInt(1, userId);
ORMs (SQLAlchemy, Django ORM, Hibernate, Prisma) use parameterized
queries by default — avoid raw query methods unless necessary.
Disable display_errors in production to suppress error messages
regardless. Deploy a WAF with SQL injection rules as a defence-in-depth layer,
not as a substitute for parameterized queries.
GraphQL Security
Medium · GRAPHQL_INTROSPECTION_ENABLED
GRAPHQL_ERROR_DISCLOSURE
GRAPHQL_VERBOSE_ERRORS_CANDIDATE
What it is
GraphQL is a query language API architecture that provides a single flexible
endpoint (typically /graphql) instead of multiple REST routes.
SurfaceGuard checks three GraphQL-specific security concerns:
Introspection (GRAPHQL_INTROSPECTION_ENABLED)
— the built-in schema discovery mechanism that enumerates every type, query,
mutation, and field in the API; Error disclosure
(GRAPHQL_ERROR_DISCLOSURE) — verbose error messages that reveal
schema details, resolver code paths, or database errors; and
Verbose error candidates
(GRAPHQL_VERBOSE_ERRORS_CANDIDATE) — responses with partial
technical detail that needs validation.
Why it's checked
Introspection is enabled by default in every major GraphQL implementation (Apollo, Strawberry, Hasura, GraphQL.js) and is essential during development. In production, it provides an attacker with a complete, machine-readable map of your entire API surface: every query, every mutation, every argument, every relationship between types. This dramatically reduces the reconnaissance effort needed to find injection points, IDOR vulnerabilities, and business logic flaws. Verbose error messages compound this by revealing which fields exist, which authorization checks are missing, and internal resolver logic.
What SurfaceGuard checks
The scanner discovers GraphQL endpoints by probing common paths
(/graphql, /api/graphql, /v1/graphql,
/query) and checking for GraphQL-specific response signatures.
Against confirmed GraphQL endpoints, it sends:
(1) a standard IntrospectionQuery POST and checks if the schema
is returned; (2) a malformed query to test error verbosity; (3) a field
enumeration probe to detect field suggestion leakage (Apollo's
"Did you mean…?" feature).
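The introspection check in step (1) can be sketched by separating the HTTP POST from the response classification; the minimal query below asks only for type names, which is enough to prove the schema is exposed (an illustration, not SurfaceGuard's probe):

```python
import json
from urllib.request import Request, urlopen

MINIMAL_INTROSPECTION = {"query": "query IntrospectionQuery { __schema { types { name } } }"}

def introspection_enabled(response: dict) -> bool:
    """True when a GraphQL response contains schema data rather than an error."""
    data = response.get("data") or {}
    return bool(data.get("__schema"))

def probe_endpoint(url: str) -> bool:
    req = Request(url, data=json.dumps(MINIMAL_INTROSPECTION).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=10) as r:
        return introspection_enabled(json.load(r))

# Classification on canned responses:
print(introspection_enabled({"data": {"__schema": {"types": []}}}))                   # True
print(introspection_enabled({"errors": [{"message": "Introspection is disabled"}]}))  # False
```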
Introspection returns {"errors": [{"message": "Introspection is
disabled"}]}. Error responses are generic with no schema details.
Field suggestions are suppressed.
Full schema returned via introspection: 47 types,
23 queries, 15 mutations exposed at /api/graphql. Includes
adminCreateUser and updateBillingDetails
mutations.
How to fix
Disable introspection in production — all major GraphQL servers support this:
# Apollo Server (Node.js)
const server = new ApolloServer({
typeDefs, resolvers,
introspection: process.env.NODE_ENV !== 'production',
plugins: [ApolloServerPluginLandingPageDisabled()]
});
# Strawberry (Python)
schema = strawberry.Schema(
query=Query,
extensions=[DisableIntrospectionExtension]
)
# Hasura
HASURA_GRAPHQL_ENABLE_INTROSPECTION=false
For error handling, configure a custom error formatter that returns
generic messages in production and suppresses resolver stack traces and
field path information; the same formatter can strip the "Did you
mean…?" field suggestions that graphql-js appends to unknown-field
errors. Implement query depth limiting and query complexity analysis
to prevent resource-exhaustion attacks even after disabling introspection.
API Discovery & Security.txt
Low / Info · API_ENDPOINT_DISCOVERED
SECURITY_TXT_MISSING
What it is
API_ENDPOINT_DISCOVERED is an informational finding that
records API endpoints detected through JavaScript analysis, robots.txt,
sitemap parsing, and response header inspection. This enriches your asset
inventory and surfaces attack surface that may not be documented.
SECURITY_TXT_MISSING flags the absence of a
/.well-known/security.txt file — the standard
(RFC 9116) mechanism for organisations to publish responsible disclosure
contact information for security researchers who find vulnerabilities.
Why it's checked
Undocumented API endpoints are a significant attack surface: legacy
versions (/api/v1/ alongside /api/v3/),
internal endpoints accidentally exposed, and debug/admin APIs left
reachable. Security.txt has practical value: without it, researchers
who find a vulnerability in your systems have no clear way to report
it, often resorting to public disclosure or selling it. A security.txt
file with a PGP-signed disclosure policy demonstrates security maturity
and keeps your vulnerability pipeline private.
What SurfaceGuard checks
API discovery analyses JavaScript bundles for fetch(),
axios, and XMLHttpRequest URL patterns;
scans /robots.txt disallow entries for API paths; and checks
OpenAPI/Swagger spec files at /openapi.json,
/swagger.json, /api-docs. Security.txt is
checked at both /security.txt and
/.well-known/security.txt (RFC-correct location).
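The JavaScript URL-pattern analysis can be sketched as below. This is a simplified regex-based illustration, not SurfaceGuard's actual scanner (which would need to handle minified bundles and dynamic URL construction far more robustly):

```python
import re

# Simplified extraction of fetch()/axios/XMLHttpRequest URL literals from a JS bundle.
FETCH_AXIOS_RE = re.compile(r"""(?:fetch|axios(?:\.\w+)?)\s*\(\s*['"](/[^'"]+)['"]""")
XHR_OPEN_RE = re.compile(r"""\.open\s*\(\s*['"][A-Z]+['"]\s*,\s*['"](/[^'"]+)['"]""")

def discover_api_paths(bundle_js: str) -> set[str]:
    """Collect root-relative paths passed to common HTTP-call APIs."""
    return set(FETCH_AXIOS_RE.findall(bundle_js)) | set(XHR_OPEN_RE.findall(bundle_js))

bundle = """
fetch('/api/v3/users');
axios.get('/api/v1/billing');
xhr.open('GET', '/internal/debug');
"""
print(sorted(discover_api_paths(bundle)))
# ['/api/v1/billing', '/api/v3/users', '/internal/debug']
```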
Pass: /.well-known/security.txt found with valid
Contact:, Expires:, and
Preferred-Languages: fields. PGP-signed.
Fail: security.txt missing — neither
/security.txt nor /.well-known/security.txt
responds with valid content. Vulnerability disclosure path unclear.
How to fix
Create a /.well-known/security.txt file using the generator
at securitytxt.org. RFC 9116 requires only Contact: and Expires:;
Preferred-Languages: is optional but widely included:
Contact: mailto:security@example.com
Expires: 2027-01-01T00:00:00.000Z
Preferred-Languages: en
Optional but recommended: Encryption: (link to PGP key),
Policy: (link to responsible disclosure policy page),
Acknowledgments: (hall of fame). Sign the file with PGP
and reference your public key so researchers can send encrypted reports.
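A validity check along these lines can be sketched as follows. It covers only a subset of RFC 9116 rules (required fields and the Expires timestamp) with simplified field parsing:

```python
from datetime import datetime, timezone

def validate_security_txt(text: str) -> list[str]:
    """Check a security.txt body against a subset of RFC 9116 rules."""
    problems = []
    fields = {}
    for line in text.splitlines():
        if ":" in line and not line.startswith("#"):
            name, _, value = line.partition(":")
            fields[name.strip()] = value.strip()
    if "Contact" not in fields:
        problems.append("missing required Contact field")
    if "Expires" not in fields:
        problems.append("missing required Expires field")
    else:
        # Accept both the trailing-Z and explicit-offset timestamp forms
        exp = datetime.fromisoformat(fields["Expires"].replace("Z", "+00:00"))
        if exp < datetime.now(timezone.utc):
            problems.append("Expires date is in the past")
    return problems

good = (
    "Contact: mailto:security@example.com\n"
    "Expires: 2030-01-01T00:00:00.000Z\n"
    "Preferred-Languages: en\n"
)
print(validate_security_txt(good))  # []
```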
For API endpoints discovered: review all flagged paths and ensure
old API versions are either removed or protected by the same authentication
and authorisation as current versions — legacy endpoints commonly lack
security controls added to newer routes.
Scanner Status Codes
Info
SCANNER_UNAVAILABLE
SCANNER_ERROR
What it is
These are internal scan pipeline status codes, not security findings.
SCANNER_UNAVAILABLE means an individual scanner
module could not reach its external dependency within the allotted timeout
(e.g., the HIBP API was unreachable, or a DNS resolver timed out).
SCANNER_ERROR means the scanner encountered
an unexpected runtime error during execution. Both codes carry no security
weight (CVSS 0.0) and do not contribute to your risk score.
Why it appears
SurfaceGuard runs 48+ concurrent scanners under strict per-scanner timeouts (10 seconds) and a total scan timeout (40 seconds) to keep scans fast. Network transience, rate limiting from third-party APIs, and DNS resolution delays can prevent individual scanners from completing within the window. These codes are surfaced transparently so you know which checks ran successfully and which were skipped — incomplete coverage is better disclosed than silently omitted.
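The timeout model can be sketched with asyncio. This is an illustration of the pattern, not SurfaceGuard's code; the demo shrinks the 10 s / 40 s production timeouts so it finishes instantly:

```python
import asyncio

async def run_scanner(name, coro, timeout):
    """Run one scanner; convert timeouts and crashes into status codes."""
    try:
        return name, await asyncio.wait_for(coro, timeout=timeout)
    except asyncio.TimeoutError:
        return name, "SCANNER_UNAVAILABLE"
    except Exception:
        return name, "SCANNER_ERROR"

async def run_all(scanners, per_timeout=10, total_timeout=40):
    tasks = [run_scanner(n, c, per_timeout) for n, c in scanners.items()]
    return dict(await asyncio.wait_for(asyncio.gather(*tasks), timeout=total_timeout))

async def fast(): return "ok"                    # completes normally
async def hangs(): await asyncio.sleep(5)        # exceeds the per-scanner timeout
async def crashes(): raise RuntimeError("boom")  # unexpected runtime error

results = asyncio.run(run_all({"tls": fast(), "hibp": hangs(), "dns": crashes()},
                              per_timeout=0.1, total_timeout=1))
print(results)
# {'tls': 'ok', 'hibp': 'SCANNER_UNAVAILABLE', 'dns': 'SCANNER_ERROR'}
```

Note that a scanner timing out never fails the whole scan: each status is recorded per scanner, which is exactly why these codes appear alongside normal findings in the report.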
What to do
No remediation action is needed on your infrastructure. If a particular
scanner consistently shows as unavailable, re-running the scan typically
resolves transient network issues. Persistent SCANNER_ERROR
results for the same scanner may indicate a configuration issue — contact
support with the scan ID so the specific scanner can be investigated.