What Are You Trying to Do?
Pick your situation. Follow the steps. See real output at every stage.
Run Your First Scan
You just installed WAFtester. You have a URL. Let's see what happens.
Step 1 of 3 Run the Auto Command
The auto command does everything: detects the WAF, crawls the site, tests 2,800+ payloads, and generates a report.
$ waf-tester auto -u https://your-target.com
# Both 'waf-tester' and 'waftester' work (same binary)
Expected output:
[PHASE 0] Smart Mode - WAF Detection
Skipped (use --smart to enable)
[PHASE 1] Target Discovery & Reconnaissance
✓ Crawled 15 pages, found 42 parameters
[PHASE 4] WAF Security Testing
Testing 2,800+ payloads across all categories...
████████████████████████████████ 100%
[PHASE 5] Comprehensive Report
✓ ./waf-assessment-your-target/report.html
Step 2 of 3 Read the Report
Open the generated HTML file in your browser. It contains a complete breakdown: detection rates, bypass list, severity rankings.
$ open ./waf-assessment-your-target/report.html
# On Windows: start report.html
# On Linux: xdg-open report.html
Step 3 of 3 Export in Other Formats
Need JSON for automation, SARIF for GitHub, or Markdown for a ticket? Add -format. The -o flag sets the output file base name; WAFtester appends the right extension.
$ waf-tester auto -u https://your-target.com \
-format json,html,sarif \
-o results
Expected output:
✓ results.json (Machine-readable data)
✓ results.html (Interactive report)
✓ results.sarif (GitHub Code Scanning)
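Once you have results.json, you can feed it into scripts. A minimal sketch follows; the field names in the sample file ("results", "blocked") are illustrative assumptions, not WAFtester's documented schema, so check your own export before relying on them. The pattern itself, grepping the machine-readable output for failed blocks, is the point.

```shell
# Sample file with an ASSUMED schema: each test result carries a "blocked"
# field that is false when a payload got through the WAF.
cat > results.json <<'EOF'
{"results": [
  {"payload": "' OR 1=1--", "category": "sqli", "blocked": true},
  {"payload": "<svg/onload=alert(1)>", "category": "xss", "blocked": false},
  {"payload": "{{7*7}}", "category": "ssti", "blocked": false}
]}
EOF

# Count entries the WAF failed to block
bypasses=$(grep -c '"blocked": false' results.json)
echo "Potential bypasses: $bypasses"
```

For anything beyond a quick count, jq is the better tool for walking the JSON structure.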
What You've Achieved
- ✓ Full WAF security assessment with 2,800+ attack payloads
- ✓ Discovery of endpoints, parameters, and attack surface
- ✓ HTML report with severity rankings and bypass details
- ✓ Exportable results in JSON, SARIF, Markdown, CSV, and more
Try next: Smart WAF Testing to make the scan adapt to your specific WAF vendor.
Smart WAF Testing
You know (or suspect) there's a WAF in front of your target. Smart mode identifies the vendor and auto-optimizes the scan strategy.
Step 1 of 4 Detect the WAF
Before scanning, find out what WAF you're dealing with. The vendor command checks against 198 signatures.
$ waf-tester vendor -u https://your-target.com
Expected output:
[VENDOR] Primary: Cloudflare (98% confidence)
[VENDOR] CDN: Fastly detected
[RECOMMEND] tampers: charunicodeencode, randomcase
[RECOMMEND] categories: xss, sqli, ssti
Step 2 of 4 Run with Smart Mode
Add --smart and WAFtester automatically detects the WAF, adjusts rate limits, and selects optimal tampers.
$ waf-tester auto -u https://your-target.com --smart
Expected output:
[PHASE 0] Smart Mode - WAF Detection & Strategy Optimization
Detecting WAF vendor from 198 signatures...
✓ WAF Vendor: Cloudflare (98% confidence, from smart mode)
Rate limit: 50 req/sec (WAF-optimized for Cloudflare)
Concurrency: 10 workers (WAF-optimized)
[PHASE 1] Target Discovery & Reconnaissance
✓ Crawled 18 pages, found 56 parameters
[PHASE 4] WAF Security Testing
████████████████████████████████ 100%
✓ WAF Effectiveness: 96.8% - STRONG
Without --smart, the scan uses default rate limits and generic payloads. With it, WAFtester adapts: rate limits are WAF-optimized, tampers target known weaknesses, and bypass discovery improves significantly.
Step 3 of 4 Choose a Smart Mode Level
Smart mode has five levels. Pick based on how thorough you need to be:
| Mode | Speed | Best For |
|---|---|---|
| quick | ~2 min | Fast triage, smoke tests |
| standard | ~8 min | Default; balanced coverage |
| full | ~20 min | Complete analysis + all tampers |
| bypass | ~15 min | Focus on finding WAF bypasses |
| stealth | ~30 min | Low-and-slow to avoid detection |
$ waf-tester auto -u https://your-target.com \
--smart --smart-mode=bypass
--smart-mode requires --smart: always use both together. See the full Smart Modes reference.
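If you script these levels, a small wrapper keeps the pairing rule from being forgotten. This is a hypothetical helper, not part of WAFtester; it validates the mode name against the table above and prints the command it would run (echo stands in for actually executing the scan).

```shell
# Hypothetical wrapper: pick a smart-mode level from an environment variable,
# validate it against the documented levels, and print the resulting command.
TARGET="https://your-target.com"
MODE="${SMART_MODE:-standard}"   # quick|standard|full|bypass|stealth

case "$MODE" in
  quick|standard|full|bypass|stealth) ;;
  *) echo "unknown smart mode: $MODE" >&2; exit 1 ;;
esac

# --smart-mode is only valid together with --smart, so emit both.
cmd="waf-tester auto -u $TARGET --smart --smart-mode=$MODE"
echo "$cmd"
```

Run it as `SMART_MODE=bypass ./scan.sh` to switch levels without editing the script.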
Step 4 of 4 Get Verbose WAF Intel
Want to see exactly what Smart Mode discovered? Add --smart-verbose for detailed WAF behavior analysis.
$ waf-tester auto -u https://your-target.com \
--smart --smart-verbose
Additional output:
[SMART] Response signatures matched: 3
[SMART] Header: cf-ray → Cloudflare
[SMART] Cookie: __cfduid → Cloudflare
[SMART] Body: Cloudflare error page pattern
[STRATEGY] Prioritizing: unicode encoding, case randomization
[STRATEGY] Deprioritizing: basic URL encoding (Cloudflare normalizes)
What You've Achieved
- ✓ Identified the exact WAF vendor and confidence level
- ✓ WAF-optimized scan with adapted rate limits and concurrency
- ✓ Targeted tamper selection based on vendor-specific weaknesses
- ✓ Quantitative WAF effectiveness score
Try next: Behind a Login to scan pages that require authentication.
Scan Behind a Login
Your app requires authentication: SSO, MFA, or a simple login form. WAFtester opens a real browser so you can log in, then scans the authenticated pages.
Step 1 of 5 Scan Without Auth First
See what's visible to unauthenticated users. This establishes a baseline you can compare against.
$ waf-tester auto -u https://app.example.com \
--browser=false
Expected output:
[PHASE 1] Target Discovery & Reconnaissance
✓ Crawled 12 pages (public only)
✓ Login form detected at /login
[PHASE 4] WAF Security Testing
✓ 2,800+ payloads tested (public surface only)
Step 2 of 5 Choose Your Auth Method
WAFtester supports three authentication methods. Pick the one that matches your app:
Browser Login
Best for SSO, MFA, CAPTCHA. Chrome opens, you log in normally.
→ Step 3a
Cookie
Copy a session cookie from your browser DevTools.
→ Step 3b
Bearer Token
Pass a JWT or API token via header. Best for APIs.
→ Step 3c
Step 3a Browser Login (Recommended)
The auto command opens a browser by default. Just run it: Chrome will launch, navigate to your target, and wait for you to log in.
$ waf-tester auto -u https://app.example.com
What happens:
[PHASE 7] Authenticated Browser Scanning
Launching browser for authenticated scanning...
⏳ Browser will open - please log in when prompted
You have 2m0s to complete authentication
✓ Session captured: 3 cookies
✓ Crawled 67 pages (authenticated)
✓ 55 more pages than unauthenticated scan
Need more time? Use -browser-timeout 5m for a 5-minute window.
Use --browser-headless for headless Chrome in pipelines. Or skip the browser entirely with --browser=false and pass cookies via -cookie. See the Headless reference.
Step 3b Cookie-Based Auth
Open your browser DevTools → Application → Cookies. Look for the session cookie (often named session, PHPSESSID, or JSESSIONID). Copy its name and value:
$ waf-tester auto -u https://app.example.com \
--browser=false \
-cookie "session=abc123def456"
Step 3c Bearer Token / API Key
For JWT tokens, API keys, or any custom auth header. Use scan (not auto) when targeting a specific endpoint:
$ waf-tester scan -u https://api.example.com \
-H "Authorization: Bearer eyJhbG..."
Multiple headers? Repeat the flag:
$ waf-tester scan -u https://api.example.com \
-H "Authorization: Bearer eyJhbG..." \
-H "X-API-Key: your-key"
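When the header list grows, it helps to keep headers in a file and assemble the repeated -H flags from it. The sketch below is a hypothetical helper (the headers file and its contents are illustrative); it echoes the assembled command rather than executing it.

```shell
# Assemble repeated -H flags from a headers file, one "Name: value" per line.
cat > headers.txt <<'EOF'
Authorization: Bearer eyJhbG...
X-API-Key: your-key
EOF

args=""
while IFS= read -r line; do
  args="$args -H \"$line\""
done < headers.txt

# Echo instead of exec, for illustration.
echo "waf-tester scan -u https://api.example.com$args"
```

For real execution under bash, building the arguments in an array (`args+=(-H "$line")`) is safer than string concatenation, since it avoids word-splitting surprises.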
Step 4 of 5 Compare Authenticated vs Public
Run both scans and compare. The authenticated scan almost always finds more: more pages, more parameters, more potential bypasses.
| Metric | Public Only | Authenticated |
|---|---|---|
| Pages discovered | 12 | 67 |
| Parameters found | 18 | 142 |
| Attack surface | Public forms | Admin panels, APIs, uploads |
| Bypasses found | 4 | 23 |
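Using the example numbers from the table above, a quick back-of-the-envelope calculation shows the size of the coverage gap:

```shell
# Page counts taken from the example comparison table above.
public_pages=12
auth_pages=67

extra=$((auth_pages - public_pages))
pct=$((extra * 100 / public_pages))
echo "Authenticated scan found $extra more pages (+$pct%)"
```

Your own numbers will differ, but the shape of the result rarely does: the authenticated surface is usually several times larger.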
Step 5 of 5 Combine with Smart Mode
The real power: authenticated scanning + WAF-aware smart mode together.
$ waf-tester auto -u https://app.example.com \
--smart \
-format html,json \
-o assessment
What You've Achieved
- ✓ Authenticated browser scanning with session capture
- ✓ Full attack surface coverage (not just public pages)
- ✓ Three auth methods for different app architectures
- ✓ Quantitative comparison: public vs authenticated coverage
Try next: From an API Spec to drive scans from OpenAPI, Swagger, or Postman collections.
Scan from an API Spec
You have an OpenAPI spec, Swagger file, Postman collection, or HAR recording. Point WAFtester at it and every endpoint gets tested with schema-aware payloads.
Supported Formats
| Format | Extensions | Example Flag |
|---|---|---|
| OpenAPI 3.x | .yaml, .json | --spec openapi.yaml |
| Swagger 2.0 | .json, .yaml | --spec swagger.json |
| Postman v2.x | .json | --spec collection.json |
| HAR v1.2 | .har | --spec recording.har |
| AsyncAPI 2.x | .yaml, .json | --spec asyncapi.yaml |
Format is auto-detected from file content, not the extension. See all spec flags and API testing commands.
Step 1 of 4 Preview the Scan Plan
Always dry-run first. See exactly which endpoints will be tested, what attack categories apply, and how long it'll take, all before sending a single request.
$ waf-tester scan --spec openapi.yaml \
-u https://api.example.com \
--dry-run
Expected output:
Spec Scan Plan (Dry Run)
Total Entries: 47
Total Tests: 1,280
Est. Duration: 4m30s
POST /api/v1/users
Scans: sqli, xss, nosqli, cmdi
GET /api/v1/users/{id}
Scans: sqli, idor, traversal
PUT /api/v1/settings
Scans: sqli, xss, massassignment
...
Remove --dry-run to execute
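The dry-run output is plain text, so you can script a time-budget check against it. A minimal sketch, assuming the "Est. Duration" line format shown in the example above; the budget value and file name are illustrative.

```shell
# Saved dry-run output in the format shown above (illustrative sample).
cat > plan.txt <<'EOF'
Spec Scan Plan (Dry Run)
Total Entries: 47
Total Tests: 1,280
Est. Duration: 4m30s
EOF

# Pull out the minutes component of the estimate.
est=$(sed -n 's/^Est\. Duration: \([0-9]*\)m.*/\1/p' plan.txt)
budget_min=10

if [ "$est" -le "$budget_min" ]; then
  echo "Within budget (${est}m <= ${budget_min}m), OK to run"
else
  echo "Over budget, trim with --group or --path" >&2
fi
```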
Step 2 of 4 Run the Scan
Happy with the plan? Remove --dry-run and add -y (or --yes) to skip the confirmation prompt and start immediately:
$ waf-tester scan --spec openapi.yaml \
-u https://api.example.com -y
Expected output:
[SPEC] API Spec Scan
Scanning POST /api/v1/users [sqli]
Scanning POST /api/v1/users [xss]
Scanning GET /api/v1/users/{id} [sqli]
...
[RESULTS] 12 endpoints, 1,280 tests, 4m12s
Step 3 of 4 Filter by Group or Path
Don't need to test everything? Groups map to your OpenAPI tags (or Postman folders). Filter by group, path, or skip-group:
$ waf-tester scan --spec openapi.yaml \
-u https://api.example.com \
--group auth -y
$ waf-tester scan --spec openapi.yaml \
-u https://api.example.com \
--path "/api/v2/*" -y
$ waf-tester scan --spec openapi.yaml \
-u https://api.example.com \
--skip-group deprecated -y
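If you want separate result sets per group, a simple loop over your tag names works. The group names below are hypothetical, and the commands are echoed rather than executed, for illustration.

```shell
# Scan several OpenAPI tag groups one at a time (hypothetical group names).
groups="auth users billing"

for group in $groups; do
  echo "waf-tester scan --spec openapi.yaml -u https://api.example.com --group $group -y"
done
```

Combine this with -o per group if you want one report file per tag.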
Step 4 of 4 Postman Collections & HAR Files
Same --spec flag works for Postman and HAR. WAFtester auto-detects the format.
$ waf-tester scan \
--spec MyAPI.postman_collection.json \
--env staging.postman_environment.json \
-u https://api.example.com -y
$ waf-tester scan --spec recording.har \
-u https://api.example.com -y
What You've Achieved
- ✓ Schema-aware API security testing from your existing spec
- ✓ Dry-run preview before any requests hit your API
- ✓ Endpoint filtering by group, path, or tag
- ✓ Support for OpenAPI, Swagger, Postman, HAR, and AsyncAPI
Try next: In Your Pipeline to automate this in CI/CD and catch WAF regressions on every deploy.
Add to Your CI/CD Pipeline
Block deployments when a WAF bypass is found. WAFtester integrates with GitHub Actions, GitLab CI, and any CI system that runs shell commands.
Step 1 of 4 Test Locally First
Before adding to CI, run the exact command you'll use in the pipeline. The -types flag limits which attack categories to test; here we scan for SQL injection, XSS, and remote code execution.
$ waf-tester scan \
-u https://staging.example.com \
-types sqli,xss,rce \
--smart \
-format sarif \
-o results.sarif
Step 2 of 4 GitHub Actions
Add this workflow file. It runs on every PR, uploads SARIF to GitHub Security, and blocks the merge if critical bypasses are found.
name: WAF Security Gate
on:
pull_request:
branches: [main]
jobs:
waf-test:
runs-on: ubuntu-latest
steps:
- name: Install WAFtester
run: npm install -g @waftester/cli
- name: WAF Scan
run: |
waf-tester scan \
-u $STAGING_URL \
-types sqli,xss,rce --smart \
-format sarif -o results.sarif
- name: Upload SARIF
uses: github/codeql-action/upload-sarif@v3
if: always()
with:
sarif_file: results.sarif
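If you want the job itself to fail on critical findings (rather than relying on branch-protection rules over the Security tab), you can add a gating step. The sketch below follows the standard SARIF 2.1.0 structure (runs → results → level); whether WAFtester maps critical bypasses to "error" is an assumption to verify against your own output.

```shell
# Illustrative SARIF file following the 2.1.0 runs/results/level shape.
cat > results.sarif <<'EOF'
{"version": "2.1.0", "runs": [{"results": [
  {"ruleId": "sqli-bypass", "level": "error"},
  {"ruleId": "xss-reflect", "level": "warning"}
]}]}
EOF

errors=$(grep -c '"level": "error"' results.sarif)
if [ "$errors" -gt 0 ]; then
  echo "Blocking merge: $errors critical finding(s)"
else
  echo "No critical findings"
fi
```

In the workflow, replace the echo with `exit 1` so the step fails and blocks the merge.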
Step 3 of 4 GitLab CI / Docker
Same concept, GitLab syntax. Uses the Docker image for zero-install scanning:
waf-security-gate:
stage: security
image: waftester/waftester:latest
script:
- waf-tester scan -u $STAGING_URL
-types sqli,xss,rce --smart
-format json -o results.json
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
Step 4 of 4 Set Severity Thresholds
Control what blocks the pipeline. The -msev flag (match severity) filters output to only include findings at or above your threshold:
# Only report critical and high severity findings
$ waf-tester scan -u https://staging.example.com \
-types sqli,xss,rce --smart \
-msev critical,high \
-format json -o results.json
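A pipeline gate then becomes a one-liner over the filtered JSON. The field names in the sample below ("findings", "severity") are illustrative assumptions; check your actual export before wiring this into CI.

```shell
# Sample export with an ASSUMED schema, already filtered by -msev.
cat > results.json <<'EOF'
{"findings": [
  {"id": "sqli-001", "severity": "critical"},
  {"id": "xss-014", "severity": "high"}
]}
EOF

count=$(grep -cE '"severity": "(critical|high)"' results.json)
if [ "$count" -gt 0 ]; then
  echo "FAIL: $count blocking finding(s)"
else
  echo "PASS: no blocking findings"
fi
```

Replace the FAIL echo with `exit 1` to make any critical or high finding break the build.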
What You've Achieved
- ✓ CI/CD pipeline that blocks deploys on WAF bypasses
- ✓ SARIF integration with GitHub/GitLab security dashboards
- ✓ Configurable severity thresholds
- ✓ Docker image for zero-install CI environments
Try next: Let AI Handle It, using MCP to let Claude or Copilot run WAF tests for you.
Let AI Handle It
WAFtester ships a Model Context Protocol (MCP) server with 27 tools. Connect it to Claude Desktop, VS Code Copilot, or any MCP-compatible AI, then just ask in plain English.
Add the MCP Config
Tell your AI client where to find WAFtester. The mcp command starts the server. Add this to your configuration file:
{
"mcpServers": {
"waf-tester": {
"command": "waf-tester",
"args": ["mcp"]
}
}
}
| Client | Config file location |
|---|---|
| Claude Desktop (macOS) | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Claude Desktop (Windows) | %APPDATA%\Claude\claude_desktop_config.json |
| VS Code Copilot | .vscode/mcp.json in your project |
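A stray trailing comma is the most common reason an MCP server silently never appears in the client. Before restarting, it's worth a quick validity check; the sketch below writes the config from above to an illustrative file name and validates it with Python's stdlib JSON tool.

```shell
# Write the MCP config shown above (file name is illustrative; use the
# actual path for your client from the table).
cat > claude_desktop_config.json <<'EOF'
{
  "mcpServers": {
    "waf-tester": {
      "command": "waf-tester",
      "args": ["mcp"]
    }
  }
}
EOF

# python3 -m json.tool exits nonzero on malformed JSON.
valid=$(python3 -m json.tool claude_desktop_config.json > /dev/null 2>&1 && echo yes || echo no)
echo "Valid JSON: $valid"
```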
Just Ask
No flags to memorize. No syntax to look up. Just describe what you want in plain English:
Quick scan
"Scan example.com for SQL injection and XSS vulnerabilities"
Smart mode
"Run a full smart scan on staging.example.com and save the report"
Authenticated
"Test my API at api.example.com with this bearer token: abc123"
Analysis
"What WAF is protecting example.com? How can I test around it?"
What the AI Can Do
The MCP server exposes 27 tools. The AI picks the right ones automatically:
Scanning
Run scans, smart mode, auto-scan, custom payloads
WAF Detection
Identify WAF vendor, get bypass strategies
Evasion
Tamper payloads, encode, chain bypasses
Analysis
Decode responses, compare results, explain findings
Reporting
Generate reports, export SARIF, JSON, HAR
Config
List payloads, presets, templates, health check
What You've Achieved
- ✓ AI-powered WAF testing with natural language
- ✓ 27 tools available through MCP
- ✓ Works with Claude Desktop, VS Code, and any MCP client
- ✓ No CLI flags to memorize: just describe what you need
What's Next?
Now that you've run through the scenarios, dive deeper into the full reference docs.