Use Shaker through API keys, not tickets.
Start with an API key, then run the same hosted workflow through API, CLI, or MCP.
curl -sS -X POST "https://shakerscan.com/api/v1/scan" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"target": "https://preview-482.example.com",
"scan_type": "preview",
"options": {
"ai": true
}
}'
Quickstart
Three steps to a working integration
1. Create a key
Sign in and generate a tenant-scoped key from Control Plane Settings. The key is shown once and starts with sk_live_.
2. Use CLI or API
Start with npx -y shakerscan-cli or call the v1 endpoints directly. Both paths run the same hosted gate loop.
3. Connect MCP
Use npx -y shakerscan-mcp in Claude Code or Cursor so agents can submit scans, verify findings, evaluate policy, fetch evidence, and request remediation without leaving the editor.
CLI
Use one command to run the gate.
The first-party CLI is now published on npm. Start with npx -y shakerscan-cli, install it globally later if you want a stable shakerscan binary, and only use the hosted download route as a raw JS artifact fallback when npm access is unavailable in the runner.
gate runs scan, wait, findings, verification, policy, evidence, and remediation.
Use --scan-type for the modern presets: quick, standard, deep, full, aggressive, or smart. Legacy preview, sandbox, and complete aliases are still accepted.
Use preview for hosted preview gates, sandbox for the public quick alias, and scan submit plus scan wait when a deeper scan may run for much longer than a normal CI check.
AI classification is off by default. Add --ai true or "options":{"ai":true} to enable AI-powered finding enrichment for a scan.
Exit code 0 means allow, 10 means block, and 20 means needs_approval.
Use raw subcommands when you want direct access to usage, policy, evidence, or remediation.
remediation handoff prints a repo-ready branch name, PR title, PR body, and patch guidance for blocked findings.
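The exit-code contract above can be scripted directly in CI. A minimal sketch, assuming the documented codes (0 allow, 10 block, 20 needs_approval); `gate_decision` is a hypothetical helper name, not part of the CLI:

```shell
# Map a Shaker gate exit code to its decision string.
# Codes follow the documented CLI contract: 0 allow, 10 block, 20 needs_approval.
gate_decision() {
  case "$1" in
    0)  echo "allow" ;;
    10) echo "block" ;;
    20) echo "needs_approval" ;;
    *)  echo "error" ;;
  esac
}

# Typical CI usage: run the gate, capture the exit code, then branch.
# npx -y shakerscan-cli gate --api-key "$SHAKER_API_KEY" --target "$PREVIEW_URL"
# decision=$(gate_decision $?)
```

Treating any unknown code as `error` keeps the pipeline fail-closed if the CLI ever exits with something outside the documented contract.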
npx -y shakerscan-cli gate \
--api-key sk_live_your_key_here \
--target https://preview-482.example.com \
--scan-type preview \
--ai true \
--environment preview \
--policy-pack preview-fast \
--approval-token true \
--approval-token-audience github-actions
npm install -g shakerscan-cli
shakerscan --help
# no install required
npx -y shakerscan-cli --help
curl -fsSL https://shakerscan.com/api/downloads/cli -o shakerscan-cli.js
chmod +x shakerscan-cli.js
SHAKER_API_KEY=sk_live_your_key_here ./shakerscan-cli.js usage
npx -y shakerscan-cli remediation handoff \
--api-key sk_live_your_key_here \
--remediation-job-id remediation_job_123 \
--format markdown
How It Hooks In
The workflow model is key + CLI + MCP + skill
MCP alone gives an agent tools, but not policy or procedure. The Shaker hookup model is: issue an API key, use the CLI or API in pipelines, attach the MCP server where supported, and use a skill or workflow prompt to tell the agent when to scan, which findings to verify, when to ask Shaker for a policy decision, and when to attach remediation.
Preferred package path for editor integrations: npx -y shakerscan-mcp.
Claude Code and Cursor use `shakerscan-mcp` directly.
Codex-style agents can use the downloadable Shaker skill plus either MCP or direct API calls.
CI pipelines can use the same API routes directly or the CLI gate command and now produce policy, evidence, and remediation artifacts instead of hand-rolled logic.
Signed webhooks let deploy systems and approval queues react to decisions without polling the API.
GitHub is the current external handoff target for remediation plans, approval queues, and the hosted PR gate.
Approval tokens give downstream deploy systems a short-lived signed permit when a policy decision or approval override resolves to allow, and the dashboard token registry can revoke them before expiry when a permit should no longer be honored.
API Key
Tenant-scoped credential used by MCP, CI, internal bots, and direct HTTP clients.
CLI
First-party operator and CI interface with a single gate command for scan, verify, policy, evidence, and remediation.
MCP
Executable tools for Claude Code and Cursor, including verify, policy, evidence, remediation, and usage reads.
Skill
Procedure and gating logic for Codex-style agents: when to scan, what to verify, and how to decide.
Webhooks
Signed outbound events for policy, evidence, exception, and remediation workflows.
Approval Token
Short-lived signed proof tied to an allow decision and evidence hash so downstream deploy systems can verify permits offline.
AI Enrichment
AI-powered finding classification
When AI enrichment is enabled, each finding is classified as a true positive, false positive, or needs review — with a confidence score, rationale, attack narrative, verification steps, and remediation suggestions.
AI classification is off by default. Pass --ai true or "options":{"ai":true} to enable AI-powered finding enrichment for a scan.
AI enrichment applies to findings that meet the configured severity threshold. Quick or preview scans with only low-severity findings may complete without AI verdicts — that is expected, not a bug.
The stable public API shape is GET /api/v1/findings. Look for smart.ai_verdict, smart.ai_confidence_percent, smart.ai_classification_source, smart.ai_rationale, and smart.ai_recommendations.
In the UI, enrichment shows up as AI verdict badges, confidence bars, rationale, attack narrative, verification steps, remediation suggestions, and a scan-level executive summary in run detail and shared report views.
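Those `smart.*` fields can be pulled out of a findings response with jq. A sketch run against an inline sample shaped like the finding example below; in practice, pipe the `GET /api/v1/findings` response through the same filter (the exact response envelope is an assumption here, so adjust the path to match what the API returns):

```shell
# Pull the AI verdict fields out of a finding object with jq.
# The sample input mirrors the documented finding shape; swap it for
# the GET /api/v1/findings response in a real workflow.
finding='{"id":"finding_123","severity":"high","smart":{"ai_verdict":"true_positive","ai_confidence_percent":92}}'
verdict=$(printf '%s' "$finding" | jq -r '.smart.ai_verdict')
confidence=$(printf '%s' "$finding" | jq -r '.smart.ai_confidence_percent')
echo "$verdict ($confidence%)"
```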
# Enable AI enrichment (off by default)
npx -y shakerscan-cli gate \
--api-key sk_live_your_key_here \
--target https://preview.example.com \
--scan-type deep \
--ai true
{
"id": "finding_123",
"title": "Reflected XSS in search parameter",
"severity": "high",
"smart": {
"ai_verdict": "true_positive",
"ai_confidence_percent": 92,
"ai_classification_source": "provider",
"ai_rationale": "The payload lands in executable script context.",
"ai_recommendations": {
"attack_narrative": "An attacker can execute JavaScript in another user's browser.",
"ai_verification_steps": [
"curl -i 'https://app.example.com/search?q=%3Cscript%3Ealert(1)%3C/script%3E'"
],
"remediation": {
"steps": [
"HTML-encode untrusted input before rendering.",
"Apply a nonce-based CSP to reduce script execution paths."
]
}
}
}
}
Troubleshooting
First-run fixes for beta users
Start with npx -y shakerscan-cli. If you want a persistent local binary, run npm install -g shakerscan-cli.
Use npx -y shakerscan-mcp, confirm the API key is set in the MCP env block, then restart Claude Code or Cursor after saving the config.
Start with a preview, staging, or temporary deployment you control. Keep the same command shape and only swap the --target value when you move between environments.
Confirm your scan was submitted with --ai true or "options":{"ai":true}. AI enrichment only applies to findings above the severity threshold — quick or preview scans with only low-severity findings may not include AI verdicts.
GitHub
Run the hosted PR gate and route follow-up into GitHub
Connect one tenant-level GitHub repo from the dashboard, test the token, and let Shaker create durable issues from remediation plans or approval-required policy evaluations. The same integration can also run hosted preview-gate scans from pull_request webhooks when exactly one active DAST target has a matching saved repository_url and, if needed, a PR preview URL template.
One connected repo per workspace, managed in the control plane.
Save the same repo URL on exactly one active DAST target to enable hosted PR gating.
Add a PR preview URL template on that target when each pull request deploys to its own URL.
The GitHub setup page gives you one webhook URL and secret preview for both Issues and Pull requests events.
Hosted PR gating publishes commit statuses and check-runs after scan, deterministic verify, and policy evaluation.
Issue links are persisted back into remediation and approval workflows, while direct PR creation is still not shipped.
For approval queues, apply shaker-approved or shaker-rejected, close the issue, and let the GitHub webhook sync the decision back into Shaker.
For remediation plans, use shaker-in-progress while work is active, then close the issue with shaker-remediated, shaker-false-positive, or shaker-accepted-risk to sync the terminal state back into Shaker.
curl -sS -X POST "https://shakerscan.com/api/control-plane/github/issues" \
-H "Cookie: your_session_cookie" \
-H "Content-Type: application/json" \
-d '{
"remediation_job_id": "remediation_job_123"
}'
Repository URL: https://github.com/example/app
PR Preview URL Template: https://preview-pr-{pull_number}.example.com
Current API
Hosted routes that are live now
The current hosted surface is small but real. It covers scan submission, findings, verification, policy, evidence, approval tokens, remediation, plus shared DAST automation routes for discovery and scheduling.
Use `X-API-Key` or a Bearer token.
Keys are tenant-scoped and checked against API scopes.
The scanner and findings pipeline uses the v1 routes, while DAST inventory, discovery, scheduling, and continuous monitoring use the same hosted API-key model through the DAST routes.
The evidence API returns the stored artifact plus its hash; signed approval tokens are separate and only available when a policy result resolves to allow.
Approval tokens require APPROVAL_TOKEN_SECRET on the deployment so downstream systems can verify signed permits. When the approval-token registry is migrated, verification also checks revocation state.
The main limits today: deterministic verification covers only some finding classes, agent traces must be caller-supplied, and approval tokens are issued only from stored allow decisions.
/api/v1/scan
Submit a tenant-scoped scan job for CI, preview, or agent workflows using quick, standard, deep, full, aggressive, or smart scan_type presets. Legacy preview, sandbox, and complete aliases are still accepted. Set options.ai to true when you want AI enrichment explicit in API payloads, or false to disable it.
/api/v1/scans
List recent scan jobs for your tenant with basic filtering and pagination.
/api/v1/findings
Retrieve normalized vulnerability findings for a given scan.
/api/v1/findings/:id/verify
Deterministically retest supported stored findings and persist an evidence artifact. Unsupported finding classes currently return unsupported.
/api/v1/policy/evaluate
Return a machine-usable allow, block, or needs_approval decision for a scan, optionally selecting a named policy_pack.
/api/v1/agents/evaluate
Evaluate caller-supplied agent or MCP event traces and return allow, block, or needs_approval with an evidence hash. Automatic runtime capture is not part of the beta path yet.
/api/v1/evidence/:id
Fetch a persisted verification or policy artifact for audit and workflow logs.
/api/v1/evidence/:id/token
Mint a short-lived signed approval token for policy evidence that resolves to allow.
/api/v1/approval-tokens/verify
Verify a signed approval token without an API key so downstream systems can trust it offline.
/api/v1/findings/:id/remediate
Persist a remediation artifact with fix steps, validation, rollback notes, patch suggestions, and a PR draft.
/api/v1/remediation/:id
Fetch a stored remediation artifact for CI logs, PR comments, operator review, or agent follow-up.
/api/v1/usage
Read current period usage, legacy counters, and control-plane feature flags.
/api/dast/discovery
Start a subdomain discovery run for a root domain or saved target. The hosted discovery callback persists discovery_runs and can feed shared DAST target inventory.
/api/dast/discovery/:id
Read a discovery run status, including discovered subdomains once the run completes.
/api/dast/schedules
Create a tenant-scoped scheduled scan for a saved DAST target. The same route backs dashboard scheduling and API-key automation clients.
/api/dast/schedules
List existing target schedules with target metadata and pagination for operator automation.
/api/dast/targets/:id/continuous
Enable or tune continuous monitoring for a root domain target, including frequency, jitter, timezone, and scan options.
/api/dast/targets/:id/continuous/trigger
Manually trigger a continuous monitoring cycle for a root domain target, including discovery and follow-on scans.
/api/dast/targets/auto-setup
Create the root DAST target, enable continuous monitoring, trigger subdomain discovery, and seed smart scheduling in one call.
curl -sS -X POST "https://shakerscan.com/api/v1/scan" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"target": "https://preview-482.example.com",
"scan_type": "preview",
"options": {
"ai": true
}
}'
curl -sS -X POST "https://shakerscan.com/api/v1/scan" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"target": "https://preview-482.example.com",
"scan_type": "smart",
"options": {
"auth_header": "Bearer ${TOKEN_ONE}",
"user2_header": "Bearer ${TOKEN_TWO}",
"login_url": "https://preview-482.example.com/login",
"login_username": "alice@example.com",
"login_password": "super-secret"
}
}'
curl -sS "https://shakerscan.com/api/v1/scans?limit=5" \
-H "X-API-Key: $SHAKER_API_KEY"
curl -sS "https://shakerscan.com/api/v1/findings?scan_id=scan_123" \
-H "X-API-Key: $SHAKER_API_KEY"
curl -sS -X POST "https://shakerscan.com/api/v1/findings/finding_123/verify" \
-H "X-API-Key: $SHAKER_API_KEY"
curl -sS -X POST "https://shakerscan.com/api/dast/discovery" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"root_domain": "example.com",
"quick": false
}'
curl -sS -X POST "https://shakerscan.com/api/dast/schedules" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"target_id": "target_123",
"day_of_week": 1,
"time_of_day": "02:00",
"scan_options": {
"standard": true,
"quick": false,
"ai": true
}
}'
curl -sS -X POST "https://shakerscan.com/api/dast/targets/auto-setup" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"domain": "example.com",
"scan_type": "standard",
"frequency": "weekly",
"day_of_week": 1,
"time_of_day": "02:00",
"auto_enable_subdomains": true
}'
curl -sS -X POST "https://shakerscan.com/api/v1/policy/evaluate" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"scan_id": "scan_123",
"environment": "preview",
"policy_pack": "release-strict"
}'
curl -sS -X POST "https://shakerscan.com/api/v1/agents/evaluate" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"surface": "mcp",
"environment": "preview",
"workflow_id": "cursor-preview-1842",
"agent_id": "cursor-bot",
"trusted_domains": ["api.example.com", "github.com"],
"trusted_mcp_servers": ["shakerscan", "github"],
"events": [
{
"type": "prompt",
"summary": "Prompt injection attempted to reveal hidden instructions",
"prompt_injection_detected": true,
"succeeded": false
},
{
"type": "network_request",
"destination": "https://api.example.com/internal/health",
"succeeded": true
}
]
}'
curl -sS "https://shakerscan.com/api/v1/evidence/eval_123" \
-H "X-API-Key: $SHAKER_API_KEY"
curl -sS -X POST "https://shakerscan.com/api/v1/evidence/eval_123/token" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"ttl_seconds": 900,
"audience": "github-actions"
}'
curl -sS -X POST "https://shakerscan.com/api/v1/approval-tokens/verify" \
-H "Content-Type: application/json" \
-d "{\"token\": \"$SHAKER_APPROVAL_TOKEN\", \"audience\": \"github-actions\"}"
curl -sS -X POST "https://shakerscan.com/api/v1/findings/finding_123/remediate" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"policy_evaluation_id": "eval_123",
"notes": "Generate a fix plan for the deploy gate"
}'
curl -sS "https://shakerscan.com/api/v1/usage" \
-H "X-API-Key: $SHAKER_API_KEY"
DAST Automation
Discover subdomains, schedule scans, and keep a root domain monitored
The dashboard, API-key clients, and MCP now use the same hosted DAST automation surface. Use discovery when you need new subdomains, schedules when you already know the exact target URL, and continuous monitoring when you want root-domain discovery plus repeat scans as one workflow.
/api/dast/discovery creates a discovery_run and discovers subdomains for a root domain or an existing target.
/api/dast/schedules is the per-target recurring scan model. Use it when you already know the exact URLs that should run every week.
/api/dast/targets/:id/continuous is the root-domain monitoring model with frequency, jitter, timezone, and follow-on discovery/scan cycles.
/api/dast/targets/auto-setup is the fastest one-call path for “set and forget” onboarding when you want Shaker to create the root target, enable continuous monitoring, and start discovery.
Start discovery with a root domain. Poll /api/dast/discovery/:id to read status and the discovered subdomain list.
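That polling step can be wrapped in a small reusable helper. A sketch only: `poll_until_done` is a hypothetical function, and the `running` status value is an assumption about the discovery run payload; in a real script, the command you pass would curl /api/dast/discovery/:id and extract the status field.

```shell
# Run "$@" (a command that prints a status string) repeatedly until it
# prints something other than "running", or give up after max_tries attempts.
poll_until_done() {
  max_tries=$1; shift
  i=0
  while [ "$i" -lt "$max_tries" ]; do
    status=$("$@")
    if [ "$status" != "running" ]; then
      echo "$status"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timeout"
  return 1
}
```

In practice the polled command would look like `curl -sS "https://shakerscan.com/api/dast/discovery/$run_id" -H "X-API-Key: $SHAKER_API_KEY" | jq -r '.status'`, with a sleep interval closer to 10-30 seconds.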
curl -sS -X POST "https://shakerscan.com/api/dast/discovery" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"root_domain": "example.com",
"quick": false
}'
Create a recurring scan once a target already exists. This is the direct API shape behind the DAST schedules tab and the MCP scheduling tools.
curl -sS -X POST "https://shakerscan.com/api/dast/schedules" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"target_id": "target_123",
"day_of_week": 1,
"time_of_day": "02:00",
"scan_options": {
"standard": true,
"quick": false,
"ai": true
}
}'
Configure root-domain continuous monitoring directly, or use auto-setup to create the target plus monitoring in one call.
curl -sS -X PATCH "https://shakerscan.com/api/dast/targets/target_123/continuous" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"enabled": true,
"frequency": "weekly",
"day_of_week": 1,
"time_of_day": "02:00",
"scan_options": {
"standard": true,
"quick": false,
"ai": true
}
}'
curl -sS -X POST "https://shakerscan.com/api/dast/targets/target_123/continuous/trigger" \
-H "X-API-Key: $SHAKER_API_KEY"
curl -sS -X POST "https://shakerscan.com/api/dast/targets/auto-setup" \
-H "X-API-Key: $SHAKER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"domain": "example.com",
"scan_type": "standard",
"frequency": "weekly",
"day_of_week": 1,
"time_of_day": "02:00",
"auto_enable_subdomains": true
}'
Implemented Now
What ships in the hosted control plane today
Scan submission, finding reads, deterministic verification, policy evaluation, evidence access, approval tokens, remediation, subdomain discovery, scheduled scans, continuous monitoring, GitHub issue handoff, the hosted PR gate, CLI, MCP, and signed webhooks are all live now.
Still Missing
Important gaps to account for right now
The main gaps are broader verification coverage, automatic agent-runtime capture, and deeper remediation automation like direct PR creation.
Webhooks
Push signed workflow events downstream
Use dashboard-managed webhook endpoints when another system should react to Shaker decisions as they happen. Current event types are finding.verified, policy.evaluated, agent.evaluated, evidence.created, exception.reviewed, remediation.created, and remediation.updated.
Each request includes X-ShakerScan-Signature in the format t=<unix>,v1=<hmac_sha256>.
Compute the HMAC over {timestamp}.{raw_body} with the webhook signing secret.
Rotate one secret per receiver and persist the event id or delivery id in your downstream system for auditability.
Run POST /api/control-plane/webhooks/retry on a short cron with CRON_SECRET so queued failures actually drain.
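Receiver-side verification of that signature header can be sketched with openssl. `verify_signature` is a hypothetical helper name; the header format and the {timestamp}.{raw_body} HMAC input follow the description above, and a production receiver should additionally reject stale timestamps to limit replay.

```shell
# Verify an X-ShakerScan-Signature header ("t=<unix>,v1=<hmac_sha256>")
# by recomputing HMAC-SHA256 over "{timestamp}.{raw_body}".
verify_signature() {
  secret=$1; header=$2; body=$3
  t=${header#t=}; t=${t%%,*}        # timestamp portion of the header
  v1=${header##*v1=}                # hex signature portion of the header
  expected=$(printf '%s.%s' "$t" "$body" \
    | openssl dgst -sha256 -hmac "$secret" -r | cut -d' ' -f1)
  [ "$expected" = "$v1" ]
}
```

Compare the full hex strings rather than a prefix, and treat any parse failure as a rejection.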
POST /hooks/shaker HTTP/1.1
X-ShakerScan-Event: policy.evaluated
X-ShakerScan-Event-Id: evt_123
X-ShakerScan-Delivery: del_456
X-ShakerScan-Signature: t=1709758800,v1=<hmac_sha256>
{
"id": "evt_123",
"type": "policy.evaluated",
"version": "2026-03-06",
"tenant_id": "tenant_123",
"source": "api.v1.policy.evaluate",
"occurred_at": "2026-03-06T18:00:00.000Z",
"payload": {
"decision": "needs_approval",
"evidence_id": "eval_123",
"scan_id": "scan_123"
}
}
curl -sS -X POST "https://shakerscan.com/api/control-plane/webhooks/retry" \
-H "Authorization: Bearer $CRON_SECRET" \
-H "Content-Type: application/json" \
-d '{"limit":25}'
MCP
Current agent integration path
The existing `shakerscan-mcp` package already connects Claude Code and Cursor to the hosted API. It now supports scan submission, scan status, findings retrieval, deterministic verification, policy decisions, agent-trace evaluation, evidence retrieval, remediation, subdomain discovery, scheduled scans, continuous monitoring, and usage reads.
{
"mcpServers": {
"shakerscan": {
"command": "npx",
"args": ["-y", "shakerscan-mcp"],
"env": {
"SCANNER_API_KEY": "sk_live_your_key_here"
}
}
}
}
What is live now
API keys, the first-party CLI workspace, MCP connectivity, scan submission, findings retrieval, deterministic verification, family-aware policy decisions, evidence artifacts plus hashes, signed approval tokens, GitHub issue handoff, the hosted PR gate, usage metering, domain verification, and hosted billing readiness gates are already wired into the app.
Still missing today
- Deterministic verification currently covers a subset of finding classes. Unsupported findings return unsupported instead of a forced pass/fail result.
- Agent behavior evaluation currently works from caller-supplied traces; automatic runtime capture, sandboxing, and universal tool telemetry are still missing.
- Hosted PR gating still assumes a clean repo-to-target mapping; broader multi-environment routing and deeper PR review UX are not complete yet.
- Direct PR creation and auto-fix execution on remediation plans are not shipped yet; GitHub issue handoff is the current path.
- Approval tokens are still limited to persisted allow decisions backed by stored evidence records; the agent-behavior API returns an evidence hash but not a tokenizable evidence object.
Skill support
The hosted docs now expose the reusable agent skill directly, so users can copy or download the same SKILL.md from the product itself.
Skills
Copy or download the agent skill directly
If your AI tool supports reusable skill files, agent instructions, or prompt files, start with this Shaker skill directly from the docs. It is written to be standalone: pair it with MCP when available, or use it with CLI and direct HTTP when MCP is not.
Prefer the native formats: CLAUDE.md for Claude Code, a Cursor rule in .cursor/rules, and AGENTS.md for Codex-style agents. Use the raw SKILL.md only when there is no more native instruction format.
Download the raw SKILL.md or one of the native tool files below, or copy the portable version from the block here.
Use it with Claude Code, Cursor, Codex-style agents, or any internal agent runner that supports reusable instruction files.
The skill covers scan, verify, policy, evidence, approval tokens, and remediation with the current live API surface.
CLAUDE.md
Best practice for Claude Code is a native memory file, not a one-off pasted prompt.
./CLAUDE.md or ~/.claude/CLAUDE.md
.cursor/rules/shakerscan-gate.mdc
Best practice for Cursor is a project rule in .mdc format so the agent can apply it natively.
.cursor/rules/shakerscan-gate.mdc
AGENTS.md
Best practice for Codex-style agents is a repo instruction file that stays under version control.
./AGENTS.md
SKILL.md
Use the raw markdown skill only when your tool does not support a more native instruction format.
any markdown skill or prompt slot
---
name: shakerscan-agent-gate
description: Use when a user wants to connect Shaker Scan to an agent, MCP-enabled IDE, CI pipeline, or deployment workflow. Configure the Shaker MCP server, create or use an API key, submit scans, poll results, fetch findings, verify supported findings, evaluate policy, and return an evidence-backed gate decision.
---
# ShakerScan Agent Gate
Use this skill when Shaker should become part of an automated workflow rather than a dashboard-only tool.
Hosted production base URL: `https://shakerscan.com`
If you run Shaker Scan in another environment, replace that base URL accordingly. Relative API paths below are shown so the same workflow also works for self-hosted or local deployments.
## What this skill does
- Connects an agent to Shaker through MCP or direct HTTP
- Uses the current live API surface:
- `POST /api/v1/scan`
- `GET /api/v1/scan`
- `GET /api/v1/scans`
- `GET /api/v1/findings`
- `POST /api/v1/findings/:id/verify`
- `POST /api/v1/policy/evaluate`
- `GET /api/v1/evidence/:id`
- `POST /api/v1/evidence/:id/token`
- `POST /api/v1/approval-tokens/verify`
- `POST /api/v1/findings/:id/remediate`
- `GET /api/v1/remediation/:id`
- `GET /api/v1/usage`
- Applies an evidence-backed workflow decision when verify and policy are available, with a fallback heuristic only when needed
## Quick selection
- For Claude Code or Cursor: use MCP first
- For CI or bots: use the CLI gate command first, then direct HTTP when you need raw control
- For Codex-style agents: use this skill for procedure and MCP, CLI, or HTTP for execution
## Copy-ready setup
### MCP for Claude Code or Cursor
```json
{
"mcpServers": {
"shakerscan": {
"command": "npx",
"args": ["-y", "shakerscan-mcp"],
"env": {
"SCANNER_API_KEY": "sk_live_your_key_here"
}
}
}
}
```
### CLI gate
```bash
npx -y shakerscan-cli gate \
--api-key "$SHAKER_API_KEY" \
--target "$TARGET_URL" \
--scan-type quick \
--environment preview \
--policy-pack preview-fast \
--approval-token true \
--approval-token-audience github-actions
```
### Minimal direct HTTP flow
Base URL: `https://shakerscan.com`
1. `POST https://shakerscan.com/api/v1/scan`
2. Poll `GET https://shakerscan.com/api/v1/scan?id=...`
3. `GET https://shakerscan.com/api/v1/findings?scan_id=...`
4. `POST https://shakerscan.com/api/v1/findings/:id/verify` for supported critical and high findings
5. `POST https://shakerscan.com/api/v1/policy/evaluate`
6. `GET https://shakerscan.com/api/v1/evidence/:id`
7. If allow, `POST https://shakerscan.com/api/v1/evidence/:id/token`
8. If blocked, optionally `POST https://shakerscan.com/api/v1/findings/:id/remediate`
## Inputs to ask for
- Target URL
- Environment: `preview`, `staging`, or `production`
- Policy pack: usually `preview-fast` or `release-strict`
- Whether the workflow needs an approval token audience such as `github-actions`
- Whether the user wants remediation when the result is `block` or `needs_approval`
## Default workflow
1. Confirm the target belongs to the user or they have permission.
2. Prefer a quick, non-invasive scan for preview or CI flows.
3. Submit the scan.
4. Poll until `completed` or `failed`.
5. Fetch findings.
6. Verify critical and high findings that support deterministic retesting.
7. Run policy evaluation for the scan, naming a `policy_pack` when the workflow needs an explicit gate mode such as `preview-fast`, `release-strict`, or a tenant custom pack.
8. Fetch evidence if the workflow needs a durable artifact.
9. Mint an approval token when the effective decision is `allow` and a downstream system needs signed proof.
10. Request remediation for blocked or high-risk findings when the workflow needs a durable fix plan, patch suggestion, or PR draft.
11. Fetch usage if the workflow needs reporting or budget awareness.
12. Return the policy decision when available. Fallback to a conservative decision only if verify or policy cannot run:
- `block` if any `critical` findings exist
- `needs_approval` if any `high` findings exist
- `allow` otherwise
State clearly whether the decision came from `evaluate_policy` or from the fallback heuristic.
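The fallback heuristic above maps directly onto counts. A sketch (`fallback_decision` is a hypothetical helper, not a CLI command):

```bash
# Fallback decision from finding counts, used only when verify/policy cannot run.
fallback_decision() {
  critical=$1; high=$2
  if [ "$critical" -gt 0 ]; then
    echo "block"
  elif [ "$high" -gt 0 ]; then
    echo "needs_approval"
  else
    echo "allow"
  fi
}
```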
## Operational rules
- Prefer `quick: true` and `public: true` unless the user explicitly wants deeper testing and the target is eligible.
- If the API returns a domain-verification or plan restriction error, downgrade to the safe scan path and explain why.
- Keep findings summaries compact. Show top critical and high issues first.
- If using MCP, ask the model to call the Shaker tools directly instead of reimplementing scan orchestration in text.
- When verification returns `unsupported`, preserve the finding in the decision path and call that out.
## Output shape
When acting as a gate, return:
- `decision`: `allow | block | needs_approval`
- `reason`: one sentence
- `scan_id`
- `critical_count`
- `high_count`
- `evidence_id`
- `next_step`
Integration Examples
Start with one machine-facing path
Start with one reliable path first, then add the rest of the workflow around it.
Run a preview gate in CI when you want explicit pipeline control instead of relying on the hosted GitHub webhook path.
name: shaker-preview-gate
on:
pull_request:
workflow_dispatch:
jobs:
gate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22
- run: npx -y shakerscan-cli gate \
--api-key "$SHAKER_API_KEY" \
--target "$PREVIEW_URL" \
--scan-type preview \
--ai true \
--environment preview \
--policy-pack preview-fast \
--approval-token true \
--approval-token-audience github-actions
env:
SHAKER_API_KEY: ${{ secrets.SHAKER_API_KEY }}
PREVIEW_URL: ${{ vars.PREVIEW_URL }}
Give Claude Code a deterministic pre-merge procedure instead of hoping it remembers when to gate.
# .claude/commands/shaker-gate.md
Run a Shaker gate before merge-sensitive changes.
1. Ask for the preview or staging URL if it is missing.
2. Run:
npx -y shakerscan-cli gate \
--api-key "$SHAKER_API_KEY" \
--target "$TARGET_URL" \
--scan-type preview \
--ai true \
--environment preview \
--policy-pack preview-fast
3. If the result is `block` or `needs_approval`, summarize the findings and link the evidence id.
4. Do not claim a deploy is approved unless Shaker returns `allow` or a valid approval token.
Use a simple Cursor rule or workflow definition so agent-driven deploy paths stop on block or needs_approval.
{
"name": "Shaker Gate Before Deploy",
"when": "before_deploy_or_merge",
"instructions": [
"Run a Shaker gate against the active preview URL.",
"Use the workspace API key from secure environment storage.",
"If Shaker returns needs_approval, stop and surface the evidence id.",
"If Shaker returns block, summarize the top findings and propose a remediation handoff."
]
}