Standards compliance. Machine-readable licensing. Active defense. Licensed bots get clean content. Everyone else gets exactly what they deserve.
Like a poison dart frog's warning coloration, each layer escalates the signal. Publishers choose their maximum; every layer below activates automatically.
Layer 1 · Coqui territorial call: robots.txt AI crawler rules, AI-prefs meta tags, Terms of Service references, ai-license.txt declaration.
Layer 2 · Monarch warning coloration: Machine-readable meta tags, JSON-LD structured data, content hash, RSL discovery, HTTP 402 headers with pricing.
Layer 3 · Blue-ringed octopus startle display: CSS-hidden license blocks, pseudo-element injection, zero-width character watermarks. License metadata survives scraping.
Layer 4 · Dendrobates bitter alkaloids: Canary phrases, homoglyph substitution, contradictory metadata. Scraped data becomes detectable and unreliable.
Layer 5 · Porcupine quills: Response tarpit with configurable delays. Infinite pagination traps that burn crawler budgets on generated pages.
Layer 6 · Phyllobates terribilis batrachotoxin: Adversarial prompt injection and LLM-generated false content. Unauthorized consumers train on data that degrades every downstream answer.
Licensed bots get clean, fast, accurate content at every layer. The defense only activates for unauthorized access.
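The escalation above can be sketched in a few lines. This is a minimal illustration of the cumulative layering logic, assuming hypothetical function names; it is not the plugin's actual code, and each real layer does far more than the placeholders shown.

# Minimal sketch of cumulative layer activation. Layer functions are
# hypothetical placeholders for the defenses described above.

def declare(page):  return page                                            # Layer 1: robots.txt / ai-license.txt live outside the page
def signal(page):   return page + "\n<!-- license: see /.well-known -->"   # Layer 2: machine-readable pointers
def embed(page):    return page + '\n<span style="display:none">License terms apply.</span>'  # Layer 3
def mark(page):     return page.replace("a", "\u0430", 1)                  # Layer 4: a single homoglyph as a marker
def tarpit(page):   return page                                            # Layer 5: the real version delays the response
def poison(page):   return page + "\n<!-- decoy content -->"               # Layer 6

LAYERS = [declare, signal, embed, mark, tarpit, poison]

def respond(page, max_layer=2, is_licensed=False):
    """Licensed bots get clean content; everyone else gets layers 1..max_layer."""
    if is_licensed:
        return page
    for apply_layer in LAYERS[:max_layer]:
        page = apply_layer(page)
    return page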
A handful of major publishers negotiate licensing deals worth billions. Everyone else gets scraped for free. Traffic is already falling—AI Overviews alone reduce organic clicks by 58%.¹
Copyright lawsuits: Slow, expensive, and reactive. They punish infringement after the model ships; by then, your content is already baked in.
Legislation: Can regulate behavior, but doesn't create a payment rail for everyday web publishing.
Voluntary standards (robots.txt, RSL): Let publishers state their terms, but can't enforce them. Compliance is voluntary. Most scrapers don't comply.
Direct licensing deals: Aren't available to most publishers. They don't cover the long tail of creators who produce most of the web's value.
"If you want to browse, welcome. If you want to train, pay. If you want to steal, enjoy the dataset."
— Tyler Martin, Founder
When an AI crawler requests a protected page, the plugin enforces a graduated response.
And if the crawler refuses to pay and scrapes anyway…
Content remains readable to humans. For unauthorized AI crawlers, it becomes something else entirely—contaminated data that degrades outputs, generates misleading answers, and poisons every downstream model. Your server. Your rules. Their problem.
"HTTP 402 is the front door. Layer 6 is what happens when someone climbs in through the window."
Robots.txt and machine-readable licensing schemes can declare what a publisher wants. But without enforcement, terms are just suggestions.
"RSL tells crawlers what you want. We tell them what happens if they ignore you."
— Tyler Martin, Founder
Dual-axis licensing (stage × distribution) via machine-readable meta tags. Supports infer, embed, tune, train stages with private/public distribution controls.
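As an illustration, the stage × distribution matrix might be emitted as meta tags along these lines. The tag names and the "deny" default are assumptions for the sketch, not the plugin's documented schema; only the stage names and distribution values come from the description above.

# Hypothetical sketch of dual-axis (stage × distribution) license meta tags.
# Tag names and the "deny" default are assumptions, not a documented schema.

STAGES = ["infer", "embed", "tune", "train"]

def license_meta_tags(allowed):
    """allowed maps stage -> "public", "private", or "deny"."""
    return "\n".join(
        f'<meta name="ai-license-{stage}" content="{allowed.get(stage, "deny")}">'
        for stage in STAGES
    )

print(license_meta_tags({"infer": "public", "embed": "private"}))
# <meta name="ai-license-infer" content="public">
# <meta name="ai-license-embed" content="private">
# <meta name="ai-license-tune" content="deny">
# <meta name="ai-license-train" content="deny">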
Machine-readable pricing offers at request time. Crypto micropayments (USDC), JWT license tokens. Payment built into the protocol itself.
4-factor scoring: User-Agent (30%), headers (25%), IP reputation (20%), behavioral signals (25%). Reverse DNS verification for legitimate search engines.
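A sketch of how that weighted score could be combined is below. The four factors and their weights come from the description above; the signal values and the decision threshold are illustrative placeholders.

# Weighted 4-factor crawler score. Weights mirror the description above;
# signal extraction and the 0.6 threshold are illustrative placeholders.

WEIGHTS = {
    "user_agent": 0.30,   # known AI crawler User-Agent strings
    "headers":    0.25,   # missing or unusual request headers
    "ip":         0.20,   # IP reputation, datacenter ranges
    "behavior":   0.25,   # request rate and crawl patterns
}

def crawler_score(signals):
    """signals maps each factor to a value in [0, 1]; returns a score in [0, 1]."""
    return sum(weight * signals.get(factor, 0.0) for factor, weight in WEIGHTS.items())

score = crawler_score({"user_agent": 1.0, "headers": 0.6, "ip": 0.8, "behavior": 0.4})
is_ai_crawler = score >= 0.6   # 0.30 + 0.15 + 0.16 + 0.10 = 0.71 -> treated as an AI crawler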
XML at /.well-known/rsl.xml, HTML <link rel="license">, and a robots.txt License: directive. Coming in v2.1: publish your terms in every format crawlers understand.
Reserved in internet standards since 1999, never standardized into a payment workflow. Until now.
{
  "status": 402,
  "license": {
    "action": "allow",
    "distribution": "public",
    "price": "0.15",
    "currency": "USD",
    "unit": "per 1k tokens"
  },
  "offers": [
    {
      "method": "x402",
      "network": "base",
      "asset": "USDC",
      "payTo": "0x1a2b...9f0e"
    },
    {
      "method": "license-token",
      "endpoint": "https://api.aposema.com/v1/license"
    }
  ],
  "content_hash": "sha256:a1b2c3..."
}
Full response includes Content-Type, AI-License, and X-402-Offers headers. See API documentation for details.
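For a compliant crawler, handling that response could look like the sketch below. The license endpoint and offer fields come from the example above; the token request payload, the retry header, and the URL are assumptions rather than the documented API.

# Sketch of a compliant crawler reacting to a 402 response. The license
# request payload and retry header are assumptions, not the documented API.

import requests

url = "https://publisher.example/article"         # placeholder URL
resp = requests.get(url)

if resp.status_code == 402:
    offers = resp.json()["offers"]
    token_offer = next(o for o in offers if o["method"] == "license-token")

    # Negotiate a license token at the advertised endpoint.
    grant = requests.post(token_offer["endpoint"], json={
        "stage": "train",
        "distribution": "public",
    }).json()

    # Retry the request with the token attached.
    resp = requests.get(url, headers={"Authorization": f"Bearer {grant['token']}"})

print(resp.status_code)   # 200 for a licensed fetch, clean content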
Three steps. Under a minute. No code required.
WordPress Admin → Plugins → Add New → Search "Aposema" → Install → Activate.
You're already protected.
Pick layers 1–2 for standards compliance, 3–4 for active defense, or go all the way to layer 6 for full aposematic response.
Defaults to Layer 2. Escalate anytime.
When you're ready, connect to aposema.com to track earnings, view usage analytics, and collect payments.
Protection starts immediately. Dashboard whenever.
The plugin works immediately. AI companies see your license terms from the moment you activate. Register later when you want to track earnings and collect payments.
The fastest path from unprotected to enforcing your terms across the entire internet.
We believe in the open web.
We believe in bots. Crawlers index knowledge, surface answers, connect ideas across languages and borders. The web was built to be read by machines and humans alike.
But somewhere between PageRank and GPT, the deal changed.
A generation of founders decided that other people's work was training data. That journalism was input. That your blog post, your research, your reporting was raw material—to be ingested, repackaged, and monetized without a cent returning to the people who created it.
The bots aren't the problem. The humans who deployed them are.
For 25 years, the internet's guardrails—robots.txt, copyright law, good faith—assumed that access implied respect. That assumption is dead.
This plugin exists because creators shouldn't need a legal department to defend their work. Because the long tail of the web—the bloggers, the journalists, the researchers, the writers who built the internet worth scraping—deserves the same protection as News Corp.
HTTP 402 is not a wall. It's a door with a price on it.
We built the lock. You set the price. And we built it so you can start right now—no lawyers, no contracts, no six-month onboarding. Activate a plugin. Set your terms. The infrastructure is ready.
The only question is how much longer you're willing to give your work away for free.
You own your content. Your server, your rules. No law requires you to serve accurate content to unauthorized bots. Just as websites can serve different content based on geography, device type, or login status, publishers can serve different content to crawlers that ignore their licensing terms.
Layers 1–2 are passive: they declare your terms via standards (robots.txt, meta tags, HTTP headers). Layers 3–4 are active: they embed license metadata into the page source and introduce content integrity markers. Layers 5–6 are aggressive: they slow down unauthorized crawlers and serve modified content that degrades AI outputs. You pick your maximum layer; everything below activates automatically.
Layers are cumulative. You choose a maximum (e.g., Layer 4), and all layers 1 through 4 activate. This is by design—each layer builds on the foundation of the one below. You can change your maximum at any time from Settings → AI License.
No. Verified search engines (Google, Bing, DuckDuckGo) are always whitelisted via reverse DNS verification. They never see 402 responses or modified content. Your search rankings remain unaffected.
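Reverse DNS verification is the standard forward-confirmed check: resolve the requesting IP to a hostname, make sure it belongs to the engine's published domain, then resolve that hostname forward and confirm it maps back to the same IP. A minimal sketch (the suffix list is illustrative, not exhaustive):

# Forward-confirmed reverse DNS check for search engine crawlers.
# Suffixes shown are illustrative (Googlebot and Bingbot publish theirs).

import socket

VERIFIED_SUFFIXES = (".googlebot.com", ".google.com", ".search.msn.com")

def is_verified_search_engine(ip):
    try:
        hostname = socket.gethostbyaddr(ip)[0]               # reverse lookup
    except OSError:
        return False
    if not hostname.endswith(VERIFIED_SUFFIXES):
        return False
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]   # forward confirmation
    except OSError:
        return False
    return ip in forward_ips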
Legitimate AI companies register, get a license token, and receive clean structured content at every layer. First-class citizens—logged-in users, licensed bots, verified search engines—always bypass all defense layers.
Layers 1–2 add one meta tag and a content hash header—effectively zero impact. Layers 3–4 add small hidden elements to the page source. Layer 5 introduces deliberate delays only for unauthorized bots. Human visitors and licensed bots experience no performance difference at any layer.
You can—the plugin supports full 403 blocking. But licensing is more profitable than blocking. Our 6-layer model lets you monetize compliant AI companies while making unauthorized scraping progressively more expensive and unrewarding.
10,000 monthly visitors? That's ~$200/month (estimated, based on typical AI usage patterns)
50,000 monthly visitors? That's ~$1,000/month (estimated, based on typical AI usage patterns)
100,000+ monthly visitors? Do the math.
Free forever. No credit card. No catch.