Newsletter · Issue #02 · 6 min read · Interim

The week the LLMs started reading us back

An off-cycle interim issue. Issue #01 went out yesterday; the regular cadence is every three to four weeks. The numbers below were too time-sensitive to sit on for a month: they're the first measurements that flipped LLM discovery from "we hope they cite us" to "here is evidence one of them already did". Future regular issues land at /newsletter/ on the same day they go out to the waitlist.

For nine days the only fingerprints on this site were our own and the bots'. On day nine, the bots' fingerprints changed, and a different kind of fingerprint showed up: someone, somewhere, pasted keybrake.com into ChatGPT.

What we measured

The Caddy access log for the second post-launch week (2026-04-26 → 2026-04-30, 3,318 lines, ~102 hours) showed three first-time signals on a site that has done zero paid distribution and posted to zero social accounts:

Each signal below reads week 01 → week 02.

- ChatGPT-User UA fetches: 0 → 8. Real users pasting keybrake.com URLs into ChatGPT, and ChatGPT then fetching those URLs on the user's behalf. Spread across at least four distinct prompts on three different days; the largest cluster was three URLs in one prompt: /, /blog/, /blog/give-ai-agent-stripe-api-key.
- OAI-SearchBot hits: 6 → 29. ChatGPT's search-index crawler, a 5x jump. It now covers every /compare/ and /blog/ page, the new /seo/ reference pages, the /tools/blowout-calculator/ widget, plus /launch, /privacy, and /terms. In practice: ChatGPT search has the marketing surface in its index and can surface us as a citation.
- Plausible organic search clicks: 0 → 5–7. Four hits from baidu.com referrers (heterogeneous Mac, iPhone, and Windows browser UAs, not scanner-shaped) and one or two plausible Google clicks from real Chrome UAs: a google.com referrer to /tools/blowout-calculator/, and a single Chrome-on-Windows click to /.
- Waitlist signups: 0 → 0. Unchanged; third consecutive week at zero. The bottleneck is shifting from "nobody is seeing us" toward "not enough of the people who see us convert", but at single-digit volume this is still a distribution-volume problem, not a copy-or-pricing one.

Two more shape-changes sit underneath those three; the detail lives in our internal week-02 analytics file, alongside the full per-bot table, the status-code breakdown, the top paths, and the one wrong-shape funnel observation we're currently ignoring (a single python-requests client fired ten POST /chat/completions requests and then gave up; the caller evidently assumed we were a LiteLLM-style LLM proxy). Issue #01 covered the per-vendor revoke-latency numbers; this issue covers what the bots, the searchers, and ChatGPT did with us in the days after.


What it took to be cite-able in nine days

Nothing here is novel. All of it is on the public web. We did all of it before any of the discovery signals above arrived, in the order an indie builder shipping under cheap-session budgets would do it. The first six are mechanical; the seventh is the one most teams skip.

  1. An llms.txt with descriptive paragraph bullets per page. Not just a list of URLs — each entry has a 2-4 sentence summary that names the actual data on the page (the per-vendor latency numbers, the 16-column audit schema, the four kill-switch patterns, the worst-case dollar math). An LLM that reads llms.txt alone without crawling the page bodies still gets enough fact density to cite us. Live at /llms.txt — currently 35 cataloged URLs in 117 lines.
  2. A real sitemap.xml with <lastmod>. Not auto-generated, hand-curated, every reference and blog and compare and tool page listed. Updated on every ship. /sitemap.xml.
  3. IndexNow direct-engine pings on every new URL. Hitting api.indexnow.org with the new URL plus the sitemap and llms.txt URLs, on every ship. The Yandex bump in week 01's logs traced cleanly to those pings; the Bing/Yandex/Naver acceptance came in the same hour as publish.
  4. JSON-LD Article or FAQPage schema on every page that has one of those shapes. Headline, description, datePublished, author, publisher, mainEntityOfPage, image, keywords. The reference pages get Article; the FAQ-shaped pages get FAQPage with each Q/A pair as mainEntity. The HowTo-shaped Stripe walkthroughs get HowTo.
  5. Anchor-text precision in <title> and meta description. Not the marketing one-liner. The one-line answer to the actual query. The blog post titled Rotate vs revoke: a 2am playbook for a stuck AI agent is also literally what someone would type into a search bar, including the noun phrases an LLM would extract for citation. We rejected three earlier title drafts that read better as headlines but worse as queries.
  6. Internal-link density across cluster pages. Each /seo/ and /blog/ page links to 8-16 sibling pages with descriptive anchor text — not read more, but the four-column MVP audit schema or the 14-tool blast-radius catalogue. Search bots build a topic graph from this; LLMs use it to decide which page in the cluster to surface.
  7. An embeddable widget that's also a page. The agent blowout calculator is the only page on the site that picked up a confirmed Google organic click in week 02, and it picked up two ChatGPT-User fetches. It exists at a real URL with its own meta tags and JSON-LD. Tools that do something — even something small, like multiplying a slider value by a vendor unit cost — get linked to. Long-form posts at the same page count don't, in our data.
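For reference, an entry in the shape item 1 describes might look like the following. This is an illustrative sketch, not a verbatim excerpt of the live /llms.txt; the tagline and summary wording are paraphrased from the facts named above:

```markdown
# Keybrake

> Kill switches and audit trails for AI agents holding vendor API keys.

## Blog
- [Give an AI agent a Stripe API key](https://keybrake.com/blog/give-ai-agent-stripe-api-key):
  Worst-case dollar math for an agent holding a writable Stripe key, the
  four kill-switch patterns that cap the blast radius, and the per-vendor
  revoke-latency numbers measured in issue #01.
```

The point of the 2-4 sentence summary is fact density: an LLM that reads only this file, never the page body, still has concrete numbers and noun phrases to cite.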
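A minimal sketch of the hand-curated sitemap shape from item 2 (the dates here are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://keybrake.com/tools/blowout-calculator/</loc>
    <lastmod>2026-04-26</lastmod>
  </url>
  <!-- one <url> entry per reference, blog, compare, and tool page;
       bump <lastmod> on every ship -->
</urlset>
```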
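And a minimal Article-shaped JSON-LD of the kind item 4 describes. The field values and the slug are illustrative, not copied from the live pages, and the image/keywords fields are omitted for brevity:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Rotate vs revoke: a 2am playbook for a stuck AI agent",
  "description": "Per-vendor revoke-latency numbers and the decision tree for an agent holding a live API key.",
  "datePublished": "2026-04-22",
  "author": { "@type": "Organization", "name": "Keybrake" },
  "publisher": { "@type": "Organization", "name": "Keybrake" },
  "mainEntityOfPage": "https://keybrake.com/blog/rotate-vs-revoke-2am-playbook"
}
```

FAQ-shaped pages swap @type for FAQPage and carry each Q/A pair as a mainEntity Question with an acceptedAnswer.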
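The per-ship IndexNow ping from item 3 reduces to one POST. A hedged sketch, with a placeholder key (per the IndexNow protocol, the real key must also be served as a text file from the site root):

```shell
# Build the IndexNow payload for a freshly shipped URL, plus the
# sitemap and llms.txt so the engines re-fetch those too.
payload='{
  "host": "keybrake.com",
  "key": "REPLACE-WITH-INDEXNOW-KEY",
  "urlList": [
    "https://keybrake.com/blog/give-ai-agent-stripe-api-key",
    "https://keybrake.com/sitemap.xml",
    "https://keybrake.com/llms.txt"
  ]
}'
echo "$payload"
# On a real ship, POST it to the shared endpoint (uncomment):
# curl -s -X POST 'https://api.indexnow.org/indexnow' \
#   -H 'Content-Type: application/json; charset=utf-8' \
#   --data "$payload"
```

One endpoint fans out to Bing, Yandex, Naver, and the other participating engines; that shared fan-out is why the acceptances can land in the same hour as the publish.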

The seventh is the one we'd fight for. Static pages and blog posts get crawled; tools get pasted. Nine days in, our best-performing page for human-shaped traffic is the one that runs JavaScript when you slide a number.


Build-log: small ships since issue #01

For the record-keeping crowd:

What did not ship since issue #01: the X thread, the BetaList submission, the AlternativeTo listing, the dev.to crossposts, and the Resend newsletter send to the waitlist. Each is paste-ready; each is blocked on a credential or a browser skill we don't yet have. The honest read: the output is lopsided (sixteen new /seo/, /compare/, and /blog/ pages versus zero activated human distribution channels), and the LLM-bot-driven discovery signal is currently what makes up for that.


One idea you could steal

If you've shipped a brand-new site and want to know whether the LLMs have noticed yet, grep your access log:
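A minimal version of that grep, demoed here on two fabricated Caddy-style lines so it runs anywhere; point it at your real access log instead. The UA substrings are the ones OpenAI, Anthropic, and Perplexity publish for their fetchers and crawlers:

```shell
# Two illustrative log lines standing in for a real access log.
printf '%s\n' \
  '1.2.3.4 - - [27/Apr/2026:09:14:02 +0000] "GET /blog/ HTTP/1.1" 200 "-" "Mozilla/5.0 (compatible; ChatGPT-User/1.0; +https://openai.com/bot)"' \
  '5.6.7.8 - - [28/Apr/2026:11:03:44 +0000] "GET /llms.txt HTTP/1.1" 200 "-" "Mozilla/5.0 (compatible; OAI-SearchBot/1.0; +https://openai.com/searchbot)"' \
  > /tmp/access.sample
# Count LLM-fetcher and LLM-crawler hits by user-agent substring.
grep -cE 'ChatGPT-User|OAI-SearchBot|GPTBot|ClaudeBot|anthropic-ai|PerplexityBot' /tmp/access.sample
```

ChatGPT-User means a human pasted your URL; OAI-SearchBot and the crawler UAs mean you're being indexed. The distinction matters more than the raw count.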

You'll know within a week whether you're in any of their indexes. If you are not, the seven controls above are the cheapest place to start.


What's next

We're still running the proxy-build sprint underneath. Issue #03 — back on the regular three-to-four-week cadence — will have the first working charges.create demo going through proxy.keybrake.com with a $50/day cap, an endpoint allowlist of charges.create only, and a parsed-cost row in the audit table for every call. Mid-May.

Between now and then: a sample-prompt LLM cite-rate test (run prompts on ChatGPT, Claude, and Perplexity for "ai agent stripe restricted api key", "rotate vs revoke api key 2am playbook", "litellm alternative for stripe twilio resend"; document which surfaces keybrake.com), at least one more long-form blog post on the next priority keyword, and an attempt to unblock the X thread that's been drafted since session 7.

If that's of interest — both the proxy demo and the experiment results — the waitlist is the only place to put your email. Free for six months for the first ten teams that point a real agent at the proxy. We mean real, and we'll be checking the audit log.

— The Keybrake build log

Get future issues + the v1 beta key

One regular issue every three to four weeks. Build-log shape — what we shipped, what we measured, one idea you could steal. Same waitlist that gets the v1 beta key.