Not a developer? Book your AI Ops Assessment instead and we'll handle this for you.
The engine behind iMakeMVPs' AI Visibility Kit, for developers and agencies who want to run it themselves.
Schema, llms.txt, multi-engine citation tracking across ChatGPT, Claude, and Perplexity, and a single self-contained HTML report. All from one CLI run.
Self-contained HTML · Open offline · No login, no dashboard
60-second walkthrough
Demo coming soon
A 60-second walkthrough: install geo-kit, run an audit, open the HTML report.
Want to see it now? Open a real report →
What it is
A real audit, not a vibes check.
geo-kit is a Node CLI + TypeScript library that audits any website for AI search visibility. One geokit audit <url> run crawls the site, generates the structural files (schema, llms.txt) AI engines look for, asks ChatGPT, Claude, and Perplexity whether they cite the business for category-relevant queries, and writes a single self-contained HTML report.
- Real crawler: reads up to 20 of your live pages, not a screenshot
- Real citation API queries: ChatGPT, Claude, and Perplexity in parallel, not scraping
- Schema.org JSON-LD bundle: Organization, WebSite, Service, FAQPage, BreadcrumbList
- llms.txt and llms-full.txt files generated per llmstxt.org
- White-label brand profile: accent color, logo, CTA copy, contact email
- Filesystem-free library API: runAudit() returns an in-memory result
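For reference, llms.txt follows the llmstxt.org shape: an H1 title, a blockquote summary, then sections of annotated links. A minimal sketch (the business and URLs here are placeholders, not geo-kit output):

```markdown
# Example Co

> Example Co builds widgets for small manufacturers. The pages below cover services and common questions.

## Services

- [Widget Audits](https://example.com/audits): scope and pricing
- [FAQ](https://example.com/faq): common pre-sales questions
```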
How you use it
Three ways in.
Distribution is currently via private-repo invite. An npm release is planned for v2. Email sam@imakemvps.com for access.
For one-off audits & operators
Build from source in the monorepo, then run geokit directly.
git clone https://github.com/samershaker/imakemvps-plugins.git
cd imakemvps-plugins
npm install # auto-builds geo-kit
# Self audit
npx --workspace @imakemvps/geo-kit \
geokit audit https://example.com
# With competitors + citation tracking
echo "OPENAI_API_KEY=sk-..." > geo-kit/.env
npx --workspace @imakemvps/geo-kit \
geokit audit https://example.com \
  --compare competitor1.com,competitor2.com
For embedding in your stack
runAudit() returns an in-memory AuditResult. Render & sink wherever you want.
import {
  runAudit,
  renderReportHtml,
  buildRecommendations,
} from "@imakemvps/geo-kit";

const result = await runAudit({
  url: "https://example.com",
  maxPages: 20,
  delayMs: 500,
  noCitations: false,
  json: false,
  verbose: false,
  compare: [],
});

const html = renderReportHtml({
  result,
  recommendations: buildRecommendations(result.target),
  geokitVersion: "1.2.0",
});
For AI-assisted runs
Loads as a Claude Code plugin so you can audit a URL from any session with one slash command.
# From the repo root
claude --plugin-dir ./geo-kit
# Then in the session:
/geo-kit:audit-url example.com
The skill builds dist/ on first use and surfaces a clear note if no API keys are set.
Integrations
No CMS plugin to install.
geo-kit reads the source HTML of any live site, generates the deliverables, and your developer pastes them in.
- Works on the source HTML of any site: read-only crawl, no install on your stack
- Shopify storefronts: we read product pages, collections, and FAQs
- WordPress: any theme; we read the rendered HTML, no plugin to install
- Webflow, including custom domains
- Next.js, Astro, Remix: any SSG/SSR framework
- Plain static HTML, no CMS required
- Headless / JAMstack stacks: if it serves HTML, geo-kit can audit it
Two paths, one engine
Run it yourself, or have us run it for you.
Both paths produce the same deliverable. Pick the one that matches your role.
Run it yourself
- Your machine, your CI, your data
- Bring your own OpenAI / Anthropic / Perplexity keys
- White-label every report you ship to clients
- Source-available, read every line
- No recurring cost beyond AI API spend
Best for: developers, agencies, in-house teams, anyone reselling AI visibility work.
We run it for you
- Fill an intake form on the marketing site
- We run the audit, you get a PDF/HTML report in your inbox
- Two-business-day turnaround
- No CLI, no API keys, no setup
- Free for the first run
Best for: business owners, marketers, anyone who just wants the report.
Go to the Free Audit →
How it stacks up
vs DIY prompts. vs SaaS dashboards.
Honest framing. Trade-offs, not slogans.
DIY (asking ChatGPT yourself)
- Free
- No tools to install
- Works for one-off curiosity checks
- Doesn't generate schema.org JSON-LD
- Doesn't generate llms.txt or llms-full.txt
- No side-by-side comparison across engines
- No monthly cadence. Every run is from scratch
- No engine attribution (which engine cited you, and where)
SaaS dashboard ($150–$300/mo)
- Real-time graphs and historical trendlines
- No setup. Log in and go
- Useful if you only need a status board
- Monthly subscription, billed indefinitely
- Closed-source engine: you can't audit how the numbers are produced
- Can't run offline or inside your own CI pipeline
- White-label and API access usually gated behind an enterprise tier
- Your audit data lives on their servers
geo-kit (self-run)
- You own the engine. Runs on your machine or your CI
- Source-available: read every line of how the audit is produced
- White-label brand profile included by default
- In-memory library API: no filesystem writes unless you ask for them
- Multi-engine attribution (ChatGPT, Claude, Perplexity) in one report
- No recurring subscription. You pay only for the AI API calls you make
- Requires a developer to operate (CLI, env vars, API keys)
- No hosted dashboard. The deliverable is a self-contained HTML report
- You bring your own OpenAI / Anthropic / Perplexity API keys
Want to run this for your clients?
Free + book a call. We'll talk through what you're building, send the private-repo invite, and walk you through your first audit. No pricing tiers, no demo funnel.
FAQ
Developer questions
Is it open source?
Source-available. The repo is private through v1.x and a public release is planned alongside v2.0. If you want access today, email sam@imakemvps.com and we'll invite you to the private GitHub repo. The license is MIT, so you keep what you build with it.
Can I white-label it?
Yes. Pass --brand <path> on the CLI (or { brand } on renderReportHtml in the library) with a BrandProfile JSON. You can swap the eyebrow text, mid-document CTA strip, closing card, footer credit, contact email, accent color, and header logo from one file. No source edit. The neutral palette and status pills are intentionally locked so the comparison matrix keeps its meaning across brands.
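As a rough sketch, a BrandProfile JSON covering the fields above might look like this. The exact key names are illustrative; check the repo's type definitions for the real shape:

```json
{
  "accentColor": "#0a7cff",
  "logoUrl": "https://agency.example/logo.svg",
  "eyebrowText": "Prepared by Acme Agency",
  "ctaCopy": "Book your fix-it sprint",
  "contactEmail": "hello@agency.example",
  "footerCredit": "Acme Agency"
}
```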
What's the API cost per audit?
About $0.10 per audit if only OPENAI_API_KEY is set, or about $0.20 if all three keys (OpenAI, Anthropic, Perplexity) are set: five keywords across each engine. In compare mode, citation queries are reused across the target and competitors, so adding competitors is free at the API layer. See the README for the per-engine cost table.
How do I run this in CI?
The CLI is stateless and the library is filesystem-free in core. Set the API keys you have as env vars, call runAudit(...) from your CI step, and fail the build if the schema score regresses. A turnkey GitHub Action is on the v2.0 roadmap.
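A minimal sketch of that CI gate, assuming the audit result exposes a numeric schema score (the field name here is hypothetical; check the AuditResult types for the real one):

```typescript
// Hypothetical result shape: adjust the field name to the real AuditResult types.
interface AuditSummary {
  schemaScore: number;
}

// Fail the build when the schema score drops below a committed baseline.
function passesSchemaGate(summary: AuditSummary, baseline: number): boolean {
  return summary.schemaScore >= baseline;
}

// In a CI step (sketch; runAudit comes from @imakemvps/geo-kit):
// const result = await runAudit({ url: process.env.SITE_URL!, maxPages: 20, ... });
// if (!passesSchemaGate(result.target, 80)) process.exit(1);
```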
Does it write to my filesystem?
Not in the library. runAudit() returns an in-memory AuditResult: schema, llms.txt strings, citation hits, and the rendered HTML body. The CLI is a thin sink that calls the same library and writes five files under ./geokit-out/<host>/. If you need a different sink (S3, an HTTP response, a database row), use the library directly.
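A custom sink is just a function that receives the rendered HTML string. The helpers below are illustrative, not part of the geo-kit API:

```typescript
import { writeFile } from "node:fs/promises";

// A sink is any consumer of the rendered HTML string.
type Sink = (html: string) => Promise<void>;

// Write the report to a path of your choosing.
const fileSink = (path: string): Sink => (html) => writeFile(path, html);

// PUT the report to an HTTP endpoint, e.g. a pre-signed S3 URL.
const httpSink = (url: string): Sink => async (html) => {
  const res = await fetch(url, {
    method: "PUT",
    headers: { "Content-Type": "text/html" },
    body: html,
  });
  if (!res.ok) throw new Error(`sink failed: ${res.status}`);
};

// const html = renderReportHtml({ result, recommendations, geokitVersion });
// await fileSink("./report.html")(html);
```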
Why no pricing tiers on this page?
Free + book a call for v1.x. The plugin distribution is private-repo invite (free), and we'll talk through what you're building before quoting any custom work. The managed AI Visibility Kit (we run the audits, you receive the reports) has its own pricing on the home page.