No login required. We fetch the page server-side with a polite user agent, run 25 SEO checks, and return a score. Nothing is stored beyond a 10-minute in-memory cache. Rate limit: 10 audits per hour per IP.

What this SEO audit tool checks

Paste any URL into the box above and this tool fetches the page, parses its HTML, and runs 25 distinct on-page SEO checks against it. You get a score out of 100, an A–F grade, a prioritised to-do list, and a ready-to-paste prompt for any AI assistant, so you don't even need to write the fixes yourself.

Meta tags & headings

  • Title tag length (50–60 characters is the sweet spot)
  • Meta description presence and length (140–160 characters recommended)
  • Canonical URL present and self-referencing
  • Robots meta tag checked for accidental noindex
  • Viewport meta tag (mobile rendering)
  • <html lang> attribute
  • Exactly one H1 per page; H1 length 10–70 chars
  • Heading hierarchy — no skipping H1 → H3
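The title and heading checks above can be sketched with the standard library alone. This is a simplified illustration, not the tool's actual implementation; the thresholds mirror the bullets above:

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Collects <title> text and headings in document order."""
    def __init__(self):
        super().__init__()
        self.headings = []      # e.g. [["h1", "Welcome"], ["h3", "..."]]
        self.title = ""
        self._current = None    # tag we are currently inside

    def handle_starttag(self, tag, attrs):
        if tag == "title" or tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self._current = tag
            if tag != "title":
                self.headings.append([tag, ""])

    def handle_data(self, data):
        if self._current == "title":
            self.title += data
        elif self._current:
            self.headings[-1][1] += data

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

def audit_headings(html: str) -> list[str]:
    p = HeadingAudit()
    p.feed(html)
    issues = []
    if not 50 <= len(p.title.strip()) <= 60:
        issues.append("title length outside 50-60 chars")
    h1s = [text for tag, text in p.headings if tag == "h1"]
    if len(h1s) != 1:
        issues.append(f"expected exactly one H1, found {len(h1s)}")
    elif not 10 <= len(h1s[0].strip()) <= 70:
        issues.append("H1 length outside 10-70 chars")
    # Hierarchy check: a heading may only go one level deeper at a time.
    levels = [int(tag[1]) for tag, _ in p.headings]
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev + 1:
            issues.append(f"heading skip: H{prev} -> H{cur}")
    return issues
```

A page with a good title and H1 but an H3 directly under an H1 would come back with the single "heading skip" issue.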

Social sharing & AI discovery

  • Open Graph tags (og:title, og:description, og:image, og:url)
  • Twitter Card tags
  • Schema.org JSON-LD structured data — each block is parsed, expanded and validated per type (Article, Product, FAQPage, HowTo, Organization, Event, Recipe, BreadcrumbList, VideoObject, Person, WebSite, WebPage). Missing required fields (Article without datePublished, Product without offers, FAQPage without Question entities) are flagged as warnings.
  • Favicon link
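The JSON-LD check works roughly like this sketch: pull every `application/ld+json` script block, parse it, and compare against a required-field map per type. The map here is an illustrative subset, not the tool's full rule set:

```python
import json
import re

# Illustrative subset of required fields per @type. Real validators
# (e.g. Google's Rich Results Test) check many more fields.
REQUIRED = {
    "Article": ["headline", "datePublished"],
    "Product": ["name", "offers"],
    "FAQPage": ["mainEntity"],
}

def extract_jsonld(html: str) -> list[dict]:
    """Pull every <script type="application/ld+json"> block."""
    pattern = re.compile(
        r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        re.DOTALL | re.IGNORECASE,
    )
    blocks = []
    for raw in pattern.findall(html):
        try:
            blocks.append(json.loads(raw))
        except json.JSONDecodeError:
            pass  # malformed JSON-LD would itself be an audit warning
    return blocks

def missing_fields(block: dict) -> list[str]:
    """Which required fields for this block's @type are absent?"""
    t = block.get("@type", "")
    return [f for f in REQUIRED.get(t, []) if f not in block]
```

A `Product` block that declares a `name` but no `offers` would be flagged with the missing `offers` field, which is exactly the half-filled-schema case discussed below.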

Why structured data matters more than ever

Schema.org used to be a "nice to have" for prettier Google search cards. In 2026 it does two big jobs:

  1. Traditional rich results. FAQ accordions, recipe cards, star ratings, event dates, product prices inside Google search results — every one requires the right schema block with the required fields filled in.
  2. AI citation and recommendation. Google's AI Overviews, Perplexity, ChatGPT Search, Bing Copilot, voice assistants like Alexa and Google Assistant — they all preferentially cite and quote pages with clean schema. The reason is practical: when an AI has to answer "what's the best laptop for video editing?" it's far easier to build a confident answer from pages where Product, brand, offers and aggregateRating are explicit fields than from paragraphs where those facts have to be inferred.

A half-filled schema block is almost worse than none: it costs the effort to add, yet is silently rejected by both Google (no rich result) and AI systems (less likely to cite). This tool flags the specific missing fields per type so you can fix them quickly.

Content & readability

  • Word count (thin content rarely ranks well)
  • Flesch Reading Ease score
  • Top keywords (with stopwords filtered)
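The Flesch Reading Ease score is a standard formula: 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word). Syllable counting in the sketch below is a crude vowel-group heuristic, which is a common approximation rather than the tool's exact method:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count contiguous vowel groups, floor of 1."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Higher is easier to read; 60-70 is roughly plain English."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))
```

Short sentences built from one-syllable words score very high; long, polysyllabic academic prose can score below 30.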

Images

  • Alt text present on every image
  • Image file sizes flagged over 200 KB
  • Modern format detection (WebP/AVIF vs JPEG/PNG)
  • Natural vs displayed size — flags images served at more than 2× their display dimensions (a top Lighthouse / Core Web Vitals finding). Uses partial downloads to read image headers cheaply.
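Reading dimensions from a partial download is cheap because most image formats put them near the start of the file. The sketch below handles only PNG (where the IHDR chunk always follows the 8-byte signature, so `Range: bytes=0-23` suffices); JPEG, WebP and AVIF need different parsing, and this is an illustration rather than the tool's implementation:

```python
import struct

def png_dimensions(header: bytes) -> "tuple[int, int] | None":
    """Read width/height from the first 24 bytes of a PNG.
    Layout: 8-byte signature, 4-byte chunk length, 4-byte b"IHDR",
    then width and height as big-endian unsigned 32-bit ints."""
    if len(header) >= 24 and header[:8] == b"\x89PNG\r\n\x1a\n":
        width, height = struct.unpack(">II", header[16:24])
        return width, height
    return None  # not a PNG, or not enough bytes fetched

def oversized(natural_w: int, displayed_w: int) -> bool:
    """Flag images served at more than 2x their display width."""
    return displayed_w > 0 and natural_w > 2 * displayed_w
```

So a 1200 px-wide PNG rendered in a 400 px slot gets flagged, while the same image in an 800 px slot passes.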

Links

  • Internal vs external link split
  • Nofollow detection
  • target="_blank" missing rel="noopener" (security)
  • Total link count (over 150 is a warning sign)
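The link checks above reduce to classifying each `<a>` tag by host and attributes. A minimal sketch with the standard library, assuming the audited page's host is known:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAudit(HTMLParser):
    """Classifies <a> tags; `site_host` is the audited page's host."""
    def __init__(self, site_host: str):
        super().__init__()
        self.site_host = site_host
        self.internal = self.external = self.nofollow = 0
        self.unsafe_blank = []  # target=_blank without rel=noopener

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        href = a.get("href", "")
        host = urlparse(href).netloc
        # Relative URLs have no netloc, so they count as internal.
        if host and host != self.site_host:
            self.external += 1
        else:
            self.internal += 1
        rel = (a.get("rel") or "").lower()
        if "nofollow" in rel:
            self.nofollow += 1
        if a.get("target") == "_blank" and "noopener" not in rel:
            self.unsafe_blank.append(href)
```

The `unsafe_blank` list drives the security warning: a `target="_blank"` link without `rel="noopener"` lets the opened page access `window.opener` in older browsers.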

Technical & performance

  • HTTPS + security headers (HSTS, X-Content-Type-Options, Referrer-Policy, CSP)
  • HTTP → HTTPS redirect — checks the http:// variant explicitly returns a 301 to https, not just that the site can serve https
  • www / non-www canonicalisation — the alternate host (www.example.com when you're using example.com, or vice versa) should 301 to the canonical version. Both variants serving the same content is one of the most common and easily-fixable SEO leaks.
  • Redirect chains — each non-canonical variant (http://, http://www., https://www. or their non-www opposites) should redirect to the final canonical URL in a single hop. Chains like http://www.x → http://x → https://x cost extra round-trips and leak link equity at each step. The tool traces up to 5 redirects per variant and shows the full chain.
  • Server response time
  • Total HTML payload size
  • robots.txt present and referencing a sitemap
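The redirect-chain tracing works hop by hop with automatic redirect following disabled. In this sketch the HTTP call is injected as a `fetch` callable so the logic is visible on its own; in real code that would be a request with redirects turned off (e.g. `requests.get(url, allow_redirects=False)`):

```python
def trace_redirects(url: str, fetch, max_hops: int = 5) -> list[str]:
    """Follow redirects one hop at a time, up to `max_hops`.
    `fetch(url)` must return (status_code, location_header_or_None)."""
    chain = [url]
    for _ in range(max_hops):
        status, location = fetch(chain[-1])
        if status not in (301, 302, 307, 308) or not location:
            break
        chain.append(location)
    return chain

def variants(host: str) -> list[str]:
    """The four host variants that should all collapse to one canonical URL."""
    bare = host.removeprefix("www.")
    return [f"{scheme}://{h}/" for scheme in ("http", "https")
            for h in (bare, f"www.{bare}")]
```

A two-entry chain (variant → canonical) is the ideal; three or more entries means at least one wasted hop, which is exactly what the chain bullet above flags.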

About the "Fix with AI" prompt

Rather than giving generic advice, the tool builds a self-contained prompt from your specific audit: current title, current H1, detected topic keywords, declared language, and a prioritised list of the exact issues found with reference HTML patterns.
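Assembling such a prompt is just string-building over the audit result. The field names in this sketch are illustrative, not the tool's real schema:

```python
def build_fix_prompt(audit: dict) -> str:
    """Assemble a self-contained AI prompt from audit fields.
    The `audit` keys used here are hypothetical examples."""
    lines = [
        "You are an SEO assistant. Improve this page's on-page SEO.",
        f"Page language: {audit['lang']}",
        f"Current title: {audit['title']}",
        f"Current H1: {audit['h1']}",
        f"Topic keywords: {', '.join(audit['keywords'])}",
        "Base every replacement on the title, H1 and keywords above;",
        "do not invent new topics. Issues to fix, highest priority first:",
    ]
    lines += [f"{i}. {issue}" for i, issue in enumerate(audit["issues"], 1)]
    return "\n".join(lines)
```

Because the page's own title, H1 and keywords are embedded in the prompt, the AI's replacements stay anchored to the page's actual topic.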

Paste that prompt into Claude, ChatGPT, Gemini or any AI you trust and it will produce the replacement HTML snippets tailored to your page. No hallucinated content — the prompt instructs the AI to base any replacement on your existing title, H1 and top keywords so the output matches your page's actual topic.

Your data isn't sent anywhere by this site — the prompt is generated in memory and you decide which AI sees it. Results are cached for 10 minutes per URL then automatically discarded.

Frequently asked questions

Is this really free?
Yes. No account, no signup, no watermarks. Audits are rate-limited to 10 per hour per IP to keep the server polite to target sites.
Does this check page speed like Google PageSpeed Insights or GTmetrix?
No — it measures server response time and total HTML size, but doesn't run a full rendering engine. For Core Web Vitals numbers use PageSpeed Insights. This tool focuses on on-page SEO: metas, headings, structure, content quality.
How does the tool identify images on the page?
It reads all <img> tags in the rendered HTML. Images loaded lazily via JavaScript after page load may not be detected. The tool then does a HEAD request for each image URL to get its file size.
Will my audit hit the target site multiple times?
One GET for the page itself, one HEAD per image (max 50 images, 5 in parallel), plus one GET for /robots.txt. All with a transparent User-Agent: chedgzoy-seo-audit/1.0 (+https://chedgzoy.com/seo-check). Repeat audits of the same URL within 10 minutes are served from cache.
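The "max 50 images, 5 in parallel" policy is a standard semaphore pattern. In this sketch the HEAD call is injected as an async callable so the concurrency logic stands alone; the real tool would use an HTTP client with its transparent User-Agent set:

```python
import asyncio

async def head_all(urls, head, limit: int = 5, cap: int = 50) -> dict:
    """Issue one HEAD per image URL: at most `limit` in flight,
    at most `cap` URLs total. `head(url)` is an async callable
    returning the image size in bytes (hypothetical signature)."""
    sem = asyncio.Semaphore(limit)

    async def one(url):
        async with sem:          # blocks while `limit` requests are in flight
            return url, await head(url)

    tasks = [one(u) for u in urls[:cap]]
    return dict(await asyncio.gather(*tasks))
```

The semaphore keeps the tool polite to the target site: even a page with dozens of images never sees more than five concurrent requests.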
What if the target site is behind Cloudflare or a paywall?
Some bot-protection systems will block server-side fetches. If that happens the tool reports the error rather than pretending to succeed.
Does this work for Wix / Squarespace / React / Angular sites?

Short answer: fully server-rendered sites work well, purely JavaScript-rendered ones don't.

Works fine: WordPress, Shopify, modern Squarespace, Webflow, Drupal, Joomla, Ghost, Next.js (SSR/SSG), Nuxt, Astro, Gatsby, Hugo, Jekyll, any PHP / ASP.NET / Rails / Django site, and most corporate or CMS-driven sites.

Limited: Wix is the most common problem — Wix renders nearly all content via JavaScript (static.parastorage.com scripts), so the raw HTML this tool fetches is nearly empty. Similar story for old React / Angular / Vue single-page apps without SSR. The tool detects these cases and shows a banner explaining the limitation so you're not confused by sparse results.

Why not just run JavaScript? Running a headless browser (Playwright, Puppeteer) per audit would slow every request down by 5–10× and require significantly more server resources. Most sites don't need it, so we optimise for the common case. If you control the site and want better SEO / AI discoverability, switching to SSR or SSG solves the problem once for every crawler, not just our tool.

Can I audit my competitor's site?
Yes. The tool only reads publicly available HTML, same as Googlebot does. Nothing you couldn't see with "View Source" in your browser.
Do you store the audit results?
Only in memory, for 10 minutes, to serve repeat requests quickly. After that the result is discarded. No audit history is kept. See the privacy policy.
Can I use the AI prompt with Claude or ChatGPT even if I'm not paying for the pro tier?
Yes — the prompt works on free tiers. The prompt itself is maybe 500–1500 words depending on how many issues your page has, which fits comfortably in every current free plan.