About · Last reviewed May 2026
About Check.AI
Check.AI is a comparison database for AI platforms and models — plans, API pricing, context windows, capabilities, and benchmarks. The site is built and maintained by a single developer; this page explains who maintains it, where the data comes from, and how it stays accurate.
Maintainer
Check.AI is maintained by @zayuerweb-dev on GitHub. The full source for this site, including all comparison data, page templates, and the daily sync workflow, is public:
- Repository: github.com/zayuerweb-dev/check-ai
- Issues / corrections: open a GitHub issue or PR on the repo
- Hosted on: Cloudflare Pages
If you find a wrong number, an outdated price, or a missing model — file an issue and it will be fixed in the next sync.
Data sources
Three categories of data sit behind every page:
- Pricing & specs. Pulled daily from the public models.dev dataset (a community-maintained snapshot of every major provider's documented model list), then cross-checked against the providers' own pricing pages (OpenAI, Anthropic, Google, xAI, DeepSeek, Alibaba, Mistral).
- Benchmark scores. Sourced from public leaderboards: SWE-bench Verified, LMArena, HumanEval, LiveCodeBench, MMLU-Pro, GPQA, AIME. Numbers in compare and topic pages are rounded to whole percentages because exact figures shift between evaluation runs.
- Editorial verdicts. The "best for", "verdict", and "30-second TL;DR" sections are written by the maintainer based on hands-on usage of each model in production code, agent loops, and content workflows. They are opinions, not facts. They are dated so you can judge how stale they may be.
Update cadence
The data layer and the editorial layer move at different speeds:
- Daily: a GitHub Action pulls models.dev and opens a PR if anything changed. Pricing, new model releases, and capability flags are updated within 24–48 hours of upstream.
- Weekly: editorial verdicts and benchmark scores are reviewed against fresh data and rewritten where they no longer hold.
- Per release: when a major model launches (Claude 5, GPT-5.5, etc.) the affected topic and compare pages are rewritten the same week.
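The "opens a PR if anything changed" step above boils down to diffing two pricing snapshots. As a rough sketch of that logic (the `diff_pricing` helper is hypothetical for illustration; the real workflow lives in the repo):

```python
def diff_pricing(old: dict, new: dict) -> dict:
    """Compare two pricing snapshots keyed by model name.

    Returns only the models that were added, removed, or repriced;
    an empty result means there is nothing to open a PR for.
    """
    changes = {}
    for model, price in new.items():
        if model not in old:
            changes[model] = {"status": "added", "price": price}
        elif old[model] != price:
            changes[model] = {"status": "changed",
                              "old": old[model], "new": price}
    for model in old:
        if model not in new:
            changes[model] = {"status": "removed"}
    return changes


# Example: one model repriced, one added, one gone upstream
changes = diff_pricing(
    {"model-a": 1.25, "model-b": 1.10},
    {"model-a": 1.00, "model-c": 3.00},
)
```

Here `changes` would flag `model-a` as changed, `model-c` as added, and `model-b` as removed; a non-empty result is what triggers the automated PR.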
Editorial standards
- Dated, not "current". Every article and topic page carries a published date and a "last updated" date. AI moves too fast to claim "current" without a date.
- Sourced. Pricing and benchmark numbers are linked back to the source page or leaderboard. If a number is an estimate or a community report, it is labelled.
- Neutral. The site is not affiliated with any model vendor. Editorial verdicts pick winners on specific axes; we do not designate an overall "best AI" because the answer depends on your workload.
- Falsifiable. If you can show a concrete benchmark or pricing fact is wrong, it gets corrected.
Conflict of interest disclosure
- Check.AI has no paid sponsorships and runs no display ads.
- Some outbound links (e.g. to OpenRouter) carry a `utm_source` parameter for traffic attribution. OpenRouter does not currently operate a public referral program, so these links pay no commission; they are recommendations on merit.
- The maintainer has no equity, employment, or contracting relationship with any model vendor (OpenAI, Anthropic, Google, xAI, DeepSeek, Alibaba, Mistral) at the time of writing.
- If a real affiliate relationship is added in the future (e.g. Together AI, Cursor referrals), it will be disclosed at the link and listed here.
What Check.AI is not
- Not a replacement for running your own evaluation against your own data.
- Not an SLA or a real-time API status page — for vendor outages, check the provider's official status page.
- Not exhaustive — only the eight frontier models with serious global mindshare in 2026 are tracked deeply. Smaller / regional models may be added based on reader requests.
Contact
Best channel for corrections, model requests, or factual disputes: open a GitHub issue. For everything else: contact page.