Anti-Hallucination Agent
Deep anti-hallucination verification aligned with Anthropic and OpenAI best practices. Cross-reference validation, code grounding checks, capability claim verification, RAG grounding, and refactor integrity analysis. Three-layer enforcement: baseline governance, code generation guards, and on-demand deep audit.
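For a rough sense of what the RAG grounding check involves, here is a minimal TypeScript sketch that flags answer sentences with no lexical support in the retrieved chunks. The names, the overlap heuristic, and the threshold are illustrative assumptions, not the skill's actual implementation.

```typescript
// Minimal sketch of a grounding check: every sentence in a generated answer
// should be traceable to at least one retrieved source chunk. The overlap
// heuristic and all names here are illustrative, not the skill's internals.

interface SourceChunk {
  id: string;
  text: string;
}

interface GroundingIssue {
  sentence: string;
  reason: string;
}

function tokenize(text: string): Set<string> {
  return new Set(
    text
      .toLowerCase()
      .split(/[^a-z0-9]+/)
      .filter((t) => t.length > 3) // ignore short, stop-word-like tokens
  );
}

// Flags sentences whose token overlap with every chunk falls below a threshold.
function checkGrounding(
  answer: string,
  chunks: SourceChunk[],
  minOverlap = 0.3
): GroundingIssue[] {
  const issues: GroundingIssue[] = [];
  const sentences = answer.split(/(?<=[.!?])\s+/).filter((s) => s.trim());

  for (const sentence of sentences) {
    const sentTokens = tokenize(sentence);
    if (sentTokens.size === 0) continue;

    const supported = chunks.some((chunk) => {
      const chunkTokens = tokenize(chunk.text);
      let hits = 0;
      for (const t of sentTokens) if (chunkTokens.has(t)) hits++;
      return hits / sentTokens.size >= minOverlap;
    });

    if (!supported) {
      issues.push({ sentence, reason: "no retrieved chunk supports this claim" });
    }
  }
  return issues;
}
```

A production check would use embedding similarity or an LLM judge rather than token overlap; the sketch only shows the shape of the claim-to-source pass.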
How to activate
Say any of these phrases in your IDE to trigger this skill:
“hallucination scan”, “hallucination audit”, “verify grounding”, “check for hallucinations”, “anti-hallucination”, “grounding check”, “fact check skills”, “verify claims”, “content integrity check”

Run via CLI
enterprise-skills run anti-hallucination-agent

Aliases:
hallucination-scan, hallucination-audit, grounding-check, verify-grounding, content-integrity, fact-check

Relationships
Coordinates with:
RAG Audit
Audits RAG systems for compliance with retrieval governance. Checks retrieval order, prompt hardening, injection defense, context firewall, metadata provenance, hybrid retrieval, and code generation grounding.
Refactor Agent
Takes a refactoring goal, analyzes impact, plans changes, executes safely, verifies no regressions. Includes post-refactor anti-hallucination verification for import paths, barrel exports, test references, and pattern consistency (a minimal sketch of this kind of import check appears at the end of this list).
Full Stack Audit
Runs ALL enterprise audit skills in sequence — security, code review, API contracts, dependency audit, pre-deploy, testing verification.
Pre-Deploy Check
Pre-deployment verification — types, lint, tests, env vars, migrations, build, git state.
Code Review
Enterprise code review on recent changes or GitHub PRs. Checks standards, anti-patterns, error handling, type safety.
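As a rough illustration of the import-path verification mentioned under Refactor Agent above, here is a minimal TypeScript sketch that checks whether relative import specifiers in a file still resolve to a module on disk after a refactor. The regex, extension list, and resolution rules are simplifying assumptions, not the actual check the skill runs.

```typescript
// Minimal sketch of a post-refactor import check: verify that relative
// `from "..."` specifiers in a file still resolve to an existing module.
// Paths, extensions, and resolution rules are simplifying assumptions.
import { existsSync, readFileSync } from "node:fs";
import { dirname, resolve } from "node:path";

const IMPORT_RE = /from\s+["'](\.{1,2}\/[^"']+)["']/g;
const EXTENSIONS = ["", ".ts", ".tsx", "/index.ts", "/index.tsx"];

function resolves(fromFile: string, specifier: string): boolean {
  const base = resolve(dirname(fromFile), specifier);
  return EXTENSIONS.some((ext) => existsSync(base + ext));
}

// Returns the relative import specifiers in `file` that no longer point
// at an existing module (e.g. a path missed during a rename).
function brokenImports(file: string): string[] {
  const source = readFileSync(file, "utf8");
  const broken: string[] = [];
  for (const match of source.matchAll(IMPORT_RE)) {
    const specifier = match[1];
    if (!resolves(file, specifier)) broken.push(specifier);
  }
  return broken;
}

// Example: brokenImports("src/app.ts") might return ["./services/legacy-client"]
// if that module was moved during the refactor but the import was not updated.
```

The same pass can be pointed at barrel files (index.ts re-exports) and test files to catch stale references the type checker alone may miss in loosely typed code.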