anotherCodder's comments

Hey HN - I built agnix because I kept losing time to the same class of bug: AI tool configs that are almost right but silently wrong.

The trigger: I had a Claude Code skill named `Review-Code`. It never auto-triggered. No error message. I spent 20 minutes debugging prompt engineering before realizing the Agent Skills spec requires kebab-case names (`review-code`). The spec is clear about this, but Claude Code doesn't validate it - it just silently ignores the skill.
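For context, a skill's frontmatter looks roughly like this (a sketch based on the Agent Skills docs; the description value is illustrative):

```yaml
---
# SKILL.md frontmatter (illustrative)
name: Review-Code   # silently ignored - the spec requires kebab-case
description: Reviews code changes for common issues
---
```

Renaming it to `name: review-code` is the whole fix - exactly the kind of one-character mistake that's invisible without a validator.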

This happens across the entire AI tool ecosystem. Cursor has .mdc rule files with YAML frontmatter - invalid YAML means your rule metadata is silently dropped. MCP server configs can use deprecated transport types with no warning. Claude Code hooks support `type: "prompt"` only on Stop and SubagentStop events - use it on PreToolUse and nothing happens. GitHub Copilot instruction files need valid glob patterns in their `applyTo` field - a malformed glob matches nothing.
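To illustrate the Copilot case (the file path and pattern here are made up for the example), a path-scoped instructions file uses an `applyTo` glob in its frontmatter:

```yaml
---
# .github/instructions/tests.instructions.md (illustrative)
applyTo: "[*.test.ts"   # unclosed bracket - matches nothing, no warning
---
```

The intended pattern would be something like `applyTo: "**/*.test.ts"`; with the broken glob, the instructions simply never apply.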

None of these tools validate their own configuration files. It's the same gap that ESLint filled for JavaScript or clippy fills for Rust: catching things that are syntactically valid but semantically wrong.

agnix currently has 156 validation rules across 28 categories, covering 11 tools (Claude Code, Cursor, GitHub Copilot, Codex CLI, Cline, MCP, OpenCode, Gemini CLI, and more). Every rule is traced to an authoritative source - official specs, vendor documentation, or research papers.

Technical choices:

- Written in Rust; parallel validation via rayon
- LSP server for real-time IDE diagnostics (VS Code, JetBrains, Neovim, Zed)
- 57 auto-fixable rules (`agnix --fix .`)
- SARIF 2.1.0 output for CI/security workflows
- GitHub Action for CI integration
- Deterministic benchmarks via iai-callgrind in CI (blocks merge on perf regression)
- Performance: single file in <10ms; 100 files in ~200ms

Try it: `npx agnix .` (zero install, zero config)

I'd like feedback on two things: (1) whether the rule coverage feels right - are there config mistakes I'm not catching? and (2) whether anyone has thoughts on the cross-platform validation approach (detecting conflicts between CLAUDE.md, AGENTS.md, and .cursorrules in the same project).

I also wrote a longer post about the problem space: https://dev.to/avifenesh/your-ai-agent-configs-are-probably-...

MIT/Apache-2.0.


For users of Node.js, Java, Python, and very soon Go - Valkey-Glide: https://github.com/valkey-io/valkey-glide It will stay out of Redis's hands, I can promise that. And if you use another key-value DB and want to contribute compatibility with it, we'd be happy - come talk to us.


I want to understand what devs will appreciate and what will lead decision makers to choose our client. What makes you choose one client library over another, and what would make you consider refactoring to a new one? Valkey-Glide is a Valkey/Redis-OSS client library: a multilingual wrapper over a Rust core (available in Python and Java; Node.js hits 1.0 in two weeks, Go is under development, with an active roadmap for C# and PHP). It's OSS under the Valkey org, part of the Linux Foundation, backed by AWS and GCP.


