Move sniffbench to new Evaluation section

As requested by the maintainer, created a new Evaluation section
for tools that benchmark and evaluate AI coding agents, distinct
from Testing tools, which help generate tests.
commit 4d8fd07dfd
parent cb7f6d1933
Author: jharris1679
Date:   2025-12-21 16:42:32 -05:00
@@ -19,6 +19,7 @@ This is a curated list of AI-powered developer tools. These tools leverage AI to
 - [OpenAI plugins](#openai-plugins)
 - [Search](#search)
 - [Testing](#testing)
+- [Evaluation](#evaluation)
 
 ## IDEs
 - [Google Antigravity](https://antigravity.google/) — An agent-first IDE that orchestrates autonomous AI agents to plan, execute, and verify complex coding tasks with deep browser integration.
@@ -253,4 +254,7 @@ This is a curated list of AI-powered developer tools. These tools leverage AI to
 - [Test Gru](https://gru.ai/home#test-gru) — Provides enterprise-level unit test automation services.
 - [AgentsKB](https://agentskb.com) - Expert knowledge layer for AI assistants. Your AI searches, we research. That's the difference.
+
+## Evaluation
+- [sniffbench](https://github.com/AnswerLayer/sniffbench) — Benchmark suite for evaluating coding agents. Compare configurations, track metrics, and A/B test with real issues from your repos.