About ACB

What AgenticCodingBench Is

AgenticCodingBench (ACB) is an open-source inference performance benchmark purpose-built for agentic coding workloads - the kind of LLM request patterns that Claude Code, Cursor, Windsurf, and Copilot generate in practice.

It measures how fast your serving stack runs as multi-turn contexts grow (6K to 400K tokens) and fill with tool schemas, file contents, and error traces, with concurrent agents in flight. No existing benchmark tests these specific access patterns.

ACB produces a clear verdict - 🟢 GOOD, 🟡 MARGINAL, or šŸ”“ POOR - answering one question: "Is this endpoint good enough for agentic coding?"
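To make the verdict idea concrete, here is a minimal sketch of tri-level grading in the spirit of ACB's GOOD / MARGINAL / POOR output. The metric values, thresholds, and function names are illustrative assumptions, not ACB's actual grading rules (those live in acb/reporting/).

```python
# Illustrative sketch of tri-level grading. Thresholds and metric
# choices below are assumptions for illustration only; they are not
# ACB's real grading rules.

def grade_metric(value: float, good_max: float, marginal_max: float) -> str:
    """Grade a lower-is-better latency metric against two cutoffs."""
    if value <= good_max:
        return "GOOD"
    if value <= marginal_max:
        return "MARGINAL"
    return "POOR"

def overall_verdict(grades: list[str]) -> str:
    """A natural aggregation: the overall verdict is the worst per-metric grade."""
    severity = {"GOOD": 0, "MARGINAL": 1, "POOR": 2}
    return max(grades, key=severity.__getitem__)

# Example: time-to-first-token 0.8 s (within a 1.0 s "good" cutoff),
# inter-token latency 45 ms (between assumed 30 ms and 60 ms cutoffs).
grades = [grade_metric(0.8, 1.0, 2.5), grade_metric(45, 30, 60)]
print(overall_verdict(grades))  # MARGINAL
```

Taking the worst per-metric grade reflects the question ACB asks: an endpoint that is fast on throughput but poor on first-token latency is still not "good enough for agentic coding."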

Built by SwarmOne

ACB is created and maintained by SwarmOne - the AI-native cloud for agentic workloads. SwarmOne provides optimized infrastructure for running coding agents at scale, and ACB was born from the need to rigorously benchmark that infrastructure.

Project Architecture

agentic-coding-bench/
ā”œā”€ā”€ acb/
│   ā”œā”€ā”€ cli/           # CLI entry points (speed, eval, agent, record, replay)
│   ā”œā”€ā”€ core/          # Benchmark engine, request generation, metrics
│   ā”œā”€ā”€ tasks/         # 110 coding tasks (P1-P110)
│   ā”œā”€ā”€ context/       # Context profile builder & cache control
│   ā”œā”€ā”€ reporting/     # Report generation, verdicts, comparison
│   └── proxy/         # Recording proxy for acb agent/record
ā”œā”€ā”€ tests/             # Test suite
ā”œā”€ā”€ docker/            # Dockerfile and docker-compose
ā”œā”€ā”€ docs/              # Documentation source
└── examples/          # Example configs and workloads

Key Features

  • 110 coding tasks across 6 difficulty tiers (trivial → expert + multi-language)
  • 7 context profiles simulating real session growth (6K → 400K tokens)
  • 5 CLI modes: speed, eval, agent, record, replay
  • Cold vs warm cache measurement for prefix caching evaluation
  • Concurrent user simulation (1, 8, 32+ users)
  • Automated verdict system with per-metric grading
  • Docker support for reproducible benchmarking
  • JSON output for CI/CD integration
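As a sketch of the JSON-output feature in a CI/CD pipeline, the gate below fails the build when the benchmark verdict is not good enough. The report field name ("verdict") and the values it takes are assumptions about the report schema for illustration; check the actual JSON your ACB version emits.

```python
import json
import sys

# Hypothetical CI gate over an ACB JSON report. The "verdict" field and
# its GOOD/MARGINAL/POOR values are an assumed schema, not a documented
# contract.

def ci_gate(report_json: str, allow_marginal: bool = False) -> int:
    """Return a process exit code: 0 passes the pipeline, 1 fails it."""
    report = json.loads(report_json)
    verdict = report.get("verdict", "POOR")  # missing verdict fails closed
    passing = {"GOOD", "MARGINAL"} if allow_marginal else {"GOOD"}
    return 0 if verdict in passing else 1

if __name__ == "__main__":
    # e.g. pipe a report in: acb speed ... --json | python ci_gate.py
    sys.exit(ci_gate(sys.stdin.read()))
```

Failing closed on a missing field is a deliberate choice: a malformed or truncated report should block the pipeline rather than silently pass.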

License

AgenticCodingBench is open source under the Apache 2.0 License. Free to use, modify, and distribute.
How to Cite

If you use ACB in research or publications, please cite:

@software{agenticcodingbench2026,
  title  = {AgenticCodingBench},
  author = {SwarmOne},
  url    = {https://github.com/SwarmOne/agentic-coding-bench},
  year   = {2026},
  note   = {Open-source benchmark for LLM inference under agentic coding workloads}
}

Links