About ACB
What AgenticCodingBench Is
AgenticCodingBench (ACB) is an open-source inference performance benchmark purpose-built for agentic coding workloads - the kind of LLM request patterns that Claude Code, Cursor, Windsurf, and Copilot generate in practice.
It measures how fast your serving stack runs under growing multi-turn contexts (6K to 400K tokens), with tool schemas, file contents, error traces, and concurrent agents. No existing benchmark tests these specific access patterns.
ACB produces a clear verdict - 🟢 GOOD, 🟡 MARGINAL, or 🔴 POOR - answering one question: "Is this endpoint good enough for agentic coding?"
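A verdict system like this typically grades each metric and lets the worst grade win. The sketch below illustrates that idea; the metric names, thresholds, and aggregation rule here are illustrative assumptions, not ACB's actual grading logic:

```python
# Illustrative sketch of per-metric grading rolled up into one verdict.
# NOTE: metric names, thresholds, and the "worst grade wins" rule are
# assumptions for illustration -- not ACB's actual grading rules.

GOOD, MARGINAL, POOR = "GOOD", "MARGINAL", "POOR"

def grade_metric(value: float, good_max: float, marginal_max: float) -> str:
    """Grade a latency-style metric where lower is better."""
    if value <= good_max:
        return GOOD
    if value <= marginal_max:
        return MARGINAL
    return POOR

def overall_verdict(grades: dict[str, str]) -> str:
    """The worst single metric determines the endpoint's verdict."""
    if POOR in grades.values():
        return POOR
    if MARGINAL in grades.values():
        return MARGINAL
    return GOOD

# Hypothetical run: time-to-first-token and inter-token latency, in seconds.
grades = {
    "ttft_p50": grade_metric(0.8, good_max=1.0, marginal_max=3.0),
    "itl_p50": grade_metric(0.06, good_max=0.05, marginal_max=0.10),
}
print(overall_verdict(grades))  # MARGINAL: itl_p50 exceeds its GOOD threshold
```

The worst-grade-wins aggregation reflects that a single bad metric (say, time to first token) can make an endpoint unusable for interactive agents even when everything else is fast.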
Built by SwarmOne
ACB is created and maintained by SwarmOne - the AI-native cloud for agentic workloads. SwarmOne provides optimized infrastructure for running coding agents at scale, and ACB was born from the need to rigorously benchmark that infrastructure.
Project Architecture
agentic-coding-bench/
├── acb/
│   ├── cli/        # CLI entry points (speed, eval, agent, record, replay)
│   ├── core/       # Benchmark engine, request generation, metrics
│   ├── tasks/      # 110 coding tasks (P1-P110)
│   ├── context/    # Context profile builder & cache control
│   ├── reporting/  # Report generation, verdicts, comparison
│   └── proxy/      # Recording proxy for acb agent/record
├── tests/          # Test suite
├── docker/         # Dockerfile and docker-compose
├── docs/           # Documentation source
└── examples/       # Example configs and workloads
Key Features
- 110 coding tasks across 6 difficulty tiers (trivial → expert + multi-language)
- 7 context profiles simulating real session growth (6K → 400K tokens)
- 5 CLI modes: speed, eval, agent, record, replay
- Cold vs warm cache measurement for prefix caching evaluation
- Concurrent user simulation (1, 8, 32+ users)
- Automated verdict system with per-metric grading
- Docker support for reproducible benchmarking
- JSON output for CI/CD integration
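The JSON output lends itself to a simple CI gate that fails the build on a bad verdict. The snippet below is a minimal sketch; the report schema (a top-level "verdict" field) is an assumption for illustration, so check the actual report format ACB emits:

```python
# Sketch of a CI gate over an ACB JSON report.
# NOTE: the schema assumed here (a top-level "verdict" key) is
# hypothetical -- adapt it to the report ACB actually produces.
import json
import sys

def ci_gate(report_path: str) -> int:
    """Return 0 when the verdict is GOOD, nonzero otherwise."""
    with open(report_path) as f:
        report = json.load(f)
    verdict = report.get("verdict", "POOR")  # missing field fails closed
    print(f"ACB verdict: {verdict}")
    return 0 if verdict == "GOOD" else 1

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(ci_gate(sys.argv[1]))
```

Wired into a pipeline, a MARGINAL or POOR endpoint then blocks the deploy instead of being noticed after the fact.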
License
AgenticCodingBench is open source under the Apache 2.0 License. Free to use, modify, and distribute.
How to Cite
If you use ACB in research or publications, please cite:
@software{agenticcodingbench2026,
  title  = {AgenticCodingBench},
  author = {SwarmOne},
  url    = {https://github.com/SwarmOne/agentic-coding-bench},
  year   = {2026},
  note   = {Open-source benchmark for LLM inference under agentic coding workloads}
}