Getting Started

Install AgenticCodingBench and run your first benchmark in under 2 minutes.

Installation

AgenticCodingBench is available on PyPI. Requires Python 3.9+.

pip install agentic-coding-bench

For proxy support (required for acb agent and acb record):

pip install "agentic-coding-bench[proxy]"
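Since the package requires Python 3.9+, a quick interpreter check before installing can save a confusing pip error (plain Python, not part of acb):

```python
import sys

# AgenticCodingBench requires Python 3.9 or newer; fail fast on older interpreters.
if sys.version_info < (3, 9):
    raise SystemExit(f"Python 3.9+ required, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])
```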

Your First Benchmark

Run a quick speed test against any OpenAI-compatible endpoint. This sends realistic agentic coding requests at 6K and 40K context with 1 and 8 concurrent users.

acb speed \
  --endpoint http://localhost:8000 \
  --model my-model \
  --suite quick

acb is a short alias for the agentic-coding-bench command; both work.
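Under the hood, a speed test issues standard OpenAI-compatible chat completions requests. A minimal sketch of that request shape (illustrative only: the message contents, stream flag, and max_tokens value here are hypothetical, not acb's actual prompts or settings):

```python
import json

# An OpenAI-compatible /v1/chat/completions request body. The benchmark fills
# the messages with agentic coding context sized to the target token count.
payload = {
    "model": "my-model",  # matches --model
    "messages": [
        {"role": "system", "content": "You are a coding agent."},      # hypothetical
        {"role": "user", "content": "Refactor the parser module."},    # hypothetical
    ],
    "stream": True,      # streaming lets a client measure time-to-first-token
    "max_tokens": 512,   # hypothetical response cap
}
body = json.dumps(payload)
```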

Full Suite with Report

Sweep all context sizes (6K → 400K) and concurrency levels. Generate a Markdown report with verdicts, charts, and recommendations.

acb speed \
  --endpoint http://localhost:8000 \
  --model my-model \
  --suite full \
  --output report.md

Endpoint URL Handling

Pass any URL. If it doesn't end with /v1/chat/completions, the path is appended automatically. Both of these work:

acb speed -e http://localhost:8000 -m my-model
acb speed -e https://api.example.com/v1/chat/completions -m my-model
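The normalization described above can be sketched as a small helper (a hypothetical reimplementation for illustration, not acb's actual code):

```python
def normalize_endpoint(url: str) -> str:
    """Append /v1/chat/completions unless the URL already ends with it."""
    url = url.rstrip("/")
    suffix = "/v1/chat/completions"
    return url if url.endswith(suffix) else url + suffix

# Both forms from the examples resolve to the same request URL:
assert normalize_endpoint("http://localhost:8000") == \
    "http://localhost:8000/v1/chat/completions"
assert normalize_endpoint("https://api.example.com/v1/chat/completions") == \
    "https://api.example.com/v1/chat/completions"
```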

Authentication

By default, --api-key is sent as an Authorization: Bearer <key> header. If your endpoint expects a different header name, override it with --api-key-header:

acb speed -e URL -m MODEL -k MY_KEY --api-key-header X-API-Key
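The resulting request headers can be sketched like this (hypothetical helper; whether acb keeps the Bearer prefix on a custom header is an assumption made here, so check against your endpoint):

```python
def build_auth_header(api_key: str, header: str = "Authorization") -> dict:
    """Default: Authorization: Bearer <key>.
    Assumed: a custom header sends the key verbatim, without a Bearer prefix."""
    if header == "Authorization":
        return {"Authorization": f"Bearer {api_key}"}
    return {header: api_key}

assert build_auth_header("MY_KEY") == {"Authorization": "Bearer MY_KEY"}
assert build_auth_header("MY_KEY", "X-API-Key") == {"X-API-Key": "MY_KEY"}
```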

Dry Run

Preview exactly what will be sent to the endpoint without making any requests. Useful for validating configuration.

acb speed -e URL -m MODEL --dry-run

Docker Quickstart

Run without installing Python. The volume mount makes results written by the container available on your host:

docker run --rm -v $(pwd)/results:/results \
  swarmone/agentic-coding-bench speed \
  --endpoint http://host.docker.internal:8000 \
  --model my-model \
  --suite quick \
  --output /results/report.md

Use host.docker.internal to reach services running on your host machine from inside the container. On Linux, this name is not available by default; add --add-host=host.docker.internal:host-gateway to the docker run command (Docker 20.10+).

Next Steps