Problem we're trying to solve: Writing API tests is tedious, and hand-written mocks drift from reality. We wanted tests that stay realistic because they come from real traffic.
Versus mocking libraries: Tools like VCR/Nock intercept HTTP within your tests. Tusk Drift records full request/response traces externally (HTTP, DB, Redis, etc.) and replays them against your running service, so there's no test code or fixtures to write or maintain.
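For intuition, a recorded trace might conceptually look like the structure below. This is an illustrative sketch only, with invented field names, not Tusk Drift's actual trace format; the point is that one inbound request is stored together with every outbound call it triggered.

```python
# Illustrative only: an invented shape for a recorded trace, NOT Tusk Drift's
# actual format. One inbound request is kept alongside the outbound calls it
# made, so replay can serve those calls as mocks later.
example_trace = {
    "inbound": {
        "method": "POST",
        "path": "/orders",
        "body": {"sku": "ABC-123", "qty": 2},
        "response": {"status": 201, "body": {"order_id": 42}},
    },
    "outbound": [
        {"kind": "http", "target": "payments.internal/charge",
         "response": {"status": 200, "body": {"charge_id": "ch_1"}}},
        {"kind": "postgres", "query": "INSERT INTO orders ...",
         "response": {"rows_affected": 1}},
        {"kind": "redis", "command": "SET order:42 ...",
         "response": "OK"},
    ],
}
```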
How it works:
1. Add a lightweight SDK (we currently support Python and Node.js).
2. Record traffic in any environment.
3. Run `tusk run`; the CLI sandboxes your service and serves mocks over a Unix socket.
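To make step 3 concrete, here is a minimal sketch of the general mechanism of serving recorded responses over a Unix domain socket. This is not the Tusk Drift CLI's implementation or wire protocol; the socket path, newline-delimited JSON framing, and matching logic are all assumptions for illustration.

```python
import json
import os
import socket

# Assumed socket path and framing; the real CLI's protocol may differ
# entirely. The instrumented service describes an outbound call, and the
# replay side answers with the recorded response instead of hitting the
# real dependency.
SOCKET_PATH = "/tmp/drift-mock.sock"

recorded = {
    ("http", "GET /users/1"): {"status": 200, "body": {"id": 1, "name": "Ada"}},
    ("redis", "GET user:1"): {"value": '{"id": 1, "name": "Ada"}'},
}

def serve_mocks():
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    server.listen(1)
    conn, _ = server.accept()
    with conn, conn.makefile("rw") as stream:
        for line in stream:
            req = json.loads(line)  # e.g. {"kind": "http", "key": "GET /users/1"}
            resp = recorded.get((req["kind"], req["key"]), {"error": "no recording"})
            stream.write(json.dumps(resp) + "\n")
            stream.flush()

if __name__ == "__main__":
    serve_mocks()
```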
We run this in CI on every PR. We've also been using it as a test harness for AI coding agents: they can make changes, run `tusk run`, and get immediate feedback without needing live dependencies.
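As a rough illustration of that agent-harness loop, a wrapper can shell out to the CLI and treat the exit code as pass/fail. The assumption that `tusk run` exits nonzero when a replay deviates is ours for this sketch; check the CLI docs for the actual behavior and output format.

```python
import subprocess

def check_change(service_dir: str) -> bool:
    """Replay recorded traces against the service after an agent's edit.

    Assumes `tusk run` exits nonzero when any replayed trace deviates;
    treat this as a sketch, not the CLI's documented contract.
    """
    result = subprocess.run(["tusk", "run"], cwd=service_dir,
                            capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout)  # surface the report to the agent
        return False
    return True
```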
Source: https://github.com/Use-Tusk/tusk-drift-cli
Demo: https://github.com/Use-Tusk/drift-node-demo
Happy to answer questions!
Also, I loved the approach here!
That said, we're also exploring extending it to capacity modeling and resource estimation, which would set it apart from traditional load testing. Synthetic benchmarks fail to capture how traffic patterns (not just volume) affect resource usage. Since we already record real production traffic, we're uniquely positioned to:
1. Replay specific time periods (e.g., last year's Black Friday sale)
2. Preserve the natural distribution of request types
3. Control downstream latency via our mock system
4. Build models beyond linear regression for QPS -> CPU/mem prediction
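On point 4, a minimal sketch of what "beyond linear regression" could mean in practice: fit a mix-aware model to (QPS, request mix, CPU) samples and compare it against a plain linear baseline. The data and features below are made up for illustration; in reality the samples would come from replaying recorded traffic windows and measuring resource usage.

```python
import numpy as np

# Made-up samples: (QPS, fraction of "heavy" requests, observed CPU %).
# In practice these would come from replaying recorded traffic windows
# (e.g. a Black Friday period) against the service.
samples = np.array([
    # qps, heavy_frac, cpu_pct
    [100, 0.05, 12.0],
    [200, 0.05, 23.0],
    [400, 0.10, 51.0],
    [600, 0.10, 74.0],
    [800, 0.20, 97.0],
])
qps, heavy, cpu = samples[:, 0], samples[:, 1], samples[:, 2]

# Linear baseline: cpu ~ a*qps + b
linear = np.polyfit(qps, cpu, deg=1)

# A slightly richer model: cpu ~ a*qps + b*(qps*heavy) + c,
# capturing that the request mix (not just volume) drives cost.
X = np.column_stack([qps, qps * heavy, np.ones_like(qps)])
coef, *_ = np.linalg.lstsq(X, cpu, rcond=None)

def predict(q, h):
    return coef[0] * q + coef[1] * q * h + coef[2]

# e.g. what does 1000 QPS look like if 30% of requests are heavy?
print("linear baseline:", np.polyval(linear, 1000))
print("mix-aware model:", predict(1000, 0.30))
```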
What performance testing use case did you have in mind? We're actively exploring this space.