# Feedback System Pro

*Calibrated estimates, anomaly detection, and AI usage insights*
PairCoder's feedback system analyzes your telemetry data to provide calibrated estimates, detect anomalies, and help you understand your AI-assisted development patterns.
## Overview
The feedback system builds on the telemetry data you collect. It provides:
- Calibrated estimates — token and duration predictions per task type
- Model recommendations — suggests haiku, sonnet, or opus based on task complexity
- Anomaly detection — flags unusual token spikes, duration outliers, and failure patterns
- Overhead tracking — measures human intervention and compaction costs
- Task classification — automatically categorizes tasks for better estimates
All feedback processing happens locally on your machine. If you opt in to Standard or Full telemetry, anonymized calibration data may also be shared during license validation to help improve estimation accuracy across the product.
## CLI Commands
### `bpsai-pair feedback status`
Shows calibration health, per-type estimates, and anomaly summary.
```bash
bpsai-pair feedback status
bpsai-pair feedback status --json
```
Output includes: calibration state, total records analyzed, per-task-type estimates, and recent anomalies.
### `bpsai-pair feedback accuracy`
Compares estimated vs actual performance over a time window.
```bash
bpsai-pair feedback accuracy
bpsai-pair feedback accuracy --days 14
bpsai-pair feedback accuracy --json
```
Output includes: total tasks in the period, success rate, average tokens and duration, and breakdown by task type.
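The metrics in this report are simple aggregates over the window's telemetry records. A minimal Python sketch of how they combine, assuming hypothetical record fields `success`, `tokens`, and `duration_s` (the real telemetry schema may differ):

```python
from statistics import mean

def accuracy_summary(records: list[dict]) -> dict:
    """Aggregate a window of telemetry records into an accuracy report.

    Field names are illustrative; PairCoder's actual schema may differ.
    """
    if not records:
        return {"total": 0}
    return {
        "total": len(records),
        # booleans sum as 0/1, so this is the fraction of successful tasks
        "success_rate": sum(r["success"] for r in records) / len(records),
        "avg_tokens": mean(r["tokens"] for r in records),
        "avg_duration_s": mean(r["duration_s"] for r in records),
    }
```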
### `bpsai-pair feedback calibrate`
Triggers a manual recalibration from current telemetry data.
```bash
bpsai-pair feedback calibrate
bpsai-pair feedback calibrate --json
```
Calibration is also automatic — the engine recalibrates when its cache is older than 24 hours.
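The freshness rule amounts to a file-age check. A minimal sketch in Python (the helper name is illustrative, not PairCoder's internal API):

```python
import os
import time

CACHE_TTL_SECONDS = 24 * 60 * 60  # recalibrate when the cache is older than 24h

def cache_is_stale(path: str) -> bool:
    """Return True when the calibration cache is missing or past its TTL."""
    if not os.path.exists(path):
        return True
    return time.time() - os.path.getmtime(path) > CACHE_TTL_SECONDS
```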
### `bpsai-pair feedback query <task_type>`
Gets estimates for a specific task type.
```bash
bpsai-pair feedback query feature
bpsai-pair feedback query bugfix --json
```
Output includes: token estimates (avg, p50, p90), duration estimates, sample count and success rate, recommended model, and effort level.
## How Calibration Works
- The engine reads all telemetry records from `.paircoder/telemetry/telemetry.jsonl`
- Records are grouped by task type (feature, bugfix, refactor, etc.)
- For each type, it computes: mean, standard deviation, percentiles (p50, p90)
- Results are cached as `.paircoder/telemetry/calibration.json` (24h TTL)
- Token estimates drive model recommendations:
  - haiku: avg tokens < 20,000
  - sonnet: avg tokens 20,000–60,000
  - opus: avg tokens > 60,000
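The per-type math above can be sketched in a few lines of Python. This illustrates the statistics and thresholds described here, not PairCoder's actual implementation; the nearest-rank percentile and function names are assumptions:

```python
import statistics

def percentile(values: list[float], p: float) -> float:
    """Nearest-rank percentile over a sorted copy of `values`."""
    ordered = sorted(values)
    idx = min(len(ordered) - 1, max(0, round(p / 100 * (len(ordered) - 1))))
    return ordered[idx]

def recommend_model(avg_tokens: float) -> str:
    """Map average token usage to a model tier, per the thresholds above."""
    if avg_tokens < 20_000:
        return "haiku"
    if avg_tokens <= 60_000:
        return "sonnet"
    return "opus"

def calibrate(token_counts: list[int]) -> dict:
    """Compute the per-task-type summary stats the engine caches."""
    avg = statistics.mean(token_counts)
    return {
        "avg": avg,
        "stdev": statistics.stdev(token_counts) if len(token_counts) > 1 else 0.0,
        "p50": percentile(token_counts, 50),
        "p90": percentile(token_counts, 90),
        "model": recommend_model(avg),
    }
```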
## Task Types
The classifier recognizes these task types from task titles and descriptions:
| Type | Examples |
|---|---|
| `feature` | Add user authentication, Build dashboard |
| `bugfix` | Fix login error, Resolve crash on startup |
| `refactor` | Extract helper module, Clean up imports |
| `test` | Add unit tests for parser |
| `api_endpoint` | Create REST endpoint for users |
| `cli_command` | Add export command |
| `docs_page` | Write API documentation |
| `test_suite` | Integration test suite |
| `config` | Update configuration schema |
| `schema` | Define data model |
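A toy classifier along these lines might match keywords in the task title. This is purely illustrative: the keyword table, match order, and fallback are assumptions, not PairCoder's actual rules:

```python
# Illustrative keyword rules only; PairCoder's real classifier is internal
# and may weigh titles and descriptions differently.
KEYWORDS = {
    "bugfix": ("fix", "resolve", "crash", "error"),
    "refactor": ("refactor", "extract", "clean up"),
    "test": ("test",),
    "docs_page": ("documentation", "docs"),
    "feature": ("add", "build", "implement"),
}

def classify(title: str) -> str:
    """Return the first task type whose keywords appear in the title."""
    text = title.lower()
    for task_type, words in KEYWORDS.items():
        if any(word in text for word in words):
            return task_type
    return "feature"  # fall back to the most common type
```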
## Anomaly Detection
The anomaly detector scans telemetry for five anomaly types:
| Type | Trigger |
|---|---|
| `token_spike` | Token count > 3x the type average |
| `duration_spike` | Duration > 3x the type average |
| `repeated_failure` | Multiple consecutive failures |
| `compaction_heavy` | High compaction count per session |
| `overhead_spike` | Excessive human interventions |
Each anomaly includes a severity level and a recommendation for action.
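The `token_spike` rule, for example, reduces to a comparison against the per-type average. A minimal sketch, assuming hypothetical record fields `task_type` and `tokens` (the real schema and detector internals may differ):

```python
from statistics import mean

SPIKE_FACTOR = 3  # "> 3x the type average", per the table above

def find_token_spikes(records: list[dict]) -> list[dict]:
    """Flag records whose token count exceeds 3x their task type's average."""
    by_type: dict[str, list[int]] = {}
    for rec in records:
        by_type.setdefault(rec["task_type"], []).append(rec["tokens"])
    anomalies = []
    for rec in records:
        avg = mean(by_type[rec["task_type"]])
        if rec["tokens"] > SPIKE_FACTOR * avg:
            anomalies.append({**rec, "anomaly": "token_spike", "type_avg": avg})
    return anomalies
```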
## Data Storage
```
.paircoder/telemetry/
├── telemetry.jsonl    # Raw telemetry records
├── calibration.json   # Calibration cache (auto-generated)
├── audit.jsonl        # Telemetry audit log
└── config.yaml        # Telemetry settings
```
The `calibration.json` cache is regenerated automatically. You can safely delete it and run `bpsai-pair feedback calibrate` to rebuild it.