Your prompting habits, visualized. A Claude Code hook + terminal dashboard that scores, categorizes, and tracks every prompt you send — helping you spot patterns and level up.
| TIME | CATEGORY | COMPLEXITY | SCORE | QUALITY | INSIGHTS |
|---|---|---|---|---|---|
| 2m ago | feature | medium | 8 | ████████░░ | Clear goal, good context |
| 15m ago | debug | high | 6 | ██████░░░░ | Missing error details |
| 1h ago | refactor | low | 9 | █████████░ | Precise scope, clean ask |
| 2h ago | explain | medium | 4 | ████░░░░░░ | Too vague, needs focus |
| 3h ago | config | low | 7 | ███████░░░ | [img] Screenshot adds context |
| 4h ago | test | medium | 7 | ███████░░░ | Good scope, add edge cases |
| 5h ago | docs | low | 8 | ████████░░ | Specific, well-structured |
A `UserPromptSubmit` hook runs silently in the background every time you send a prompt. Nothing changes about your workflow.
A fast, cheap LLM call via OpenRouter classifies your prompt's category, complexity, and quality (1-10) with a brief insight. Results are stored locally in SQLite.
Open the dashboard anytime to see your prompt history — quality trends, what types of work you do most, which projects get the most attention.
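The analysis each prompt produces can be pictured as a small record. A minimal sketch, assuming an illustrative shape — these field names mirror the dashboard columns but are not necessarily the actual promptlens schema:

```typescript
// Hypothetical shape of one stored analysis (illustrative, not the real schema)
interface PromptAnalysis {
  category: string;                      // e.g. "feature", "debug", "refactor"
  complexity: "low" | "medium" | "high"; // estimated prompt complexity
  score: number;                         // 1-10 quality score
  insight: string;                       // one-line feedback from the LLM
}

// Clamp whatever the model returns into the documented 1-10 range
function clampScore(raw: number): number {
  return Math.min(10, Math.max(1, Math.round(raw)));
}

const example: PromptAnalysis = {
  category: "feature",
  complexity: "medium",
  score: clampScore(8.4),
  insight: "Clear goal, good context",
};
console.log(example.score); // 8
```

Storing one flat row per prompt keeps the SQLite schema simple and makes the per-project and per-category breakdowns cheap aggregate queries.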
- Every prompt scored 1-10. Spot vague prompts and track improvement over time.
- Are you mostly debugging? Building features? See the breakdown across 8 categories.
- Per-project breakdowns show which codebases get the most attention and how prompt quality compares between them.
- Runs silently as a hook. No changes to how you use Claude Code; just open the dashboard when curious.
- All results are stored in a local SQLite database. The only external call is a cheap LLM inference via OpenRouter for analysis.
- Detects when you attach screenshots or diagrams and factors them into the analysis.
You'll need Bun and an OpenRouter API key (used for the cheap LLM call that scores each prompt).
Copy this prompt, paste it into a Claude Code session, and it will set everything up for you. You'll just need to provide your OpenRouter API key when asked.
```sh
$ git clone https://github.com/iaserrat/promptlens ~/.claude/hooks/promptlens
$ cd ~/.claude/hooks/promptlens
$ bun install
$ bun link  # makes `promptlens` available globally
```
```sh
# Create ~/.claude/hooks/promptlens/.env
OPENROUTER_API_KEY=sk-or-...
OPENROUTER_MODEL=anthropic/claude-haiku-4.5  # optional, this is the default
PROMPTLENS_MIN_LENGTH=50                     # optional, min chars to analyze
```
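The minimum-length setting exists so trivial prompts don't burn API calls. A minimal sketch of how a hook might apply it — the variable and function names here are illustrative, not taken from the real `hook.ts`:

```typescript
// Hypothetical sketch: read the threshold from the environment, default 50
const minLength = Number(process.env.PROMPTLENS_MIN_LENGTH ?? "50");

// Skip very short prompts so the OpenRouter call isn't wasted on noise
function shouldAnalyze(prompt: string): boolean {
  return prompt.trim().length >= minLength;
}

console.log(shouldAnalyze("fix it")); // false: under the 50-char default
```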
Add this to `~/.claude/settings.json` (or your project-level settings):
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bun --env-file=$HOME/.claude/hooks/promptlens/.env run $HOME/.claude/hooks/promptlens/hook.ts",
            "async": true
          }
        ]
      }
    ]
  }
}
```
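Claude Code delivers the `UserPromptSubmit` event to the hook as JSON on stdin, which includes the prompt text among other fields. A minimal sketch of the parsing step a hook like `hook.ts` might perform — the function name is hypothetical, and the actual analysis call is elided:

```typescript
// Hypothetical sketch: pull the prompt text out of the hook event payload.
// The `prompt` field follows the UserPromptSubmit event shape; everything
// else about the real hook.ts is elided here.
function extractPrompt(stdinJson: string): string | null {
  try {
    const event = JSON.parse(stdinJson);
    return typeof event.prompt === "string" ? event.prompt : null;
  } catch {
    return null; // malformed payload: do nothing rather than disturb the user
  }
}

console.log(extractPrompt('{"prompt":"add dark mode"}')); // add dark mode
```

Failing silently on a bad payload matters here: a hook that throws or blocks would break the "nothing changes about your workflow" promise.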
```sh
$ promptlens
```
Start using Claude Code as normal. Analyses appear in the dashboard automatically.
- Navigate
- Filter project
- Filter category
- Group sessions
- Filter session
- Delete entry
- Help
- Quit