>_ resources

Claude Code Statusline

A real-time analytics dashboard that lives in your Claude Code terminal. Tracks session cost, token burn rate, context window usage, cache efficiency, code changes, git status, API usage limits, and productivity - all in a single line.

~280 lines of Bash · Zero dependencies beyond jq and bc

claude-code statusline
๐Ÿ“ rabapl|๐Ÿค– Opus 4.6|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘62%|โ†‘142.5K โ†“18.3K|๐Ÿ’ฐ $2.85$0.42/m|๐Ÿ“Š today:$14.23|๐Ÿ”ฅ 8420t/m ~23m|+847/-129|๐Ÿ’พ 73%|โšก 24810tok/$๐Ÿฅ‡|๐Ÿงช 42/45 โœ—|๐ŸŒฟ mainS:2U:1|๐Ÿ• 16:42|โณ ~23m|๐Ÿ“ก 5h:37% 7d:26%

>_ Installation

1. Prerequisites

The script requires jq (JSON parsing) and bc (math). bc ships with macOS, and jq is bundled starting with macOS 15 (brew install jq on older releases). On Linux:

# Debian/Ubuntu
sudo apt install jq bc

# Arch
sudo pacman -S jq bc
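
A quick way to confirm both tools are on your PATH before continuing:

```shell
# Check that jq and bc are available; report anything missing
for tool in jq bc; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: ok"
    else
        echo "$tool: missing"
    fi
done
```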

2. Download the script

Save the script to your Claude config directory:

curl -fsSL https://raba.pl/statusline-command.sh \
  -o ~/.claude/statusline-command.sh

3. Configure Claude Code

Add the statusline configuration to your ~/.claude/settings.json. Merge this into your existing settings:

{
  "statusLine": {
    "type": "command",
    "command": "bash ~/.claude/statusline-command.sh"
  }
}
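
If you already have a settings.json, you can let jq do the merge instead of editing by hand. A minimal sketch (assumes jq from step 1; adjust SETTINGS if your config lives elsewhere):

```shell
# Merge the statusLine block into settings.json without touching other keys
SETTINGS="${SETTINGS:-$HOME/.claude/settings.json}"
mkdir -p "$(dirname "$SETTINGS")"
[ -f "$SETTINGS" ] || echo '{}' > "$SETTINGS"   # start from an empty object if absent
jq '. + {statusLine: {type: "command",
                      command: "bash ~/.claude/statusline-command.sh"}}' \
    "$SETTINGS" > "$SETTINGS.tmp" && mv "$SETTINGS.tmp" "$SETTINGS"
```

Writing to a temp file first keeps the original intact if jq fails partway.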

4. Restart Claude Code

Close and reopen Claude Code. The statusline will appear at the bottom of your terminal. That's it.

>_ Sections Breakdown

The statusline is composed of segments separated by dim vertical bars. Each segment shows a different metric. Here is what each one means:

📁

Directory

rabapl

Shows the name of your current working directory so you always know which project Claude is operating in.

🤖

Model

Opus 4.6

Displays the active Claude model. Helps confirm whether you're running Opus, Sonnet, or Haiku.

█

Context Bar

████████████░░░░░░░░ 62%

A 20-character visual progress bar showing context window usage. Colors shift from green (0-35%) through yellow (35-70%) to red (70-100%) as the window fills up.

█

Tokens

↑142.5K ↓18.3K

Total input (โ†‘) and output (โ†“) tokens for the session. Values are formatted as K (thousands) or M (millions) for readability.

💰

Session Cost

$2.85 $0.42/m

Current session cost in USD (bold). When the session exceeds 60 seconds, a cost-per-minute rate is appended.

📊

Daily Cost

today:$14.23

Cumulative spending across all sessions for the current day. Costs are logged to ~/.claude/statusline-cost-log and deduplicated by session ID before summing.

🔥

Burn Rate & ETA

8420t/m ~23m

Token burn rate (tokens per minute) and estimated time until the context window is exhausted at the current pace. Shows hours for long sessions.
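
The arithmetic behind the estimate is simple. A worked sketch with illustrative numbers (the real script uses bc; integer shell arithmetic is close enough here):

```shell
# 160,800 session tokens over 19 minutes; 124,000 of a 200K window used
total_tokens=160800; elapsed_secs=1140
context_size=200000; used_tokens=124000

tpm=$(( total_tokens * 60 / elapsed_secs ))         # tokens per minute
eta_min=$(( (context_size - used_tokens) / tpm ))   # minutes until the window is full
echo "${tpm}t/m ~${eta_min}m"                       # 8463t/m ~8m
```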

โœ๏ธ

Code Changes

+847/-129

Lines of code added (green) and removed (red) during the session. A quick indicator of how much code is being generated.

💾

Cache Hit Rate

73%

Prompt cache hit rate - the percentage of input tokens served from cache vs. freshly processed. Higher values mean lower cost and faster responses.
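
The rate comes from the three input-token counters in the payload. A sketch with illustrative numbers:

```shell
# hit rate = cache reads / (cache reads + cache writes + fresh input)
cache_read=96000; cache_create=21000; fresh_in=14500
hit_pct=$(( cache_read * 100 / (cache_read + cache_create + fresh_in) ))
echo "${hit_pct}%"    # 73%
```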

⚡

Productivity

24810tok/$

Output tokens per dollar spent. A measure of how efficiently the model converts your spend into generated output.

█

Rank

๐Ÿ† / ๐Ÿฅ‡ / ๐Ÿฅˆ / ๐Ÿฅ‰

Productivity tier badge. 🏆 = 50K+ tok/$, 🥇 = 20K+, 🥈 = 10K+, 🥉 = 5K+. Gamifies your cost efficiency.
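
The badge lookup is a plain threshold ladder:

```shell
# Map output tokens per dollar onto the four badge tiers
prod=24810
if   [ "$prod" -ge 50000 ]; then rank="🏆"
elif [ "$prod" -ge 20000 ]; then rank="🥇"
elif [ "$prod" -ge 10000 ]; then rank="🥈"
elif [ "$prod" -ge  5000 ]; then rank="🥉"
else rank=""; fi
echo "$rank"    # 🥇
```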

🧪

Test Status

42/45 ✗

Test pass/fail indicator. Auto-detects JUnit XML (Gradle, Maven, vitest, pytest), Playwright results, or reads a .test-status convention file. Cached for 30 seconds. Omitted entirely when no test data is found.
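
If your runner isn't auto-detected, you can emit the convention file yourself, e.g. from a wrapper around your test command. The format is pass|fail|total:

```shell
# Write .test-status after a run: 42 passed, 3 failed, 45 total
printf '%s|%s|%s\n' 42 3 45 > .test-status
cat .test-status    # 42|3|45
```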

🌿

Git Status

main S:2 U:1 A:0

Current branch name with file counts - S (staged, green), U (unstaged, yellow), A (untracked/added, red). Branch name turns yellow when dirty. Git data is cached for 5 seconds.

🕐

Clock

16:42

Current time of day. Simple but useful during long sessions to avoid losing track of time.

โณ

Time Remaining

~23m

Estimated time remaining before the context window fills up, based on the current token burn rate.

📡

Usage Limits (Pro/Max)

5h:37% 7d:26%

Real-time usage limits from the Anthropic API. Shows 5-hour and 7-day rolling window utilization percentages. Requires macOS Keychain credentials (Pro/Max plans only). Cached for 60 seconds.

>_ Usage Limits (Pro/Max)

Claude Code Pro and Max plans have rolling usage windows. The script can fetch your real-time utilization from the Anthropic API and display it directly in the statusline - no more running /usage manually.

API Endpoint

The script calls the undocumented /api/oauth/usage endpoint on api.anthropic.com. It returns JSON with rolling window data:

{
  "five_hour": {
    "utilization": 37.0,
    "resets_at": "2026-02-24T04:59:59.000000+00:00"
  },
  "seven_day": {
    "utilization": 26.0,
    "resets_at": "2026-02-28T14:59:59.771647+00:00"
  },
  "seven_day_opus": { "utilization": 12.0, "resets_at": "..." },
  "seven_day_sonnet": { "utilization": 1.0, "resets_at": "..." },
  "extra_usage": {
    "is_enabled": false,
    "monthly_limit": null,
    "used_credits": null,
    "utilization": null
  }
}

Authentication

The script extracts your OAuth token from the macOS Keychain using security find-generic-password -s "Claude Code-credentials" -w. The token is a JSON blob containing an access token which is used as a Bearer token in the API call. This only works on macOS with Pro/Max plans - API key users don't have rolling limits.

Caching & Rate Limiting

The limits response is cached to /tmp/claude-statusline-limits-cache for 60 seconds. This avoids hammering the API on every statusline render. If the API call fails (network error, expired token, Linux), the limits segment is silently omitted.

What's Displayed

The statusline shows two values:

  • 5h:37% - 5-hour rolling window utilization. This is the short-term rate limit that resets most frequently.
  • 7d:26% - 7-day rolling window utilization. The longer-term cap on total usage.

Colors shift from green (<50%) through yellow (50-80%) to red (80%+) to warn you before hitting limits.

>_ How It Works

Data Flow

Claude Code pipes a JSON object to the statusline command via stdin on every render cycle. The JSON includes the active model, context window metrics, token counts, cost data, code change stats, session ID, and workspace info. The script parses this with jq, computes derived metrics with bc, and echoes a formatted string with ANSI color codes.
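
To see exactly what your version of Claude Code sends, you can capture one render by wrapping the configured command with tee (the capture path here is just an example): "command": "tee /tmp/statusline-input.json | bash ~/.claude/statusline-command.sh". The extraction step is then ordinary jq, sketched here against a hand-built payload:

```shell
# The statusline input is plain JSON on stdin; extraction is one jq call per field
payload='{"model":{"display_name":"Opus 4.6"},"cost":{"total_cost_usd":2.85}}'
model=$(echo "$payload" | jq -r '.model.display_name // "?"')
echo "$model"    # Opus 4.6
```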

Performance

The script runs frequently, so performance matters. Git operations (branch, diff, ls-files) are cached for 5 seconds in /tmp/claude-statusline-git-cache. Daily cost totals are cached for 30 seconds. Token counts use K/M formatting to avoid long numbers.

Cost Logging

Every render appends a line to ~/.claude/statusline-cost-log in the format YYYY-MM-DD|session_id|cost_usd. When computing the daily total, the script deduplicates by session_id (keeping the latest cost per session) before summing. This gives you accurate cross-session daily spend tracking.
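
The dedup-then-sum step on a toy log, showing why re-rendering the same session never double-counts:

```shell
# Session "a" rendered twice (cost grows 0.50 -> 1.25); session "b" once
log=$(mktemp)
printf '2026-02-24|a|0.50\n2026-02-24|a|1.25\n2026-02-24|b|0.75\n' > "$log"

# Keep the latest cost per session id, then sum
total=$(awk -F'|' -v d="2026-02-24" \
    '$1==d {c[$2]=$3} END {s=0; for (k in c) s+=c[k]; printf "%.2f", s}' "$log")
echo "$total"    # 2.00  (1.25 + 0.75, not 2.50)
```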

Color Gradient

The context window progress bar uses a 20-character block display. The first 7 blocks are green (safe), blocks 8-14 are yellow (caution), and blocks 15-20 are red (critical). Empty positions use a dim shade character.

New in 2026: Additional JSON Fields

Claude Code now exposes several additional fields in the statusline JSON that you can use in your scripts:

  • exceeds_200k_tokens - boolean, true when total tokens exceed the 200K threshold
  • transcript_path - path to the conversation transcript file
  • output_style.name - current output style name
  • vim.mode - NORMAL or INSERT when vim mode is enabled
  • agent.name - agent name when running with the --agent flag

Multi-line output and OSC 8 clickable links are also now officially supported. Each echo statement produces a separate row, and you can use the padding setting to add horizontal spacing.
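
Reading the newer fields follows the same jq // defaulting pattern the script already uses for the older ones. A sketch against a hand-built payload:

```shell
# Extract the 2026 fields with sensible fallbacks when they are absent
payload='{"output_style":{"name":"concise"},"vim":{"mode":"NORMAL"},"exceeds_200k_tokens":false}'
style=$(echo "$payload" | jq -r '.output_style.name // "default"')
vim_mode=$(echo "$payload" | jq -r '.vim.mode // empty')
echo "$style $vim_mode"    # concise NORMAL
```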

>_ Full Script

The complete statusline-command.sh - ~280 lines of Bash. You can also download it directly.

#!/usr/bin/env bash

input=$(cat)

# --- Extract fields (correct paths from official docs) ---
model_name=$(echo "$input" | jq -r '.model.display_name // "?"')
cwd=$(echo "$input" | jq -r '.workspace.current_dir // .cwd // "~"')
project_dir=$(echo "$input" | jq -r '.workspace.project_dir // ""')
used_pct=$(echo "$input" | jq -r '.context_window.used_percentage // 0')
remaining_pct=$(echo "$input" | jq -r '.context_window.remaining_percentage // 100')
total_in=$(echo "$input" | jq -r '.context_window.total_input_tokens // 0')
total_out=$(echo "$input" | jq -r '.context_window.total_output_tokens // 0')
cur_in=$(echo "$input" | jq -r '.context_window.current_usage.input_tokens // 0')
cur_out=$(echo "$input" | jq -r '.context_window.current_usage.output_tokens // 0')
cache_create=$(echo "$input" | jq -r '.context_window.current_usage.cache_creation_input_tokens // 0')
cache_read=$(echo "$input" | jq -r '.context_window.current_usage.cache_read_input_tokens // 0')
context_size=$(echo "$input" | jq -r '.context_window.context_window_size // 200000')
cost_usd=$(echo "$input" | jq -r '.cost.total_cost_usd // 0')
duration_ms=$(echo "$input" | jq -r '.cost.total_duration_ms // 0')
api_ms=$(echo "$input" | jq -r '.cost.total_api_duration_ms // 0')
lines_added=$(echo "$input" | jq -r '.cost.total_lines_added // 0')
lines_removed=$(echo "$input" | jq -r '.cost.total_lines_removed // 0')
version=$(echo "$input" | jq -r '.version // ""')
session_id=$(echo "$input" | jq -r '.session_id // ""')

# --- Colors ---
red=$'\033[31m'; grn=$'\033[32m'; ylw=$'\033[33m'; blu=$'\033[34m'
mag=$'\033[35m'; cyn=$'\033[36m'; dim=$'\033[90m'; bld=$'\033[1m'; rst=$'\033[0m'; wht=$'\033[97m'
sep=" ${dim}│${rst} "

# --- Format tokens ---
fmt() {
    local n=$1
    if [ "$n" -ge 1000000 ] 2>/dev/null; then echo "$(echo "scale=1; $n/1000000" | bc -l)M"
    elif [ "$n" -ge 1000 ] 2>/dev/null; then echo "$(echo "scale=1; $n/1000" | bc -l)K"
    else echo "$n"; fi
}

# --- Gradient progress bar (20 chars) ---
filled=$(printf "%.0f" "$(echo "$used_pct * 0.2" | bc -l 2>/dev/null || echo 0)")
bar=""
for ((i=0; i<filled && i<20; i++)); do
    if [ $i -lt 7 ]; then bar+=$'\033[32m'"█"
    elif [ $i -lt 14 ]; then bar+=$'\033[33m'"█"
    else bar+=$'\033[31m'"█"; fi
done
for ((i=filled; i<20; i++)); do bar+="${dim}░"; done
bar+="${rst}"

# --- Cost ---
if [ "$(echo "$cost_usd < 0.01" | bc -l 2>/dev/null)" = "1" ]; then cost_str='<$0.01'
elif [ "$(echo "$cost_usd < 1" | bc -l 2>/dev/null)" = "1" ]; then cost_str=$(printf '$%.3f' "$cost_usd")
else cost_str=$(printf '$%.2f' "$cost_usd"); fi

# --- Duration ---
total_secs=$((duration_ms / 1000))
api_secs=$((api_ms / 1000))
h=$((total_secs / 3600)); m=$(((total_secs % 3600) / 60)); s=$((total_secs % 60))
if [ $h -gt 0 ]; then dur="${h}h${m}m"
elif [ $m -gt 0 ]; then dur="${m}m${s}s"
else dur="${s}s"; fi

# --- Burn rate & ETA ---
burn_str=""
if [ $total_secs -gt 0 ]; then
    tpm=$(echo "scale=0; ($total_in + $total_out) * 60 / $total_secs" | bc -l 2>/dev/null || echo "0")
    if [ "$tpm" -gt 0 ] 2>/dev/null; then
        used_tokens=$((cur_in + cache_create + cache_read))
        remaining=$((context_size - used_tokens))
        if [ "$remaining" -gt 0 ] 2>/dev/null; then
            ml=$(echo "scale=0; $remaining / $tpm" | bc -l 2>/dev/null || echo "?")
            if [ "$ml" -lt 60 ] 2>/dev/null; then eta="~${ml}m"
            else eta="~$((ml / 60))h"; fi
        else
            eta="full!"
        fi
        burn_str="${tpm}t/m ${eta}"
    fi
fi

# --- Cost per minute ---
cpm_str=""
if [ $total_secs -gt 60 ]; then
    cpm=$(echo "scale=3; $cost_usd * 60 / $total_secs" | bc -l 2>/dev/null || echo "0")
    cpm_str="$(printf '$%.2f/m' "$cpm")"
fi

# --- Cache hit rate ---
cache_str=""
total_cache=$((cache_create + cache_read))
if [ "$total_cache" -gt 0 ] 2>/dev/null; then
    hit_pct=$(echo "scale=0; $cache_read * 100 / ($cache_read + $cache_create + $cur_in)" | bc -l 2>/dev/null || echo "0")
    cache_str="${hit_pct}%"
fi

# --- Lines changed ---
lines_str=""
if [ "$lines_added" -gt 0 ] 2>/dev/null || [ "$lines_removed" -gt 0 ] 2>/dev/null; then
    lines_str="${grn}+${lines_added}${rst}/${red}-${lines_removed}${rst}"
fi

# --- Git (with caching for perf) ---
CACHE_FILE="/tmp/claude-statusline-git-cache"
CACHE_MAX_AGE=5
cd "$cwd" 2>/dev/null || cd ~

cache_stale() {
    [ ! -f "$CACHE_FILE" ] || \
    [ $(($(date +%s) - $(stat -f %m "$CACHE_FILE" 2>/dev/null || stat -c %Y "$CACHE_FILE" 2>/dev/null || echo 0))) -gt $CACHE_MAX_AGE ]
}

if cache_stale; then
    if git rev-parse --git-dir > /dev/null 2>&1; then
        branch=$(git branch --show-current 2>/dev/null || echo "detached")
        staged=$(git diff --cached --numstat 2>/dev/null | wc -l | tr -d ' ')
        unstaged=$(git diff --numstat 2>/dev/null | wc -l | tr -d ' ')
        untracked=$(git ls-files --others --exclude-standard 2>/dev/null | wc -l | tr -d ' ')
        echo "${branch}|${staged}|${unstaged}|${untracked}" > "$CACHE_FILE"
    else
        echo "|||" > "$CACHE_FILE"
    fi
fi

IFS='|' read -r branch staged unstaged untracked < "$CACHE_FILE"

git_str=""
if [ -n "$branch" ]; then
    dirty=""
    [ "$staged" -gt 0 ] 2>/dev/null && dirty+=" ${grn}S:${staged}${rst}"
    [ "$unstaged" -gt 0 ] 2>/dev/null && dirty+=" ${ylw}U:${unstaged}${rst}"
    [ "$untracked" -gt 0 ] 2>/dev/null && dirty+=" ${red}A:${untracked}${rst}"
    if [ -n "$dirty" ]; then
        git_str="${ylw}${branch}${rst}${dirty}"
    else
        git_str="${grn}${branch}${rst}"
    fi
fi

# --- Folder name (needed for test cache key) ---
dir_name="${cwd##*/}"

# --- Test status (cached, 30s refresh) ---
TEST_CACHE="/tmp/claude-statusline-test-${dir_name}"
TEST_CACHE_AGE=30

test_cache_stale() {
    [ ! -f "$TEST_CACHE" ] || \
    [ $(($(date +%s) - $(stat -f %m "$TEST_CACHE" 2>/dev/null || stat -c %Y "$TEST_CACHE" 2>/dev/null || echo 0))) -gt $TEST_CACHE_AGE ]
}

if test_cache_stale; then
    t_pass=0; t_fail=0; t_total=0; t_found=false

    # 1. Convention file: .test-status (pass|fail|total)
    if [ -f "$cwd/.test-status" ]; then
        IFS='|' read -r t_pass t_fail t_total < "$cwd/.test-status"
        [ "$t_total" -gt 0 ] 2>/dev/null && t_found=true
    fi

    # 2. Auto-detect: JUnit XML (Gradle, Maven, vitest --reporter=junit, pytest --junitxml)
    if ! $t_found; then
        junit_files=$(find "$cwd" -maxdepth 5 -name "*.xml" \
            \( -path "*/test-results/*" -o -path "*/surefire-reports/*" -o -path "*/junit*" \) \
            -mmin -60 2>/dev/null | head -10)
        if [ -n "$junit_files" ]; then
            t_tests=0; t_failures=0; t_errors=0
            for xf in $junit_files; do
                ts=$(grep -o 'tests="[0-9]*"' "$xf" 2>/dev/null | grep -o '[0-9]*' | head -1)
                fl=$(grep -o 'failures="[0-9]*"' "$xf" 2>/dev/null | grep -o '[0-9]*' | head -1)
                er=$(grep -o 'errors="[0-9]*"' "$xf" 2>/dev/null | grep -o '[0-9]*' | head -1)
                [ -n "$ts" ] && t_tests=$((t_tests + ts))
                [ -n "$fl" ] && t_failures=$((t_failures + fl))
                [ -n "$er" ] && t_errors=$((t_errors + er))
            done
            if [ "$t_tests" -gt 0 ]; then
                t_fail=$((t_failures + t_errors))
                t_pass=$((t_tests - t_fail))
                t_total=$t_tests
                t_found=true
            fi
        fi
    fi

    # 3. Auto-detect: Playwright .last-run.json
    if ! $t_found && [ -f "$cwd/test-results/.last-run.json" ]; then
        pw_pass=$(jq '[.suites[]?.specs[]?.tests[]? | select(.results[]?.status == "passed")] | length' "$cwd/test-results/.last-run.json" 2>/dev/null || echo "0")
        pw_fail=$(jq '[.suites[]?.specs[]?.tests[]? | select(.results[]?.status != "passed")] | length' "$cwd/test-results/.last-run.json" 2>/dev/null || echo "0")
        t_pass=$((pw_pass)); t_fail=$((pw_fail)); t_total=$((pw_pass + pw_fail))
        [ "$t_total" -gt 0 ] && t_found=true
    fi

    # Write cache (or clear it)
    if $t_found && [ "$t_total" -gt 0 ] 2>/dev/null; then
        echo "${t_pass}|${t_fail}|${t_total}" > "$TEST_CACHE"
    else
        rm -f "$TEST_CACHE"
    fi
fi

# Read test cache and format
test_str=""
if [ -f "$TEST_CACHE" ]; then
    IFS='|' read -r t_pass t_fail t_total < "$TEST_CACHE"
    if [ "$t_total" -gt 0 ] 2>/dev/null; then
        if [ "$t_fail" -gt 0 ] 2>/dev/null; then
            test_str="${red}${t_pass}/${t_total} ✗${rst}"
        else
            test_str="${grn}${t_total}/${t_total} ✓${rst}"
        fi
    fi
fi

# --- Daily cumulative cost (cached, 30s refresh) ---
COST_LOG="$HOME/.claude/statusline-cost-log"
COST_CACHE="/tmp/claude-statusline-cost-totals"
COST_CACHE_AGE=30

today=$(date +%Y-%m-%d)
echo "${today}|${session_id}|${cost_usd}" >> "$COST_LOG" 2>/dev/null

cost_totals_stale() {
    [ ! -f "$COST_CACHE" ] || \
    [ $(($(date +%s) - $(stat -f %m "$COST_CACHE" 2>/dev/null || stat -c %Y "$COST_CACHE" 2>/dev/null || echo 0))) -gt $COST_CACHE_AGE ]
}

daily_cost=""
if cost_totals_stale && [ -f "$COST_LOG" ]; then
    d_cost=$(awk -F'|' -v d="$today" '$1==d {a[$2]=$3} END {s=0; for(k in a) s+=a[k]; printf "%.2f",s}' "$COST_LOG" 2>/dev/null || echo "0")
    echo "${d_cost}" > "$COST_CACHE"
fi

if [ -f "$COST_CACHE" ]; then
    read -r daily_cost < "$COST_CACHE"
fi

cost_history_str=""
if [ -n "$daily_cost" ] && [ "$daily_cost" != "0.00" ] 2>/dev/null; then
    cost_history_str="today:\$${daily_cost}"
fi

# --- MCP server count ---
mcp_str=""
mcp_count=0
if [ -f "$HOME/.claude/settings.json" ]; then
    plugin_count=$(jq '[.enabledPlugins // {} | to_entries[] | select(.value == true)] | length' "$HOME/.claude/settings.json" 2>/dev/null || echo "0")
    server_count=$(jq '.mcpServers // {} | keys | length' "$HOME/.claude/settings.json" 2>/dev/null || echo "0")
    mcp_count=$((plugin_count + server_count))
fi
[ "$mcp_count" -gt 0 ] 2>/dev/null && mcp_str="${mcp_count}"

# --- Productivity score & rank ---
prod_str=""
rank_str=""
if [ "$(echo "$cost_usd > 0.01" | bc -l 2>/dev/null)" = "1" ]; then
    prod=$(echo "scale=0; $total_out / $cost_usd" | bc -l 2>/dev/null || echo "0")
    prod_str="${prod}tok/\$"
    if [ "$prod" -ge 50000 ] 2>/dev/null; then rank_str="🏆"
    elif [ "$prod" -ge 20000 ] 2>/dev/null; then rank_str="🥇"
    elif [ "$prod" -ge 10000 ] 2>/dev/null; then rank_str="🥈"
    elif [ "$prod" -ge 5000 ] 2>/dev/null; then rank_str="🥉"
    else rank_str=""; fi
fi

# --- Usage limits (Pro/Max only, macOS Keychain, cached 60s) ---
LIMITS_CACHE="/tmp/claude-statusline-limits-cache"
LIMITS_CACHE_AGE=60
limits_str=""

limits_stale() {
    [ ! -f "$LIMITS_CACHE" ] || \
    [ $(($(date +%s) - $(stat -f %m "$LIMITS_CACHE" 2>/dev/null || stat -c %Y "$LIMITS_CACHE" 2>/dev/null || echo 0))) -gt $LIMITS_CACHE_AGE ]
}

if limits_stale; then
    cred_json=$(security find-generic-password -s "Claude Code-credentials" -w 2>/dev/null || echo "")
    if [ -n "$cred_json" ]; then
        access_token=$(echo "$cred_json" | jq -r '.claudeAiOauth.accessToken // empty' 2>/dev/null)
        if [ -n "$access_token" ]; then
            usage_json=$(curl -sf --max-time 5 -H "Authorization: Bearer ${access_token}" \
                "https://api.anthropic.com/api/oauth/usage" 2>/dev/null || echo "")
            if [ -n "$usage_json" ]; then
                five_h=$(echo "$usage_json" | jq -r '.five_hour.utilization // empty' 2>/dev/null)
                seven_d=$(echo "$usage_json" | jq -r '.seven_day.utilization // empty' 2>/dev/null)
                echo "${five_h}|${seven_d}" > "$LIMITS_CACHE"
            fi
        fi
    fi
fi

if [ -f "$LIMITS_CACHE" ]; then
    IFS='|' read -r five_h seven_d < "$LIMITS_CACHE"
    if [ -n "$five_h" ] && [ -n "$seven_d" ]; then
        lim_color="${grn}"
        [ "$(echo "$five_h > 50" | bc -l 2>/dev/null)" = "1" ] && lim_color="${ylw}"
        [ "$(echo "$five_h > 80" | bc -l 2>/dev/null)" = "1" ] && lim_color="${red}"
        five_h_int=$(printf "%.0f" "$five_h" 2>/dev/null || echo "$five_h")
        seven_d_int=$(printf "%.0f" "$seven_d" 2>/dev/null || echo "$seven_d")
        limits_str="${lim_color}5h:${five_h_int}%${rst} ${dim}7d:${seven_d_int}%${rst}"
    fi
fi

# --- Build output ---
parts=""
parts+="📁 ${wht}${dir_name}${rst}"
parts+="${sep}${cyn}🤖 ${model_name}${rst}"
parts+="${sep}${bar} $(printf '%.0f' "$used_pct")%"
parts+="${sep}↑$(fmt $total_in) ↓$(fmt $total_out)"
parts+="${sep}💰 ${bld}${cost_str}${rst}"
[ -n "$cpm_str" ] && parts+=" ${dim}${cpm_str}${rst}"
[ -n "$cost_history_str" ] && parts+="${sep}📊 ${dim}${cost_history_str}${rst}"
[ -n "$burn_str" ] && parts+="${sep}🔥 ${dim}${burn_str}${rst}"
[ -n "$lines_str" ] && parts+="${sep}✏️  ${lines_str}"
[ -n "$cache_str" ] && parts+="${sep}💾 ${dim}${cache_str}${rst}"
[ -n "$prod_str" ] && parts+="${sep}⚡ ${dim}${prod_str}${rst}"
[ -n "$rank_str" ] && parts+=" ${rank_str}"
[ -n "$test_str" ] && parts+="${sep}🧪 ${test_str}"
[ -n "$git_str" ] && parts+="${sep}🌿 ${git_str}"
parts+="${sep}🕐 ${dim}$(date '+%H:%M')${rst}"
if [ -n "$eta" ]; then
    parts+="${sep}⏳ ${dim}${eta}${rst}"
fi
[ -n "$limits_str" ] && parts+="${sep}📡 ${limits_str}"

echo "$parts"