# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Breehavior Monitor (BCS) — a Python Discord bot that uses LLM-powered analysis to monitor chat toxicity, topic drift, coherence degradation, and game channel routing. It runs as a Docker container on barge.lan.
## Development Commands
```bash
# Local dev (requires .env with DISCORD_BOT_TOKEN, DB_CONNECTION_STRING, LLM vars)
python bot.py

# Local dev with Docker (bot + MSSQL)
docker compose up --build

# View logs
docker logs bcs-bot --tail 50
```
There are no tests or linting configured.
## Deployment
Production runs at `barge.lan:/mnt/docker/breehavior-monitor/`. The image is hosted on the Gitea registry (`git.thecozycat.net`).
```bash
# Full deploy (code + config)
git push origin master
docker build -t git.thecozycat.net/aj/breehavior-monitor:latest .
docker push git.thecozycat.net/aj/breehavior-monitor:latest
scp config.yaml aj@barge.lan:/mnt/docker/breehavior-monitor/config.yaml
ssh aj@barge.lan "cd /mnt/docker/breehavior-monitor && docker compose pull && docker compose up -d"

# Config-only deploy (no code changes)
scp config.yaml aj@barge.lan:/mnt/docker/breehavior-monitor/config.yaml
ssh aj@barge.lan "cd /mnt/docker/breehavior-monitor && docker compose restart bcs-bot"
```
## Architecture
### LLM Tier System
The bot uses three LLM client instances (`LLMClient` wrapping an OpenAI-compatible API):

- `bot.llm` (triage): cheap local model on athena.lan for first-pass sentiment analysis. Configured via `LLM_BASE_URL`, `LLM_MODEL`.
- `bot.llm_heavy` (escalation): more capable model for re-analysis when triage scores exceed `escalation_threshold` (0.25), and for admin commands (`/bcs-scan`, `/bcs-test`). Configured via `LLM_ESCALATION_*` env vars.
- `bot.llm_chat` (chat/roast): dedicated model for conversational replies and image roasts. Falls back to `llm_heavy` if `LLM_CHAT_MODEL` is not set.
LLM calls use OpenAI tool-calling for structured output (`ANALYSIS_TOOL`, `CONVERSATION_TOOL` in `utils/llm_client.py`). Chat uses streaming. All calls go through a semaphore for concurrency control.
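The semaphore pattern can be sketched as follows (a minimal illustration only; `run_batch`, `guarded_llm_call`, and the concurrency limit are hypothetical stand-ins, not names from the codebase):

```python
import asyncio

async def guarded_llm_call(sem: asyncio.Semaphore, prompt: str) -> str:
    """Hold the semaphore for the duration of one request."""
    async with sem:
        # Stand-in for the real OpenAI-compatible HTTP request.
        await asyncio.sleep(0.01)
        return f"analysis for: {prompt}"

async def run_batch(prompts: list[str], max_concurrent: int = 2) -> list[str]:
    """Fan out many requests while keeping at most N in flight."""
    sem = asyncio.Semaphore(max_concurrent)
    return await asyncio.gather(*(guarded_llm_call(sem, p) for p in prompts))
```

The key property is that callers can `gather` any number of analyses while the semaphore caps how many actually hit the LLM backend at once.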
### Cog Structure
- `cogs/sentiment.py` (`SentimentCog`): core moderation engine. Listens to all messages, debounces per channel (batches messages within `batch_window_seconds`), runs triage → escalation analysis, and issues warnings/mutes. Also handles mention-triggered conversation scans and game channel redirects. Flushes dirty user states to the DB every 5 minutes.
- `cogs/chat.py` (`ChatCog`): conversational AI. Responds to @mentions and replies to bot messages, and sends proactive replies based on the mode config. Handles image roasts via the vision model. Strips leaked LLM metadata brackets from responses.
- `cogs/commands.py` (`CommandsCog`): slash commands: `/dramareport`, `/dramascore`, `/bcs-status`, `/bcs-threshold`, `/bcs-reset`, `/bcs-immune`, `/bcs-history`, `/bcs-scan`, `/bcs-test`, `/bcs-notes`, `/bcs-mode`.
### Key Utilities
- `utils/drama_tracker.py`: in-memory per-user state (toxicity entries, offense counts, coherence baselines, LLM notes). Rolling window with time- and size-based pruning. Weighted scoring with a post-warning escalation boost. Hydrated from the DB on startup.
- `utils/database.py`: MSSQL via pyodbc. Schema auto-creates/migrates on init. Per-operation connections (no pool). Tables: `Messages`, `AnalysisResults`, `Actions`, `UserState`, `BotSettings`, `LlmLog`. Gracefully degrades to memory-only mode if the DB is unavailable.
- `utils/llm_client.py`: OpenAI-compatible client. Methods: `analyze_message` (single), `analyze_conversation` (batch/mention scan), `chat` (streaming), `analyze_image` (vision), `raw_analyze` (debug). All calls are logged to the `LlmLog` table.
### Mode System
Modes are defined in `config.yaml` under `modes:` and control personality, moderation level, and proactive reply behavior. Each mode specifies a `prompt_file` from `prompts/`, a moderation level (`full`, or `relaxed` with custom thresholds), and a reply chance. Modes persist across restarts via the `BotSettings` table and are changed via the `/bcs-mode` command.
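A mode entry might look roughly like the sketch below. Only `modes:` and `prompt_file` are confirmed by this document; the mode names and the other keys (`moderation`, `escalation_threshold`, `reply_chance`) are illustrative guesses at the shape, not the actual schema:

```yaml
modes:
  default:
    prompt_file: default.txt      # loaded from prompts/
    moderation: full              # standard thresholds
    reply_chance: 0.0             # no proactive replies
  party:                          # hypothetical relaxed mode
    prompt_file: party.txt
    moderation: relaxed
    escalation_threshold: 0.5     # custom threshold under relaxed moderation
    reply_chance: 0.15            # occasionally replies unprompted
```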
### Moderation Flow
- Message arrives → `SentimentCog` buffers it (per-channel debounce)
- After `batch_window_seconds`, buffered messages are analyzed as a conversation block
- Triage model scores each user → if any score >= `escalation_threshold`, re-analyze with the heavy model
- Results feed into the `DramaTracker` rolling window → weighted drama score calculated
- Warning if score >= threshold AND the user hasn't been warned recently
- Mute (timeout) if score >= mute threshold AND the user was already warned (a warning is required first)
- Post-warning escalation: each subsequent high-scoring message adds `escalation_boost` to the drama score
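The warn/mute decision at the end of the flow can be sketched as a pure function (a simplified illustration; the threshold values and exact state checks are hypothetical, the real ones live in `config.yaml` and the tracker state):

```python
def decide_action(score: float, warned_recently: bool, ever_warned: bool,
                  warn_threshold: float = 0.6, mute_threshold: float = 0.85) -> str:
    """Return "mute", "warn", or "none" for a user's current drama score."""
    if score >= mute_threshold and ever_warned:
        return "mute"  # a timeout requires a prior warning
    if score >= warn_threshold and not warned_recently:
        return "warn"
    return "none"
```

Note the ordering: a high score with no prior warning still only produces a warning, which matches the "requires warning first" rule above.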
### Prompts
`prompts/*.txt` files are loaded at import time and cached. The analysis system prompt (`analysis.txt`) defines scoring bands and rules. Chat personality prompts are per-mode. Changes to prompt files require a container rebuild.
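The load-once-and-cache behavior is the reason prompt edits need a rebuild: once a file is read, later calls never touch disk again. A minimal sketch of that pattern (the function name and caching strategy here are illustrative, not the actual code):

```python
from functools import lru_cache
from pathlib import Path

PROMPTS_DIR = Path("prompts")  # matches the prompts/ layout described above

@lru_cache(maxsize=None)
def load_prompt(name: str) -> str:
    """Read a prompt file once; subsequent calls return the cached text."""
    return (PROMPTS_DIR / name).read_text(encoding="utf-8")
```

Because the cache key is just the filename, editing the file on disk has no effect on a running process.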
## Environment Variables
Key vars in `.env`: `DISCORD_BOT_TOKEN`, `DB_CONNECTION_STRING`, `LLM_BASE_URL`, `LLM_MODEL`, `LLM_API_KEY`, `LLM_ESCALATION_BASE_URL`, `LLM_ESCALATION_MODEL`, `LLM_ESCALATION_API_KEY`, `LLM_CHAT_BASE_URL`, `LLM_CHAT_MODEL`, `LLM_CHAT_API_KEY`, `MSSQL_SA_PASSWORD`.
## Important Patterns
- DB operations use `asyncio.to_thread()` wrapping synchronous pyodbc calls
- Fire-and-forget DB writes use `asyncio.create_task()`
- Single-instance guard via TCP port binding (`BCS_LOCK_PORT`, default 39821)
- `config.yaml` is volume-mounted in production, not baked into the image
- The bot uses `network_mode: host` in Docker to reach LAN services
- Models that don't support temperature (reasoning models like o1/o3/o4-mini) are handled via the `_NO_TEMPERATURE_MODELS` set
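The TCP-port single-instance guard can be sketched as follows (a minimal illustration of the technique; the function name is hypothetical and the actual code in `bot.py` may differ in detail):

```python
import socket

def acquire_single_instance_lock(port: int = 39821) -> socket.socket:
    """Bind a localhost TCP port as a cross-process mutex.

    If another instance already holds the port, bind() raises OSError
    and this process should exit instead of starting a second bot.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", port))
    sock.listen(1)
    return sock  # keep the reference alive for the process lifetime
```

The returned socket must be held for the life of the process; the OS releases the port automatically if the bot crashes, so no stale lockfile cleanup is needed.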