CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Project Overview

Breehavior Monitor (BCS) — a Python Discord bot that uses LLM-powered analysis to monitor chat toxicity, topic drift, coherence degradation, and game channel routing. It runs as a Docker container on barge.lan.

Development Commands

# Local dev (requires .env with DISCORD_BOT_TOKEN, DB_CONNECTION_STRING, LLM vars)
python bot.py

# Local dev with Docker (bot + MSSQL)
docker compose up --build

# View logs
docker logs bcs-bot --tail 50

There are no tests or linting configured.

Deployment

Production runs at barge.lan:/mnt/docker/breehavior-monitor/. Image hosted on Gitea registry.

# Full deploy (code + config)
git push origin master
docker build -t git.thecozycat.net/aj/breehavior-monitor:latest .
docker push git.thecozycat.net/aj/breehavior-monitor:latest
scp config.yaml aj@barge.lan:/mnt/docker/breehavior-monitor/config.yaml
ssh aj@barge.lan "cd /mnt/docker/breehavior-monitor && docker compose pull && docker compose up -d"

# Config-only deploy (no code changes)
scp config.yaml aj@barge.lan:/mnt/docker/breehavior-monitor/config.yaml
ssh aj@barge.lan "cd /mnt/docker/breehavior-monitor && docker compose restart bcs-bot"

Architecture

LLM Tier System

The bot uses three LLM client instances (LLMClient, a wrapper around an OpenAI-compatible API):

  • bot.llm (triage): Cheap local model on athena.lan for first-pass sentiment analysis. Configured via LLM_BASE_URL, LLM_MODEL.
  • bot.llm_heavy (escalation): More capable model used for re-analysis when a triage score exceeds escalation_threshold (0.25) and for admin commands (/bcs-scan, /bcs-test). Configured via LLM_ESCALATION_* env vars.
  • bot.llm_chat (chat/roast): Dedicated model for conversational replies and image roasts. Falls back to llm_heavy if LLM_CHAT_MODEL is not set.
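The chat-tier fallback can be sketched as a simple env-var resolution; the function name and dict-based env are illustrative, not the bot's actual code:

```python
def resolve_chat_model(env: dict) -> str:
    """Pick the chat model, falling back to the escalation (heavy) model
    when LLM_CHAT_MODEL is not set. Hypothetical helper for illustration."""
    return env.get("LLM_CHAT_MODEL") or env["LLM_ESCALATION_MODEL"]
```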

LLM calls use OpenAI tool-calling for structured output (ANALYSIS_TOOL, CONVERSATION_TOOL in utils/llm_client.py). Chat uses streaming. All calls go through a semaphore for concurrency control.
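The semaphore gating can be sketched as below; this is a minimal stand-in (the real utils/llm_client.py makes HTTP calls and differs in detail):

```python
import asyncio

class LLMClient:
    """Minimal sketch: every call acquires a shared semaphore, capping
    the number of in-flight LLM requests regardless of caller."""
    def __init__(self, max_concurrency: int = 2):
        self._sem = asyncio.Semaphore(max_concurrency)

    async def analyze(self, prompt: str) -> str:
        async with self._sem:        # at most max_concurrency concurrent calls
            await asyncio.sleep(0)   # stand-in for the actual HTTP request
            return f"analysis:{prompt}"

async def main():
    client = LLMClient(max_concurrency=2)
    # Five calls queue up behind the semaphore but still return in order.
    return await asyncio.gather(*(client.analyze(f"m{i}") for i in range(5)))
```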

Cog Structure

  • cogs/sentiment.py (SentimentCog): Core moderation engine. Listens to all messages, debounces per-channel (batches messages within batch_window_seconds), runs triage → escalation analysis, issues warnings/mutes. Also handles mention-triggered conversation scans and game channel redirects. Flushes dirty user states to DB every 5 minutes.
  • cogs/chat.py (ChatCog): Conversational AI. Responds to @mentions, replies to bot messages, proactive replies based on mode config. Handles image roasts via vision model. Strips leaked LLM metadata brackets from responses.
  • cogs/commands.py (CommandsCog): Slash commands — /dramareport, /dramascore, /bcs-status, /bcs-threshold, /bcs-reset, /bcs-immune, /bcs-history, /bcs-scan, /bcs-test, /bcs-notes, /bcs-mode.
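The per-channel debounce in SentimentCog can be sketched as a buffer that flushes once batch_window_seconds have elapsed since a channel's first buffered message. Class and method names here are assumptions for illustration, not the cog's code:

```python
from collections import defaultdict

class ChannelBatcher:
    """Sketch of per-channel debouncing: messages accumulate per channel
    and are released as one block after the batch window elapses."""
    def __init__(self, batch_window_seconds: float):
        self.window = batch_window_seconds
        self.buffers = defaultdict(list)  # channel_id -> [(timestamp, text)]

    def add(self, channel_id: int, text: str, now: float) -> None:
        self.buffers[channel_id].append((now, text))

    def flush_ready(self, now: float) -> dict:
        """Return and clear buffers whose oldest message has aged past the window."""
        ready = {}
        for cid, msgs in list(self.buffers.items()):
            if msgs and now - msgs[0][0] >= self.window:
                ready[cid] = [text for _, text in msgs]
                del self.buffers[cid]
        return ready
```

Each flushed list would then be analyzed as one conversation block by the triage model.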

Key Utilities

  • utils/drama_tracker.py: In-memory per-user state (toxicity entries, offense counts, coherence baselines, LLM notes). Rolling window with time + size pruning. Weighted scoring with post-warning escalation boost. Hydrated from DB on startup.
  • utils/database.py: MSSQL via pyodbc. Schema auto-creates/migrates on init. Per-operation connections (no pool). Tables: Messages, AnalysisResults, Actions, UserState, BotSettings, LlmLog. Gracefully degrades to memory-only mode if DB unavailable.
  • utils/llm_client.py: OpenAI-compatible client. Methods: analyze_message (single), analyze_conversation (batch/mention scan), chat (streaming), analyze_image (vision), raw_analyze (debug). All calls logged to LlmLog table.
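The rolling window with time + size pruning in drama_tracker might look like the sketch below; the weighting scheme and field names are assumptions (the real tracker applies weighted scoring, not a plain sum):

```python
from collections import deque

class RollingWindow:
    """Sketch of DramaTracker-style pruning: entries drop out once the
    window exceeds max_size, or once they age past max_age_seconds."""
    def __init__(self, max_age_seconds: float, max_size: int):
        self.max_age = max_age_seconds
        self.max_size = max_size
        self.entries = deque()  # (timestamp, toxicity_score)

    def add(self, ts: float, score: float) -> None:
        self.entries.append((ts, score))
        while len(self.entries) > self.max_size:
            self.entries.popleft()  # size pruning: oldest entries fall off

    def score(self, now: float) -> float:
        # time pruning happens lazily at scoring time
        while self.entries and now - self.entries[0][0] > self.max_age:
            self.entries.popleft()
        return sum(s for _, s in self.entries)  # real tracker weights entries
```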

Mode System

Modes are defined in config.yaml under modes: and control personality, moderation level, and proactive reply behavior. Each mode specifies a prompt_file from prompts/, a moderation level (full, or relaxed with custom thresholds), and a reply chance. Modes persist across restarts via the BotSettings table and are changed via the /bcs-mode command.
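Threshold resolution for relaxed modes might work as sketched below. The mode dict shape and key names (moderation, thresholds, reply_chance) are assumptions based on the description above, not the actual config.yaml schema:

```python
# Hypothetical in-memory shape of two modes: entries after YAML parsing.
MODES = {
    "default": {"prompt_file": "default.txt", "moderation": "full",
                "reply_chance": 0.02},
    "party":   {"prompt_file": "party.txt", "moderation": "relaxed",
                "thresholds": {"warn": 0.8, "mute": 0.95},
                "reply_chance": 0.10},
}

DEFAULT_THRESHOLDS = {"warn": 0.5, "mute": 0.8}  # illustrative values

def thresholds_for(mode_name: str) -> dict:
    """Relaxed modes may override individual thresholds; full moderation
    always uses the defaults."""
    mode = MODES[mode_name]
    if mode["moderation"] == "relaxed":
        return {**DEFAULT_THRESHOLDS, **mode.get("thresholds", {})}
    return dict(DEFAULT_THRESHOLDS)
```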

Moderation Flow

  1. Message arrives → SentimentCog buffers it (debounce per channel)
  2. After batch_window_seconds, buffered messages analyzed as conversation block
  3. Triage model scores each user → if any score >= escalation_threshold, re-analyze with heavy model
  4. Results feed into DramaTracker rolling window → weighted drama score calculated
  5. Warning if score >= threshold AND user hasn't been warned recently
  6. Mute (timeout) if score >= mute threshold AND user was already warned (requires warning first)
  7. Post-warning escalation: each subsequent high-scoring message adds escalation_boost to drama score
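Steps 5–6 above collapse to a small decision function: a mute always requires a prior warning, so a first-time offender at mute-level score still only gets a warning. Threshold values below are illustrative:

```python
def moderation_action(score: float, warned: bool,
                      warn_threshold: float = 0.5,
                      mute_threshold: float = 0.8) -> str:
    """Sketch of the warn-before-mute gate described in the flow above."""
    if score >= mute_threshold and warned:
        return "mute"            # timeout: only after an earlier warning
    if score >= warn_threshold and not warned:
        return "warn"            # first strike, regardless of severity
    return "none"
```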

Prompts

prompts/*.txt are loaded at import time and cached. The analysis system prompt (analysis.txt) defines scoring bands and rules. Chat personality prompts are per-mode. Changes to prompt files require a container rebuild.

Environment Variables

Key vars in .env: DISCORD_BOT_TOKEN, DB_CONNECTION_STRING, LLM_BASE_URL, LLM_MODEL, LLM_API_KEY, LLM_ESCALATION_BASE_URL, LLM_ESCALATION_MODEL, LLM_ESCALATION_API_KEY, LLM_CHAT_BASE_URL, LLM_CHAT_MODEL, LLM_CHAT_API_KEY, MSSQL_SA_PASSWORD.

Important Patterns

  • DB operations use asyncio.to_thread() wrapping synchronous pyodbc calls
  • Fire-and-forget DB writes use asyncio.create_task()
  • Single-instance guard via TCP port binding (BCS_LOCK_PORT, default 39821)
  • config.yaml is volume-mounted in production, not baked into the image
  • Bot uses network_mode: host in Docker to reach LAN services
  • Models that don't support temperature (reasoning models like o1/o3/o4-mini) are handled via _NO_TEMPERATURE_MODELS set
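The TCP-port single-instance guard can be sketched as below: binding a port is an OS-enforced mutex, so a second bot process fails fast instead of double-processing messages. The function name and error handling are assumptions; the default port comes from the list above:

```python
import socket

def acquire_single_instance_lock(port: int = 39821) -> socket.socket:
    """Bind a loopback TCP port as a process-wide lock (BCS_LOCK_PORT
    pattern). The returned socket must stay referenced for the process
    lifetime so the port remains held."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("127.0.0.1", port))
    except OSError:
        sock.close()
        raise SystemExit(f"another instance already holds port {port}")
    return sock
```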