Commit Graph

37 Commits

SHA1 Message Date
3261cdd21c Fix proactive replies appearing before the triggering message
Proactive replies used channel.send(), which posted standalone messages
with no visual link to what triggered them. Now all replies use
message.reply() so the response is always attached to the source message.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 13:35:40 -05:00
3f9dfb1e74 Fix reaction clap-backs replying to the bot's own message
Send as a channel message instead of message.reply() so it doesn't
look like the bot is talking to itself.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 12:32:08 -05:00
86b23c2b7f Let users @ the bot on a message to make it respond
Reply to any message + @bot to have the bot read and respond to it.
Also picks up image attachments from referenced messages so users
can reply to a photo with "@bot roast this".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 12:24:26 -05:00
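The referenced-message handling above might be assembled along these lines. This is a sketch only: the helper name, the dict shape, and the content-type filter are illustrative, with plain dicts standing in for discord.py Message objects so the logic is self-contained.

```python
# Sketch of collecting context when a user replies to a message and @mentions
# the bot. Field names (content, attachments, content_type, url) mirror
# discord.py's Message/Attachment, but plain dicts are used here.

IMAGE_TYPES = ("image/png", "image/jpeg", "image/webp", "image/gif")

def build_mention_context(message, referenced):
    """Merge the @mention text with the replied-to message and its images."""
    parts = [message["content"]]
    image_urls = []
    if referenced is not None:
        parts.append(f'Replying to: "{referenced["content"]}"')
        for att in referenced.get("attachments", []):
            # Pick up image attachments so "@bot roast this" works on photos
            if att.get("content_type") in IMAGE_TYPES:
                image_urls.append(att["url"])
    return "\n".join(parts), image_urls
```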
8a06ddbd6e Support hybrid LLM: local Qwen triage + OpenAI escalation
Triage analysis runs on Qwen 8B (athena.lan) as a free first pass.
Escalation, chat, image roasts, and commands use GPT-4o via OpenAI.

Each tier gets its own base URL, API key, and concurrency settings.
Local models get /no_think and serialized requests automatically.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 12:20:07 -05:00
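The per-tier configuration described above could look like the following sketch. The class, field names, and the ".lan" local-detection rule are assumptions for illustration; the commit only states that each tier carries its own base URL, API key, and concurrency, and that local models get /no_think and serialized requests.

```python
# Illustrative per-tier LLM configuration; not the bot's actual config code.
import asyncio
from dataclasses import dataclass, field

@dataclass
class LlmTier:
    base_url: str          # e.g. "http://athena.lan:8080/v1" or OpenAI's endpoint
    api_key: str
    model: str
    max_concurrency: int = 4
    _sem: asyncio.Semaphore = field(init=False, repr=False)

    def __post_init__(self):
        # Local models are serialized automatically: one request at a time
        if self.is_local:
            self.max_concurrency = 1
        self._sem = asyncio.Semaphore(self.max_concurrency)

    @property
    def is_local(self) -> bool:
        return ".lan" in self.base_url

    def prepare_prompt(self, prompt: str) -> str:
        # Qwen-style local models get /no_think appended to skip reasoning
        return f"{prompt} /no_think" if self.is_local else prompt

triage = LlmTier("http://athena.lan:8080/v1", "none", "qwen3-8b")
escalation = LlmTier("https://api.openai.com/v1", "sk-placeholder", "gpt-4o")
```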
b5e401f036 Generalize image roast to handle selfies, memes, and any image
The prompt was scoreboard-only, so selfies got nonsensical stat-based
roasts. Now the LLM identifies what's in the image and roasts accordingly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 12:15:22 -05:00
28fb66d5f9 Switch LLM backend from llama.cpp/Qwen to OpenAI
- Default models: gpt-4o-mini (triage), gpt-4o (escalation)
- Remove Qwen-specific /no_think hacks
- Reduce timeout from 600s to 120s, increase concurrency semaphore to 4
- Support empty LLM_BASE_URL to use OpenAI directly

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 12:07:53 -05:00
a9bc24e48e Tune English teacher to catch more errors, bump roast reply chance
- Raised sentence limit from 3 to 5 for English teacher mode
- Added instruction to list multiple corrections rapid-fire
- Roast mode reply chance: 10% -> 35%

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 11:03:03 -05:00
431d63da72 Fix metadata leaking and skip sentiment for bot-directed messages
1. Broader regex to strip leaked metadata even when the LLM drops
   the "Server context:" prefix but keeps the content.

2. Skip sentiment analysis for messages that mention or reply to
   the bot. Users interacting with the bot in roast/chat modes
   shouldn't have those messages inflate their drama score.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 10:52:33 -05:00
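The "broader regex" idea might look like the sketch below. The actual pattern is not in the log; this one just shows matching a whole bracketed metadata line whether or not the "Server context:" prefix survives.

```python
# Illustrative metadata-stripping regex, not the bot's real pattern.
import re

METADATA_RE = re.compile(
    r"^\[(?:Server context:)?[^\]]*\]\s*$",  # full-line bracketed metadata
    re.MULTILINE,
)

def strip_metadata(reply: str) -> str:
    return METADATA_RE.sub("", reply).strip()
```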
7743b22795 Add reaction clap-back replies (50% chance)
When someone reacts to the bot's message, there's a 50% chance it
fires back with a reply commenting on their emoji choice, in
character for the current mode.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 10:48:13 -05:00
86aacfb84f Add 120s timeout to image analysis streaming
The streaming loop had no timeout, so if the vision model never
returned chunks the request hung indefinitely and froze the bot.
Now it times out after 2 minutes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 10:37:37 -05:00
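Bounding a streaming loop like this can be done by wrapping the whole drain in asyncio.wait_for, as in the sketch below; the function name and the exact placement of the timeout are assumptions, not the bot's actual code.

```python
# Sketch: abort a chunk-streaming loop if the whole stream exceeds a timeout.
import asyncio

async def consume_stream(stream, timeout=120):
    """Collect chunks, raising asyncio.TimeoutError if the stream stalls."""
    async def _drain():
        chunks = []
        async for chunk in stream:
            chunks.append(chunk)
        return "".join(chunks)
    return await asyncio.wait_for(_drain(), timeout=timeout)
```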
e1dea84d08 Strip leaked metadata from LLM responses
The local LLM was echoing back [Server context: ...] metadata lines
in its responses despite prompt instructions not to. Now stripped
via regex before sending to Discord.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 10:23:49 -05:00
c3274dc702 Add announce script for posting to Discord channels
Usage: ./scripts/announce.sh "message" [channel_name]
Fetches the bot token from barge, resolves channel by name,
and posts via the Discord API. Defaults to #general.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 10:11:27 -05:00
4283078e23 Add English teacher mode
Insufferable grammar nerd that corrects spelling, translates slang
into proper English, and overanalyzes messages like literary essays.
20% proactive reply chance with relaxed moderation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 10:06:31 -05:00
b6cdea7329 Include replied-to message text in LLM context
When a user replies to the bot's message, the original bot message
text is now included in the context sent to the LLM. This prevents
the LLM from misinterpreting follow-up questions like "what does
this even mean?" since it can see what message is being referenced.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 09:59:51 -05:00
66ca97760b Add context format explanation to chat prompts
The LLM was misinterpreting usernames as channel names because
the [Server context: ...] metadata format was never explained
in the system prompts. This caused nonsensical replies like
treating username "thelimitations" as "the limitations channel".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 09:54:08 -05:00
0feef708ea Set bot status from active mode on startup
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 09:27:34 -05:00
b050c6f844 Set default startup mode to roast
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 09:26:49 -05:00
6e1a73847d Persist bot mode across restarts via database
Adds a BotSettings key-value table. The active mode is saved
when changed via /bcs-mode and restored on startup.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 09:26:00 -05:00
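A key-value table like BotSettings reduces to a small save/load pair. The sketch below uses SQLite syntax for self-containment; the bot actually persists to SQL Server, so treat the SQL and helper names as illustrative. The "roast" default mirrors the later default-mode commit.

```python
# Minimal key-value settings sketch (SQLite stand-in for SQL Server).
import sqlite3

def init_settings(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS BotSettings (key TEXT PRIMARY KEY, value TEXT)"
    )

def save_mode(conn, mode: str):
    # Called when /bcs-mode changes the active mode
    conn.execute(
        "INSERT INTO BotSettings (key, value) VALUES ('mode', ?) "
        "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
        (mode,),
    )

def load_mode(conn, default: str = "roast") -> str:
    # Called on startup to restore the last active mode
    row = conn.execute("SELECT value FROM BotSettings WHERE key = 'mode'").fetchone()
    return row[0] if row else default
```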
622f0a325b Add auto-polls to settle disagreements between users
LLM analysis now detects when two users are in a genuine
disagreement. When detected, the bot creates a native Discord
poll with each user's position as an option.

- Disagreement detection added to LLM analysis tool schema
- Polls last 4 hours with 1 hour per-channel cooldown
- LLM extracts topic, both positions, and usernames
- Configurable via polls section in config.yaml

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 09:22:32 -05:00
13a2030021 Add switchable bot modes: default, chatty, and roast
Adds a server-wide mode system with /bcs-mode command.
- Default: current hall-monitor behavior unchanged
- Chatty: friendly chat participant with proactive replies (~10% chance)
- Roast: savage roast mode with proactive replies
- Chatty/roast use relaxed moderation thresholds
- 5-message cooldown between proactive replies per channel
- Bot status updates to reflect active mode
- /bcs-status shows current mode and effective thresholds

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 08:59:51 -05:00
3f56982a83 Simplify user notes trimming to keep last 10 lines
Replace character-based truncation loop with a simple line count cap.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 23:43:36 -05:00
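The line-count cap replacing the character-based loop is essentially a one-liner; names here are illustrative.

```python
# Keep only the most recent lines of a user's notes; older history is dropped.
MAX_NOTE_LINES = 10

def trim_notes(notes: str) -> str:
    return "\n".join(notes.splitlines()[-MAX_NOTE_LINES:])
```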
d41873230d Reduce repetitive drama score mentions in chat replies
Only inject drama score/offense context when values are noteworthy
(score >= 0.2 or offenses > 0). Update personality prompt to avoid
harping on zero scores and vary responses more.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 22:57:25 -05:00
b04d3da2bf Add LLM request/response logging to database
Log every LLM call (analysis, chat, image, raw_analyze) to a new
LlmLog table with request type, model, token counts, duration,
success/failure, and truncated request/response payloads. Enables
debugging prompt issues and tracking usage.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 22:55:19 -05:00
fd798ce027 Silently log LLM failures instead of replying to user
When the LLM is offline, post to #bcs-log instead of sending
the "brain offline" message in chat.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 16:55:07 -05:00
85ddba5e4b Lower mute thresholds and order warnings before chat replies
- spike_mute: 0.8→0.7, mute: 0.75→0.65 so escalating users get
  timed out after a warning instead of endlessly warned
- Skip debounce on @mentions so sentiment analysis fires immediately
- Chat cog awaits pending sentiment analysis before replying,
  ensuring warnings/mutes appear before the personality response

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 14:16:34 -05:00
e2404d052c Improve LLM context with full timestamped channel history
Send last ~8 messages from all users (not just others) as a
multi-line chat log with relative timestamps so the LLM can
better understand conversation flow and escalation patterns.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 14:04:30 -05:00
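A timestamped multi-line chat log like the one described might be built as below. The exact format the bot uses is not shown in the log; this sketch just illustrates the "relative timestamp + author + text" idea over the last ~8 messages.

```python
# Sketch: format recent channel history as a relative-timestamped chat log.
from datetime import datetime, timedelta, timezone

def format_history(messages, now=None):
    """messages: list of (created_at, author, content) tuples, oldest first."""
    now = now or datetime.now(timezone.utc)
    lines = []
    for created_at, author, content in messages[-8:]:  # last ~8 messages
        age = int((now - created_at).total_seconds())
        rel = f"{age}s ago" if age < 60 else f"{age // 60}m ago"
        lines.append(f"[{rel}] {author}: {content}")
    return "\n".join(lines)
```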
b9bac899f9 Add two-tier LLM analysis with triage/escalation
Triage model (LLM_MODEL) handles every message cheaply. If toxicity
>= 0.25, off_topic, or coherence < 0.6, the message is re-analyzed
with the heavy model (LLM_ESCALATION_MODEL). Chat, image analysis,
/bcs-test, and /bcs-scan always use the heavy model.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 18:33:36 -05:00
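The escalation rule stated above, written out as a predicate (the thresholds come straight from the commit message; the function name is illustrative):

```python
# Re-analyze with the heavy model when triage flags any of these conditions.
def needs_escalation(toxicity: float, off_topic: bool, coherence: float) -> bool:
    return toxicity >= 0.25 or off_topic or coherence < 0.6
```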
64e9474c99 Add message batching (debounce) for rapid-fire senders
Buffer messages per user+channel and wait for a configurable window
(batch_window_seconds: 3) before analyzing. Combines burst messages
into a single LLM call instead of analyzing each one separately.
Replaces cooldown_between_analyses with the debounce approach.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 18:19:01 -05:00
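The debounce described above can be sketched as a per-key buffer plus a restartable timer: each new message resets the clock, and the batch is flushed as one call once the sender goes quiet for the window. Class and method names are illustrative.

```python
# Sketch of per-user+channel message debouncing for batched LLM analysis.
import asyncio
from collections import defaultdict

class Debouncer:
    def __init__(self, flush, window=3.0):
        self.flush = flush                # async callback receiving the batch
        self.window = window              # batch_window_seconds
        self.buffers = defaultdict(list)  # (user_id, channel_id) -> messages
        self.timers = {}

    def add(self, user_id, channel_id, content):
        key = (user_id, channel_id)
        self.buffers[key].append(content)
        if key in self.timers:            # new message arrived: restart the clock
            self.timers[key].cancel()
        self.timers[key] = asyncio.create_task(self._wait(key))

    async def _wait(self, key):
        await asyncio.sleep(self.window)  # quiet window elapsed: flush the batch
        batch = self.buffers.pop(key)
        del self.timers[key]
        await self.flush(batch)
```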
cf02da4051 Add CLAUDE.md with deployment instructions
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 17:09:19 -05:00
fee3e3e1bd Add game channel redirect feature and sexual_vulgar detection
Detect when users discuss a game in the wrong channel (e.g. GTA talk
in #warzone) and send a friendly redirect to the correct channel.
Also add sexual_vulgar category and scoring rules so crude sexual
remarks directed at someone aren't softened by "lmao".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 17:02:59 -05:00
e41845de02 Add scoreboard roast feature via image analysis
When @mentioned with an image attachment, the bot now roasts players
based on scoreboard screenshots using the vision model. Text-only
mentions continue to work as before.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 16:30:26 -05:00
cf88f003ba Add LLM warm-up request at startup to preload model into VRAM
Sends a minimal 1-token completion during setup_hook so the model is
ready before Discord messages start arriving, avoiding connection
errors and slow first responses after a restart.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 15:16:52 -05:00
b410200146 Add max_tokens=1024 to LLM analysis calls
The analyze_message and raw_analyze methods had no max_tokens limit,
causing thinking models (Qwen3-VL-32B-Thinking) to generate unlimited
reasoning tokens before responding — taking 5+ minutes per message.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 14:17:59 -05:00
1151b705c0 Add LLM request queue, streaming chat, and rename ollama_client to llm_client
- Serialize all LLM requests through an asyncio semaphore to prevent
  overloading athena with concurrent requests
- Switch chat() to streaming so the typing indicator only appears once
  the model starts generating (not during thinking/loading)
- Increase LLM timeout from 5 to 10 minutes for slow first loads
- Rename ollama_client.py to llm_client.py and self.ollama to self.llm
  since the bot uses a generic OpenAI-compatible API
- Update embed labels from "Ollama" to "LLM"

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 13:45:12 -05:00
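Serializing all requests through an asyncio semaphore, as this commit describes, can be sketched as below. The class is illustrative (the sleep stands in for the HTTP call, and the counters exist only to demonstrate that requests never overlap).

```python
# Sketch: funnel every LLM request through a semaphore so the backend
# (athena) never sees concurrent requests.
import asyncio

class LlmClient:
    def __init__(self, max_concurrent=1):
        self._sem = asyncio.Semaphore(max_concurrent)
        self.active = 0        # requests currently "in flight"
        self.max_seen = 0      # high-water mark, to verify serialization

    async def request(self, prompt):
        async with self._sem:  # callers queue here, one at a time
            self.active += 1
            self.max_seen = max(self.max_seen, self.active)
            await asyncio.sleep(0.01)  # stands in for the HTTP call
            self.active -= 1
            return f"reply:{prompt}"
```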
645b924011 Extract LLM prompts to separate text files and fix quoting penalty
Move the analysis and chat personality system prompts from inline Python
strings to prompts/analysis.txt and prompts/chat_personality.txt for
easier editing. Also add a rule so users quoting/reporting what someone
else said are not penalized for the quoted words.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 12:19:28 -05:00
63b4b3adb8 Fix missing libgssapi-krb5-2 dependency in Docker image
The ODBC driver failed to load at runtime because libgssapi_krb5.so.2
was not installed. Add it explicitly to the apt-get install step.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 22:48:15 -05:00
a35705d3f1 Initial commit: Breehavior Monitor Discord bot
Discord bot for monitoring chat sentiment and tracking drama using
Ollama LLM on athena.lan. Includes sentiment analysis, slash commands,
drama tracking, and SQL Server persistence via Docker Compose.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 22:39:40 -05:00