last_offense_time was in-memory only — lost on restart, so the
offense_reset_minutes check never fired after a reboot. Now persisted
as LastOffenseAt FLOAT in UserState. On startup hydration, stale
offenses (and the warned flag) are auto-cleared if the reset window has
passed. Bumped offense_reset_minutes from 2h to 24h.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Gate mutes behind a prior warning — first offense always gets a warning,
mute only fires if warned_since_reset is True. Warned flag is persisted
to DB (new Warned column on UserState) and survives restarts.
Add post-warning escalation boost to drama_score: each high-scoring
message after a warning adds +0.04 (configurable) so sustained bad
behavior ramps toward the mute threshold instead of plateauing.
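The warn-then-mute gate plus the escalation boost can be sketched as one decision function; the +0.04 boost and warned_since_reset name come from the commit, while the threshold values are placeholders:

```python
# Hedged sketch of the warn-gated mute with post-warning escalation.
WARN_THRESHOLD = 0.40   # placeholder value
MUTE_THRESHOLD = 0.65   # placeholder value
ESCALATION_BOOST = 0.04  # configurable per the commit

def decide_action(drama_score: float, warned_since_reset: bool) -> tuple[str, float]:
    """Return (action, adjusted_score). First offense only warns; a mute
    can fire only after the user has been warned since the last reset."""
    if warned_since_reset and drama_score >= WARN_THRESHOLD:
        # Sustained bad behavior ramps toward the mute threshold.
        drama_score = min(1.0, drama_score + ESCALATION_BOOST)
    if warned_since_reset and drama_score >= MUTE_THRESHOLD:
        return "mute", drama_score
    if drama_score >= WARN_THRESHOLD:
        return "warn", drama_score
    return "none", drama_score
```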
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Switch from per-user message batching to per-channel conversation
analysis. The LLM now sees the full interleaved conversation with
relative timestamps, reply chains, and consecutive message collapsing
instead of isolated flat text per user.
Key changes:
- Fix gpt-5-nano temperature incompatibility (conditional temp param)
- Add mention-triggered scan: users @mention the bot to analyze recent chat
- Refactor debounce buffer from (channel_id, user_id) to channel_id
- Replace per-message analyze_message() with analyze_conversation()
returning per-user findings from a single LLM call
- Add CONVERSATION_TOOL schema with coherence, topic, and game fields
- Compact message format: relative timestamps, reply arrows (→),
consecutive same-user message collapsing
- Separate mention scan tasks from debounce tasks
- Remove _store_context/_get_context (conversation block IS the context)
- Escalation timeout config: [30, 60, 120, 240] minutes
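The compact conversation format can be sketched as a small formatter; the message fields and exact layout here are illustrative, not the bot's code:

```python
# Minimal sketch of the compact format: relative timestamps, reply
# arrows, and consecutive same-user collapsing.
def format_conversation(messages: list[dict]) -> str:
    lines: list[str] = []
    prev_author = None
    for m in messages:
        if m["author"] == prev_author and not m.get("reply_to"):
            # Collapse a burst from the same user onto continuation lines.
            lines.append(f"    {m['text']}")
            continue
        arrow = f" → {m['reply_to']}" if m.get("reply_to") else ""
        lines.append(f"[{m['ago_s']}s ago] {m['author']}{arrow}: {m['text']}")
        prev_author = m["author"]
    return "\n".join(lines)
```

The whole block is sent once per channel, so a single LLM call returns per-user findings for everyone in the window.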
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add ignored_channels config to topic_drift section, supporting
channel names or IDs. The #general channel is excluded from off-topic
warnings while still receiving full moderation.
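The config shape might look like the following; the key name comes from the commit, but the nesting and example values are assumptions:

```yaml
topic_drift:
  ignored_channels:
    - general              # by name
    - 123456789012345678   # or by ID (placeholder)
```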
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
New mode: a lovable hammered-friend persona with typos, strongly held
nonsensical opinions, random tangents, and overwhelming affection for
everyone in chat.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
New mode that gasses people up for their plays and takes using
gaming hype terminology, but reads the room and dials back to
genuine encouragement when someone's tilted or frustrated.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add guidance for ~25% genuinely positive/hype responses
- Lean toward playful ribbing over pure negativity
- Reduce reply_chance from 35% to 20%
- Increase proactive_cooldown_messages from 5 to 8
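As a config fragment, these tweaks might look like the following (key names are assumptions based on the commit text):

```yaml
roast:
  reply_chance: 0.20               # was 0.35
  proactive_cooldown_messages: 8   # was 5
```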
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Raised sentence limit from 3 to 5 for English teacher mode
- Added instruction to list multiple corrections rapid-fire
- Roast mode reply chance: 10% → 35%
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Insufferable grammar-nerd persona who corrects spelling, translates slang
into proper English, and overanalyzes messages like literary essays.
20% proactive reply chance with relaxed moderation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
LLM analysis now detects when two users are in a genuine
disagreement. When detected, the bot creates a native Discord
poll with each user's position as an option.
- Disagreement detection added to LLM analysis tool schema
- Polls last 4 hours with 1 hour per-channel cooldown
- LLM extracts topic, both positions, and usernames
- Configurable via polls section in config.yaml
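Turning the LLM finding into poll inputs can be sketched as a pure helper; the finding fields mirror the tool schema described above but are assumptions, and the actual send would use discord.py's native polls (`discord.Poll` with `channel.send(poll=...)`):

```python
# Hedged sketch: build a question and two answers from a "disagreement"
# finding. Field names (topic, user_a, position_a, ...) are assumed.
def build_poll(finding: dict) -> tuple[str, list[str]]:
    question = f"Settle it: {finding['topic']}"
    answers = [
        f"{finding['user_a']}: {finding['position_a']}",
        f"{finding['user_b']}: {finding['position_b']}",
    ]
    return question, answers
```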
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds a server-wide mode system with /bcs-mode command.
- Default: current hall-monitor behavior unchanged
- Chatty: friendly chat participant with proactive replies (~10% chance)
- Roast: savage roast mode with proactive replies
- Chatty/roast use relaxed moderation thresholds
- 5-message cooldown between proactive replies per channel
- Bot status updates to reflect active mode
- /bcs-status shows current mode and effective thresholds
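The mode registry behind /bcs-mode can be sketched as a lookup table; mode names come from the commit, while the threshold and chance values are placeholders and the slash-command wiring is omitted:

```python
# Hedged sketch of the server-wide mode registry.
MODES = {
    "default": {"reply_chance": 0.00, "mute": 0.65},
    "chatty":  {"reply_chance": 0.10, "mute": 0.80},  # relaxed moderation
    "roast":   {"reply_chance": 0.20, "mute": 0.80},  # relaxed moderation
}

_current_mode = "default"

def set_mode(name: str) -> dict:
    """Switch the active mode; /bcs-mode would call this."""
    global _current_mode
    if name not in MODES:
        raise ValueError(f"unknown mode: {name}")
    _current_mode = name
    return MODES[name]

def effective_thresholds() -> dict:
    """What /bcs-status reports for the active mode."""
    return MODES[_current_mode]
```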
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- spike_mute: 0.8→0.7, mute: 0.75→0.65 so escalating users get
timed out after a warning instead of endlessly warned
- Skip debounce on @mentions so sentiment analysis fires immediately
- Chat cog awaits pending sentiment analysis before replying,
ensuring warnings/mutes appear before the personality response
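The ordering fix can be sketched with asyncio; the task bookkeeping names are assumptions, but the idea is that the chat cog blocks on any in-flight analysis for the channel before replying:

```python
import asyncio

# Hedged sketch: warnings/mutes post before the personality reply.
pending_analysis: dict[int, asyncio.Task] = {}

async def reply_after_moderation(channel_id: int, send_reply) -> None:
    task = pending_analysis.get(channel_id)
    if task is not None and not task.done():
        await task  # let the moderation verdict land first
    await send_reply()
```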
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Send last ~8 messages from all users (not just others) as a
multi-line chat log with relative timestamps so the LLM can
better understand conversation flow and escalation patterns.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Triage model (LLM_MODEL) handles every message cheaply. If toxicity
>= 0.25, off_topic, or coherence < 0.6, the message is re-analyzed
with the heavy model (LLM_ESCALATION_MODEL). Chat, image analysis,
/bcs-test, and /bcs-scan always use the heavy model.
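The escalation predicate uses exactly the thresholds above; the result-dict field names are assumptions about the triage output shape:

```python
# Sketch of the two-tier routing decision. Thresholds come from the
# commit; model names are the env-style placeholders.
TRIAGE_MODEL = "LLM_MODEL"
HEAVY_MODEL = "LLM_ESCALATION_MODEL"

def needs_escalation(result: dict) -> bool:
    """Re-analyze with the heavy model when the cheap triage pass
    flags toxicity, off-topic chat, or low coherence."""
    return (
        result.get("toxicity", 0.0) >= 0.25
        or result.get("off_topic", False)
        or result.get("coherence", 1.0) < 0.6
    )
```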
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Buffer messages per user+channel and wait for a configurable window
(batch_window_seconds: 3) before analyzing. Combines burst messages
into a single LLM call instead of analyzing each one separately.
Replaces cooldown_between_analyses with the debounce approach.
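A minimal asyncio debounce for the per-user+channel buffer might look like this; the buffer/timer names are assumptions, and the 3-second default matches batch_window_seconds:

```python
import asyncio

# Hedged sketch: buffer burst messages, flush once per window.
BATCH_WINDOW_SECONDS = 3

buffers: dict[tuple[int, int], list[str]] = {}
timers: dict[tuple[int, int], asyncio.Task] = {}

async def _flush(key, analyze, delay):
    await asyncio.sleep(delay)
    batch = buffers.pop(key, [])
    timers.pop(key, None)
    if batch:
        await analyze(key, "\n".join(batch))  # one LLM call for the burst

def buffer_message(key, text, analyze, delay=BATCH_WINDOW_SECONDS):
    buffers.setdefault(key, []).append(text)
    if key not in timers:  # first message in the burst starts the timer
        timers[key] = asyncio.create_task(_flush(key, analyze, delay))
```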
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Detect when users discuss a game in the wrong channel (e.g. GTA talk
in #warzone) and send a friendly redirect to the correct channel.
Also add sexual_vulgar category and scoring rules so crude sexual
remarks directed at someone aren't softened by "lmao".
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Discord bot for monitoring chat sentiment and tracking drama using
Ollama LLM on athena.lan. Includes sentiment analysis, slash commands,
drama tracking, and SQL Server persistence via Docker Compose.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>