The LLM was interpreting "sassy hall monitor" as warm and motherly, producing
pet names like "oh sweetheart" and "bless your heart". Added explicit guidance
for deadpan, dry Discord mod energy instead.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Global sync can take up to an hour to propagate. Now also syncs commands
per-guild in on_ready for immediate availability.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
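The per-guild sync described above can be sketched roughly as follows. This is an assumed shape, not the bot's actual code: it relies on discord.py's `CommandTree.copy_global_to` and `tree.sync(guild=...)`, which propagate immediately, unlike a global sync.

```python
# Minimal sketch (assumed structure): on_ready copies global slash commands
# into each guild's scope and syncs per-guild, since guild-scoped syncs take
# effect immediately while a global sync can take up to an hour.
async def sync_commands_per_guild(bot):
    """bot is assumed to be a discord.py commands.Bot with a .tree."""
    for guild in bot.guilds:
        bot.tree.copy_global_to(guild=guild)  # stage global cmds for this guild
        await bot.tree.sync(guild=guild)      # immediate per-guild propagation
```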
Queries the Messages, AnalysisResults, and Actions tables to rank users by a
composite drama score (a weighted blend of average toxicity, peak toxicity,
and action rate).
Public command with configurable time period (7d/30d/90d/all-time).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
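The composite score could look something like this. The weights and field names are assumptions for illustration; the commit only states that average toxicity, peak toxicity, and action rate are weighted together.

```python
# Hypothetical sketch of the composite drama score. Weights are placeholders,
# not the project's actual values.
def drama_score(avg_toxicity: float, peak_toxicity: float,
                actions: int, messages: int,
                w_avg: float = 0.5, w_peak: float = 0.3,
                w_action: float = 0.2) -> float:
    """All inputs are assumed normalized to [0, 1]; result is a weighted sum."""
    action_rate = actions / messages if messages else 0.0
    return w_avg * avg_toxicity + w_peak * peak_toxicity + w_action * action_rate
```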
Replace static random templates with LLM-generated redirect messages that
reference what the user actually said and why it's off-topic. Sass escalates
with higher strike counts. Falls back to static templates if LLM fails or
use_llm is disabled in config.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
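The fallback path might be structured like this sketch. `generate_llm_redirect`, the config key, and the template text are hypothetical stand-ins; only the fallback-on-failure and `use_llm` toggle come from the commit.

```python
import random

# Sketch: try an LLM-generated redirect that references the user's message;
# fall back to a static template if the call fails or use_llm is disabled.
STATIC_TEMPLATES = ["Wrong channel for that, take it to #off-topic."]

async def build_redirect(message_text: str, strikes: int, config: dict,
                         generate_llm_redirect) -> str:
    if config.get("use_llm", True):
        try:
            # Sass is expected to escalate with the strike count.
            return await generate_llm_redirect(message_text, strikes)
        except Exception:
            pass  # fall through to the static templates
    return random.choice(STATIC_TEMPLATES)
```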
Prevents inserting a memory if an identical one already exists for the
user. Also cleaned up 30 anonymized and 4 duplicate memories from DB.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
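The duplicate guard reduces to an existence check before the insert. An in-memory dict stands in for the real DB table here; the function name and store shape are assumptions.

```python
# Sketch of the duplicate guard: skip the insert when an identical memory
# text already exists for the user.
def insert_memory(store: dict, user_id: int, text: str) -> bool:
    """Returns True if inserted, False if an identical memory already exists."""
    memories = store.setdefault(user_id, [])
    if text in memories:
        return False
    memories.append(text)
    return True
```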
The LLM returns note_update, reasoning, and worst_message with
anonymized names. These are now replaced with real display names
before storage, so user profiles no longer contain meaningless
User1/User2 references.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
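The de-anonymization step amounts to a string substitution over those three fields. The field names match the commit; the mapping format is an assumption.

```python
# Sketch: swap placeholder names (User1, User2, ...) in the LLM's output
# fields for real display names before storage.
def deanonymize(result: dict, name_map: dict) -> dict:
    """name_map: {"User1": "RealName", ...}. Longest placeholders first so
    User12 is not partially rewritten by the User1 entry."""
    out = dict(result)
    for field in ("note_update", "reasoning", "worst_message"):
        text = out.get(field)
        if not isinstance(text, str):
            continue
        for placeholder in sorted(name_map, key=len, reverse=True):
            text = text.replace(placeholder, name_map[placeholder])
        out[field] = text
    return out
```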
Aliases are now stored in the UserState table instead of config.yaml. Adds an
Aliases column (NVARCHAR(500)), loads it on startup, and persists via flush.
New /bcs-alias slash command (view/set/clear) for managing nicknames.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds user_aliases config section mapping Discord IDs to known nicknames.
Aliases are anonymized and injected into LLM analysis context so it can
recognize when someone name-drops another member (even absent ones).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
LLM can now flag possessive name-dropping, territorial behavior, and
jealousy signals when users mention others not in the conversation.
Scores feed into existing drama pipeline for warnings/mutes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When @mentioned, fetch recent messages from ALL users in the channel
(up to 15 messages) instead of only the mentioner's messages. This lets
the bot understand debates and discussions it's asked to weigh in on.
Also update the personality prompt to engage with topics substantively
when asked for opinions, rather than deflecting with generic jokes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
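The context fetch might look like this sketch, assuming a discord.py-style async `history()` iterator on the channel. Names and the 15-message cap follow the commit; everything else is illustrative.

```python
# Sketch: on @mention, pull the channel's most recent messages from every
# participant (not just the mentioner), capped at 15, oldest first so the
# LLM reads the debate in order.
async def fetch_discussion_context(channel, limit: int = 15):
    messages = []
    async for msg in channel.history(limit=limit):
        messages.append((msg.author.display_name, msg.content))
    messages.reverse()  # history() yields newest first
    return messages
```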
Adds LLM triage on bot @mentions to determine if the user is chatting
or reporting bad behavior. Only 'report' intents trigger the 30-message
scan; 'chat' intents skip the scan and let ChatCog handle it.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
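The triage gate reduces to one cheap classification before any expensive work. `classify_intent` is a hypothetical wrapper around the triage model; the 'chat'/'report' labels and 30-message scan come from the commit.

```python
# Sketch of the mention triage: only 'report' intents trigger the
# 30-message scan; 'chat' intents go straight to the chat handler.
async def handle_mention(message_text: str, classify_intent,
                         run_scan, handle_chat):
    intent = await classify_intent(message_text)
    if intent == "report":
        return await run_scan(limit=30)
    return await handle_chat(message_text)
```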
- Use LLM_ESCALATION_* env vars for better profile generation
- Fall back to joining permanent memories if profile_update is null
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Merge worktree: adds _extract_and_save_memories() method and fire-and-forget
extraction call after each chat reply. Combined with Task 4's memory
retrieval and injection.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
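The fire-and-forget call is plain `asyncio.create_task`. Function names here are assumptions based on the commit text; the point is that the reply is never blocked on the extraction LLM call.

```python
import asyncio

# Sketch: schedule memory extraction as a background task after each chat
# reply instead of awaiting it inline.
async def send_reply_and_extract(send_reply, extract_and_save_memories, text):
    await send_reply(text)
    # Not awaited: extraction runs in the background. Keep a reference so
    # the task is not garbage-collected mid-flight.
    task = asyncio.create_task(extract_and_save_memories(text))
    return task
```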
Replaces the entire notes field with an LLM-generated profile summary,
used by the memory extraction system for permanent facts.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
9-task step-by-step plan covering DB schema, LLM extraction tool, memory
retrieval/injection in chat, sentiment pipeline routing, background pruning,
and migration script.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Outlines persistent memory system for making the bot a real conversational
participant that knows people and remembers past interactions. Uses existing
UserNotes column for permanent profiles and a new UserMemory table for
expiring context with LLM-assigned lifetimes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Display names like "Calm your tits" were causing the LLM to inflate toxicity
scores on completely benign messages. Usernames are now replaced with User1,
User2, etc. before sending to the LLM, then mapped back to real names in the
results.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
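The anonymization pass can be sketched as below. The transcript line format is an assumption; the placeholder scheme (User1, User2, ...) and the reverse mapping for results are from the commit.

```python
# Sketch: replace display names with User1, User2, ... before the transcript
# reaches the LLM, keeping the mapping so results can be translated back.
def anonymize_users(messages):
    """messages: list of (display_name, text). Returns (lines, name_map)
    where name_map maps placeholder -> real display name."""
    placeholders, lines = {}, []
    for name, text in messages:
        if name not in placeholders:
            placeholders[name] = f"User{len(placeholders) + 1}"
        lines.append(f"{placeholders[name]}: {text}")
    # Invert for mapping LLM output back to real names.
    name_map = {v: k for k, v in placeholders.items()}
    return lines, name_map
```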
Filter out non-dict entries from user_findings and handle a non-dict
top-level result to prevent 'str' object has no attribute 'setdefault' errors.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
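The defensive handling amounts to type checks before the existing loop ever sees the data. The `user_findings` key is from the commit; the normalization shape is an assumption.

```python
# Sketch: discard user_findings entries that are not dicts and normalize a
# non-dict top-level result, so later .setdefault() calls cannot blow up
# on a stray string.
def sanitize_findings(result):
    if not isinstance(result, dict):
        return {"user_findings": []}
    findings = result.get("user_findings", [])
    if not isinstance(findings, list):
        findings = []
    result["user_findings"] = [f for f in findings if isinstance(f, dict)]
    return result
```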
The triage LLM was blending context message content into its reasoning
for new messages (e.g., citing profanity from context when the new
message was just "I'll be here"). Added per-message [CONTEXT] tags
inline and strengthened the prompt to explicitly forbid referencing
context content in reasoning/scores.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Convert cogs/sentiment.py (1050 lines) into cogs/sentiment/ package:
- __init__.py (656 lines): core SentimentCog with new _process_finding()
that deduplicates the per-user finding loop from _process_buffered and
_run_mention_scan (~90 lines each → single shared method)
- actions.py: mute_user, warn_user
- topic_drift.py: handle_topic_drift
- channel_redirect.py: handle_channel_redirect, build_channel_context
- coherence.py: handle_coherence_alert
- log_utils.py: log_analysis, log_action, score_color
- state.py: save_user_state, flush_dirty_states
All extracted modules use plain async functions (not methods) receiving
bot/config as parameters. Named log_utils.py to avoid shadowing stdlib
logging. Also update CLAUDE.md with comprehensive project documentation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The display name "Calm your tits" was being factored into toxicity
scores. Updated the analysis prompt to explicitly instruct the LLM
to ignore all usernames/display names when scoring messages.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The conversation analysis was re-scoring old messages alongside new ones,
causing users to get penalized repeatedly for already-scored messages.
A "--- NEW MESSAGES ---" separator now marks which messages are new, and
the prompt instructs the LLM to score only those. Also fixes bot-mention
detection to require an explicit @mention in message text rather than
treating reply-pings as scans (so toxic replies to bot warnings aren't
silently skipped).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
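The separator scheme is just transcript assembly; a minimal sketch, with the separator text taken from the commit and the line format assumed:

```python
# Sketch: already-scored context messages first, then the separator, then
# the new messages the prompt tells the LLM to score.
SEPARATOR = "--- NEW MESSAGES ---"

def build_transcript(context_lines, new_lines):
    return "\n".join([*context_lines, SEPARATOR, *new_lines])
```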
last_offense_time was in-memory only — lost on restart, so the
offense_reset_minutes check never fired after a reboot. Now persisted
as LastOffenseAt FLOAT in UserState. On startup hydration, stale
offenses (and warned flag) are auto-cleared if the reset window has
passed. Bumped offense_reset_minutes from 2h to 24h.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
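The startup hydration rule could be sketched like this. Column and config names follow the commit (LastOffenseAt, offense_reset_minutes, the warned flag); the state-dict shape is an assumption.

```python
import time

# Sketch: on startup, clear offenses and the warned flag when the persisted
# LastOffenseAt is older than the reset window (24h by default per the
# commit's bumped offense_reset_minutes).
def hydrate_user_state(state: dict, reset_minutes: int = 1440,
                       now: float = None) -> dict:
    now = time.time() if now is None else now
    last = state.get("last_offense_at")
    if last is not None and now - last > reset_minutes * 60:
        state["offenses"] = 0
        state["warned"] = False
        state["last_offense_at"] = None
    return state
```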
Gate mutes behind a prior warning — first offense always gets a warning,
mute only fires if warned_since_reset is True. Warned flag is persisted
to DB (new Warned column on UserState) and survives restarts.
Add post-warning escalation boost to drama_score: each high-scoring
message after a warning adds +0.04 (configurable) so sustained bad
behavior ramps toward the mute threshold instead of plateauing.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
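The gate and the escalation boost together might look like this. The +0.04 boost matches the commit; the mute threshold is an assumed placeholder.

```python
# Sketch: first offense always downgrades to a warning; a mute only fires
# once warned_since_reset is True. Each high-scoring message after the
# warning adds a configurable boost so sustained bad behavior ramps toward
# the mute threshold instead of plateauing.
def decide_action(drama_score, warned_since_reset,
                  mute_threshold=0.8, boost=0.04,
                  high_messages_since_warning=0):
    if warned_since_reset:
        drama_score += boost * high_messages_since_warning
        if drama_score >= mute_threshold:
            return "mute", drama_score
        return "none", drama_score
    if drama_score >= mute_threshold:
        # Over threshold but never warned: warn instead of muting.
        return "warn", drama_score
    return "none", drama_score
```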
When the LLM returns text instead of a tool call for conversation
analysis, try parsing the content as JSON before giving up. Also
log what the model actually returns on failure for debugging.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
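A minimal sketch of that fallback parse (argument names assumed):

```python
import json

# Sketch: prefer the tool-call arguments; if the model answered with plain
# text instead, try interpreting the content as JSON before giving up.
def parse_analysis(tool_args, content):
    if tool_args:
        return json.loads(tool_args)
    if content:
        try:
            return json.loads(content)
        except json.JSONDecodeError:
            pass
    return None  # caller logs the raw content for debugging
```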
Switch from per-user message batching to per-channel conversation
analysis. The LLM now sees the full interleaved conversation with
relative timestamps, reply chains, and consecutive message collapsing
instead of isolated flat text per user.
Key changes:
- Fix gpt-5-nano temperature incompatibility (conditional temp param)
- Add mention-triggered scan: users @mention bot to analyze recent chat
- Refactor debounce buffer from (channel_id, user_id) to channel_id
- Replace per-message analyze_message() with analyze_conversation()
returning per-user findings from a single LLM call
- Add CONVERSATION_TOOL schema with coherence, topic, and game fields
- Compact message format: relative timestamps, reply arrows (→),
consecutive same-user message collapsing
- Separate mention scan tasks from debounce tasks
- Remove _store_context/_get_context (conversation block IS the context)
- Escalation timeout config: [30, 60, 120, 240] minutes
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
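The compact message format from the bullets above (relative timestamps, reply arrows, consecutive same-user collapsing) might render like this sketch; the exact formatting is an assumption.

```python
# Sketch of the compact transcript: relative timestamps, reply arrows, and
# collapsing of consecutive messages from the same author.
def format_conversation(messages, now):
    """messages: list of dicts with author, text, ts (unix seconds), and
    optional reply_to (the author being replied to)."""
    lines, prev_author = [], None
    for m in messages:
        mins = int((now - m["ts"]) // 60)
        if m["author"] == prev_author and not m.get("reply_to"):
            # Collapse consecutive messages from the same user onto one line.
            lines[-1] += f" / {m['text']}"
            continue
        arrow = f" → {m['reply_to']}" if m.get("reply_to") else ""
        lines.append(f"[{mins}m ago] {m['author']}{arrow}: {m['text']}")
        prev_author = m["author"]
    return "\n".join(lines)
```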
gpt-5-nano and other newer models require max_completion_tokens
instead of max_tokens. The new parameter is backwards compatible
with older models.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
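The parameter swap (and the conditional temperature fix mentioned in the earlier refactor commit) can be sketched as building the request kwargs conditionally; the function and default names here are assumptions.

```python
# Sketch: always send max_completion_tokens (required by gpt-5-nano and
# other newer models, accepted by older chat models) instead of the
# deprecated max_tokens, and only pass temperature when explicitly
# configured, since some newer models reject a non-default value.
def build_request_kwargs(model: str, limit: int, temperature=None) -> dict:
    kwargs = {"model": model, "max_completion_tokens": limit}
    if temperature is not None:
        kwargs["temperature"] = temperature
    return kwargs
```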
Add ignored_channels config to topic_drift section, supporting
channel names or IDs. General channel excluded from off-topic
warnings while still receiving full moderation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
New mode: a lovable hammered friend, complete with typos, strong nonsensical
opinions, random tangents, and overwhelming affection for everyone in chat.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
New mode that gasses people up for their plays and takes using
gaming hype terminology, but reads the room and dials back to
genuine encouragement when someone's tilted or frustrated.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add guidance for ~25% genuinely positive/hype responses
- Lean toward playful ribbing over pure negativity
- Reduce reply_chance from 35% to 20%
- Increase proactive_cooldown_messages from 5 to 8
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The model was inventing rankings and scoreboards from the drama score
metadata. Explicitly tell it not to make up stats it doesn't have.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Slim down chat_roast.txt — remove anti-repetition rules that were
compensating for the local model (gpt-4o-mini handles this natively).
Remove disagreement detection from analysis prompt, tool schema, and
sentiment handler. Saves ~200 tokens per analysis call.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add a separate llm_chat client so chat responses use a smarter model
(gpt-4o-mini) while analysis stays on the cheap local Qwen3-8B.
Falls back to llm_heavy if LLM_CHAT_MODEL is not set.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
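The client selection reduces to an env-var check with a fallback. `make_client` is a hypothetical factory; the LLM_CHAT_MODEL variable and the llm_heavy fallback come from the commit.

```python
import os

# Sketch: use a dedicated chat client when LLM_CHAT_MODEL is set, otherwise
# fall back to the heavy analysis client.
def pick_chat_client(make_client, llm_heavy):
    model = os.environ.get("LLM_CHAT_MODEL")
    if model:
        return make_client(model)
    return llm_heavy
```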