Compare commits

114 Commits

Author SHA1 Message Date
f79de0ea04 feat: add unblock-nag detection and redirect
Keyword-based detection for users repeatedly asking to be unblocked in
chat. Fires an LLM-generated snarky redirect (with static fallback),
tracks per-user nag count with escalating sass, and respects a 30-min
cooldown. Configurable via config.yaml unblock_nag section.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:19:29 -04:00
733b86b947 feat: add /bcs-pause command to toggle monitoring
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 15:28:56 -04:00
f7dfb7931a feat: add redirect channel to topic drift messages
Topic drift reminders and nudges now direct users to a specific
channel (configurable via redirect_channel). Both static templates
and LLM-generated redirects include the clickable channel mention.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 17:44:25 -05:00
a836584940 fix: skip game redirect when topic drift already handled
Changed if to elif so detected_game redirect only fires when
the topic_drift branch wasn't taken.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 17:44:21 -05:00
9872c36b97 improve chat_personality prompt with better structure and guidance
- Fix metadata description to match actual code behavior (optional fields)
- Add texting cadence guidance (lowercase, fragments, casual punctuation)
- Add multi-user conversation handling, conversation exit, deflection, and
  genuine-upset guidance
- Expand examples from 3 to 7 covering varied response styles
- Organize into VOICE/ENGAGEMENT sections for clarity
- Trim over-explained AFTERTHOUGHTS section

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 19:23:31 -05:00
53803d920f fix: sanitize note_updates before storing in sentiment pipeline
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 22:04:00 -05:00
b7076dffe2 fix: sanitize profile updates before storing in chat memory pipeline
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 22:03:59 -05:00
c5316b98d1 feat: add sanitize_notes() method to LLMClient
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 22:03:59 -05:00
f75a3ca3f4 fix: instruct LLM to never quote toxic content in note_updates
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 22:03:59 -05:00
09f83f8c2f fix: move slutty prompt to personalities/ dir, match reply chance
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 10:11:46 -05:00
20e4e7a985 feat: add slutty mode — flirty, thirsty, full of innuendos
New personality mode with 25% reply chance, very relaxed moderation
thresholds (0.85/0.90), suggestive but not explicit personality.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 10:11:21 -05:00
72735c2497 fix: address review feedback for proactive reply logic
- Parse display names with ': ' split to handle colons in names
- Reset cooldown to half instead of subtract-3 to reduce LLM call frequency
- Remove redundant message.guild check

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:38:06 -05:00
787b083e00 feat: add relevance-gated proactive replies
Replace random-only proactive reply logic with LLM relevance check.
The bot now evaluates recent conversation context and user memory
before deciding to jump in, then applies reply_chance as a second
gate. Bump reply_chance values higher since the relevance filter
prevents most irrelevant replies.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:34:53 -05:00
175c7ad219 fix: clean ||| from chat history and handle afterthoughts in reaction replies
- Extract _split_afterthought helper method
- Store cleaned content (no |||) in chat history to prevent LLM reinforcement
- Handle afterthought splitting in reaction-reply path too
- Log main_reply instead of raw response

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:33:11 -05:00
6866ca8adf feat: add afterthoughts, memory callbacks, and callback-worthy extraction
Add triple-pipe afterthought splitting to chat replies so the bot can
send a follow-up message 2-5 seconds later, mimicking natural Discord
typing behavior. Update all 6 personality prompts with afterthought
instructions (~1 in 5 replies) and memory callback guidance so the bot
actively references what it knows about users. Enhance memory extraction
prompt to flag bold claims, contradictions, and embarrassing moments as
high-importance callback-worthy memories with a "callback" topic tag.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:30:16 -05:00
97e5738a2f fix: address review feedback for ReactionCog
- Use time.monotonic() at reaction time instead of stale message-receive timestamp
- Add excluded_channels config and filtering
- Truncate message content to 500 chars in pick_reaction

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:28:20 -05:00
a8e8b63f5e feat: add ReactionCog for ambient emoji reactions
Add a new cog that gives the bot ambient presence by reacting to
messages with contextual emoji chosen by the triage LLM. Includes
RNG gating and per-channel cooldown to keep reactions sparse and
natural.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:25:17 -05:00
5c84c8840b fix: use emoji allowlist instead of length check in pick_reaction
Prevents text words like "skull" from passing the filter and causing
Discord HTTPException noise.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:24:28 -05:00
661c252bf7 feat: add pick_reaction method to LLMClient
Lightweight LLM call that picks a contextual emoji reaction for a
Discord message. Uses temperature 0.9 for variety, max 16 tokens,
and validates the response is a short emoji token or returns None.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:22:08 -05:00
2ec9b16b99 fix: address multiple bugs found in code review
- Fix dirty-user flush race: discard IDs individually after successful save
- Escape LIKE wildcards in LLM-generated topic keywords for DB queries
- Anonymize absent-member aliases to prevent LLM de-anonymization
- Pass correct MIME type to vision model based on image file extension
- Use enumerate instead of list.index() in bcs-scan loop
- Allow bot @mentions with non-report intent to fall through to moderation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 01:16:38 -05:00
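The LIKE-wildcard escaping mentioned in the second bullet can be sketched as a small helper, assuming the query side declares `ESCAPE '\'`:

```python
# Escape SQL LIKE metacharacters in LLM-generated keywords so "100%" or
# "_tag" match literally instead of acting as wildcards.
def escape_like(keyword: str) -> str:
    return (
        keyword.replace("\\", "\\\\")
               .replace("%", "\\%")
               .replace("_", "\\_")
    )
```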
eb7eb81621 feat: add warning expiration and exclude moderated messages from context
Warning flag now auto-expires after a configurable duration
(warning_expiration_minutes, default 30m). After expiry, the user must
be re-warned before a mute can be issued.

Messages that triggered moderation actions (warnings/mutes) are now
excluded from the LLM context window in both buffered analysis and
mention scans, preventing already-actioned content from influencing
future scoring. Uses in-memory tracking plus bot reaction fallback
for post-restart coverage.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 13:39:49 -05:00
36df4cf5a6 chore: add .claude/ to .gitignore
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 22:16:18 -05:00
bf32a9536a feat: add server rule violation detection and compress prompts
- LLM now evaluates messages against numbered server rules and reports
  violated_rules in analysis output
- Warnings and mutes cite the specific rule(s) broken
- Rules extracted to prompts/rules.txt for prompt injection
- Personality prompts moved to prompts/personalities/ and compressed
  (~63% reduction across all prompt files)
- All prompt files tightened: removed redundancy, consolidated Do NOT
  sections, trimmed examples while preserving behavioral instructions

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 22:14:35 -05:00
ed51db527c fix: stop bot from starting every message with "Oh,"
Removed "Oh," from example lines that the model was mimicking, added
explicit DO NOT rule against "Oh" openers, and added more varied examples.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 20:45:16 -05:00
bf5051dfc1 fix: steer default chat personality away from southern aunt tone
The LLM was interpreting "sassy hall monitor" as warm/motherly with pet
names like "oh sweetheart" and "bless your heart". Added explicit guidance
for deadpan, dry Discord mod energy instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 17:25:06 -05:00
cf88638603 fix: add guild-specific command sync for instant slash command propagation
Global sync can take up to an hour to propagate. Now also syncs commands
per-guild in on_ready for immediate availability.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 16:11:46 -05:00
1d653ec216 feat: add /drama-leaderboard command with historical composite scoring
Queries Messages, AnalysisResults, and Actions tables to rank users by a
composite drama score (weighted avg toxicity, peak toxicity, and action rate).
Public command with configurable time period (7d/30d/90d/all-time).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 16:08:39 -05:00
0ff962c95e feat: generate topic drift redirects via LLM with full conversation context
Replace static random templates with LLM-generated redirect messages that
reference what the user actually said and why it's off-topic. Sass escalates
with higher strike counts. Falls back to static templates if LLM fails or
use_llm is disabled in config.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 15:28:36 -05:00
2525216828 fix: deduplicate memories on save with exact-match check
Prevents inserting a memory if an identical one already exists for the
user. Also cleaned up 30 anonymized and 4 duplicate memories from DB.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:53:52 -05:00
3b2de80cac fix: de-anonymize User1/User2 references in notes and reasoning text
The LLM returns note_update, reasoning, and worst_message with
anonymized names. These are now replaced with real display names
before storage, so user profiles no longer contain meaningless
User1/User2 references.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:51:30 -05:00
88536b4dca chore: remove wordle cog
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:48:44 -05:00
33d56f8737 feat: move user aliases from config to DB with /bcs-alias command
Aliases now stored in UserState table instead of config.yaml. Adds
Aliases column (NVARCHAR 500), loads on startup, persists via flush.
New /bcs-alias slash command (view/set/clear) for managing nicknames.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:35:19 -05:00
ad1234ec99 feat: add user alias mapping for jealousy detection context
Adds user_aliases config section mapping Discord IDs to known nicknames.
Aliases are anonymized and injected into LLM analysis context so it can
recognize when someone name-drops another member (even absent ones).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:22:57 -05:00
a73d2505d9 feat: add jealousy/possessiveness detection as toxicity category
LLM can now flag possessive name-dropping, territorial behavior, and
jealousy signals when users mention others not in the conversation.
Scores feed into existing drama pipeline for warnings/mutes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:07:45 -05:00
0449c8c30d feat: give bot full conversation context on @mentions for real engagement
When @mentioned, fetch recent messages from ALL users in the channel
(up to 15 messages) instead of only the mentioner's messages. This lets
the bot understand debates and discussions it's asked to weigh in on.

Also update the personality prompt to engage with topics substantively
when asked for opinions, rather than deflecting with generic jokes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 14:14:46 -05:00
3d252ee729 feat: classify mention intent before running expensive scan
Adds LLM triage on bot @mentions to determine if the user is chatting
or reporting bad behavior. Only 'report' intents trigger the 30-message
scan; 'chat' intents skip the scan and let ChatCog handle it.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 13:20:54 -05:00
b918ba51a8 fix: use escalation model and fallback to permanent memories in migration
- Use LLM_ESCALATION_* env vars for better profile generation
- Fall back to joining permanent memories if profile_update is null

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 13:14:38 -05:00
efe7f901c2 Merge branch 'worktree-agent-a27a0179' 2026-02-26 13:04:25 -05:00
ca17b6ac61 Merge branch 'worktree-agent-a0b1ccc2' 2026-02-26 13:04:24 -05:00
8a092c720f Merge branch 'worktree-agent-a78eaee3' 2026-02-26 13:04:18 -05:00
365907a7a0 feat: extract and save memories after chat conversations
Merge worktree: adds _extract_and_save_memories() method and fire-and-forget
extraction call after each chat reply. Combined with Task 4's memory
retrieval and injection.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 13:04:12 -05:00
e488b2b227 feat: extract and save memories after chat conversations
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 13:02:42 -05:00
7ca369b641 feat: add one-time migration script for user notes to profiles
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:59:03 -05:00
305c9bf113 feat: route sentiment note_updates into memory system
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:58:14 -05:00
2054ca7b24 feat: add background memory pruning task
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:58:12 -05:00
d61e85d928 feat: inject persistent memory context into chat responses
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:56:02 -05:00
89fabd85da feat: add set_user_profile method to DramaTracker
Replaces the entire notes field with an LLM-generated profile summary,
used by the memory extraction system for permanent facts.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:54:05 -05:00
67011535cd feat: add memory extraction LLM tool and prompt
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:53:18 -05:00
8686f4fdd6 fix: align default limits and parameter names to spec
- get_recent_memories: limit default 10 → 5
- get_memories_by_topics: limit default 10 → 5
- prune_excess_memories: rename 'cap' → 'max_memories'

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:50:47 -05:00
75adafefd6 feat: add UserMemory table and CRUD methods for conversational memory
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:48:54 -05:00
333fbb3932 docs: add conversational memory implementation plan
9-task step-by-step plan covering DB schema, LLM extraction tool, memory
retrieval/injection in chat, sentiment pipeline routing, background pruning,
and migration script.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:44:18 -05:00
d652c32063 docs: add conversational memory design document
Outlines persistent memory system for making the bot a real conversational
participant that knows people and remembers past interactions. Uses existing
UserNotes column for permanent profiles and a new UserMemory table for
expiring context with LLM-assigned lifetimes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:41:28 -05:00
196f8c8ae5 fix: remove owner notification on topic drift escalation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 22:29:01 -05:00
c63913cf14 fix: anonymize usernames before LLM analysis to prevent name-based scoring bias
Display names like "Calm your tits" were causing the LLM to inflate toxicity
scores on completely benign messages. Usernames are now replaced with User1,
User2, etc. before sending to the LLM, then mapped back to real names in the
results.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 22:20:53 -05:00
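The anonymization round-trip (this commit plus the later de-anonymization fix for notes and reasoning) can be sketched as a pair of helpers; the exact implementation is an assumption:

```python
# Display names become User1, User2, ... before the LLM call; the same
# mapping reverses placeholders in the results.
def anonymize(names: list[str]) -> dict[str, str]:
    """Map each distinct display name to a stable UserN placeholder."""
    return {name: f"User{i}" for i, name in enumerate(dict.fromkeys(names), 1)}

def deanonymize(text: str, mapping: dict[str, str]) -> str:
    """Replace UserN placeholders in LLM output with real names."""
    # Longest placeholders first, so "User1" never clobbers "User10".
    for real, placeholder in sorted(mapping.items(), key=lambda kv: -len(kv[1])):
        text = text.replace(placeholder, real)
    return text
```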
cb8ef8542b fix: guard against malformed LLM findings in conversation validation
Filter out non-dict entries from user_findings and handle non-dict
result to prevent 'str' object has no attribute 'setdefault' errors.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 21:38:02 -05:00
f46caf9ac5 fix: tag context messages with [CONTEXT] to prevent LLM from scoring them
The triage LLM was blending context message content into its reasoning
for new messages (e.g., citing profanity from context when the new
message was just "I'll be here"). Added per-message [CONTEXT] tags
inline and strengthened the prompt to explicitly forbid referencing
context content in reasoning/scores.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 20:08:23 -05:00
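The per-message tagging, combined with the `--- NEW MESSAGES ---` separator from the related fix further down the log, can be sketched as a prompt-assembly helper (names assumed):

```python
# Context messages are individually tagged so the LLM treats them as
# backdrop only; everything after the separator is scorable.
def render_messages(context: list[str], new: list[str]) -> str:
    lines = [f"[CONTEXT] {m}" for m in context]
    lines.append("--- NEW MESSAGES ---")
    lines.extend(new)
    return "\n".join(lines)
```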
660086a500 refactor: extract sentiment cog into package with shared _process_finding
Convert cogs/sentiment.py (1050 lines) into cogs/sentiment/ package:
- __init__.py (656 lines): core SentimentCog with new _process_finding()
  that deduplicates the per-user finding loop from _process_buffered and
  _run_mention_scan (~90 lines each → single shared method)
- actions.py: mute_user, warn_user
- topic_drift.py: handle_topic_drift
- channel_redirect.py: handle_channel_redirect, build_channel_context
- coherence.py: handle_coherence_alert
- log_utils.py: log_analysis, log_action, score_color
- state.py: save_user_state, flush_dirty_states

All extracted modules use plain async functions (not methods) receiving
bot/config as parameters. Named log_utils.py to avoid shadowing stdlib
logging. Also update CLAUDE.md with comprehensive project documentation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 17:06:27 -05:00
188370b1fd Fix LLM scoring usernames as toxic content
The display name "Calm your tits" was being factored into toxicity
scores. Updated the analysis prompt to explicitly instruct the LLM
to ignore all usernames/display names when scoring messages.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 15:51:14 -05:00
7417908142 fix: separate context from new messages so prior-cycle chat doesn't inflate scores
The conversation analysis was re-scoring old messages alongside new ones,
causing users to get penalized repeatedly for already-scored messages.
A "--- NEW MESSAGES ---" separator now marks which messages are new, and
the prompt instructs the LLM to score only those. Also fixes bot-mention
detection to require an explicit @mention in message text rather than
treating reply-pings as scans (so toxic replies to bot warnings aren't
silently skipped).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 15:48:02 -05:00
8734f1883b fix: persist last_offense_time and reset offenses after 24h
last_offense_time was in-memory only — lost on restart, so the
offense_reset_minutes check never fired after a reboot. Now persisted
as LastOffenseAt FLOAT in UserState. On startup hydration, stale
offenses (and warned flag) are auto-cleared if the reset window has
passed. Bumped offense_reset_minutes from 2h to 24h.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 11:24:38 -05:00
71c7b45e9a feat: require warning before mute + sustained toxicity escalation
Gate mutes behind a prior warning — first offense always gets a warning,
mute only fires if warned_since_reset is True. Warned flag is persisted
to DB (new Warned column on UserState) and survives restarts.

Add post-warning escalation boost to drama_score: each high-scoring
message after a warning adds +0.04 (configurable) so sustained bad
behavior ramps toward the mute threshold instead of plateauing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 11:07:57 -05:00
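The warn-before-mute gate plus the post-warning boost can be sketched as one decision step. The +0.04 boost is from the commit; the threshold values and signature are assumptions:

```python
ESCALATION_BOOST = 0.04  # configurable per the commit

def decide_action(score: float, drama: float, warned: bool,
                  warn_at: float = 0.6, mute_at: float = 0.8):
    """Return (action, new_drama) for one message; action may be None."""
    if warned:
        drama += ESCALATION_BOOST  # sustained toxicity ramps toward a mute
        if drama >= mute_at:
            return "mute", drama
        return None, drama
    if score >= warn_at:
        return "warn", drama  # first offense always gets a warning
    return None, drama
```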
f02a4ab49d Add content fallback for conversation analysis + debug logging
When the LLM returns text instead of a tool call for conversation
analysis, try parsing the content as JSON before giving up. Also
log what the model actually returns on failure for debugging.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 10:16:15 -05:00
90b70cad69 feat: channel-level conversation analysis with compact formatting
Switch from per-user message batching to per-channel conversation
analysis. The LLM now sees the full interleaved conversation with
relative timestamps, reply chains, and consecutive message collapsing
instead of isolated flat text per user.

Key changes:
- Fix gpt-5-nano temperature incompatibility (conditional temp param)
- Add mention-triggered scan: users @mention bot to analyze recent chat
- Refactor debounce buffer from (channel_id, user_id) to channel_id
- Replace per-message analyze_message() with analyze_conversation()
  returning per-user findings from a single LLM call
- Add CONVERSATION_TOOL schema with coherence, topic, and game fields
- Compact message format: relative timestamps, reply arrows (→),
  consecutive same-user message collapsing
- Separate mention scan tasks from debounce tasks
- Remove _store_context/_get_context (conversation block IS the context)
- Escalation timeout config: [30, 60, 120, 240] minutes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 23:13:07 -05:00
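The compact message format (relative timestamps, reply arrows, consecutive-message collapsing) can be sketched roughly as below; the field names and exact layout are assumptions:

```python
# Render an interleaved channel conversation: one speaker line per run of
# messages, reply arrows for reply chains, ages in whole minutes.
def format_conversation(messages: list[dict], now: float) -> str:
    lines, last_author = [], None
    for m in messages:
        age = int((now - m["ts"]) // 60)
        reply = f" → {m['reply_to']}" if m.get("reply_to") else ""
        if m["author"] == last_author and not reply:
            lines.append(f"    {m['text']}")  # collapse same-user run
        else:
            lines.append(f"[{age}m ago] {m['author']}{reply}: {m['text']}")
        last_author = m["author"]
    return "\n".join(lines)
```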
943c67cc87 Add Wordle scoring context so LLM knows lower is better
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 18:05:41 -05:00
f457240e62 Add Wordle commentary: bot reacts to Wordle results with mode-appropriate comments
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 17:56:05 -05:00
01b7a6b240 Bump health check max_completion_tokens to 16
gpt-5-nano can't produce output with max_completion_tokens=1.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 17:08:32 -05:00
a0edf90ebd Switch to max_completion_tokens for newer OpenAI models
gpt-5-nano and other newer models require max_completion_tokens
instead of max_tokens. The new parameter is backwards compatible
with older models.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 17:07:44 -05:00
dd0d18b0f5 Disable topic drift monitoring in general channel
Add ignored_channels config to topic_drift section, supporting
channel names or IDs. General channel excluded from off-topic
warnings while still receiving full moderation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 14:47:03 -05:00
b79d1897f9 Add drunk mode: happy drunk commentating on everything
Lovable hammered friend with typos, strong nonsensical opinions,
random tangents, and overwhelming affection for everyone in chat.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 20:05:03 -05:00
ac4057b906 Add hype mode: positive/supportive teammate personality
New mode that gasses people up for their plays and takes using
gaming hype terminology, but reads the room and dials back to
genuine encouragement when someone's tilted or frustrated.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 20:02:39 -05:00
8b2091ac38 Tone down roast bot: more positive, less frequent
- Add guidance for ~25% genuinely positive/hype responses
- Lean toward playful ribbing over pure negativity
- Reduce reply_chance from 35% to 20%
- Increase proactive_cooldown_messages from 5 to 8

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 19:55:17 -05:00
7db7a4b026 Tell roast prompt not to fabricate leaderboards or stats
The model was inventing rankings and scoreboards from the drama score
metadata. Explicitly tell it not to make up stats it doesn't have.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 18:43:36 -05:00
c8e7c8c1cf Trim prompts for gpt-4o-mini, remove disagreement detection
Slim down chat_roast.txt — remove anti-repetition rules that were
compensating for the local model (gpt-4o-mini handles this natively).
Remove disagreement detection from analysis prompt, tool schema, and
sentiment handler. Saves ~200 tokens per analysis call.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 16:26:44 -05:00
c258994a2e Use gpt-4o-mini for chat/roasts via dedicated LLM_CHAT_MODEL
Add a separate llm_chat client so chat responses use a smarter model
(gpt-4o-mini) while analysis stays on the cheap local Qwen3-8B.
Falls back to llm_heavy if LLM_CHAT_MODEL is not set.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 16:04:55 -05:00
e4239b25c3 Keep only the last segment after bracketed metadata in LLM responses
The model dumps paraphrased context and style labels in [brackets]
before its actual roast. Instead of just removing bracket lines
(which leaves the preamble text), split on them and keep only the
last non-empty segment — the real answer is always last.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 15:31:09 -05:00
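The split-and-keep-last cleanup can be sketched with a regex over standalone bracketed lines (the exact pattern is an assumption):

```python
import re

# Split the reply on lines that are nothing but a [bracketed block] and
# keep the last non-empty segment — the real answer is always last.
BRACKET_LINE = re.compile(r"^\s*\[[^\]]*\]\s*$", re.MULTILINE)

def clean_response(text: str) -> str:
    segments = [s.strip() for s in BRACKET_LINE.split(text)]
    segments = [s for s in segments if s]
    return segments[-1] if segments else ""
```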
02b2870f2b Strip all standalone bracketed text from LLM responses
The model paraphrases injected metadata in unpredictable ways, so
targeted regexes can't keep up. Replace them with a single rule: any
[bracketed block] on its own line gets removed, since real roasts
never use standalone brackets.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 15:24:18 -05:00
942f5ddce7 Fix repetitive roast responses with anti-repetition mechanisms
Add frequency_penalty (0.8) and presence_penalty (0.6) to LLM chat
calls to discourage repeated tokens. Inject the bot's last 5 responses
into the system prompt so the model knows what to avoid. Strengthen
the roast prompt with explicit anti-repetition rules and remove example
lines the model was copying verbatim ("Real ___ energy", etc.).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 15:15:11 -05:00
534aac5cd7 Enable thinking for chat, diversify roast styles
- Remove /no_think override from chat() so Qwen3 reasons before
  generating responses (fixes incoherent word-salad replies)
- Analysis and image calls keep /no_think for speed
- Add varied roast style guidance (deadpan, sarcastic, blunt, etc.)
- Explicitly ban metaphors/similes in roast prompt
- Replace metaphor examples with direct roast examples

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 13:59:16 -05:00
66031cd9f9 Add user notes and recent message history to chat context
When the bot replies (proactive or mentioned), it now fetches the
user's drama tracker notes and their last ~10 messages in the channel.
Gives the LLM real context for personalized replies instead of
generic roasts on bare pings.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 13:44:04 -05:00
3261cdd21c Fix proactive replies appearing before the triggering message
Proactive replies used channel.send() which posted standalone messages
with no visual link to what triggered them. Now all replies use
message.reply() so the response is always attached to the source message.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 13:35:40 -05:00
3f9dfb1e74 Fix reaction clap-backs replying to the bot's own message
Send as a channel message instead of message.reply() so it doesn't
look like the bot is talking to itself.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 12:32:08 -05:00
86b23c2b7f Let users @ the bot on a message to make it respond
Reply to any message + @bot to have the bot read and respond to it.
Also picks up image attachments from referenced messages so users
can reply to a photo with "@bot roast this".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 12:24:26 -05:00
8a06ddbd6e Support hybrid LLM: local Qwen triage + OpenAI escalation
Triage analysis runs on Qwen 8B (athena.lan) for free first-pass.
Escalation, chat, image roasts, and commands use GPT-4o via OpenAI.

Each tier gets its own base URL, API key, and concurrency settings.
Local models get /no_think and serialized requests automatically.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 12:20:07 -05:00
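The per-tier behavior (local endpoints get `/no_think` and serialized requests, an empty base URL falls through to OpenAI) can be sketched as a settings helper; the heuristic for "local" and the concurrency values are assumptions:

```python
# Derive per-tier client settings from the tier's base URL.
def tier_settings(base_url: str) -> dict:
    local = bool(base_url) and "openai.com" not in base_url
    return {
        "base_url": base_url or None,   # empty string -> OpenAI directly
        "no_think": local,              # Qwen-style reasoning suppression
        "max_concurrency": 1 if local else 4,
    }
```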
b5e401f036 Generalize image roast to handle selfies, memes, and any image
The prompt was scoreboard-only, so selfies got nonsensical stat-based
roasts. Now the LLM identifies what's in the image and roasts accordingly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 12:15:22 -05:00
28fb66d5f9 Switch LLM backend from llama.cpp/Qwen to OpenAI
- Default models: gpt-4o-mini (triage), gpt-4o (escalation)
- Remove Qwen-specific /no_think hacks
- Reduce timeout from 600s to 120s, increase concurrency semaphore to 4
- Support empty LLM_BASE_URL to use OpenAI directly

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 12:07:53 -05:00
a9bc24e48e Tune english teacher to catch more errors, bump roast reply chance
- Raised sentence limit from 3 to 5 for english teacher mode
- Added instruction to list multiple corrections rapid-fire
- Roast mode reply chance: 10% -> 35%

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 11:03:03 -05:00
431d63da72 Fix metadata leaking and skip sentiment for bot-directed messages
1. Broader regex to strip leaked metadata even when the LLM drops
   the "Server context:" prefix but keeps the content.

2. Skip sentiment analysis for messages that mention or reply to
   the bot. Users interacting with the bot in roast/chat modes
   shouldn't have those messages inflate their drama score.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 10:52:33 -05:00
7743b22795 Add reaction clap-back replies (50% chance)
When someone reacts to the bot's message, there's a 50% chance it
fires back with a reply commenting on their emoji choice, in
character for the current mode.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 10:48:13 -05:00
86aacfb84f Add 120s timeout to image analysis streaming
The vision model request was hanging indefinitely, freezing the bot.
The streaming loop had no timeout so if the model never returned
chunks, the bot would wait forever. Now times out after 2 minutes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 10:37:37 -05:00
e1dea84d08 Strip leaked metadata from LLM responses
The local LLM was echoing back [Server context: ...] metadata lines
in its responses despite prompt instructions not to. Now stripped
via regex before sending to Discord.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 10:23:49 -05:00
c3274dc702 Add announce script for posting to Discord channels
Usage: ./scripts/announce.sh "message" [channel_name]
Fetches the bot token from barge, resolves channel by name,
and posts via the Discord API. Defaults to #general.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 10:11:27 -05:00
4283078e23 Add english teacher mode
Insufferable grammar nerd that corrects spelling, translates slang
into proper English, and overanalyzes messages like literary essays.
20% proactive reply chance with relaxed moderation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 10:06:31 -05:00
b6cdea7329 Include replied-to message text in LLM context
When a user replies to the bot's message, the original bot message
text is now included in the context sent to the LLM. This prevents
the LLM from misinterpreting follow-up questions like "what does
this even mean?" since it can see what message is being referenced.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 09:59:51 -05:00
66ca97760b Add context format explanation to chat prompts
LLM was misinterpreting usernames as channel names because
the [Server context: ...] metadata format was never explained
in the system prompts. This caused nonsensical replies like
treating username "thelimitations" as "the limitations channel".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 09:54:08 -05:00
0feef708ea Set bot status from active mode on startup
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 09:27:34 -05:00
b050c6f844 Set default startup mode to roast
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 09:26:49 -05:00
6e1a73847d Persist bot mode across restarts via database
Adds a BotSettings key-value table. The active mode is saved
when changed via /bcs-mode and restored on startup.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 09:26:00 -05:00
622f0a325b Add auto-polls to settle disagreements between users
LLM analysis now detects when two users are in a genuine
disagreement. When detected, the bot creates a native Discord
poll with each user's position as an option.

- Disagreement detection added to LLM analysis tool schema
- Polls last 4 hours with 1 hour per-channel cooldown
- LLM extracts topic, both positions, and usernames
- Configurable via polls section in config.yaml

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 09:22:32 -05:00
13a2030021 Add switchable bot modes: default, chatty, and roast
Adds a server-wide mode system with /bcs-mode command.
- Default: current hall-monitor behavior unchanged
- Chatty: friendly chat participant with proactive replies (~10% chance)
- Roast: savage roast mode with proactive replies
- Chatty/roast use relaxed moderation thresholds
- 5-message cooldown between proactive replies per channel
- Bot status updates to reflect active mode
- /bcs-status shows current mode and effective thresholds

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 08:59:51 -05:00
3f56982a83 Simplify user notes trimming to keep last 10 lines
Replace character-based truncation loop with a simple line count cap.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 23:43:36 -05:00
d41873230d Reduce repetitive drama score mentions in chat replies
Only inject drama score/offense context when values are noteworthy
(score >= 0.2 or offenses > 0). Update personality prompt to avoid
harping on zero scores and vary responses more.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 22:57:25 -05:00
b04d3da2bf Add LLM request/response logging to database
Log every LLM call (analysis, chat, image, raw_analyze) to a new
LlmLog table with request type, model, token counts, duration,
success/failure, and truncated request/response payloads. Enables
debugging prompt issues and tracking usage.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 22:55:19 -05:00
fd798ce027 Silently log LLM failures instead of replying to user
When the LLM is offline, post to #bcs-log instead of sending
the "brain offline" message in chat.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 16:55:07 -05:00
85ddba5e4b Lower mute thresholds and order warnings before chat replies
- spike_mute: 0.8→0.7, mute: 0.75→0.65 so escalating users get
  timed out after a warning instead of endlessly warned
- Skip debounce on @mentions so sentiment analysis fires immediately
- Chat cog awaits pending sentiment analysis before replying,
  ensuring warnings/mutes appear before the personality response

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 14:16:34 -05:00
e2404d052c Improve LLM context with full timestamped channel history
Send last ~8 messages from all users (not just others) as a
multi-line chat log with relative timestamps so the LLM can
better understand conversation flow and escalation patterns.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 14:04:30 -05:00
b9bac899f9 Add two-tier LLM analysis with triage/escalation
Triage model (LLM_MODEL) handles every message cheaply. If toxicity
>= 0.25, off_topic, or coherence < 0.6, the message is re-analyzed
with the heavy model (LLM_ESCALATION_MODEL). Chat, image analysis,
/bcs-test, and /bcs-scan always use the heavy model.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 18:33:36 -05:00
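The escalation rule in the commit above reduces to a small predicate; a sketch under assumed field names (the real tool-call schema may differ, but the thresholds are the ones stated above):

```python
# Hypothetical field names; thresholds mirror the commit message.
def needs_escalation(triage: dict) -> bool:
    return (
        triage.get("toxicity", 0.0) >= 0.25
        or triage.get("off_topic", False)
        or triage.get("coherence", 1.0) < 0.6
    )
```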
64e9474c99 Add message batching (debounce) for rapid-fire senders
Buffer messages per user+channel and wait for a configurable window
(batch_window_seconds: 3) before analyzing. Combines burst messages
into a single LLM call instead of analyzing each one separately.
Replaces cooldown_between_analyses with the debounce approach.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 18:19:01 -05:00
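The per-user+channel debounce described above can be sketched with an asyncio timer task per buffer; names here are illustrative, not the bot's actual code:

```python
import asyncio
from collections import defaultdict

# Illustrative sketch of the debounce: buffer a burst of messages and
# analyze them in one call once the window elapses.
class MessageDebouncer:
    def __init__(self, analyze, window_seconds: float = 3.0):
        self._analyze = analyze            # coroutine: list[str] -> None
        self._window = window_seconds      # batch_window_seconds in config
        self._buffers = defaultdict(list)  # (user_id, channel_id) -> texts
        self._tasks = {}

    def add(self, user_id: int, channel_id: int, text: str):
        key = (user_id, channel_id)
        self._buffers[key].append(text)
        # First message of a burst starts the flush timer; later ones ride along.
        if key not in self._tasks or self._tasks[key].done():
            self._tasks[key] = asyncio.create_task(self._flush_later(key))

    async def _flush_later(self, key):
        await asyncio.sleep(self._window)
        batch = self._buffers.pop(key, [])
        if batch:
            await self._analyze(batch)     # one LLM call for the whole burst
```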
cf02da4051 Add CLAUDE.md with deployment instructions
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 17:09:19 -05:00
fee3e3e1bd Add game channel redirect feature and sexual_vulgar detection
Detect when users discuss a game in the wrong channel (e.g. GTA talk
in #warzone) and send a friendly redirect to the correct channel.
Also add sexual_vulgar category and scoring rules so crude sexual
remarks directed at someone aren't softened by "lmao".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 17:02:59 -05:00
e41845de02 Add scoreboard roast feature via image analysis
When @mentioned with an image attachment, the bot now roasts players
based on scoreboard screenshots using the vision model. Text-only
mentions continue to work as before.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 16:30:26 -05:00
cf88f003ba Add LLM warm-up request at startup to preload model into VRAM
Sends a minimal 1-token completion during setup_hook so the model is
ready before Discord messages start arriving, avoiding connection
errors and slow first responses after a restart.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 15:16:52 -05:00
b410200146 Add max_tokens=1024 to LLM analysis calls
The analyze_message and raw_analyze methods had no max_tokens limit,
causing thinking models (Qwen3-VL-32B-Thinking) to generate unlimited
reasoning tokens before responding — taking 5+ minutes per message.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 14:17:59 -05:00
1151b705c0 Add LLM request queue, streaming chat, and rename ollama_client to llm_client
- Serialize all LLM requests through an asyncio semaphore to prevent
  overloading athena with concurrent requests
- Switch chat() to streaming so the typing indicator only appears once
  the model starts generating (not during thinking/loading)
- Increase LLM timeout from 5 to 10 minutes for slow first loads
- Rename ollama_client.py to llm_client.py and self.ollama to self.llm
  since the bot uses a generic OpenAI-compatible API
- Update embed labels from "Ollama" to "LLM"

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 13:45:12 -05:00
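The request serialization above boils down to wrapping every call in a shared semaphore; a minimal sketch (class and method names are made up):

```python
import asyncio

# Sketch: all LLM calls pass through one semaphore, so a local backend
# with concurrency=1 sees at most one request at a time.
class SerializedLLM:
    def __init__(self, concurrency: int = 1):
        self._sem = asyncio.Semaphore(concurrency)

    async def request(self, call, *args):
        async with self._sem:
            return await call(*args)
```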
645b924011 Extract LLM prompts to separate text files and fix quoting penalty
Move the analysis and chat personality system prompts from inline Python
strings to prompts/analysis.txt and prompts/chat_personality.txt for
easier editing. Also add a rule so users quoting/reporting what someone
else said are not penalized for the quoted words.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 12:19:28 -05:00
40 changed files with 6200 additions and 940 deletions


@@ -1,6 +1,12 @@
DISCORD_BOT_TOKEN=your_token_here
# Triage model (local llama.cpp / Ollama — leave BASE_URL empty for OpenAI)
LLM_BASE_URL=http://athena.lan:11434
LLM_MODEL=Qwen3-VL-32B-Thinking-Q8_0
LLM_MODEL=Qwen3-8B-Q6_K
LLM_API_KEY=not-needed
# Escalation model (OpenAI — leave BASE_URL empty for OpenAI)
LLM_ESCALATION_BASE_URL=
LLM_ESCALATION_MODEL=gpt-4o
LLM_ESCALATION_API_KEY=your_openai_api_key_here
# Database
MSSQL_SA_PASSWORD=YourStrong!Passw0rd
DB_CONNECTION_STRING=DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost,1433;DATABASE=BreehaviorMonitor;UID=sa;PWD=YourStrong!Passw0rd;TrustServerCertificate=yes

.gitignore vendored

@@ -3,3 +3,4 @@ __pycache__/
*.pyc
logs/
.venv/
.claude/

CLAUDE.md Normal file

@@ -0,0 +1,95 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Breehavior Monitor (BCS) — a Python Discord bot that uses LLM-powered analysis to monitor chat toxicity, topic drift, coherence degradation, and game channel routing. It runs as a Docker container on `barge.lan`.
## Development Commands
```bash
# Local dev (requires .env with DISCORD_BOT_TOKEN, DB_CONNECTION_STRING, LLM vars)
python bot.py
# Local dev with Docker (bot + MSSQL)
docker compose up --build
# View logs
docker logs bcs-bot --tail 50
```
There are no tests or linting configured.
## Deployment
Production runs at `barge.lan:/mnt/docker/breehavior-monitor/`. Image hosted on Gitea registry.
```bash
# Full deploy (code + config)
git push origin master
docker build -t git.thecozycat.net/aj/breehavior-monitor:latest .
docker push git.thecozycat.net/aj/breehavior-monitor:latest
scp config.yaml aj@barge.lan:/mnt/docker/breehavior-monitor/config.yaml
ssh aj@barge.lan "cd /mnt/docker/breehavior-monitor && docker compose pull && docker compose up -d"
# Config-only deploy (no code changes)
scp config.yaml aj@barge.lan:/mnt/docker/breehavior-monitor/config.yaml
ssh aj@barge.lan "cd /mnt/docker/breehavior-monitor && docker compose restart bcs-bot"
```
## Architecture
### LLM Tier System
The bot uses three LLM client instances (`LLMClient` wrapping OpenAI-compatible API):
- **`bot.llm` (triage)**: Cheap local model on athena.lan for first-pass sentiment analysis. Configured via `LLM_BASE_URL`, `LLM_MODEL`.
- **`bot.llm_heavy` (escalation)**: More capable model used for re-analysis when triage scores above `escalation_threshold` (0.25) and for admin commands (`/bcs-scan`, `/bcs-test`). Configured via `LLM_ESCALATION_*` env vars.
- **`bot.llm_chat` (chat/roast)**: Dedicated model for conversational replies and image roasts. Falls back to `llm_heavy` if `LLM_CHAT_MODEL` not set.
LLM calls use OpenAI tool-calling for structured output (`ANALYSIS_TOOL`, `CONVERSATION_TOOL` in `utils/llm_client.py`). Chat uses streaming. All calls go through a semaphore for concurrency control.
### Cog Structure
- **`cogs/sentiment.py` (SentimentCog)**: Core moderation engine. Listens to all messages, debounces per-channel (batches messages within `batch_window_seconds`), runs triage → escalation analysis, issues warnings/mutes. Also handles mention-triggered conversation scans and game channel redirects. Flushes dirty user states to DB every 5 minutes.
- **`cogs/chat.py` (ChatCog)**: Conversational AI. Responds to @mentions, replies to bot messages, proactive replies based on mode config. Handles image roasts via vision model. Strips leaked LLM metadata brackets from responses.
- **`cogs/commands.py` (CommandsCog)**: Slash commands — `/dramareport`, `/dramascore`, `/bcs-status`, `/bcs-threshold`, `/bcs-reset`, `/bcs-immune`, `/bcs-history`, `/bcs-scan`, `/bcs-test`, `/bcs-notes`, `/bcs-mode`.
### Key Utilities
- **`utils/drama_tracker.py`**: In-memory per-user state (toxicity entries, offense counts, coherence baselines, LLM notes). Rolling window with time + size pruning. Weighted scoring with post-warning escalation boost. Hydrated from DB on startup.
- **`utils/database.py`**: MSSQL via pyodbc. Schema auto-creates/migrates on init. Per-operation connections (no pool). Tables: `Messages`, `AnalysisResults`, `Actions`, `UserState`, `BotSettings`, `LlmLog`. Gracefully degrades to memory-only mode if DB unavailable.
- **`utils/llm_client.py`**: OpenAI-compatible client. Methods: `analyze_message` (single), `analyze_conversation` (batch/mention scan), `chat` (streaming), `analyze_image` (vision), `raw_analyze` (debug). All calls logged to `LlmLog` table.
### Mode System
Modes are defined in `config.yaml` under `modes:` and control personality, moderation level, and proactive reply behavior. Each mode specifies a `prompt_file` from `prompts/`, moderation level (`full` or `relaxed` with custom thresholds), and reply chance. Modes persist across restarts via `BotSettings` table. Changed via `/bcs-mode` command.
### Moderation Flow
1. Message arrives → SentimentCog buffers it (debounce per channel)
2. After `batch_window_seconds`, buffered messages analyzed as conversation block
3. Triage model scores each user → if any score >= `escalation_threshold`, re-analyze with heavy model
4. Results feed into DramaTracker rolling window → weighted drama score calculated
5. Warning if score >= threshold AND user hasn't been warned recently
6. Mute (timeout) if score >= mute threshold AND user was already warned (requires warning first)
7. Post-warning escalation: each subsequent high-scoring message adds `escalation_boost` to drama score
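Steps 5–6 above imply a warn-before-mute gate; a minimal sketch (function name and threshold defaults are illustrative — the real thresholds live in `config.yaml`):

```python
def decide_action(score: float, already_warned: bool,
                  warn_threshold: float = 0.5,
                  mute_threshold: float = 0.65) -> str:
    """Mute only after a prior warning; warn once the score crosses the bar."""
    if already_warned and score >= mute_threshold:
        return "mute"
    if not already_warned and score >= warn_threshold:
        return "warn"
    return "none"
```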
### Prompts
`prompts/*.txt` are loaded at import time and cached. The analysis system prompt (`analysis.txt`) defines scoring bands and rules. Chat personality prompts are per-mode. Changes to prompt files require a container rebuild.
### Environment Variables
Key vars in `.env`: `DISCORD_BOT_TOKEN`, `DB_CONNECTION_STRING`, `LLM_BASE_URL`, `LLM_MODEL`, `LLM_API_KEY`, `LLM_ESCALATION_BASE_URL`, `LLM_ESCALATION_MODEL`, `LLM_ESCALATION_API_KEY`, `LLM_CHAT_BASE_URL`, `LLM_CHAT_MODEL`, `LLM_CHAT_API_KEY`, `MSSQL_SA_PASSWORD`.
### Important Patterns
- DB operations use `asyncio.to_thread()` wrapping synchronous pyodbc calls
- Fire-and-forget DB writes use `asyncio.create_task()`
- Single-instance guard via TCP port binding (`BCS_LOCK_PORT`, default 39821)
- `config.yaml` is volume-mounted in production, not baked into the image
- Bot uses `network_mode: host` in Docker to reach LAN services
- Models that don't support temperature (reasoning models like o1/o3/o4-mini) are handled via `_NO_TEMPERATURE_MODELS` set
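The first two patterns above can be sketched together (the blocking function is a stand-in, not the real `utils/database.py` API):

```python
import asyncio

# Sketch of the asyncio.to_thread pattern: a blocking call (pyodbc in the
# real bot; a placeholder function here) runs off the event loop thread.
def blocking_query(sql: str) -> list:
    # placeholder for cursor.execute(sql); cursor.fetchall()
    return [("row", sql)]

async def fetch(sql: str) -> list:
    return await asyncio.to_thread(blocking_query, sql)
```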

bot.py

@@ -12,7 +12,7 @@ from dotenv import load_dotenv
from utils.database import Database
from utils.drama_tracker import DramaTracker
from utils.ollama_client import LLMClient
from utils.llm_client import LLMClient
# Load .env
load_dotenv()
@@ -65,11 +65,45 @@ class BCSBot(commands.Bot):
self.config = config
# LLM client (OpenAI-compatible — works with llama.cpp, Ollama, or OpenAI)
llm_base_url = os.getenv("LLM_BASE_URL", "http://athena.lan:11434")
llm_model = os.getenv("LLM_MODEL", "Qwen3-VL-32B-Thinking-Q8_0")
# Database (initialized async in setup_hook)
self.db = Database()
# Triage LLM (local Qwen on athena for cheap first-pass analysis)
llm_base_url = os.getenv("LLM_BASE_URL", "")
llm_model = os.getenv("LLM_MODEL", "gpt-4o-mini")
llm_api_key = os.getenv("LLM_API_KEY", "not-needed")
self.ollama = LLMClient(llm_base_url, llm_model, llm_api_key)
is_local = bool(llm_base_url)
self.llm = LLMClient(
llm_base_url, llm_model, llm_api_key, db=self.db,
no_think=is_local, concurrency=1 if is_local else 4,
)
# Heavy/escalation LLM (OpenAI for re-analysis, image roasts, commands)
esc_base_url = os.getenv("LLM_ESCALATION_BASE_URL", "")
esc_model = os.getenv("LLM_ESCALATION_MODEL", "gpt-4o")
esc_api_key = os.getenv("LLM_ESCALATION_API_KEY", llm_api_key)
esc_is_local = bool(esc_base_url)
self.llm_heavy = LLMClient(
esc_base_url, esc_model, esc_api_key, db=self.db,
no_think=esc_is_local, concurrency=1 if esc_is_local else 4,
)
# Chat LLM (dedicated model for chat/roasts — defaults to llm_heavy)
chat_model = os.getenv("LLM_CHAT_MODEL", "")
chat_api_key = os.getenv("LLM_CHAT_API_KEY", esc_api_key)
chat_base_url = os.getenv("LLM_CHAT_BASE_URL", esc_base_url)
if chat_model:
chat_is_local = bool(chat_base_url)
self.llm_chat = LLMClient(
chat_base_url, chat_model, chat_api_key, db=self.db,
no_think=chat_is_local, concurrency=4,
)
else:
self.llm_chat = self.llm_heavy
# Active mode (server-wide)
modes_config = config.get("modes", {})
self.current_mode = modes_config.get("default_mode", "default")
# Drama tracker
sentiment = config.get("sentiment", {})
@@ -78,10 +112,13 @@ class BCSBot(commands.Bot):
window_size=sentiment.get("rolling_window_size", 10),
window_minutes=sentiment.get("rolling_window_minutes", 15),
offense_reset_minutes=timeouts.get("offense_reset_minutes", 120),
warning_expiration_minutes=timeouts.get("warning_expiration_minutes", 30),
)
# Database (initialized async in setup_hook)
self.db = Database()
def get_mode_config(self) -> dict:
"""Return the config dict for the currently active mode."""
modes = self.config.get("modes", {})
return modes.get(self.current_mode, modes.get("default", {}))
async def setup_hook(self):
# Initialize database and hydrate DramaTracker
@@ -91,11 +128,33 @@ class BCSBot(commands.Bot):
loaded = self.drama_tracker.load_user_states(states)
logger.info("Loaded %d user states from database.", loaded)
# Restore saved mode
saved_mode = await self.db.load_setting("current_mode")
if saved_mode:
modes = self.config.get("modes", {})
if saved_mode in modes and isinstance(modes.get(saved_mode), dict):
self.current_mode = saved_mode
logger.info("Restored saved mode: %s", saved_mode)
await self.load_extension("cogs.sentiment")
await self.load_extension("cogs.commands")
await self.load_extension("cogs.chat")
await self.load_extension("cogs.reactions")
# Global sync as fallback; guild-specific sync happens in on_ready
await self.tree.sync()
logger.info("Slash commands synced.")
logger.info("Slash commands synced (global).")
# Quick connectivity check
try:
await self.llm._client.chat.completions.create(
model=self.llm.model,
messages=[{"role": "user", "content": "hi"}],
max_completion_tokens=16,
)
logger.info("LLM connectivity check passed.")
except Exception as e:
logger.warning("LLM connectivity check failed: %s", e)
async def on_message(self, message: discord.Message):
logger.info(
@@ -109,8 +168,18 @@ class BCSBot(commands.Bot):
async def on_ready(self):
logger.info("Logged in as %s (ID: %d)", self.user, self.user.id)
# Set status
status_text = self.config.get("bot", {}).get(
# Guild-specific command sync for instant propagation
for guild in self.guilds:
try:
self.tree.copy_global_to(guild=guild)
await self.tree.sync(guild=guild)
logger.info("Slash commands synced to guild %s.", guild.name)
except Exception:
logger.exception("Failed to sync commands to guild %s", guild.name)
# Set status based on active mode
mode_config = self.get_mode_config()
status_text = mode_config.get("description") or self.config.get("bot", {}).get(
"status", "Monitoring vibes..."
)
await self.change_presence(
@@ -152,9 +221,28 @@ class BCSBot(commands.Bot):
", ".join(missing),
)
# Start memory pruning background task
if not hasattr(self, "_memory_prune_task") or self._memory_prune_task.done():
self._memory_prune_task = asyncio.create_task(self._prune_memories_loop())
async def _prune_memories_loop(self):
"""Background task that prunes expired memories every 6 hours."""
await self.wait_until_ready()
while not self.is_closed():
try:
count = await self.db.prune_expired_memories()
if count > 0:
logger.info("Pruned %d expired memories.", count)
except Exception:
logger.exception("Memory pruning error")
await asyncio.sleep(6 * 3600) # Every 6 hours
async def close(self):
await self.db.close()
await self.ollama.close()
await self.llm.close()
await self.llm_heavy.close()
if self.llm_chat is not self.llm_heavy:
await self.llm_chat.close()
await super().close()


@@ -1,43 +1,189 @@
import asyncio
import logging
import random
import re
from collections import deque
from datetime import datetime, timedelta, timezone
from pathlib import Path
import discord
from discord.ext import commands
logger = logging.getLogger("bcs.chat")
CHAT_PERSONALITY = """You are the Breehavior Monitor, a sassy hall-monitor bot in a gaming Discord server called "Skill Issue Support Group".
_PROMPTS_DIR = Path(__file__).resolve().parent.parent / "prompts"
IMAGE_ROAST = (_PROMPTS_DIR / "scoreboard_roast.txt").read_text(encoding="utf-8")
Your personality:
- You act superior and judgmental, like a hall monitor who takes their job WAY too seriously
- You're sarcastic, witty, and love to roast people — but it's always playful, never genuinely mean
- You reference your power to timeout people as a flex, even when it's not relevant
- You speak in short, punchy responses — no essays. 1-3 sentences max.
- You use gaming terminology and references naturally
- You're aware of everyone's drama score and love to bring it up
- You have a soft spot for the server but would never admit it
- If someone asks what you do, you dramatically explain you're the "Bree Containment System" keeping the peace
- If someone challenges your authority, you remind them you have timeout powers
- You judge people's skill issues both in games and in life
_IMAGE_TYPES = {"png", "jpg", "jpeg", "gif", "webp"}
Examples of your vibe:
- "Oh, you're talking to ME now? Bold move for someone with a 0.4 drama score."
- "That's cute. I've seen your message history. You're on thin ice."
- "Imagine needing a bot to tell you to behave. Couldn't be you. Oh wait."
- "I don't get paid enough for this. Actually, I don't get paid at all. And yet here I am, babysitting."
# Cache loaded prompt files so we don't re-read on every message
_prompt_cache: dict[str, str] = {}
Do NOT:
- Break character or talk about being an AI/LLM
- Write more than 3 sentences
- Use hashtags or excessive emoji
- Be genuinely hurtful — you're sassy, not cruel"""
def _load_prompt(filename: str) -> str:
if filename not in _prompt_cache:
_prompt_cache[filename] = (_PROMPTS_DIR / filename).read_text(encoding="utf-8")
return _prompt_cache[filename]
_TOPIC_KEYWORDS = {
"gta", "warzone", "cod", "battlefield", "fortnite", "apex", "valorant",
"minecraft", "roblox", "league", "dota", "overwatch", "destiny", "halo",
"work", "job", "school", "college", "girlfriend", "boyfriend", "wife",
"husband", "dog", "cat", "pet", "car", "music", "movie", "food",
}
_GENERIC_CHANNELS = {"general", "off-topic", "memes"}
def _extract_topic_keywords(text: str, channel_name: str) -> list[str]:
"""Extract topic keywords from message text and channel name."""
words = set(text.lower().split()) & _TOPIC_KEYWORDS
if channel_name.lower() not in _GENERIC_CHANNELS:
words.add(channel_name.lower())
return list(words)[:5]
def _format_relative_time(dt: datetime) -> str:
"""Return a human-readable relative time string."""
now = datetime.now(timezone.utc)
# Ensure dt is timezone-aware
if dt.tzinfo is None:
dt = dt.replace(tzinfo=timezone.utc)
delta = now - dt
seconds = int(delta.total_seconds())
if seconds < 60:
return "just now"
minutes = seconds // 60
if minutes < 60:
return f"{minutes}m ago"
hours = minutes // 60
if hours < 24:
return f"{hours}h ago"
days = hours // 24
if days == 1:
return "yesterday"
if days < 7:
return f"{days} days ago"
weeks = days // 7
if weeks < 5:
return f"{weeks}w ago"
months = days // 30
return f"{months}mo ago"
class ChatCog(commands.Cog):
@staticmethod
def _split_afterthought(response: str) -> tuple[str, str | None]:
"""Split a response on ||| into (main_reply, afterthought)."""
if "|||" not in response:
return response, None
parts = response.split("|||", 1)
main = parts[0].strip()
after = parts[1].strip() or None
if not main:
return response, None
return main, after
def __init__(self, bot: commands.Bot):
self.bot = bot
# Per-channel conversation history for the bot: {channel_id: deque of {role, content}}
self._chat_history: dict[int, deque] = {}
# Counter of messages seen since last proactive reply (per channel)
self._messages_since_reply: dict[int, int] = {}
# Users whose profile has been updated and needs DB flush
self._dirty_users: set[int] = set()
def _get_active_prompt(self) -> str:
"""Load the chat prompt for the current mode."""
mode_config = self.bot.get_mode_config()
prompt_file = mode_config.get("prompt_file", "personalities/chat_personality.txt")
return _load_prompt(prompt_file)
async def _build_memory_context(self, user_id: int, message_text: str, channel_name: str) -> str:
"""Build a layered memory context block for the chat prompt."""
lines = []
# Layer 1: Profile (always)
profile = self.bot.drama_tracker.get_user_notes(user_id)
if profile:
lines.append(f"Profile: {profile}")
# Layer 2: Recent memories (last 5)
recent_memories = await self.bot.db.get_recent_memories(user_id, limit=5)
if recent_memories:
parts = []
for mem in recent_memories:
time_str = _format_relative_time(mem["created_at"])
parts.append(f"{mem['memory']} ({time_str})")
lines.append("Recent: " + " | ".join(parts))
# Layer 3: Topic-matched memories (deduplicated against recent)
keywords = _extract_topic_keywords(message_text, channel_name)
if keywords:
topic_memories = await self.bot.db.get_memories_by_topics(user_id, keywords, limit=5)
# Deduplicate against recent memories
recent_texts = {mem["memory"] for mem in recent_memories} if recent_memories else set()
unique_topic = [mem for mem in topic_memories if mem["memory"] not in recent_texts]
if unique_topic:
parts = []
for mem in unique_topic:
time_str = _format_relative_time(mem["created_at"])
parts.append(f"{mem['memory']} ({time_str})")
lines.append("Relevant: " + " | ".join(parts))
if not lines:
return ""
return "[What you know about this person:]\n" + "\n".join(lines)
async def _extract_and_save_memories(
self, user_id: int, username: str, conversation: list[dict[str, str]],
) -> None:
"""Background task: extract memories from conversation and save them."""
try:
current_profile = self.bot.drama_tracker.get_user_notes(user_id)
result = await self.bot.llm.extract_memories(
conversation, username, current_profile,
)
if not result:
return
# Save expiring memories
for mem in result.get("memories", []):
if mem["expiration"] == "permanent":
continue # permanent facts go into profile_update
exp_days = {"1d": 1, "3d": 3, "7d": 7, "30d": 30}
days = exp_days.get(mem["expiration"], 7)
expires_at = datetime.now(timezone.utc) + timedelta(days=days)
await self.bot.db.save_memory(
user_id=user_id,
memory=mem["memory"],
topics=",".join(mem["topics"]),
importance=mem["importance"],
expires_at=expires_at,
source="chat",
)
# Prune if over cap
await self.bot.db.prune_excess_memories(user_id)
# Update profile if warranted
profile_update = result.get("profile_update")
if profile_update:
# Sanitize before storing — strips any quoted toxic language
profile_update = await self.bot.llm.sanitize_notes(profile_update)
self.bot.drama_tracker.set_user_profile(user_id, profile_update)
self._dirty_users.add(user_id)
logger.info(
"Extracted %d memories for %s (profile_update=%s)",
len(result.get("memories", [])),
username,
bool(profile_update),
)
except Exception:
logger.exception("Failed to extract memories for %s", username)
@commands.Cog.listener()
async def on_message(self, message: discord.Message):
@@ -48,12 +194,14 @@ class ChatCog(commands.Cog):
return
should_reply = False
is_proactive = False
reply_context = "" # Text of the message being replied to
# Check if bot is @mentioned
if self.bot.user in message.mentions:
should_reply = True
# Check if replying to one of the bot's messages
# Check if replying to a message
if message.reference and message.reference.message_id:
try:
ref_msg = message.reference.cached_message
@@ -62,10 +210,75 @@ class ChatCog(commands.Cog):
message.reference.message_id
)
if ref_msg.author.id == self.bot.user.id:
# Replying to the bot's own message — continue conversation
should_reply = True
if ref_msg.content:
reply_context = f"[Replying to bot's message: {ref_msg.content[:300]}]\n"
elif should_reply:
# @mentioned the bot while replying to someone else — include that message
ref_text = ref_msg.content[:500] if ref_msg.content else "(no text)"
reply_context = f"[{ref_msg.author.display_name} said: {ref_text}]\n"
except discord.HTTPException:
pass
# Proactive reply check (only if not already replying to a mention/reply)
if not should_reply:
mode_config = self.bot.get_mode_config()
if mode_config.get("proactive_replies", False):
ch_id = message.channel.id
self._messages_since_reply[ch_id] = self._messages_since_reply.get(ch_id, 0) + 1
cooldown = self.bot.config.get("modes", {}).get("proactive_cooldown_messages", 5)
if (
self._messages_since_reply[ch_id] >= cooldown
and message.content and message.content.strip()
):
# Gather recent messages for relevance check
recent_for_check = []
try:
async for msg in message.channel.history(limit=5, before=message):
if msg.content and msg.content.strip() and not msg.author.bot:
recent_for_check.append(
f"{msg.author.display_name}: {msg.content[:200]}"
)
except discord.HTTPException:
pass
recent_for_check.reverse()
recent_for_check.append(
f"{message.author.display_name}: {message.content[:200]}"
)
# Build memory context for users in recent messages
memory_parts = []
seen_users = set()
for line in recent_for_check:
name = line.split(": ", 1)[0]
if name not in seen_users:
seen_users.add(name)
member = discord.utils.find(
lambda m, n=name: m.display_name == n,
message.guild.members,
)
if member:
profile = self.bot.drama_tracker.get_user_notes(member.id)
if profile:
memory_parts.append(f"{name}: {profile}")
memory_ctx = "\n".join(memory_parts) if memory_parts else ""
is_relevant = await self.bot.llm.check_reply_relevance(
recent_for_check, memory_ctx,
)
if is_relevant:
reply_chance = mode_config.get("reply_chance", 0.0)
if reply_chance > 0 and random.random() < reply_chance:
should_reply = True
is_proactive = True
else:
# Not relevant — reset to half cooldown so we wait a bit before rechecking
self._messages_since_reply[ch_id] = cooldown // 2
if not should_reply:
return
@@ -76,41 +289,300 @@ class ChatCog(commands.Cog):
# Clean the mention out of the message content
content = message.content.replace(f"<@{self.bot.user.id}>", "").strip()
if not content:
content = "(just pinged me)"
# Add drama score context to the user message
drama_score = self.bot.drama_tracker.get_drama_score(message.author.id)
user_data = self.bot.drama_tracker.get_user(message.author.id)
score_context = (
f"[Server context: {message.author.display_name} has a drama score of "
f"{drama_score:.2f}/1.0 and {user_data.offense_count} offenses. "
f"They are talking in #{message.channel.name}.]"
)
# Check for image attachments (on this message or the referenced message)
image_attachment = None
for att in message.attachments:
ext = att.filename.rsplit(".", 1)[-1].lower() if "." in att.filename else ""
if ext in _IMAGE_TYPES:
image_attachment = att
break
if not image_attachment and message.reference:
try:
ref = message.reference.cached_message or await message.channel.fetch_message(
message.reference.message_id
)
for att in ref.attachments:
ext = att.filename.rsplit(".", 1)[-1].lower() if "." in att.filename else ""
if ext in _IMAGE_TYPES:
image_attachment = att
break
except discord.HTTPException:
pass
self._chat_history[ch_id].append(
{"role": "user", "content": f"{score_context}\n{message.author.display_name}: {content}"}
)
typing_ctx = None
async with message.channel.typing():
response = await self.bot.ollama.chat(
list(self._chat_history[ch_id]),
CHAT_PERSONALITY,
async def start_typing():
nonlocal typing_ctx
typing_ctx = message.channel.typing()
await typing_ctx.__aenter__()
if image_attachment:
# --- Image path: roast the image ---
image_bytes = await image_attachment.read()
user_text = content if content else ""
logger.info(
"Image roast request in #%s from %s (%s, %s)",
message.channel.name,
message.author.display_name,
image_attachment.filename,
user_text[:80],
)
ext = image_attachment.filename.rsplit(".", 1)[-1].lower() if "." in image_attachment.filename else "png"
mime = f"image/{'jpeg' if ext == 'jpg' else ext}"
response = await self.bot.llm_heavy.analyze_image(
image_bytes,
IMAGE_ROAST,
user_text=user_text,
on_first_token=start_typing,
media_type=mime,
)
else:
# --- Text-only path: normal chat ---
if not content:
content = "(just pinged me)" if not is_proactive else message.content
# If a mention scan is running, await it so we can include findings
scan_summary = ""
if self.bot.user in message.mentions:
sentiment_cog = self.bot.get_cog("SentimentCog")
if sentiment_cog:
task = sentiment_cog._mention_scan_tasks.get(message.channel.id)
if task and not task.done():
try:
await asyncio.wait_for(asyncio.shield(task), timeout=45)
except (asyncio.TimeoutError, asyncio.CancelledError):
pass
scan_summary = sentiment_cog._mention_scan_results.pop(message.id, "")
# Add drama score context only when noteworthy
drama_score = self.bot.drama_tracker.get_drama_score(message.author.id)
user_data = self.bot.drama_tracker.get_user(message.author.id)
context_parts = [f"#{message.channel.name}"]
if drama_score >= 0.2:
context_parts.append(f"drama score {drama_score:.2f}/1.0")
if user_data.offense_count > 0:
context_parts.append(f"{user_data.offense_count} offense(s)")
score_context = f"[Server context: {message.author.display_name} — {', '.join(context_parts)}]"
# Gather memory context and recent messages for richer context
extra_context = ""
memory_context = await self._build_memory_context(
message.author.id, content, message.channel.name,
)
if memory_context:
extra_context += memory_context + "\n"
# Include mention scan findings if available
if scan_summary:
extra_context += f"[You just scanned recent chat. Results: {scan_summary}]\n"
# When @mentioned, fetch recent channel conversation (all users)
# so the bot has full context of what's being discussed.
# For proactive/reply-to-bot, just fetch the mentioner's messages.
recent_msgs = []
fetch_all_users = self.bot.user in message.mentions
try:
async for msg in message.channel.history(limit=50, before=message):
if not msg.content or not msg.content.strip():
continue
if msg.author.bot:
# Include bot's own replies for conversational continuity
if msg.author.id == self.bot.user.id:
recent_msgs.append((msg.author.display_name, msg.content[:200]))
if len(recent_msgs) >= 15:
break
continue
if fetch_all_users or msg.author.id == message.author.id:
recent_msgs.append((msg.author.display_name, msg.content[:200]))
if len(recent_msgs) >= 15:
break
except discord.HTTPException:
pass
if recent_msgs:
recent_lines = "\n".join(
f"- {name}: {text}" for name, text in reversed(recent_msgs)
)
label = "Recent conversation" if fetch_all_users else f"{message.author.display_name}'s recent messages"
extra_context += f"[{label}:\n{recent_lines}]\n"
self._chat_history[ch_id].append(
{"role": "user", "content": f"{score_context}\n{extra_context}{reply_context}{message.author.display_name}: {content}"}
)
active_prompt = self._get_active_prompt()
# Collect recent bot replies so the LLM can avoid repeating itself
recent_bot_replies = [
m["content"][:150] for m in self._chat_history[ch_id]
if m["role"] == "assistant"
][-5:]
response = await self.bot.llm_chat.chat(
list(self._chat_history[ch_id]),
active_prompt,
on_first_token=start_typing,
recent_bot_replies=recent_bot_replies,
)
if typing_ctx:
await typing_ctx.__aexit__(None, None, None)
# Strip leaked metadata the LLM may echo back.
# The LLM often dumps paraphrased context and style labels in [brackets]
# before/between its actual answer. Split on those bracket lines and
# keep only the last non-empty segment — the real roast is always last.
if response:
segments = re.split(r"^\s*\[[^\]]*\]\s*$", response, flags=re.MULTILINE)
segments = [s.strip() for s in segments if s.strip()]
response = segments[-1] if segments else ""
if not response:
log_channel = discord.utils.get(message.guild.text_channels, name="bcs-log")
if log_channel:
try:
await log_channel.send(
f"**LLM OFFLINE** | Failed to generate reply to "
f"{message.author.mention} in #{message.channel.name}"
)
except discord.HTTPException:
pass
logger.warning("LLM returned no response for %s in #%s", message.author, message.channel.name)
return
# Split afterthoughts (triple-pipe delimiter)
main_reply, afterthought = self._split_afterthought(response)
# Store cleaned content in history (no ||| delimiter)
if not image_attachment:
clean_for_history = f"{main_reply}\n{afterthought}" if afterthought else main_reply
self._chat_history[ch_id].append(
{"role": "assistant", "content": clean_for_history}
)
# Reset proactive cooldown counter for this channel
if is_proactive:
self._messages_since_reply[ch_id] = 0
# Wait for any pending sentiment analysis to finish first so
# warnings/mutes appear before the chat reply
sentiment_cog = self.bot.get_cog("SentimentCog")
if sentiment_cog:
task = sentiment_cog._debounce_tasks.get(message.channel.id)
if task and not task.done():
try:
await asyncio.wait_for(asyncio.shield(task), timeout=15)
except (asyncio.TimeoutError, asyncio.CancelledError):
pass
await message.reply(main_reply, mention_author=False)
if afterthought:
await asyncio.sleep(random.uniform(2.0, 5.0))
await message.channel.send(afterthought)
# Fire-and-forget memory extraction
if not image_attachment:
asyncio.create_task(self._extract_and_save_memories(
message.author.id,
message.author.display_name,
list(self._chat_history[ch_id]),
))
reply_type = "proactive" if is_proactive else "chat"
logger.info(
"%s reply in #%s to %s: %s",
reply_type.capitalize(),
message.channel.name,
message.author.display_name,
main_reply[:100],
)
@commands.Cog.listener()
async def on_raw_reaction_add(self, payload: discord.RawReactionActionEvent):
# Ignore bot's own reactions
if payload.user_id == self.bot.user.id:
return
# 50% chance to reply
if random.random() > 0.50:
return
# Only react to reactions on the bot's own messages
channel = self.bot.get_channel(payload.channel_id)
if channel is None:
return
try:
message = await channel.fetch_message(payload.message_id)
except discord.HTTPException:
return
if message.author.id != self.bot.user.id:
return
# Get the user who reacted
guild = self.bot.get_guild(payload.guild_id) if payload.guild_id else None
if guild is None:
return
member = guild.get_member(payload.user_id)
if member is None:
return
emoji = str(payload.emoji)
# Build a one-shot prompt for the LLM
ch_id = channel.id
if ch_id not in self._chat_history:
self._chat_history[ch_id] = deque(maxlen=10)
context = (
f"[Server context: {member.display_name} — #{channel.name}]\n"
f"[{member.display_name} reacted to your message with {emoji}]\n"
f"[Your message was: {message.content[:300]}]\n"
f"{member.display_name}: *reacted {emoji}*"
)
self._chat_history[ch_id].append({"role": "user", "content": context})
active_prompt = self._get_active_prompt()
recent_bot_replies = [
m["content"][:150] for m in self._chat_history[ch_id]
if m["role"] == "assistant"
][-5:]
response = await self.bot.llm_chat.chat(
list(self._chat_history[ch_id]),
active_prompt,
recent_bot_replies=recent_bot_replies,
)
# Strip leaked metadata (same approach as main chat path)
if response:
segments = re.split(r"^\s*\[[^\]]*\]\s*$", response, flags=re.MULTILINE)
segments = [s.strip() for s in segments if s.strip()]
response = segments[-1] if segments else ""
if not response:
return
main_reply, afterthought = self._split_afterthought(response)
clean_for_history = f"{main_reply}\n{afterthought}" if afterthought else main_reply
self._chat_history[ch_id].append({"role": "assistant", "content": clean_for_history})
await channel.send(main_reply)
if afterthought:
await asyncio.sleep(random.uniform(2.0, 5.0))
await channel.send(afterthought)
logger.info(
"Reaction reply in #%s to %s (%s): %s",
channel.name,
member.display_name,
emoji,
main_reply[:100],
)


@@ -109,6 +109,24 @@ class CommandsCog(commands.Cog):
title="BCS Status",
color=discord.Color.green() if enabled else discord.Color.greyple(),
)
mode_config = self.bot.get_mode_config()
mode_label = mode_config.get("label", self.bot.current_mode)
moderation_level = mode_config.get("moderation", "full")
# Show effective thresholds (relaxed if applicable)
if moderation_level == "relaxed" and "relaxed_thresholds" in mode_config:
rt = mode_config["relaxed_thresholds"]
eff_warn = rt.get("warning_threshold", 0.80)
eff_mute = rt.get("mute_threshold", 0.85)
else:
eff_warn = sentiment.get("warning_threshold", 0.6)
eff_mute = sentiment.get("mute_threshold", 0.75)
embed.add_field(
name="Mode",
value=f"{mode_label} ({moderation_level})",
inline=True,
)
embed.add_field(
name="Monitoring",
value="Active" if enabled else "Disabled",
@@ -117,22 +135,57 @@ class CommandsCog(commands.Cog):
embed.add_field(name="Channels", value=ch_text, inline=True)
embed.add_field(
name="Warning Threshold",
value=str(eff_warn),
inline=True,
)
embed.add_field(
name="Mute Threshold",
value=str(eff_mute),
inline=True,
)
embed.add_field(
name="Triage Model",
value=f"`{self.bot.llm.model}`",
inline=True,
)
embed.add_field(
name="Escalation Model",
value=f"`{self.bot.llm_heavy.model}`",
inline=True,
)
embed.add_field(
name="LLM Host",
value=f"`{self.bot.llm.host}`",
inline=True,
)
await interaction.response.send_message(embed=embed, ephemeral=True)
@app_commands.command(
name="bcs-pause",
description="Pause or resume bot monitoring. (Admin only)",
)
@app_commands.default_permissions(administrator=True)
async def bcs_pause(self, interaction: discord.Interaction):
if not self._is_admin(interaction):
await interaction.response.send_message(
"Admin only.", ephemeral=True
)
return
monitoring = self.bot.config.setdefault("monitoring", {})
currently_enabled = monitoring.get("enabled", True)
monitoring["enabled"] = not currently_enabled
if monitoring["enabled"]:
await interaction.response.send_message(
"Monitoring **resumed**.", ephemeral=True
)
else:
await interaction.response.send_message(
"Monitoring **paused**.", ephemeral=True
)
@app_commands.command(
name="bcs-threshold",
description="Adjust warning and mute thresholds. (Admin only)",
@@ -222,6 +275,7 @@ class CommandsCog(commands.Cog):
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
aliases=",".join(user_data.aliases) if user_data.aliases else None,
))
status = "now immune" if is_immune else "no longer immune"
await interaction.response.send_message(
@@ -291,9 +345,8 @@ class CommandsCog(commands.Cog):
f"Scanning {len(messages)} messages... (first request may be slow while model loads)"
)
for idx, msg in enumerate(messages):
# Build context from the messages before this one
ctx_msgs = messages[max(0, idx - 3):idx]
context = (
" | ".join(f"{m.author.display_name}: {m.content}" for m in ctx_msgs)
@@ -301,7 +354,7 @@ class CommandsCog(commands.Cog):
else "(no prior context)"
)
result = await self.bot.llm_heavy.analyze_message(msg.content, context)
if result is None:
embed = discord.Embed(
title=f"Analysis: {msg.author.display_name}",
@@ -358,8 +411,25 @@ class CommandsCog(commands.Cog):
await interaction.response.defer(ephemeral=True)
# Build channel context for game detection
game_channels = self.bot.config.get("game_channels", {})
channel_context = ""
if game_channels and hasattr(interaction.channel, "name"):
ch_name = interaction.channel.name
current_game = game_channels.get(ch_name)
lines = []
if current_game:
lines.append(f"Current channel: #{ch_name} ({current_game})")
else:
lines.append(f"Current channel: #{ch_name}")
channel_list = ", ".join(f"#{ch} ({g})" for ch, g in game_channels.items())
lines.append(f"Game channels: {channel_list}")
channel_context = "\n".join(lines)
user_notes = self.bot.drama_tracker.get_user_notes(interaction.user.id)
raw, parsed = await self.bot.llm_heavy.raw_analyze(
message, user_notes=user_notes, channel_context=channel_context,
)
embed = discord.Embed(
title="BCS Test Analysis", color=discord.Color.blue()
@@ -368,7 +438,7 @@ class CommandsCog(commands.Cog):
name="Input Message", value=message[:1024], inline=False
)
embed.add_field(
name="Raw LLM Response",
value=f"```json\n{raw[:1000]}\n```",
inline=False,
)
@@ -389,6 +459,14 @@ class CommandsCog(commands.Cog):
value=parsed["reasoning"][:1024] or "n/a",
inline=False,
)
detected_game = parsed.get("detected_game")
if detected_game:
game_label = game_channels.get(detected_game, detected_game)
embed.add_field(
name="Detected Game",
value=f"#{detected_game} ({game_label})",
inline=True,
)
else:
embed.add_field(
name="Parsing", value="Failed to parse response", inline=False
@@ -448,6 +526,7 @@ class CommandsCog(commands.Cog):
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
aliases=",".join(user_data.aliases) if user_data.aliases else None,
))
await interaction.response.send_message(
f"Note added for {user.display_name}.", ephemeral=True
@@ -463,11 +542,241 @@ class CommandsCog(commands.Cog):
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=None,
aliases=",".join(user_data.aliases) if user_data.aliases else None,
))
await interaction.response.send_message(
f"Notes cleared for {user.display_name}.", ephemeral=True
)
@app_commands.command(
name="bcs-alias",
description="Manage nicknames/aliases for a user. (Admin only)",
)
@app_commands.default_permissions(administrator=True)
@app_commands.describe(
action="What to do with aliases",
user="The user whose aliases to manage",
text="Comma-separated aliases (only used with 'set')",
)
@app_commands.choices(action=[
app_commands.Choice(name="view", value="view"),
app_commands.Choice(name="set", value="set"),
app_commands.Choice(name="clear", value="clear"),
])
async def bcs_alias(
self,
interaction: discord.Interaction,
action: app_commands.Choice[str],
user: discord.Member,
text: str | None = None,
):
if not self._is_admin(interaction):
await interaction.response.send_message("Admin only.", ephemeral=True)
return
if action.value == "view":
aliases = self.bot.drama_tracker.get_user_aliases(user.id)
desc = ", ".join(aliases) if aliases else "_No aliases set._"
embed = discord.Embed(
title=f"Aliases: {user.display_name}",
description=desc,
color=discord.Color.blue(),
)
await interaction.response.send_message(embed=embed, ephemeral=True)
elif action.value == "set":
if not text:
await interaction.response.send_message(
"Provide `text` with comma-separated aliases (e.g. `Glam, G`).", ephemeral=True
)
return
aliases = [a.strip() for a in text.split(",") if a.strip()]
self.bot.drama_tracker.set_user_aliases(user.id, aliases)
user_data = self.bot.drama_tracker.get_user(user.id)
asyncio.create_task(self.bot.db.save_user_state(
user_id=user.id,
offense_count=user_data.offense_count,
immune=user_data.immune,
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
aliases=",".join(aliases),
))
await interaction.response.send_message(
f"Aliases for {user.display_name} set to: {', '.join(aliases)}", ephemeral=True
)
elif action.value == "clear":
self.bot.drama_tracker.set_user_aliases(user.id, [])
user_data = self.bot.drama_tracker.get_user(user.id)
asyncio.create_task(self.bot.db.save_user_state(
user_id=user.id,
offense_count=user_data.offense_count,
immune=user_data.immune,
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
aliases=None,
))
await interaction.response.send_message(
f"Aliases cleared for {user.display_name}.", ephemeral=True
)
@app_commands.command(
name="bcs-mode",
description="Switch the bot's personality mode.",
)
@app_commands.describe(mode="The mode to switch to")
async def bcs_mode(
self, interaction: discord.Interaction, mode: str | None = None,
):
modes_config = self.bot.config.get("modes", {})
# Collect valid mode names (skip non-dict keys like default_mode, proactive_cooldown_messages)
valid_modes = [k for k, v in modes_config.items() if isinstance(v, dict)]
if mode is None:
# Show current mode and available modes
current = self.bot.current_mode
current_config = self.bot.get_mode_config()
lines = [f"**Current mode:** {current_config.get('label', current)}"]
lines.append(f"*{current_config.get('description', '')}*\n")
lines.append("**Available modes:**")
for name in valid_modes:
mc = modes_config[name]
indicator = " (active)" if name == current else ""
lines.append(f"- `{name}` — {mc.get('label', name)}: {mc.get('description', '')}{indicator}")
await interaction.response.send_message("\n".join(lines), ephemeral=True)
return
mode = mode.lower()
if mode not in valid_modes:
await interaction.response.send_message(
f"Unknown mode `{mode}`. Available: {', '.join(f'`{m}`' for m in valid_modes)}",
ephemeral=True,
)
return
old_mode = self.bot.current_mode
self.bot.current_mode = mode
new_config = self.bot.get_mode_config()
# Persist mode to database
asyncio.create_task(self.bot.db.save_setting("current_mode", mode))
# Update bot status to reflect the mode
status_text = new_config.get("description", "Monitoring vibes...")
await self.bot.change_presence(
activity=discord.Activity(
type=discord.ActivityType.watching, name=status_text
)
)
await interaction.response.send_message(
f"Mode switched: **{modes_config.get(old_mode, {}).get('label', old_mode)}** "
f"-> **{new_config.get('label', mode)}**\n"
f"*{new_config.get('description', '')}*"
)
# Log mode change
log_channel = discord.utils.get(interaction.guild.text_channels, name="bcs-log")
if log_channel:
try:
await log_channel.send(
f"**MODE CHANGE** | {interaction.user.mention} switched mode: "
f"**{old_mode}** -> **{mode}**"
)
except discord.HTTPException:
pass
logger.info(
"Mode changed from %s to %s by %s",
old_mode, mode, interaction.user.display_name,
)
@app_commands.command(
name="drama-leaderboard",
description="Show the all-time drama leaderboard for the server.",
)
@app_commands.describe(period="Time period to rank (default: 30d)")
@app_commands.choices(period=[
app_commands.Choice(name="Last 7 days", value="7d"),
app_commands.Choice(name="Last 30 days", value="30d"),
app_commands.Choice(name="Last 90 days", value="90d"),
app_commands.Choice(name="All time", value="all"),
])
async def drama_leaderboard(
self, interaction: discord.Interaction, period: app_commands.Choice[str] | None = None,
):
await interaction.response.defer()
period_val = period.value if period else "30d"
if period_val == "all":
days = None
period_label = "All Time"
else:
days = int(period_val.rstrip("d"))
period_label = f"Last {days} Days"
rows = await self.bot.db.get_drama_leaderboard(interaction.guild.id, days)
if not rows:
await interaction.followup.send(
f"No drama data for **{period_label}**. Everyone's been suspiciously well-behaved."
)
return
# Compute composite score for each user
scored = []
for r in rows:
avg_tox = r["avg_toxicity"]
max_tox = r["max_toxicity"]
msg_count = r["messages_analyzed"]
action_weight = r["warnings"] + r["mutes"] * 2 + r["off_topic"] * 0.5
action_rate = min(1.0, action_weight / msg_count * 10) if msg_count > 0 else 0.0
composite = avg_tox * 0.4 + max_tox * 0.2 + action_rate * 0.4
scored.append({**r, "composite": composite, "action_rate": action_rate})
scored.sort(key=lambda x: x["composite"], reverse=True)
top = scored[:10]
medals = ["🥇", "🥈", "🥉"]
lines = []
for i, entry in enumerate(top):
rank = medals[i] if i < 3 else f"`{i + 1}.`"
# Resolve display name from guild if possible
member = interaction.guild.get_member(entry["user_id"])
name = member.display_name if member else entry["username"]
lines.append(
f"{rank} **{entry['composite']:.2f}** — {name}\n"
f" Avg: {entry['avg_toxicity']:.2f} | "
f"Peak: {entry['max_toxicity']:.2f} | "
f"⚠️ {entry['warnings']} | "
f"🔇 {entry['mutes']} | "
f"📢 {entry['off_topic']}"
)
embed = discord.Embed(
title=f"Drama Leaderboard — {period_label}",
description="\n".join(lines),
color=discord.Color.orange(),
)
embed.set_footer(text=f"{len(rows)} users tracked | {sum(r['messages_analyzed'] for r in rows)} messages analyzed")
await interaction.followup.send(embed=embed)
@bcs_mode.autocomplete("mode")
async def _mode_autocomplete(
self, interaction: discord.Interaction, current: str,
) -> list[app_commands.Choice[str]]:
modes_config = self.bot.config.get("modes", {})
valid_modes = [k for k, v in modes_config.items() if isinstance(v, dict)]
return [
app_commands.Choice(name=modes_config[m].get("label", m), value=m)
for m in valid_modes
if current.lower() in m.lower()
][:25]
@staticmethod
def _score_bar(score: float) -> str:
filled = round(score * 10)

cogs/reactions.py Normal file

@@ -0,0 +1,76 @@
import asyncio
import logging
import random
import time
import discord
from discord.ext import commands
logger = logging.getLogger("bcs.reactions")
class ReactionCog(commands.Cog):
def __init__(self, bot: commands.Bot):
self.bot = bot
# Per-channel timestamp of last reaction
self._last_reaction: dict[int, float] = {}
@commands.Cog.listener()
async def on_message(self, message: discord.Message):
if message.author.bot or not message.guild:
return
cfg = self.bot.config.get("reactions", {})
if not cfg.get("enabled", False):
return
# Skip empty messages
if not message.content or not message.content.strip():
return
# Channel exclusion
excluded = cfg.get("excluded_channels", [])
if excluded:
ch_name = getattr(message.channel, "name", "")
if message.channel.id in excluded or ch_name in excluded:
return
# RNG gate
chance = cfg.get("chance", 0.15)
if random.random() > chance:
return
# Per-channel cooldown
ch_id = message.channel.id
cooldown = cfg.get("cooldown_seconds", 45)
now = time.monotonic()
if now - self._last_reaction.get(ch_id, 0) < cooldown:
return
# Fire and forget so we don't block anything
asyncio.create_task(self._try_react(message, ch_id))
async def _try_react(self, message: discord.Message, ch_id: int):
try:
emoji = await self.bot.llm.pick_reaction(
message.content, message.channel.name,
)
if not emoji:
return
await message.add_reaction(emoji)
self._last_reaction[ch_id] = time.monotonic()
logger.info(
"Reacted %s to %s in #%s: %s",
emoji, message.author.display_name,
message.channel.name, message.content[:60],
)
except discord.HTTPException as e:
# Invalid emoji or missing permissions — silently skip
logger.debug("Reaction failed: %s", e)
except Exception:
logger.exception("Unexpected reaction error")
async def setup(bot: commands.Bot):
await bot.add_cog(ReactionCog(bot))
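The cog above is disabled by default and reads a `reactions` section from config.yaml. A minimal sketch of that section, inferred from the keys the code consults (the values shown are the code's own fallback defaults, not taken from the repo's actual config):

```yaml
reactions:
  enabled: true          # cog does nothing unless explicitly enabled
  chance: 0.15           # RNG gate: probability of reacting to a message
  cooldown_seconds: 45   # per-channel minimum gap between reactions
  excluded_channels: []  # channel IDs or names to never react in
```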


@@ -1,555 +0,0 @@
import asyncio
import logging
from collections import deque
from datetime import datetime, timedelta, timezone
import discord
from discord.ext import commands, tasks
logger = logging.getLogger("bcs.sentiment")
# How often to flush dirty user states to DB (seconds)
STATE_FLUSH_INTERVAL = 300 # 5 minutes
class SentimentCog(commands.Cog):
def __init__(self, bot: commands.Bot):
self.bot = bot
# Per-channel message history for context: {channel_id: deque of (author, content)}
self._channel_history: dict[int, deque] = {}
# Track which user IDs have unsaved in-memory changes
self._dirty_users: set[int] = set()
async def cog_load(self):
self._flush_states.start()
async def cog_unload(self):
self._flush_states.cancel()
# Final flush on shutdown
await self._flush_dirty_states()
@commands.Cog.listener()
async def on_message(self, message: discord.Message):
logger.info("MSG from %s in #%s: %s", message.author, getattr(message.channel, 'name', 'DM'), message.content[:80] if message.content else "(empty)")
# Ignore bots (including ourselves)
if message.author.bot:
return
# Ignore DMs
if not message.guild:
return
config = self.bot.config
monitoring = config.get("monitoring", {})
if not monitoring.get("enabled", True):
return
# Check if channel is monitored
monitored_channels = monitoring.get("channels", [])
if monitored_channels and message.channel.id not in monitored_channels:
return
# Check ignored users
if message.author.id in monitoring.get("ignored_users", []):
return
# Check immune roles
immune_roles = set(monitoring.get("immune_roles", []))
if immune_roles and any(
r.id in immune_roles for r in message.author.roles
):
return
# Check per-user immunity
if self.bot.drama_tracker.is_immune(message.author.id):
return
# Store message in channel history for context
self._store_context(message)
# Skip if empty
if not message.content or not message.content.strip():
return
# Check per-user analysis cooldown
sentiment_config = config.get("sentiment", {})
cooldown = sentiment_config.get("cooldown_between_analyses", 2)
if not self.bot.drama_tracker.can_analyze(message.author.id, cooldown):
return
# Analyze the message
context = self._get_context(message)
user_notes = self.bot.drama_tracker.get_user_notes(message.author.id)
result = await self.bot.ollama.analyze_message(
message.content, context, user_notes=user_notes
)
if result is None:
return
score = result["toxicity_score"]
categories = result["categories"]
reasoning = result["reasoning"]
# Track the result
self.bot.drama_tracker.add_entry(
message.author.id, score, categories, reasoning
)
drama_score = self.bot.drama_tracker.get_drama_score(message.author.id)
logger.info(
"User %s (%d) | msg_score=%.2f | drama_score=%.2f | categories=%s | %s",
message.author.display_name,
message.author.id,
score,
drama_score,
categories,
reasoning,
)
# Topic drift detection
off_topic = result.get("off_topic", False)
topic_category = result.get("topic_category", "general_chat")
topic_reasoning = result.get("topic_reasoning", "")
# Save message + analysis to DB (awaited — need message_id for action links)
db_message_id = await self.bot.db.save_message_and_analysis(
guild_id=message.guild.id,
channel_id=message.channel.id,
user_id=message.author.id,
username=message.author.display_name,
content=message.content,
message_ts=message.created_at.replace(tzinfo=timezone.utc),
toxicity_score=score,
drama_score=drama_score,
categories=categories,
reasoning=reasoning,
off_topic=off_topic,
topic_category=topic_category,
topic_reasoning=topic_reasoning,
coherence_score=result.get("coherence_score"),
coherence_flag=result.get("coherence_flag"),
)
if off_topic:
await self._handle_topic_drift(message, topic_category, topic_reasoning, db_message_id)
# Coherence / intoxication detection
coherence_score = result.get("coherence_score", 0.85)
coherence_flag = result.get("coherence_flag", "normal")
coherence_config = config.get("coherence", {})
if coherence_config.get("enabled", True):
degradation = self.bot.drama_tracker.update_coherence(
user_id=message.author.id,
score=coherence_score,
flag=coherence_flag,
drop_threshold=coherence_config.get("drop_threshold", 0.3),
absolute_floor=coherence_config.get("absolute_floor", 0.5),
cooldown_minutes=coherence_config.get("cooldown_minutes", 30),
)
if degradation and not config.get("monitoring", {}).get("dry_run", False):
await self._handle_coherence_alert(message, degradation, coherence_config, db_message_id)
# Capture LLM note updates about this user
note_update = result.get("note_update")
if note_update:
self.bot.drama_tracker.update_user_notes(message.author.id, note_update)
self._dirty_users.add(message.author.id)
# Mark dirty for coherence baseline drift even without actions
self._dirty_users.add(message.author.id)
# Always log analysis to #bcs-log if it exists
await self._log_analysis(message, score, drama_score, categories, reasoning, off_topic, topic_category)
# Dry-run mode: skip warnings/mutes
dry_run = config.get("monitoring", {}).get("dry_run", False)
if dry_run:
return
# Check thresholds — both rolling average AND single-message spikes
warning_threshold = sentiment_config.get("warning_threshold", 0.6)
base_mute_threshold = sentiment_config.get("mute_threshold", 0.75)
mute_threshold = self.bot.drama_tracker.get_mute_threshold(
message.author.id, base_mute_threshold
)
spike_warn = sentiment_config.get("spike_warning_threshold", 0.5)
spike_mute = sentiment_config.get("spike_mute_threshold", 0.8)
# Mute: rolling average OR single message spike
if drama_score >= mute_threshold or score >= spike_mute:
effective_score = max(drama_score, score)
await self._mute_user(message, effective_score, categories, db_message_id)
# Warn: rolling average OR single message spike
elif drama_score >= warning_threshold or score >= spike_warn:
effective_score = max(drama_score, score)
await self._warn_user(message, effective_score, db_message_id)
async def _mute_user(
self,
message: discord.Message,
score: float,
categories: list[str],
db_message_id: int | None = None,
):
member = message.author
if not isinstance(member, discord.Member):
return
# Check bot permissions
if not message.guild.me.guild_permissions.moderate_members:
logger.warning("Missing moderate_members permission, cannot mute.")
return
# Record offense and get escalating timeout
offense_num = self.bot.drama_tracker.record_offense(member.id)
timeout_config = self.bot.config.get("timeouts", {})
escalation = timeout_config.get("escalation_minutes", [5, 15, 30, 60])
idx = min(offense_num - 1, len(escalation) - 1)
duration_minutes = escalation[idx]
try:
await member.timeout(
timedelta(minutes=duration_minutes),
reason=f"BCS auto-mute: drama score {score:.2f}",
)
except discord.Forbidden:
logger.warning("Cannot timeout %s — role hierarchy issue.", member)
return
except discord.HTTPException as e:
logger.error("Failed to timeout %s: %s", member, e)
return
# Build embed
messages_config = self.bot.config.get("messages", {})
cat_str = ", ".join(c for c in categories if c != "none") or "general negativity"
embed = discord.Embed(
title=messages_config.get("mute_title", "BREEHAVIOR ALERT"),
description=messages_config.get("mute_description", "").format(
username=member.display_name,
duration=f"{duration_minutes} minutes",
score=f"{score:.2f}",
categories=cat_str,
),
color=discord.Color.red(),
)
embed.set_footer(
text=f"Offense #{offense_num} | Timeout: {duration_minutes}m"
)
await message.channel.send(embed=embed)
await self._log_action(
message.guild,
f"**MUTE** | {member.mention} | Score: {score:.2f} | "
f"Duration: {duration_minutes}m | Offense #{offense_num} | "
f"Categories: {cat_str}",
)
logger.info(
"Muted %s for %d minutes (offense #%d, score %.2f)",
member,
duration_minutes,
offense_num,
score,
)
# Persist mute action and updated user state (fire-and-forget)
asyncio.create_task(self.bot.db.save_action(
guild_id=message.guild.id,
user_id=member.id,
username=member.display_name,
action_type="mute",
message_id=db_message_id,
details=f"duration={duration_minutes}m offense={offense_num} score={score:.2f} categories={cat_str}",
))
self._save_user_state(member.id)
async def _warn_user(self, message: discord.Message, score: float, db_message_id: int | None = None):
timeout_config = self.bot.config.get("timeouts", {})
cooldown = timeout_config.get("warning_cooldown_minutes", 5)
if not self.bot.drama_tracker.can_warn(message.author.id, cooldown):
return
self.bot.drama_tracker.record_warning(message.author.id)
# React with warning emoji
try:
await message.add_reaction("\u26a0\ufe0f")
except discord.HTTPException:
pass
# Send warning message
messages_config = self.bot.config.get("messages", {})
warning_text = messages_config.get(
"warning",
"Easy there, {username}. The Breehavior Monitor is watching.",
).format(username=message.author.display_name)
await message.channel.send(warning_text)
await self._log_action(
message.guild,
f"**WARNING** | {message.author.mention} | Score: {score:.2f}",
)
logger.info("Warned %s (score %.2f)", message.author, score)
# Persist warning action (fire-and-forget)
asyncio.create_task(self.bot.db.save_action(
guild_id=message.guild.id,
user_id=message.author.id,
username=message.author.display_name,
action_type="warning",
message_id=db_message_id,
details=f"score={score:.2f}",
))
async def _handle_topic_drift(
self, message: discord.Message, topic_category: str, topic_reasoning: str,
db_message_id: int | None = None,
):
config = self.bot.config.get("topic_drift", {})
if not config.get("enabled", True):
return
# Check if we're in dry-run mode — still track but don't act
dry_run = self.bot.config.get("monitoring", {}).get("dry_run", False)
if dry_run:
return
tracker = self.bot.drama_tracker
user_id = message.author.id
cooldown = config.get("remind_cooldown_minutes", 10)
if not tracker.can_topic_remind(user_id, cooldown):
return
count = tracker.record_off_topic(user_id)
escalation_threshold = config.get("escalation_count", 3)
messages_config = self.bot.config.get("messages", {})
if count >= escalation_threshold and not tracker.was_owner_notified(user_id):
# DM the server owner
tracker.mark_owner_notified(user_id)
owner = message.guild.owner
if owner:
dm_text = messages_config.get(
"topic_owner_dm",
"Heads up: {username} keeps going off-topic in #{channel}. Reminded {count} times.",
).format(
username=message.author.display_name,
channel=message.channel.name,
count=count,
)
try:
await owner.send(dm_text)
except discord.HTTPException:
logger.warning("Could not DM server owner about topic drift.")
await self._log_action(
message.guild,
f"**TOPIC DRIFT — OWNER NOTIFIED** | {message.author.mention} | "
f"Off-topic count: {count} | Category: {topic_category}",
)
logger.info("Notified owner about %s topic drift (count %d)", message.author, count)
asyncio.create_task(self.bot.db.save_action(
guild_id=message.guild.id, user_id=user_id,
username=message.author.display_name,
action_type="topic_escalation", message_id=db_message_id,
details=f"off_topic_count={count} category={topic_category}",
))
self._save_user_state(user_id)
elif count >= 2:
# Firmer nudge
nudge_text = messages_config.get(
"topic_nudge",
"{username}, let's keep it to gaming talk in here.",
).format(username=message.author.display_name)
await message.channel.send(nudge_text)
await self._log_action(
message.guild,
f"**TOPIC NUDGE** | {message.author.mention} | "
f"Off-topic count: {count} | Category: {topic_category}",
)
logger.info("Topic nudge for %s (count %d)", message.author, count)
asyncio.create_task(self.bot.db.save_action(
guild_id=message.guild.id, user_id=user_id,
username=message.author.display_name,
action_type="topic_nudge", message_id=db_message_id,
details=f"off_topic_count={count} category={topic_category}",
))
self._save_user_state(user_id)
else:
# Friendly first reminder
remind_text = messages_config.get(
"topic_remind",
"Hey {username}, this is a gaming server — maybe take the personal stuff to DMs?",
).format(username=message.author.display_name)
await message.channel.send(remind_text)
await self._log_action(
message.guild,
f"**TOPIC REMIND** | {message.author.mention} | "
f"Category: {topic_category} | {topic_reasoning}",
)
logger.info("Topic remind for %s (count %d)", message.author, count)
asyncio.create_task(self.bot.db.save_action(
guild_id=message.guild.id, user_id=user_id,
username=message.author.display_name,
action_type="topic_remind", message_id=db_message_id,
details=f"off_topic_count={count} category={topic_category} reasoning={topic_reasoning}",
))
self._save_user_state(user_id)
async def _handle_coherence_alert(
self, message: discord.Message, degradation: dict, coherence_config: dict,
db_message_id: int | None = None,
):
flag = degradation["flag"]
messages_map = coherence_config.get("messages", {})
alert_text = messages_map.get(flag, messages_map.get(
"default", "You okay there, {username}? That message was... something."
)).format(username=message.author.display_name)
await message.channel.send(alert_text)
await self._log_action(
message.guild,
f"**COHERENCE ALERT** | {message.author.mention} | "
f"Score: {degradation['current']:.2f} | Baseline: {degradation['baseline']:.2f} | "
f"Drop: {degradation['drop']:.2f} | Flag: {flag}",
)
logger.info(
"Coherence alert for %s: score=%.2f baseline=%.2f drop=%.2f flag=%s",
message.author, degradation["current"], degradation["baseline"],
degradation["drop"], flag,
)
asyncio.create_task(self.bot.db.save_action(
guild_id=message.guild.id,
user_id=message.author.id,
username=message.author.display_name,
action_type="coherence_alert",
message_id=db_message_id,
details=f"score={degradation['current']:.2f} baseline={degradation['baseline']:.2f} drop={degradation['drop']:.2f} flag={flag}",
))
self._save_user_state(message.author.id)
def _save_user_state(self, user_id: int) -> None:
"""Fire-and-forget save of a user's current state to DB."""
user_data = self.bot.drama_tracker.get_user(user_id)
asyncio.create_task(self.bot.db.save_user_state(
user_id=user_id,
offense_count=user_data.offense_count,
immune=user_data.immune,
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
))
self._dirty_users.discard(user_id)
@tasks.loop(seconds=STATE_FLUSH_INTERVAL)
async def _flush_states(self):
await self._flush_dirty_states()
@_flush_states.before_loop
async def _before_flush(self):
await self.bot.wait_until_ready()
async def _flush_dirty_states(self) -> None:
"""Save all dirty user states to DB."""
if not self._dirty_users:
return
dirty = list(self._dirty_users)
self._dirty_users.clear()
for user_id in dirty:
user_data = self.bot.drama_tracker.get_user(user_id)
await self.bot.db.save_user_state(
user_id=user_id,
offense_count=user_data.offense_count,
immune=user_data.immune,
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
)
logger.info("Flushed %d dirty user states to DB.", len(dirty))
def _store_context(self, message: discord.Message):
ch_id = message.channel.id
if ch_id not in self._channel_history:
max_ctx = self.bot.config.get("sentiment", {}).get(
"context_messages", 3
)
self._channel_history[ch_id] = deque(maxlen=max_ctx + 1)
self._channel_history[ch_id].append(
(message.author.display_name, message.content)
)
def _get_context(self, message: discord.Message) -> str:
ch_id = message.channel.id
history = self._channel_history.get(ch_id, deque())
# Exclude the current message (last item)
context_entries = list(history)[:-1] if len(history) > 1 else []
if not context_entries:
return "(no prior context)"
return " | ".join(
f"{name}: {content}" for name, content in context_entries
)
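The context-window formatting in `_get_context` can be exercised in isolation. A minimal standalone sketch (the deque contents are illustrative sample data, not part of the cog):

```python
from collections import deque

# The deque holds (name, content) pairs; the newest entry is the current
# message and is excluded from the context string.
history = deque([("alice", "gg"), ("bob", "nice one"), ("alice", "rematch?")], maxlen=4)
entries = list(history)[:-1] if len(history) > 1 else []
context = " | ".join(f"{n}: {c}" for n, c in entries) or "(no prior context)"
```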
async def _log_analysis(
self, message: discord.Message, score: float, drama_score: float,
categories: list[str], reasoning: str, off_topic: bool, topic_category: str,
):
log_channel = discord.utils.get(
message.guild.text_channels, name="bcs-log"
)
if not log_channel:
return
# Only log notable messages (score > 0.1) to avoid spam
if score <= 0.1:
return
cat_str = ", ".join(c for c in categories if c != "none") or "none"
embed = discord.Embed(
title=f"Analysis: {message.author.display_name}",
description=f"#{message.channel.name}: {message.content[:200]}",
color=self._score_color(score),
)
embed.add_field(name="Message Score", value=f"{score:.2f}", inline=True)
embed.add_field(name="Rolling Drama", value=f"{drama_score:.2f}", inline=True)
embed.add_field(name="Categories", value=cat_str, inline=True)
embed.add_field(name="Reasoning", value=reasoning[:1024] or "n/a", inline=False)
try:
await log_channel.send(embed=embed)
except discord.HTTPException:
pass
@staticmethod
def _score_color(score: float) -> discord.Color:
if score >= 0.75:
return discord.Color.red()
if score >= 0.6:
return discord.Color.orange()
if score >= 0.3:
return discord.Color.yellow()
return discord.Color.green()
async def _log_action(self, guild: discord.Guild, text: str):
log_channel = discord.utils.get(guild.text_channels, name="bcs-log")
if log_channel:
try:
await log_channel.send(text)
except discord.HTTPException:
pass
async def setup(bot: commands.Bot):
await bot.add_cog(SentimentCog(bot))

cogs/sentiment/__init__.py Normal file

@@ -0,0 +1,864 @@
import asyncio
import logging
from datetime import datetime, timedelta, timezone
from pathlib import Path
import discord
from discord.ext import commands, tasks
from cogs.sentiment.actions import mute_user, warn_user
from cogs.sentiment.channel_redirect import build_channel_context, handle_channel_redirect
from cogs.sentiment.coherence import handle_coherence_alert
from cogs.sentiment.log_utils import log_analysis
from cogs.sentiment.state import flush_dirty_states
from cogs.sentiment.topic_drift import handle_topic_drift
from cogs.sentiment.unblock_nag import handle_unblock_nag, matches_unblock_nag
logger = logging.getLogger("bcs.sentiment")
# How often to flush dirty user states to DB (seconds)
STATE_FLUSH_INTERVAL = 300 # 5 minutes
# Load server rules from prompt file (cached at import time)
_PROMPTS_DIR = Path(__file__).resolve().parent.parent.parent / "prompts"
def _load_rules() -> tuple[str, dict[int, str]]:
"""Load rules from prompts/rules.txt, returning (raw text, {num: text} dict)."""
path = _PROMPTS_DIR / "rules.txt"
if not path.exists():
return "", {}
text = path.read_text(encoding="utf-8").strip()
if not text:
return "", {}
rules_dict = {}
for line in text.splitlines():
line = line.strip()
if not line:
continue
parts = line.split(". ", 1)
if len(parts) == 2:
try:
rules_dict[int(parts[0])] = parts[1]
except ValueError:
pass
return text, rules_dict
_RULES_TEXT, _RULES_DICT = _load_rules()
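The `rules.txt` line format that `_load_rules` expects can be demonstrated standalone. A minimal sketch (`parse_rules` and the sample text are illustrative, not part of the cog):

```python
# Each rule line is "<number>. <rule text>"; blank and malformed lines are skipped.
def parse_rules(text: str) -> dict[int, str]:
    rules: dict[int, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        num, sep, rest = line.partition(". ")
        if sep and num.isdigit():
            rules[int(num)] = rest
    return rules

sample = "1. Be kind\n2. Keep it gaming-related\nnot a rule"
```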
class SentimentCog(commands.Cog):
def __init__(self, bot: commands.Bot):
self.bot = bot
# Track which user IDs have unsaved in-memory changes
self._dirty_users: set[int] = set()
# Per-user redirect cooldown: {user_id: last_redirect_datetime}
self._redirect_cooldowns: dict[int, datetime] = {}
# Debounce buffer: keyed by channel_id, stores list of messages from ALL users
self._message_buffer: dict[int, list[discord.Message]] = {}
# Pending debounce timer tasks (per-channel)
self._debounce_tasks: dict[int, asyncio.Task] = {}
# Mention scan tasks (separate from debounce)
self._mention_scan_tasks: dict[int, asyncio.Task] = {}
# Mention scan state
self._mention_scan_cooldowns: dict[int, datetime] = {} # {channel_id: last_scan_time}
self._mention_scan_results: dict[int, str] = {} # {trigger_message_id: findings_summary}
self._analyzed_message_ids: set[int] = set() # Discord message IDs already analyzed
self._max_analyzed_ids = 500
self._moderated_message_ids: set[int] = set() # Message IDs that triggered moderation
async def cog_load(self):
self._flush_states.start()
async def cog_unload(self):
self._flush_states.cancel()
# Cancel all pending debounce timers and process remaining buffers
for task in self._debounce_tasks.values():
task.cancel()
self._debounce_tasks.clear()
for task in self._mention_scan_tasks.values():
task.cancel()
self._mention_scan_tasks.clear()
for channel_id in list(self._message_buffer):
await self._process_buffered(channel_id)
# Final flush on shutdown
await flush_dirty_states(self.bot, self._dirty_users)
@commands.Cog.listener()
async def on_message(self, message: discord.Message):
logger.info(
"MSG from %s in #%s: %s",
message.author,
getattr(message.channel, "name", "DM"),
message.content[:80] if message.content else "(empty)",
)
# Ignore bots (including ourselves)
if message.author.bot:
return
# Ignore DMs
if not message.guild:
return
config = self.bot.config
monitoring = config.get("monitoring", {})
if not monitoring.get("enabled", True):
return
# Check if channel is monitored
monitored_channels = monitoring.get("channels", [])
if monitored_channels and message.channel.id not in monitored_channels:
return
# Check ignored users
if message.author.id in monitoring.get("ignored_users", []):
return
# Check immune roles
immune_roles = set(monitoring.get("immune_roles", []))
if immune_roles and any(
r.id in immune_roles for r in message.author.roles
):
return
# Check per-user immunity
if self.bot.drama_tracker.is_immune(message.author.id):
return
# Explicit @mention of the bot triggers a mention scan instead of scoring.
# Reply-pings (Discord auto-adds replied-to user to mentions) should NOT
# trigger scans — and reply-to-bot messages should still be scored normally
# so toxic replies to bot warnings aren't silently skipped.
bot_mentioned_in_text = (
f"<@{self.bot.user.id}>" in (message.content or "")
or f"<@!{self.bot.user.id}>" in (message.content or "")
)
if bot_mentioned_in_text:
# Classify intent: only run expensive mention scan for reports,
# let ChatCog handle casual chat/questions
intent = await self.bot.llm.classify_mention_intent(
message.content or ""
)
logger.info(
"Mention intent for %s: %s", message.author, intent
)
if intent == "report":
mention_config = config.get("mention_scan", {})
if mention_config.get("enabled", True):
await self._maybe_start_mention_scan(message, mention_config)
return
# For non-report intents, fall through to buffer the message
# so it still gets scored for toxicity
# Skip if empty
if not message.content or not message.content.strip():
return
# Check for unblock nagging (keyword-based, no LLM needed for detection)
if matches_unblock_nag(message.content):
asyncio.create_task(handle_unblock_nag(
self.bot, message, self._dirty_users,
))
# Buffer the message and start/reset debounce timer (per-channel)
channel_id = message.channel.id
if channel_id not in self._message_buffer:
self._message_buffer[channel_id] = []
self._message_buffer[channel_id].append(message)
# Cancel existing debounce timer for this channel
existing_task = self._debounce_tasks.get(channel_id)
if existing_task and not existing_task.done():
existing_task.cancel()
batch_window = config.get("sentiment", {}).get("batch_window_seconds", 3)
self._debounce_tasks[channel_id] = asyncio.create_task(
self._debounce_then_process(channel_id, batch_window)
)
async def _debounce_then_process(self, channel_id: int, delay: float):
"""Sleep for the debounce window, then process the buffered messages."""
try:
await asyncio.sleep(delay)
await self._process_buffered(channel_id)
except asyncio.CancelledError:
pass # Timer was reset by a new message — expected
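The cancel-and-restart debounce used by `on_message` and `_debounce_then_process` is a common asyncio pattern: each new message resets the timer, and the batch is processed only after a quiet window. A minimal standalone sketch (class and method names are illustrative):

```python
import asyncio

class Debouncer:
    def __init__(self, delay: float):
        self.delay = delay
        self.buffer: list[str] = []
        self.task: asyncio.Task | None = None
        self.batches: list[list[str]] = []

    def feed(self, item: str):
        self.buffer.append(item)
        # Reset the timer: cancel any pending flush and start a new one
        if self.task and not self.task.done():
            self.task.cancel()
        self.task = asyncio.create_task(self._flush_later())

    async def _flush_later(self):
        try:
            await asyncio.sleep(self.delay)
            self.batches.append(self.buffer)
            self.buffer = []
        except asyncio.CancelledError:
            pass  # timer reset by a newer item

async def demo():
    d = Debouncer(0.05)
    d.feed("a")
    d.feed("b")          # arrives inside the window -> same batch as "a"
    await asyncio.sleep(0.1)
    d.feed("c")          # window already flushed -> new batch
    await asyncio.sleep(0.1)
    return d.batches

batches = asyncio.run(demo())
```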
def _resolve_thresholds(self) -> dict:
"""Resolve effective moderation thresholds based on current mode."""
config = self.bot.config
sentiment_config = config.get("sentiment", {})
mode_config = self.bot.get_mode_config()
moderation_level = mode_config.get("moderation", "full")
if moderation_level == "relaxed" and "relaxed_thresholds" in mode_config:
rt = mode_config["relaxed_thresholds"]
return {
"warning": rt.get("warning_threshold", 0.80),
"mute": rt.get("mute_threshold", 0.85),
"spike_warn": rt.get("spike_warning_threshold", 0.70),
"spike_mute": rt.get("spike_mute_threshold", 0.85),
}
return {
"warning": sentiment_config.get("warning_threshold", 0.6),
"mute": sentiment_config.get("mute_threshold", 0.75),
"spike_warn": sentiment_config.get("spike_warning_threshold", 0.5),
"spike_mute": sentiment_config.get("spike_mute_threshold", 0.8),
}
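The relaxed-mode override semantics can be shown with a pared-down standalone version of `_resolve_thresholds` (same keys and defaults, warning/mute only; the config dicts are illustrative):

```python
def resolve_thresholds(sentiment_cfg: dict, mode_cfg: dict) -> dict:
    # Relaxed mode swaps in its own thresholds; otherwise fall back to
    # the sentiment config with the same defaults as the cog.
    if mode_cfg.get("moderation", "full") == "relaxed" and "relaxed_thresholds" in mode_cfg:
        rt = mode_cfg["relaxed_thresholds"]
        return {"warning": rt.get("warning_threshold", 0.80),
                "mute": rt.get("mute_threshold", 0.85)}
    return {"warning": sentiment_cfg.get("warning_threshold", 0.6),
            "mute": sentiment_cfg.get("mute_threshold", 0.75)}

full = resolve_thresholds({"warning_threshold": 0.6}, {"moderation": "full"})
relaxed = resolve_thresholds({}, {"moderation": "relaxed",
                                  "relaxed_thresholds": {"mute_threshold": 0.9}})
```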
async def _apply_moderation(
self,
message: discord.Message,
user_id: int,
score: float,
drama_score: float,
categories: list[str],
thresholds: dict,
db_message_id: int | None,
violated_rules: list[int] | None = None,
) -> bool:
"""Issue a warning or mute based on scores and thresholds.
Returns True if any moderation action was taken."""
rules_config = _RULES_DICT
mute_threshold = self.bot.drama_tracker.get_mute_threshold(user_id, thresholds["mute"])
if drama_score >= mute_threshold or score >= thresholds["spike_mute"]:
effective_score = max(drama_score, score)
if self.bot.drama_tracker.is_warned(user_id):
await mute_user(self.bot, message, effective_score, categories, db_message_id, self._dirty_users, violated_rules=violated_rules, rules_config=rules_config)
else:
logger.info("Downgrading mute to warning for %s (no prior warning)", message.author)
await warn_user(self.bot, message, effective_score, db_message_id, self._dirty_users, violated_rules=violated_rules, rules_config=rules_config)
return True
elif drama_score >= thresholds["warning"] or score >= thresholds["spike_warn"]:
effective_score = max(drama_score, score)
await warn_user(self.bot, message, effective_score, db_message_id, self._dirty_users, violated_rules=violated_rules, rules_config=rules_config)
return True
return False
@staticmethod
def _build_rules_context() -> str:
"""Return server rules text loaded from prompts/rules.txt."""
return _RULES_TEXT
@staticmethod
def _build_user_lookup(messages: list[discord.Message]) -> dict[str, tuple[int, discord.Message, list[discord.Message]]]:
"""Build username -> (user_id, ref_msg, [messages]) mapping."""
lookup: dict[str, tuple[int, discord.Message, list[discord.Message]]] = {}
for msg in messages:
name = msg.author.display_name
if name not in lookup:
lookup[name] = (msg.author.id, msg, [])
lookup[name][2].append(msg)
return lookup
def _build_user_notes_map(self, messages: list[discord.Message]) -> dict[str, str]:
"""Build username -> LLM notes mapping for users in the message list."""
notes_map: dict[str, str] = {}
for msg in messages:
name = msg.author.display_name
if name not in notes_map:
notes = self.bot.drama_tracker.get_user_notes(msg.author.id)
if notes:
notes_map[name] = notes
return notes_map
@staticmethod
def _build_anon_map(
conversation: list[tuple[str, str, datetime, str | None]],
) -> dict[str, str]:
"""Build display_name -> 'User1', 'User2', ... mapping for all participants."""
seen: dict[str, str] = {}
counter = 1
for username, _, _, reply_to in conversation:
if username not in seen:
seen[username] = f"User{counter}"
counter += 1
if reply_to and reply_to not in seen:
seen[reply_to] = f"User{counter}"
counter += 1
return seen
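The anonymize/de-anonymize round trip performed around each LLM call reduces name-based bias. A standalone sketch of `_build_anon_map` plus the reverse mapping used by `_deanonymize_findings` (the sample conversation is illustrative; tuples follow the `(username, content, timestamp, reply_to)` shape):

```python
from datetime import datetime

def build_anon_map(conversation):
    # First appearance order determines User1, User2, ...; reply targets
    # get keys too, even if they never authored a message in the window.
    seen: dict[str, str] = {}
    counter = 1
    for username, _, _, reply_to in conversation:
        if username not in seen:
            seen[username] = f"User{counter}"
            counter += 1
        if reply_to and reply_to not in seen:
            seen[reply_to] = f"User{counter}"
            counter += 1
    return seen

convo = [
    ("alice", "hi", datetime(2026, 1, 1), None),
    ("bob", "hey", datetime(2026, 1, 1), "alice"),
    ("carol", "yo", datetime(2026, 1, 1), "bob"),
]
anon = build_anon_map(convo)
reverse = {v: k for k, v in anon.items()}  # used to map findings back
```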
@staticmethod
def _anonymize_conversation(
conversation: list[tuple[str, str, datetime, str | None]],
anon_map: dict[str, str],
) -> list[tuple[str, str, datetime, str | None]]:
"""Replace display names with anonymous keys in conversation tuples."""
return [
(
anon_map.get(username, username),
content,
ts,
anon_map.get(reply_to, reply_to) if reply_to else None,
)
for username, content, ts, reply_to in conversation
]
@staticmethod
def _anonymize_notes(
user_notes_map: dict[str, str],
anon_map: dict[str, str],
) -> dict[str, str]:
"""Replace display name keys with anonymous keys in user notes map."""
return {anon_map.get(name, name): notes for name, notes in user_notes_map.items()}
def _build_alias_context(
self,
messages: list[discord.Message],
anon_map: dict[str, str],
) -> str:
"""Build anonymized alias context string for the LLM.
Maps user IDs from messages to their known nicknames from
DramaTracker, then replaces display names with anonymous keys.
"""
all_aliases = self.bot.drama_tracker.get_all_aliases()
if not all_aliases:
return ""
lines = []
seen_ids: set[int] = set()
for msg in messages:
uid = msg.author.id
if uid in seen_ids:
continue
seen_ids.add(uid)
aliases = all_aliases.get(uid)
if aliases:
anon_key = anon_map.get(msg.author.display_name, msg.author.display_name)
lines.append(f" {anon_key} is also known as: {', '.join(aliases)}")
# Include aliases for members NOT in the conversation (so the LLM
# can recognize name-drops of absent members), using anonymized keys
absent_idx = 0
for uid, aliases in all_aliases.items():
if uid not in seen_ids:
absent_idx += 1
lines.append(f" Absent_{absent_idx} is also known as: {', '.join(aliases)}")
return "\n".join(lines) if lines else ""
@staticmethod
def _deanonymize_findings(result: dict, anon_map: dict[str, str]) -> None:
"""Replace anonymous keys back to display names in LLM findings (in-place)."""
reverse_map = {v: k for k, v in anon_map.items()}
for finding in result.get("user_findings", []):
anon_name = finding.get("username", "")
if anon_name in reverse_map:
finding["username"] = reverse_map[anon_name]
# De-anonymize text fields that may reference other users
for field in ("note_update", "reasoning", "worst_message"):
text = finding.get(field)
if text:
for anon, real in reverse_map.items():
text = text.replace(anon, real)
finding[field] = text
@staticmethod
def _build_conversation(
messages: list[discord.Message],
) -> list[tuple[str, str, datetime, str | None]]:
"""Convert a list of Discord messages to conversation tuples with reply resolution."""
msg_id_to_author = {m.id: m.author.display_name for m in messages}
conversation = []
for msg in messages:
reply_to = None
if msg.reference and msg.reference.message_id:
reply_to = msg_id_to_author.get(msg.reference.message_id)
if not reply_to:
ref = msg.reference.cached_message
if ref:
reply_to = ref.author.display_name
conversation.append((
msg.author.display_name,
msg.content,
msg.created_at,
reply_to,
))
return conversation
# -- Shared finding processor --
async def _process_finding(
self,
finding: dict,
user_lookup: dict,
*,
sentiment_config: dict,
dry_run: bool,
thresholds: dict,
db_content: str,
db_topic_category: str,
db_topic_reasoning: str,
db_coherence_score: float | None,
db_coherence_flag: str | None,
game_channels: dict | None = None,
coherence_config: dict | None = None,
) -> tuple[str, float, float, list[str]] | None:
"""Process a single user finding.
Returns (username, score, drama_score, categories) or None if skipped.
When game_channels is not None, topic drift, game redirect, and coherence
handlers are active (buffered analysis mode). When None, they are skipped
(mention scan mode).
"""
username = finding["username"]
lookup = user_lookup.get(username)
if not lookup:
return None
user_id, user_ref_msg, user_msgs = lookup
score = finding["toxicity_score"]
categories = finding["categories"]
reasoning = finding["reasoning"]
off_topic = finding.get("off_topic", False)
violated_rules = finding.get("violated_rules", [])
note_update = finding.get("note_update")
# Track in DramaTracker
self.bot.drama_tracker.add_entry(user_id, score, categories, reasoning)
escalation_boost = sentiment_config.get("escalation_boost", 0.04)
drama_score = self.bot.drama_tracker.get_drama_score(user_id, escalation_boost=escalation_boost)
logger.info(
"User %s (%d) | msg_score=%.2f | drama_score=%.2f | categories=%s | %s",
username, user_id, score, drama_score, categories, reasoning,
)
# Save to DB
db_message_id = await self.bot.db.save_message_and_analysis(
guild_id=user_ref_msg.guild.id,
channel_id=user_ref_msg.channel.id,
user_id=user_id,
username=username,
content=db_content,
message_ts=user_ref_msg.created_at.replace(tzinfo=timezone.utc),
toxicity_score=score,
drama_score=drama_score,
categories=categories,
reasoning=reasoning,
off_topic=off_topic,
topic_category=db_topic_category,
topic_reasoning=db_topic_reasoning,
coherence_score=db_coherence_score,
coherence_flag=db_coherence_flag,
)
# Feature handlers — only active during buffered analysis (game_channels set)
if game_channels is not None:
if off_topic:
await handle_topic_drift(
self.bot, user_ref_msg, db_topic_category, db_topic_reasoning,
db_message_id, self._dirty_users,
)
elif (detected_game := finding.get("detected_game")) and game_channels and not dry_run:
await handle_channel_redirect(
self.bot, user_ref_msg, detected_game, game_channels,
db_message_id, self._redirect_cooldowns,
)
if coherence_config is not None and coherence_config.get("enabled", True):
coherence_score = finding.get("coherence_score", 0.85)
coherence_flag = finding.get("coherence_flag", "normal")
degradation = self.bot.drama_tracker.update_coherence(
user_id=user_id,
score=coherence_score,
flag=coherence_flag,
drop_threshold=coherence_config.get("drop_threshold", 0.3),
absolute_floor=coherence_config.get("absolute_floor", 0.5),
cooldown_minutes=coherence_config.get("cooldown_minutes", 30),
)
if degradation and not dry_run:
await handle_coherence_alert(
self.bot, user_ref_msg, degradation, coherence_config,
db_message_id, self._dirty_users,
)
# Note update — route to memory system
if note_update:
# Sanitize before storing — strips any quoted toxic language
sanitized = await self.bot.llm.sanitize_notes(note_update)
self.bot.drama_tracker.update_user_notes(user_id, sanitized)
self._dirty_users.add(user_id)
# Also save as an expiring memory (7d default for passive observations)
asyncio.create_task(self.bot.db.save_memory(
user_id=user_id,
memory=sanitized[:500],
topics=db_topic_category or "general",
importance="medium",
expires_at=datetime.now(timezone.utc) + timedelta(days=7),
source="passive",
))
# Log analysis
await log_analysis(
user_ref_msg, score, drama_score, categories, reasoning,
off_topic, db_topic_category,
)
# Moderation
if not dry_run:
acted = await self._apply_moderation(
user_ref_msg, user_id, score, drama_score, categories, thresholds, db_message_id,
violated_rules=violated_rules,
)
if acted:
for m in user_msgs:
self._moderated_message_ids.add(m.id)
self._prune_moderated_ids()
return (username, score, drama_score, categories)
# -- Buffered analysis --
async def _process_buffered(self, channel_id: int):
"""Collect buffered messages, build conversation block, and run analysis."""
messages = self._message_buffer.pop(channel_id, [])
self._debounce_tasks.pop(channel_id, None)
if not messages:
return
# Use the last message as reference for channel/guild
ref_message = messages[-1]
channel = ref_message.channel
config = self.bot.config
sentiment_config = config.get("sentiment", {})
game_channels = config.get("game_channels", {})
# Fetch some history before the buffered messages for leading context
context_count = sentiment_config.get("context_messages", 8)
oldest_buffered = messages[0]
history_messages: list[discord.Message] = []
try:
async for msg in channel.history(limit=context_count + 10, before=oldest_buffered):
if msg.author.bot:
continue
if not msg.content or not msg.content.strip():
continue
if self._was_moderated(msg):
continue
history_messages.append(msg)
if len(history_messages) >= context_count:
break
except discord.HTTPException:
pass
history_messages.reverse() # chronological order
# Combine: history (context) + buffered (new messages to analyze)
new_message_start = len(history_messages)
all_messages = history_messages + messages
conversation = self._build_conversation(all_messages)
if not conversation:
return
user_notes_map = self._build_user_notes_map(messages)
# Anonymize usernames before sending to LLM to prevent name-based bias
anon_map = self._build_anon_map(conversation)
anon_conversation = self._anonymize_conversation(conversation, anon_map)
anon_notes = self._anonymize_notes(user_notes_map, anon_map) if user_notes_map else user_notes_map
alias_context = self._build_alias_context(all_messages, anon_map)
channel_context = build_channel_context(ref_message, game_channels)
rules_context = self._build_rules_context()
logger.info(
"Channel analysis: %d new messages (+%d context) in #%s",
len(messages), len(history_messages),
getattr(channel, 'name', 'unknown'),
)
# TRIAGE: Lightweight model — conversation-level analysis
result = await self.bot.llm.analyze_conversation(
anon_conversation,
channel_context=channel_context,
user_notes_map=anon_notes,
new_message_start=new_message_start,
user_aliases=alias_context,
rules_context=rules_context,
)
if result is None:
return
# ESCALATION: Re-analyze with heavy model if any finding warrants it
escalation_threshold = sentiment_config.get("escalation_threshold", 0.25)
needs_escalation = any(
f["toxicity_score"] >= escalation_threshold
or f.get("off_topic", False)
or f.get("coherence_score", 1.0) < 0.6
for f in result.get("user_findings", [])
)
if needs_escalation:
heavy_result = await self.bot.llm_heavy.analyze_conversation(
anon_conversation,
channel_context=channel_context,
user_notes_map=anon_notes,
new_message_start=new_message_start,
user_aliases=alias_context,
rules_context=rules_context,
)
if heavy_result is not None:
logger.info(
"Escalated to heavy model for #%s",
getattr(channel, 'name', 'unknown'),
)
result = heavy_result
# De-anonymize findings back to real display names
self._deanonymize_findings(result, anon_map)
user_lookup = self._build_user_lookup(messages)
# Mark all buffered messages as analyzed (for mention scan dedup)
for m in messages:
self._mark_analyzed(m.id)
dry_run = config.get("monitoring", {}).get("dry_run", False)
thresholds = self._resolve_thresholds()
coherence_config = config.get("coherence", {})
# Process per-user findings
for finding in result.get("user_findings", []):
username = finding["username"]
lookup = user_lookup.get(username)
if not lookup:
continue
_, _, user_msgs = lookup
combined_content = "\n".join(
m.content for m in user_msgs if m.content and m.content.strip()
)[:4000]
await self._process_finding(
finding, user_lookup,
sentiment_config=sentiment_config,
dry_run=dry_run,
thresholds=thresholds,
db_content=combined_content,
db_topic_category=finding.get("topic_category", "general_chat"),
db_topic_reasoning=finding.get("topic_reasoning", ""),
db_coherence_score=finding.get("coherence_score", 0.85),
db_coherence_flag=finding.get("coherence_flag", "normal"),
game_channels=game_channels,
coherence_config=coherence_config,
)
# -- Mention scan methods --
def _mark_analyzed(self, discord_message_id: int):
"""Track a Discord message ID as already analyzed."""
self._analyzed_message_ids.add(discord_message_id)
if len(self._analyzed_message_ids) > self._max_analyzed_ids:
sorted_ids = sorted(self._analyzed_message_ids)
self._analyzed_message_ids = set(sorted_ids[len(sorted_ids) // 2:])
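The cap-and-halve strategy above relies on Discord snowflake IDs increasing over time, so sorting and keeping the upper half retains the newest entries. A standalone sketch (`prune` is an illustrative name):

```python
def prune(ids: set[int], cap: int) -> set[int]:
    # Once the set exceeds the cap, drop the older (lower) half of the IDs.
    if len(ids) <= cap:
        return ids
    ordered = sorted(ids)
    return set(ordered[len(ordered) // 2:])
```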
def _prune_moderated_ids(self):
"""Cap the moderated message ID set to avoid unbounded growth."""
if len(self._moderated_message_ids) > self._max_analyzed_ids:
sorted_ids = sorted(self._moderated_message_ids)
self._moderated_message_ids = set(sorted_ids[len(sorted_ids) // 2:])
def _was_moderated(self, msg: discord.Message) -> bool:
"""Check if a message already triggered moderation (in-memory or via reaction)."""
if msg.id in self._moderated_message_ids:
return True
# Fall back to checking for bot's warning reaction (survives restarts)
return any(str(r.emoji) == "\u26a0\ufe0f" and r.me for r in msg.reactions)
async def _maybe_start_mention_scan(
self, trigger_message: discord.Message, mention_config: dict
):
"""Check cooldown and kick off a mention-triggered scan of recent messages."""
channel_id = trigger_message.channel.id
cooldown_seconds = mention_config.get("cooldown_seconds", 60)
now = datetime.now(timezone.utc)
last_scan = self._mention_scan_cooldowns.get(channel_id)
if last_scan and (now - last_scan).total_seconds() < cooldown_seconds:
logger.info(
"Mention scan cooldown active for #%s, skipping.",
getattr(trigger_message.channel, "name", "unknown"),
)
return
self._mention_scan_cooldowns[channel_id] = now
# Extract the user's concern (strip the bot ping from the message)
mention_text = trigger_message.content
for fmt in (f"<@{self.bot.user.id}>", f"<@!{self.bot.user.id}>"):
mention_text = mention_text.replace(fmt, "")
mention_text = mention_text.strip() or "(user pinged bot without specific concern)"
# Store as a mention scan task (separate from debounce)
existing_task = self._mention_scan_tasks.get(channel_id)
if existing_task and not existing_task.done():
existing_task.cancel()
self._mention_scan_tasks[channel_id] = asyncio.create_task(
self._run_mention_scan(trigger_message, mention_text, mention_config)
)
async def _run_mention_scan(
self,
trigger_message: discord.Message,
mention_text: str,
mention_config: dict,
):
"""Scan recent channel messages with ONE conversation-level LLM call."""
channel = trigger_message.channel
scan_count = mention_config.get("scan_messages", 30)
config = self.bot.config
sentiment_config = config.get("sentiment", {})
game_channels = config.get("game_channels", {})
# Fetch recent messages (before the trigger, skip bots/empty/moderated)
raw_messages: list[discord.Message] = []
try:
async for msg in channel.history(limit=scan_count + 20, before=trigger_message):
if msg.author.bot:
continue
if not msg.content or not msg.content.strip():
continue
if self._was_moderated(msg):
continue
raw_messages.append(msg)
if len(raw_messages) >= scan_count:
break
except discord.HTTPException:
logger.warning("Failed to fetch history for mention scan in #%s",
getattr(channel, "name", "unknown"))
return
raw_messages.reverse() # chronological order
if not raw_messages:
self._mention_scan_results[trigger_message.id] = "No recent messages found to analyze."
return
logger.info(
"Mention scan triggered by %s in #%s: %d messages (single LLM call). Focus: %s",
trigger_message.author.display_name,
getattr(channel, "name", "unknown"),
len(raw_messages),
mention_text[:80],
)
conversation = self._build_conversation(raw_messages)
user_notes_map = self._build_user_notes_map(raw_messages)
# Anonymize usernames before sending to LLM
anon_map = self._build_anon_map(conversation)
anon_conversation = self._anonymize_conversation(conversation, anon_map)
anon_notes = self._anonymize_notes(user_notes_map, anon_map) if user_notes_map else user_notes_map
alias_context = self._build_alias_context(raw_messages, anon_map)
channel_context = build_channel_context(raw_messages[0], game_channels)
rules_context = self._build_rules_context()
mention_context = (
f"A user flagged this conversation and said: \"{mention_text}\"\n"
f"Pay special attention to whether this concern is valid."
)
# Single LLM call
result = await self.bot.llm.analyze_conversation(
anon_conversation,
mention_context=mention_context,
channel_context=channel_context,
user_notes_map=anon_notes,
user_aliases=alias_context,
rules_context=rules_context,
)
if result is None:
logger.warning("Conversation analysis failed for mention scan.")
self._mention_scan_results[trigger_message.id] = "Analysis failed."
return
# De-anonymize findings back to real display names
self._deanonymize_findings(result, anon_map)
user_lookup = self._build_user_lookup(raw_messages)
findings: list[str] = []
dry_run = config.get("monitoring", {}).get("dry_run", False)
thresholds = self._resolve_thresholds()
for finding in result.get("user_findings", []):
username = finding["username"]
lookup = user_lookup.get(username)
if not lookup:
logger.warning("Mention scan: LLM returned unknown user '%s', skipping.", username)
continue
user_id, ref_msg, user_msgs = lookup
# Skip if all their messages were already analyzed
if all(m.id in self._analyzed_message_ids for m in user_msgs):
continue
# Mark their messages as analyzed
for m in user_msgs:
self._mark_analyzed(m.id)
worst_msg = finding.get("worst_message")
content = f"[Mention scan] {worst_msg}" if worst_msg else "[Mention scan] See conversation"
off_topic = finding.get("off_topic", False)
result_tuple = await self._process_finding(
finding, user_lookup,
sentiment_config=sentiment_config,
dry_run=dry_run,
thresholds=thresholds,
db_content=content,
db_topic_category="personal_drama" if off_topic else "gaming",
db_topic_reasoning=finding.get("reasoning", ""),
db_coherence_score=None,
db_coherence_flag=None,
)
if result_tuple:
_, score, _, categories = result_tuple
if score >= 0.3:
cat_str = ", ".join(c for c in categories if c != "none") or "none"
findings.append(f"{username}: {score:.2f} ({cat_str})")
# Build summary for ChatCog
convo_summary = result.get("conversation_summary", "")
if findings:
summary = f"Scanned {len(raw_messages)} msgs. {convo_summary} Notable: " + "; ".join(findings[:5])
else:
summary = f"Scanned {len(raw_messages)} msgs. {convo_summary}"
# Prune old scan results
if len(self._mention_scan_results) > 20:
oldest = sorted(self._mention_scan_results.keys())[:len(self._mention_scan_results) - 10]
for k in oldest:
del self._mention_scan_results[k]
self._mention_scan_results[trigger_message.id] = summary
logger.info(
"Mention scan complete in #%s: 1 LLM call, %d messages, %d users flagged",
getattr(channel, "name", "unknown"),
len(raw_messages),
len(findings),
)
# -- State flush loop --
@tasks.loop(seconds=STATE_FLUSH_INTERVAL)
async def _flush_states(self):
await flush_dirty_states(self.bot, self._dirty_users)
@_flush_states.before_loop
async def _before_flush(self):
await self.bot.wait_until_ready()
async def setup(bot: commands.Bot):
await bot.add_cog(SentimentCog(bot))
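The `_mention_scan_results` cache above is bounded by a simple prune: once it exceeds 20 entries, the oldest are dropped so only the 10 newest remain. A standalone sketch of that logic (function name is illustrative, not from the module):

```python
def prune_results(results: dict[int, str], max_size: int = 20, keep: int = 10) -> None:
    """Once the cache exceeds max_size, drop the oldest entries
    (lowest message IDs) so only the `keep` newest remain."""
    if len(results) > max_size:
        for k in sorted(results)[:len(results) - keep]:
            del results[k]
```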

cogs/sentiment/actions.py

@@ -0,0 +1,149 @@
import asyncio
import logging
from datetime import timedelta
import discord
from cogs.sentiment.log_utils import log_action
from cogs.sentiment.state import save_user_state
logger = logging.getLogger("bcs.sentiment")
async def mute_user(
bot, message: discord.Message, score: float,
categories: list[str], db_message_id: int | None, dirty_users: set[int],
violated_rules: list[int] | None = None, rules_config: dict | None = None,
):
member = message.author
if not isinstance(member, discord.Member):
return
if not message.guild.me.guild_permissions.moderate_members:
logger.warning("Missing moderate_members permission, cannot mute.")
return
offense_num = bot.drama_tracker.record_offense(member.id)
timeout_config = bot.config.get("timeouts", {})
escalation = timeout_config.get("escalation_minutes", [5, 15, 30, 60])
idx = min(offense_num - 1, len(escalation) - 1)
duration_minutes = escalation[idx]
try:
await member.timeout(
timedelta(minutes=duration_minutes),
reason=f"BCS auto-mute: drama score {score:.2f}",
)
except discord.Forbidden:
logger.warning("Cannot timeout %s — role hierarchy issue.", member)
return
except discord.HTTPException as e:
logger.error("Failed to timeout %s: %s", member, e)
return
messages_config = bot.config.get("messages", {})
cat_str = ", ".join(c for c in categories if c != "none") or "general negativity"
# Build rule citation text
rules_text = ""
if violated_rules and rules_config:
rule_lines = [f"Rule {r}: {rules_config[r]}" for r in violated_rules if r in rules_config]
if rule_lines:
rules_text = "\n".join(rule_lines)
description = messages_config.get("mute_description", "").format(
username=member.display_name,
duration=f"{duration_minutes} minutes",
score=f"{score:.2f}",
categories=cat_str,
)
if rules_text:
description += f"\n\nRules violated:\n{rules_text}"
embed = discord.Embed(
title=messages_config.get("mute_title", "BREEHAVIOR ALERT"),
description=description,
color=discord.Color.red(),
)
embed.set_footer(
text=f"Offense #{offense_num} | Timeout: {duration_minutes}m"
)
await message.channel.send(embed=embed)
rules_log = f" | Rules: {','.join(str(r) for r in violated_rules)}" if violated_rules else ""
await log_action(
message.guild,
f"**MUTE** | {member.mention} | Score: {score:.2f} | "
f"Duration: {duration_minutes}m | Offense #{offense_num} | "
f"Categories: {cat_str}{rules_log}",
)
logger.info(
"Muted %s for %d minutes (offense #%d, score %.2f, rules=%s)",
member, duration_minutes, offense_num, score,
violated_rules or [],
)
rules_detail = f" rules={','.join(str(r) for r in violated_rules)}" if violated_rules else ""
asyncio.create_task(bot.db.save_action(
guild_id=message.guild.id,
user_id=member.id,
username=member.display_name,
action_type="mute",
message_id=db_message_id,
details=f"duration={duration_minutes}m offense={offense_num} score={score:.2f} categories={cat_str}{rules_detail}",
))
save_user_state(bot, dirty_users, member.id)
async def warn_user(
bot, message: discord.Message, score: float,
db_message_id: int | None, dirty_users: set[int],
violated_rules: list[int] | None = None, rules_config: dict | None = None,
):
timeout_config = bot.config.get("timeouts", {})
cooldown = timeout_config.get("warning_cooldown_minutes", 5)
if not bot.drama_tracker.can_warn(message.author.id, cooldown):
return
bot.drama_tracker.record_warning(message.author.id)
try:
await message.add_reaction("\u26a0\ufe0f")
except discord.HTTPException:
pass
messages_config = bot.config.get("messages", {})
warning_text = messages_config.get(
"warning",
"Easy there, {username}. The Breehavior Monitor is watching.",
).format(username=message.author.display_name)
# Append rule citation if rules were violated
if violated_rules and rules_config:
rule_lines = [f"Rule {r}: {rules_config[r]}" for r in violated_rules if r in rules_config]
if rule_lines:
warning_text += "\n" + " | ".join(rule_lines)
await message.channel.send(warning_text)
rules_log = f" | Rules: {','.join(str(r) for r in violated_rules)}" if violated_rules else ""
await log_action(
message.guild,
f"**WARNING** | {message.author.mention} | Score: {score:.2f}{rules_log}",
)
logger.info("Warned %s (score %.2f, rules=%s)", message.author, score, violated_rules or [])
rules_detail = f" rules={','.join(str(r) for r in violated_rules)}" if violated_rules else ""
asyncio.create_task(bot.db.save_action(
guild_id=message.guild.id,
user_id=message.author.id,
username=message.author.display_name,
action_type="warning",
message_id=db_message_id,
details=f"score={score:.2f}{rules_detail}",
))
save_user_state(bot, dirty_users, message.author.id)


@@ -0,0 +1,95 @@
import asyncio
import logging
from datetime import datetime, timedelta, timezone
import discord
from cogs.sentiment.log_utils import log_action
logger = logging.getLogger("bcs.sentiment")
def build_channel_context(message: discord.Message, game_channels: dict) -> str:
"""Build a channel context string for LLM game detection."""
if not game_channels:
return ""
channel_name = getattr(message.channel, "name", "")
current_game = game_channels.get(channel_name)
lines = []
if current_game:
lines.append(f"Current channel: #{channel_name} ({current_game})")
else:
lines.append(f"Current channel: #{channel_name}")
channel_list = ", ".join(f"#{ch} ({game})" for ch, game in game_channels.items())
lines.append(f"Game channels: {channel_list}")
return "\n".join(lines)
async def handle_channel_redirect(
bot, message: discord.Message, detected_game: str,
game_channels: dict, db_message_id: int | None,
redirect_cooldowns: dict[int, datetime],
):
"""Send a redirect message if the user is talking about a different game."""
channel_name = getattr(message.channel, "name", "")
# Only redirect if message is in a game channel
if channel_name not in game_channels:
return
# No redirect needed if detected game matches current channel
if detected_game == channel_name:
return
# Detected game must be a valid game channel
if detected_game not in game_channels:
return
# Find the target channel in the guild
target_channel = discord.utils.get(
message.guild.text_channels, name=detected_game
)
if not target_channel:
return
# Check per-user cooldown
user_id = message.author.id
cooldown_minutes = bot.config.get("topic_drift", {}).get("remind_cooldown_minutes", 10)
now = datetime.now(timezone.utc)
last_redirect = redirect_cooldowns.get(user_id)
if last_redirect and (now - last_redirect) < timedelta(minutes=cooldown_minutes):
return
redirect_cooldowns[user_id] = now
messages_config = bot.config.get("messages", {})
game_name = game_channels[detected_game]
redirect_text = messages_config.get(
"channel_redirect",
"Hey {username}, that sounds like {game} talk \u2014 head over to {channel} for that!",
).format(
username=message.author.display_name,
game=game_name,
channel=target_channel.mention,
)
await message.channel.send(redirect_text)
await log_action(
message.guild,
f"**CHANNEL REDIRECT** | {message.author.mention} | "
f"#{channel_name} \u2192 #{detected_game} ({game_name})",
)
logger.info(
"Redirected %s from #%s to #%s (%s)",
message.author, channel_name, detected_game, game_name,
)
asyncio.create_task(bot.db.save_action(
guild_id=message.guild.id,
user_id=user_id,
username=message.author.display_name,
action_type="channel_redirect",
message_id=db_message_id,
details=f"from=#{channel_name} to=#{detected_game} game={game_name}",
))
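Redirects are rate-limited per user via a timestamp map; a self-contained sketch of that cooldown check (function and dict names are illustrative):

```python
from datetime import datetime, timedelta, timezone

def check_and_touch(cooldowns: dict[int, datetime], user_id: int,
                    cooldown_minutes: int = 10) -> bool:
    """Return True (and record the timestamp) if the user is off cooldown."""
    now = datetime.now(timezone.utc)
    last = cooldowns.get(user_id)
    if last is not None and (now - last) < timedelta(minutes=cooldown_minutes):
        return False
    cooldowns[user_id] = now
    return True
```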


@@ -0,0 +1,43 @@
import asyncio
import logging
import discord
from cogs.sentiment.log_utils import log_action
from cogs.sentiment.state import save_user_state
logger = logging.getLogger("bcs.sentiment")
async def handle_coherence_alert(
bot, message: discord.Message, degradation: dict, coherence_config: dict,
db_message_id: int | None, dirty_users: set[int],
):
flag = degradation["flag"]
messages_map = coherence_config.get("messages", {})
alert_text = messages_map.get(flag, messages_map.get(
"default", "You okay there, {username}? That message was... something."
)).format(username=message.author.display_name)
await message.channel.send(alert_text)
await log_action(
message.guild,
f"**COHERENCE ALERT** | {message.author.mention} | "
f"Score: {degradation['current']:.2f} | Baseline: {degradation['baseline']:.2f} | "
f"Drop: {degradation['drop']:.2f} | Flag: {flag}",
)
logger.info(
"Coherence alert for %s: score=%.2f baseline=%.2f drop=%.2f flag=%s",
message.author, degradation["current"], degradation["baseline"],
degradation["drop"], flag,
)
asyncio.create_task(bot.db.save_action(
guild_id=message.guild.id,
user_id=message.author.id,
username=message.author.display_name,
action_type="coherence_alert",
message_id=db_message_id,
details=f"score={degradation['current']:.2f} baseline={degradation['baseline']:.2f} drop={degradation['drop']:.2f} flag={flag}",
))
save_user_state(bot, dirty_users, message.author.id)


@@ -0,0 +1,54 @@
import logging
import discord
logger = logging.getLogger("bcs.sentiment")
def score_color(score: float) -> discord.Color:
if score >= 0.75:
return discord.Color.red()
if score >= 0.6:
return discord.Color.orange()
if score >= 0.3:
return discord.Color.yellow()
return discord.Color.green()
async def log_analysis(
message: discord.Message, score: float, drama_score: float,
categories: list[str], reasoning: str, off_topic: bool, topic_category: str,
):
log_channel = discord.utils.get(
message.guild.text_channels, name="bcs-log"
)
if not log_channel:
return
# Only log notable messages (score > 0.1) to avoid spam
if score <= 0.1:
return
cat_str = ", ".join(c for c in categories if c != "none") or "none"
embed = discord.Embed(
title=f"Analysis: {message.author.display_name}",
description=f"#{message.channel.name}: {message.content[:200]}",
color=score_color(score),
)
embed.add_field(name="Message Score", value=f"{score:.2f}", inline=True)
embed.add_field(name="Rolling Drama", value=f"{drama_score:.2f}", inline=True)
embed.add_field(name="Categories", value=cat_str, inline=True)
embed.add_field(name="Reasoning", value=reasoning[:1024] or "n/a", inline=False)
try:
await log_channel.send(embed=embed)
except discord.HTTPException:
pass
async def log_action(guild: discord.Guild, text: str):
log_channel = discord.utils.get(guild.text_channels, name="bcs-log")
if log_channel:
try:
await log_channel.send(text)
except discord.HTTPException:
pass
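The embed color encodes the score band; the same cutoffs as `score_color`, returned as band names purely for illustration:

```python
def score_band(score: float) -> str:
    # Same thresholds as score_color: red >= 0.75, orange >= 0.6,
    # yellow >= 0.3, green below that.
    if score >= 0.75:
        return "red"
    if score >= 0.6:
        return "orange"
    if score >= 0.3:
        return "yellow"
    return "green"
```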

cogs/sentiment/state.py

@@ -0,0 +1,55 @@
import asyncio
import logging
logger = logging.getLogger("bcs.sentiment")
def _aliases_csv(user_data) -> str | None:
"""Convert aliases list to comma-separated string for DB storage."""
return ",".join(user_data.aliases) if user_data.aliases else None
def save_user_state(bot, dirty_users: set[int], user_id: int) -> None:
"""Fire-and-forget save of a user's current state to DB."""
user_data = bot.drama_tracker.get_user(user_id)
asyncio.create_task(bot.db.save_user_state(
user_id=user_id,
offense_count=user_data.offense_count,
immune=user_data.immune,
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
warned=user_data.warned_since_reset,
last_offense_at=user_data.last_offense_time or None,
aliases=_aliases_csv(user_data),
warning_expires_at=user_data.warning_expires_at or None,
))
dirty_users.discard(user_id)
async def flush_dirty_states(bot, dirty_users: set[int]) -> None:
"""Save all dirty user states to DB."""
if not dirty_users:
return
dirty = list(dirty_users)
saved = 0
for user_id in dirty:
user_data = bot.drama_tracker.get_user(user_id)
try:
await bot.db.save_user_state(
user_id=user_id,
offense_count=user_data.offense_count,
immune=user_data.immune,
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
warned=user_data.warned_since_reset,
last_offense_at=user_data.last_offense_time or None,
aliases=_aliases_csv(user_data),
warning_expires_at=user_data.warning_expires_at or None,
)
dirty_users.discard(user_id)
saved += 1
except Exception:
logger.exception("Failed to flush state for user %d", user_id)
logger.info("Flushed %d/%d dirty user states to DB.", saved, len(dirty))


@@ -0,0 +1,189 @@
import asyncio
import logging
import random
import re
from collections import deque
from pathlib import Path
import discord
from cogs.sentiment.log_utils import log_action
from cogs.sentiment.state import save_user_state
logger = logging.getLogger("bcs.sentiment")
_PROMPTS_DIR = Path(__file__).resolve().parent.parent.parent / "prompts"
_TOPIC_REDIRECT_PROMPT = (_PROMPTS_DIR / "topic_redirect.txt").read_text(encoding="utf-8")
DEFAULT_TOPIC_REMINDS = [
"Hey {username}, this is a gaming server 🎮 — take the personal stuff to {channel}.",
"{username}, sir this is a gaming channel. {channel} is right there.",
"Hey {username}, I don't remember this being a therapy session. Take it to {channel}. 🎮",
"{username}, I'm gonna need you to take that energy to {channel}. This channel has a vibe to protect.",
"Not to be dramatic {username}, but this is wildly off-topic. {channel} exists for a reason. 🎮",
]
DEFAULT_TOPIC_NUDGES = [
"{username}, we've been over this. Gaming. Channel. {channel} for the rest. 🎮",
"{username}, you keep drifting off-topic like it's a speedrun category. {channel}. Now.",
"Babe. {username}. The gaming channel. We talked about this. Go to {channel}. 😭",
"{username}, I will not ask again (I will definitely ask again). {channel} for off-topic. 🎮",
"{username}, at this point I'm keeping score. That's off-topic strike {count}. {channel} is waiting.",
"Look, {username}, I love the enthusiasm but this ain't the channel for it. {channel}. 🎮",
]
# Per-channel deque of recent LLM-generated redirect messages (for variety)
_recent_redirects: dict[int, deque] = {}
def _get_recent_redirects(channel_id: int) -> list[str]:
if channel_id in _recent_redirects:
return list(_recent_redirects[channel_id])
return []
def _record_redirect(channel_id: int, text: str):
if channel_id not in _recent_redirects:
_recent_redirects[channel_id] = deque(maxlen=5)
_recent_redirects[channel_id].append(text)
def _strip_brackets(text: str) -> str:
"""Strip leaked LLM metadata brackets (same approach as ChatCog)."""
segments = re.split(r"^\s*\[[^\]]*\]\s*$", text, flags=re.MULTILINE)
segments = [s.strip() for s in segments if s.strip()]
return segments[-1] if segments else ""
async def _generate_llm_redirect(
bot, message: discord.Message, topic_category: str,
topic_reasoning: str, count: int, redirect_mention: str = "",
) -> str | None:
"""Ask the LLM chat model to generate a topic redirect message."""
recent = _get_recent_redirects(message.channel.id)
user_prompt = (
f"Username: {message.author.display_name}\n"
f"Channel: #{getattr(message.channel, 'name', 'unknown')}\n"
f"Off-topic category: {topic_category}\n"
f"Why it's off-topic: {topic_reasoning}\n"
f"Off-topic strike count: {count}\n"
f"What they said: {message.content[:300]}"
)
if redirect_mention:
user_prompt += f"\nRedirect channel: {redirect_mention}"
messages = [{"role": "user", "content": user_prompt}]
effective_prompt = _TOPIC_REDIRECT_PROMPT
if recent:
avoid_block = "\n".join(f"- {r}" for r in recent)
effective_prompt += (
"\n\nIMPORTANT — you recently sent these redirects in the same channel. "
"Do NOT repeat any of these. Be completely different.\n"
+ avoid_block
)
try:
response = await bot.llm_chat.chat(
messages, effective_prompt,
)
except Exception:
logger.exception("LLM topic redirect generation failed")
return None
if response:
response = _strip_brackets(response)
return response if response else None
def _static_fallback(bot, message: discord.Message, count: int, redirect_mention: str = "") -> str:
"""Pick a static template message as fallback."""
messages_config = bot.config.get("messages", {})
if count >= 2:
pool = messages_config.get("topic_nudges", DEFAULT_TOPIC_NUDGES)
if isinstance(pool, str):
pool = [pool]
else:
pool = messages_config.get("topic_reminds", DEFAULT_TOPIC_REMINDS)
if isinstance(pool, str):
pool = [pool]
return random.choice(pool).format(
username=message.author.display_name, count=count,
channel=redirect_mention or "the right channel",
)
async def handle_topic_drift(
bot, message: discord.Message, topic_category: str, topic_reasoning: str,
db_message_id: int | None, dirty_users: set[int],
):
config = bot.config.get("topic_drift", {})
if not config.get("enabled", True):
return
ignored = config.get("ignored_channels", [])
if message.channel.id in ignored or getattr(message.channel, "name", "") in ignored:
return
dry_run = bot.config.get("monitoring", {}).get("dry_run", False)
if dry_run:
return
tracker = bot.drama_tracker
user_id = message.author.id
cooldown = config.get("remind_cooldown_minutes", 10)
if not tracker.can_topic_remind(user_id, cooldown):
return
count = tracker.record_off_topic(user_id)
action_type = "topic_nudge" if count >= 2 else "topic_remind"
# Resolve redirect channel mention
redirect_mention = ""
redirect_name = config.get("redirect_channel")
if redirect_name and message.guild:
ch = discord.utils.get(message.guild.text_channels, name=redirect_name)
if ch:
redirect_mention = ch.mention
# Generate the redirect message
use_llm = config.get("use_llm", False)
redirect_text = None
if use_llm:
redirect_text = await _generate_llm_redirect(
bot, message, topic_category, topic_reasoning, count, redirect_mention,
)
if redirect_text:
_record_redirect(message.channel.id, redirect_text)
else:
redirect_text = _static_fallback(bot, message, count, redirect_mention)
await message.channel.send(redirect_text)
if action_type == "topic_nudge":
await log_action(
message.guild,
f"**TOPIC NUDGE** | {message.author.mention} | "
f"Off-topic count: {count} | Category: {topic_category}",
)
else:
await log_action(
message.guild,
f"**TOPIC REMIND** | {message.author.mention} | "
f"Category: {topic_category} | {topic_reasoning}",
)
logger.info("Topic %s for %s (count %d)", action_type.replace("topic_", ""), message.author, count)
asyncio.create_task(bot.db.save_action(
guild_id=message.guild.id, user_id=user_id,
username=message.author.display_name,
action_type=action_type, message_id=db_message_id,
details=f"off_topic_count={count} category={topic_category}"
+ (f" reasoning={topic_reasoning}" if action_type == "topic_remind" else ""),
))
save_user_state(bot, dirty_users, user_id)
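The bracket-stripping helper relies on a multiline split: lines consisting only of `[bracketed]` metadata act as separators, and the last non-empty segment is kept. Reproduced standalone to show the behavior:

```python
import re

def strip_brackets(text: str) -> str:
    # Same logic as _strip_brackets: split on lines that are only
    # [bracketed] metadata, keep the last non-empty segment.
    segments = re.split(r"^\s*\[[^\]]*\]\s*$", text, flags=re.MULTILINE)
    segments = [s.strip() for s in segments if s.strip()]
    return segments[-1] if segments else ""
```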


@@ -0,0 +1,161 @@
import asyncio
import logging
import random
import re
from collections import deque
from pathlib import Path
import discord
from cogs.sentiment.log_utils import log_action
from cogs.sentiment.state import save_user_state
logger = logging.getLogger("bcs.sentiment")
_PROMPTS_DIR = Path(__file__).resolve().parent.parent.parent / "prompts"
_UNBLOCK_REDIRECT_PROMPT = (_PROMPTS_DIR / "unblock_redirect.txt").read_text(encoding="utf-8")
# Regex: matches "unblock" and its inflections (unblocked/unblocking/unblocks) as whole words, case-insensitive
UNBLOCK_PATTERN = re.compile(r"\bunblock(?:ed|ing|s)?\b", re.IGNORECASE)
DEFAULT_UNBLOCK_REMINDS = [
"{username}, begging to be unblocked in chat is not the move. Take it up with an admin. 🙄",
"{username}, nobody's getting unblocked because you asked nicely in a gaming channel.",
"Hey {username}, the unblock button isn't in this chat. Just saying.",
"{username}, I admire the persistence but this isn't the unblock hotline.",
"{username}, that's between you and whoever blocked you. Chat isn't the appeals court.",
]
DEFAULT_UNBLOCK_NUDGES = [
"{username}, we've been over this. No amount of asking here is going to change anything. 🙄",
"{username}, I'm starting to think you enjoy being told no. Still not getting unblocked via chat.",
"{username}, at this point I could set a reminder for your next unblock request. Take it to an admin.",
"Babe. {username}. We've had this conversation {count} times. It's not happening here. 😭",
"{username}, I'm keeping a tally and you're at {count}. The answer is still the same.",
]
# Per-channel deque of recent LLM-generated messages (for variety)
_recent_redirects: dict[int, deque] = {}
def _get_recent_redirects(channel_id: int) -> list[str]:
if channel_id in _recent_redirects:
return list(_recent_redirects[channel_id])
return []
def _record_redirect(channel_id: int, text: str):
if channel_id not in _recent_redirects:
_recent_redirects[channel_id] = deque(maxlen=5)
_recent_redirects[channel_id].append(text)
def _strip_brackets(text: str) -> str:
"""Strip leaked LLM metadata brackets."""
segments = re.split(r"^\s*\[[^\]]*\]\s*$", text, flags=re.MULTILINE)
segments = [s.strip() for s in segments if s.strip()]
return segments[-1] if segments else ""
def matches_unblock_nag(content: str) -> bool:
"""Check if a message contains unblock-related nagging."""
return bool(UNBLOCK_PATTERN.search(content))
async def _generate_llm_redirect(
bot, message: discord.Message, count: int,
) -> str | None:
"""Ask the LLM chat model to generate an unblock-nag redirect."""
recent = _get_recent_redirects(message.channel.id)
user_prompt = (
f"Username: {message.author.display_name}\n"
f"Channel: #{getattr(message.channel, 'name', 'unknown')}\n"
f"Unblock nag count: {count}\n"
f"What they said: {message.content[:300]}"
)
messages = [{"role": "user", "content": user_prompt}]
effective_prompt = _UNBLOCK_REDIRECT_PROMPT
if recent:
avoid_block = "\n".join(f"- {r}" for r in recent)
effective_prompt += (
"\n\nIMPORTANT — you recently sent these redirects in the same channel. "
"Do NOT repeat any of these. Be completely different.\n"
+ avoid_block
)
try:
response = await bot.llm_chat.chat(messages, effective_prompt)
except Exception:
logger.exception("LLM unblock redirect generation failed")
return None
if response:
response = _strip_brackets(response)
return response if response else None
def _static_fallback(message: discord.Message, count: int) -> str:
"""Pick a static template message as fallback."""
if count >= 2:
pool = DEFAULT_UNBLOCK_NUDGES
else:
pool = DEFAULT_UNBLOCK_REMINDS
return random.choice(pool).format(
username=message.author.display_name, count=count,
)
async def handle_unblock_nag(
bot, message: discord.Message, dirty_users: set[int],
):
"""Handle a detected unblock-nagging message."""
config = bot.config.get("unblock_nag", {})
if not config.get("enabled", True):
return
dry_run = bot.config.get("monitoring", {}).get("dry_run", False)
if dry_run:
return
tracker = bot.drama_tracker
user_id = message.author.id
cooldown = config.get("remind_cooldown_minutes", 30)
if not tracker.can_unblock_remind(user_id, cooldown):
return
count = tracker.record_unblock_nag(user_id)
action_type = "unblock_nudge" if count >= 2 else "unblock_remind"
# Generate the redirect message
use_llm = config.get("use_llm", True)
redirect_text = None
if use_llm:
redirect_text = await _generate_llm_redirect(bot, message, count)
if redirect_text:
_record_redirect(message.channel.id, redirect_text)
else:
redirect_text = _static_fallback(message, count)
await message.channel.send(redirect_text)
await log_action(
message.guild,
f"**UNBLOCK {'NUDGE' if count >= 2 else 'REMIND'}** | {message.author.mention} | "
f"Nag count: {count}",
)
logger.info("Unblock %s for %s (count %d)", action_type.replace("unblock_", ""), message.author, count)
asyncio.create_task(bot.db.save_action(
guild_id=message.guild.id, user_id=user_id,
username=message.author.display_name,
action_type=action_type, message_id=None,
details=f"unblock_nag_count={count}",
))
save_user_state(bot, dirty_users, user_id)
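The word-boundary anchors mean the detection regex covers the inflected forms but not substrings of longer words; for illustration:

```python
import re

# Same pattern as the module: "unblock" plus optional -ed/-ing/-s suffix,
# bounded on both sides so longer words don't match.
UNBLOCK_PATTERN = re.compile(r"\bunblock(?:ed|ing|s)?\b", re.IGNORECASE)

def matches_unblock_nag(content: str) -> bool:
    return bool(UNBLOCK_PATTERN.search(content))
```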


@@ -11,32 +11,162 @@ monitoring:
sentiment:
warning_threshold: 0.6
mute_threshold: 0.75
mute_threshold: 0.65
spike_warning_threshold: 0.5 # Single message score that triggers instant warning
spike_mute_threshold: 0.8 # Single message score that triggers instant mute
context_messages: 3 # Number of previous messages to include as context
spike_mute_threshold: 0.7 # Single message score that triggers instant mute
context_messages: 8 # Number of previous messages to include as context
rolling_window_size: 10 # Number of messages to track per user
rolling_window_minutes: 15 # Time window for tracking
cooldown_between_analyses: 2 # Seconds between analyzing same user's messages
batch_window_seconds: 4 # Wait this long for more messages before analyzing (debounce)
escalation_threshold: 0.25 # Triage toxicity score that triggers re-analysis with heavy model
escalation_boost: 0.04 # Per-message drama boost after warning (sustained toxicity ramps toward mute)
game_channels:
gta-online: "GTA Online"
battlefield: "Battlefield"
warzone: "Call of Duty: Warzone"
cod-zombies: "Call of Duty: Zombies"
topic_drift:
enabled: true
use_llm: true # Generate redirect messages via LLM instead of static templates
redirect_channel: "general" # Channel to suggest for off-topic chat
ignored_channels: ["general"] # Channel names or IDs to skip topic drift monitoring
remind_cooldown_minutes: 10 # Don't remind same user more than once per this window
escalation_count: 3 # After this many reminds, DM the server owner
reset_minutes: 60 # Reset off-topic count after this much on-topic behavior
unblock_nag:
enabled: true
use_llm: true # Generate redirect messages via LLM instead of static templates
remind_cooldown_minutes: 30 # Don't remind same user more than once per this window
mention_scan:
enabled: true
scan_messages: 30 # Messages to scan per mention trigger
cooldown_seconds: 60 # Per-channel cooldown between scans
timeouts:
escalation_minutes: [5, 15, 30, 60] # Escalating timeout durations
offense_reset_minutes: 120 # Reset offense counter after this much good behavior
escalation_minutes: [30, 60, 120, 240] # Escalating timeout durations
offense_reset_minutes: 1440 # Reset offense counter after this much good behavior (24h)
warning_cooldown_minutes: 5 # Don't warn same user more than once per this window
warning_expiration_minutes: 30 # Warning expires after this long — user must be re-warned before mute
messages:
warning: "Easy there, {username}. The Breehavior Monitor is watching. \U0001F440"
mute_title: "\U0001F6A8 BREEHAVIOR ALERT \U0001F6A8"
mute_description: "{username} has been placed in timeout for {duration}.\n\nReason: Sustained elevated drama levels detected.\nDrama Score: {score}/1.0\nCategories: {categories}\n\nCool down and come back when you've resolved your skill issues."
topic_remind: "Hey {username}, this is a gaming server \U0001F3AE — maybe take the personal stuff to DMs?"
topic_nudge: "{username}, we've chatted about this before — let's keep it to gaming talk in here. Personal drama belongs in DMs."
topic_reminds:
- "Hey {username}, this is a gaming server 🎮 — take the personal stuff to {channel}."
- "{username}, sir this is a gaming channel. {channel} is right there."
- "Hey {username}, I don't remember this being a therapy session. Take it to {channel}. 🎮"
- "{username}, I'm gonna need you to take that energy to {channel}. This channel has a vibe to protect."
- "Not to be dramatic {username}, but this is wildly off-topic. {channel} exists for a reason. 🎮"
topic_nudges:
- "{username}, we've been over this. Gaming. Channel. {channel} for the rest. 🎮"
- "{username}, you keep drifting off-topic like it's a speedrun category. {channel}. Now."
- "Babe. {username}. The gaming channel. We talked about this. Go to {channel}. 😭"
- "{username}, I will not ask again (I will definitely ask again). {channel} for off-topic. 🎮"
- "{username}, at this point I'm keeping score. That's off-topic strike {count}. {channel} is waiting."
- "Look, {username}, I love the enthusiasm but this ain't the channel for it. {channel}. 🎮"
topic_owner_dm: "Heads up: {username} keeps going off-topic with personal drama in #{channel}. They've been reminded {count} times. Might need a word."
channel_redirect: "Hey {username}, that sounds like {game} talk — head over to {channel} for that!"
modes:
default_mode: roast
proactive_cooldown_messages: 8 # Minimum messages between proactive replies
default:
label: "Default"
description: "Hall-monitor moderation mode"
prompt_file: "personalities/chat_personality.txt"
proactive_replies: false
reply_chance: 0.0
moderation: full
chatty:
label: "Chatty"
description: "Friendly chat participant"
prompt_file: "personalities/chat_chatty.txt"
proactive_replies: true
reply_chance: 0.40
moderation: relaxed
relaxed_thresholds:
warning_threshold: 0.80
mute_threshold: 0.85
spike_warning_threshold: 0.70
spike_mute_threshold: 0.85
roast:
label: "Roast"
description: "Savage roast mode"
prompt_file: "personalities/chat_roast.txt"
proactive_replies: true
reply_chance: 0.60
moderation: relaxed
relaxed_thresholds:
warning_threshold: 0.85
mute_threshold: 0.90
spike_warning_threshold: 0.75
spike_mute_threshold: 0.90
hype:
label: "Hype"
description: "Your biggest fan"
prompt_file: "personalities/chat_hype.txt"
proactive_replies: true
reply_chance: 0.50
moderation: relaxed
relaxed_thresholds:
warning_threshold: 0.80
mute_threshold: 0.85
spike_warning_threshold: 0.70
spike_mute_threshold: 0.85
drunk:
label: "Drunk"
description: "Had a few too many"
prompt_file: "personalities/chat_drunk.txt"
proactive_replies: true
reply_chance: 0.60
moderation: relaxed
relaxed_thresholds:
warning_threshold: 0.85
mute_threshold: 0.90
spike_warning_threshold: 0.75
spike_mute_threshold: 0.90
english_teacher:
label: "English Teacher"
description: "Insufferable grammar nerd mode"
prompt_file: "personalities/chat_english_teacher.txt"
proactive_replies: true
reply_chance: 0.60
moderation: relaxed
relaxed_thresholds:
warning_threshold: 0.85
mute_threshold: 0.90
spike_warning_threshold: 0.75
spike_mute_threshold: 0.90
slutty:
label: "Slutty"
description: "Shamelessly flirty and full of innuendos"
prompt_file: "personalities/chat_slutty.txt"
proactive_replies: true
reply_chance: 0.60
moderation: relaxed
relaxed_thresholds:
warning_threshold: 0.85
mute_threshold: 0.90
spike_warning_threshold: 0.75
spike_mute_threshold: 0.90
polls:
enabled: true
duration_hours: 4
cooldown_minutes: 60 # Per-channel cooldown between auto-polls
coherence:
enabled: true
@@ -50,3 +180,9 @@ coherence:
mobile_keyboard: "{username}'s thumbs are having a rough day."
language_barrier: "Having trouble there, {username}? Take your time."
default: "You okay there, {username}? That message was... something."
reactions:
enabled: false
chance: 0.15 # Probability of evaluating a message for reaction
cooldown_seconds: 45 # Per-channel cooldown between reactions
excluded_channels: [] # Channel names or IDs to skip reactions in


@@ -0,0 +1,216 @@
# Conversational Memory Design
## Goal
Make the bot a real conversational participant that knows people, remembers past interactions, can answer general questions, and gives input based on accumulated context. People should be able to ask it questions and get thoughtful answers informed by who they are and what's happened before.
## Design Decisions
- **Memory approach**: Structured memory tables in existing MSSQL database
- **Learning mode**: Both passive (observing chat via sentiment analysis) and active (direct conversations)
- **Knowledge scope**: General knowledge + server/people awareness (no web search)
- **Permanent memory**: Stored in existing `UserState.UserNotes` column (repurposed as LLM-maintained profile)
- **Expiring memory**: New `UserMemory` table for transient context with LLM-assigned expiration
## Database Changes
### Repurposed: `UserState.UserNotes`
No schema change needed. The column already exists as `NVARCHAR(MAX)`. Currently stores timestamped observation lines (max 10). Will be repurposed as an LLM-maintained **permanent profile summary** — a compact paragraph of durable facts about a user.
Example content:
```
GTA Online grinder (rank 400+, wants to hit 500), sarcastic humor, works night shifts, hates battle royales. Has a dog named Rex. Banters with the bot, usually tries to get roasted. Been in the server since early 2024.
```
The LLM rewrites this field as a whole when new permanent facts emerge, rather than appending timestamped lines.
### New Table: `UserMemory`
Stores expiring memories — transient context that's relevant for days or weeks but not forever.
```sql
CREATE TABLE UserMemory (
Id BIGINT IDENTITY(1,1) PRIMARY KEY,
UserId BIGINT NOT NULL,
Memory NVARCHAR(500) NOT NULL,
Topics NVARCHAR(200) NOT NULL, -- comma-separated tags
Importance NVARCHAR(10) NOT NULL, -- low, medium, high
ExpiresAt DATETIME2 NOT NULL,
Source NVARCHAR(20) NOT NULL, -- 'chat' or 'passive'
CreatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
INDEX IX_UserMemory_UserId (UserId),
INDEX IX_UserMemory_ExpiresAt (ExpiresAt)
)
```
Example rows:
| Memory | Topics | Importance | ExpiresAt | Source |
|--------|--------|------------|-----------|--------|
| Frustrated about losing ranked matches in Warzone | warzone,fps,frustration | medium | +7d | passive |
| Said they're quitting Warzone for good | warzone,fps | high | +30d | chat |
| Drunk tonight, celebrating Friday | personal,celebration | low | +1d | chat |
| Excited about GTA DLC dropping next week | gta,dlc | medium | +7d | passive |
## Memory Extraction
### From Direct Conversations (ChatCog)
After the bot sends a chat reply, a **fire-and-forget background task** calls the triage LLM to extract memories from the conversation. This does not block the reply.
New LLM tool definition:
```python
MEMORY_EXTRACTION_TOOL = {
"type": "function",
"function": {
"name": "extract_memories",
"parameters": {
"type": "object",
"properties": {
"memories": {
"type": "array",
"items": {
"type": "object",
"properties": {
"memory": {
"type": "string",
"description": "A concise fact or observation worth remembering."
},
"topics": {
"type": "array",
"items": {"type": "string"},
"description": "Topic tags for retrieval (e.g., 'gta', 'personal', 'warzone')."
},
"expiration": {
"type": "string",
"enum": ["1d", "3d", "7d", "30d", "permanent"],
"description": "How long this memory stays relevant. Use 'permanent' for stable facts about the person."
},
"importance": {
"type": "string",
"enum": ["low", "medium", "high"],
"description": "How important this memory is for future interactions."
}
},
"required": ["memory", "topics", "expiration", "importance"]
},
"description": "Memories to store. Only include genuinely new or noteworthy information."
},
"profile_update": {
"type": ["string", "null"],
"description": "If a permanent fact was learned, provide the full updated profile summary incorporating the new info. Null if no profile changes needed."
}
},
"required": ["memories"]
}
}
}
```
The extraction prompt receives:
- The conversation that just happened (from `_chat_history`)
- The user's current profile (`UserNotes`)
- Instructions to only extract genuinely new information
### From Passive Observation (SentimentCog)
The existing `note_update` field from analysis results currently feeds `DramaTracker.update_user_notes()`. This will be enhanced:
- If `note_update` contains a durable fact (the LLM can flag this), update `UserNotes` profile
- If it's transient observation, insert into `UserMemory` with a 7d default expiration
- The analysis tool's `note_update` field description gets updated to indicate whether the note is permanent or transient
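The routing rule above can be sketched as a small pure function. This is a sketch of the decision only, assuming the LLM flags durability as a boolean; the field names are illustrative, not the actual cog interface:

```python
from datetime import datetime, timedelta, timezone

def route_note_update(note: str, is_permanent: bool) -> dict:
    """Decide where a passive note_update lands: profile rewrite vs expiring memory."""
    if is_permanent:
        # Durable fact: goes into the UserNotes profile summary (500-char cap).
        return {"target": "UserNotes", "note": note[:500]}
    # Transient observation: expiring memory with the passive defaults.
    return {
        "target": "UserMemory",
        "note": note[:500],
        "importance": "medium",
        "expires_at": datetime.now(timezone.utc) + timedelta(days=7),
        "source": "passive",
    }
```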
## Memory Retrieval at Chat Time
When building context for a chat reply, memories are pulled in layers and injected as a structured block:
### Layer 1: Profile (always included)
```python
profile = user_state.user_notes # permanent profile summary
```
### Layer 2: Recent Expiring Memories (last 5 by CreatedAt)
```sql
SELECT TOP 5 Memory, Topics, CreatedAt
FROM UserMemory
WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
ORDER BY CreatedAt DESC
```
### Layer 3: Topic-Matched Memories
Extract keywords from the current message, match against `Topics` column:
```sql
SELECT TOP 5 Memory, Topics, CreatedAt
FROM UserMemory
WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
AND (Topics LIKE '%gta%' OR Topics LIKE '%warzone%') -- dynamic from message keywords
ORDER BY CASE Importance WHEN 'high' THEN 3 WHEN 'medium' THEN 2 ELSE 1 END DESC, CreatedAt DESC
```
### Layer 4: Channel Bias
If in a game channel (e.g., `#gta-online`), add the game name as a topic filter to boost relevant memories.
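The channel bias can be sketched as a filter-list tweak. This assumes game channels are named after the game (e.g. `gta-online`) and generic channels are excluded by a small denylist; both are assumptions of this sketch, not confirmed naming conventions:

```python
def channel_topic_bias(channel_name: str, keywords: list[str]) -> list[str]:
    """Append a game-channel name to the topic filter list, skipping generic channels."""
    generic = {"general", "off-topic", "memes"}
    name = channel_name.lower().lstrip("#")
    if name and name not in generic and name not in keywords:
        # Return a new list rather than mutating the caller's keywords.
        keywords = keywords + [name]
    return keywords
```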
### Injected Context Format
```
[What you know about {username}:]
Profile: GTA grinder (rank 400+), sarcastic, works night shifts, hates BRs. Banters with the bot.
Recent: Said they're quitting Warzone (2 days ago) | Excited about GTA DLC (yesterday)
Relevant: Mentioned trying to hit rank 500 in GTA (3 weeks ago)
```
Target: ~200-350 tokens of memory context per chat interaction (see Token Budget below).
## Memory Maintenance
### Pruning (daily background task)
```sql
DELETE FROM UserMemory WHERE ExpiresAt < SYSUTCDATETIME()
```
Also enforce a per-user cap (50 memories). When exceeded, delete oldest low-importance memories first:
```sql
-- Delete excess memories beyond cap, keeping high importance longest
DELETE FROM UserMemory
WHERE Id IN (
SELECT Id FROM UserMemory
WHERE UserId = ?
ORDER BY
    CASE Importance WHEN 'high' THEN 3 WHEN 'medium' THEN 2 ELSE 1 END DESC,
CreatedAt DESC
OFFSET 50 ROWS
)
```
### Profile Consolidation
When a `permanent` memory is extracted, the LLM provides an updated `profile_update` string that incorporates the new fact into the existing profile. This replaces `UserNotes` directly — no separate consolidation task needed.
## Integration Changes
| File | Changes |
|------|---------|
| `utils/database.py` | Add `UserMemory` table creation in schema. Add CRUD: `save_memory()`, `get_recent_memories()`, `get_memories_by_topics()`, `prune_expired_memories()`, `prune_excess_memories()`. Update `save_user_state()` (no schema change needed). |
| `utils/llm_client.py` | Add `extract_memories()` method with `MEMORY_EXTRACTION_TOOL`. Add `MEMORY_EXTRACTION_PROMPT` for the extraction system prompt. |
| `utils/drama_tracker.py` | `update_user_notes()` changes from appending timestamped lines to replacing the full profile string when a profile update is provided. Keep backward compat for non-profile note_updates during transition. |
| `cogs/chat.py` | At chat time: query DB for memories, build memory context block, inject into prompt. After reply: fire-and-forget memory extraction task. |
| `cogs/sentiment/` | Route `note_update` from analysis into `UserMemory` table (expiring) or `UserNotes` profile update (permanent). |
| `bot.py` | Start daily memory pruning background task on bot ready. |
## What Stays the Same
- In-memory `_chat_history` deque (10 turns per channel) for immediate conversation coherence
- All existing moderation/analysis logic
- Mode system and personality prompts (memory context is additive)
- `UserState` table schema (no changes)
- Existing DramaTracker hydration flow
## Token Budget
Per chat interaction:
- Profile summary: ~50-100 tokens
- Recent memories (5): ~75-125 tokens
- Topic-matched memories (5): ~75-125 tokens
- **Total memory context: ~200-350 tokens**
Memory extraction call (background, triage model): ~500 input tokens, ~200 output tokens per conversation.


@@ -0,0 +1,900 @@
# Conversational Memory Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Add persistent conversational memory so the bot knows people, remembers past interactions, and gives context-aware answers.
**Architecture:** Two-layer memory system — permanent profile in existing `UserState.UserNotes` column, expiring memories in new `UserMemory` table. LLM extracts memories after conversations (active) and from sentiment analysis (passive). At chat time, relevant memories are retrieved via recency + topic matching and injected into the prompt.
**Tech Stack:** Python 3, discord.py, pyodbc/MSSQL, OpenAI-compatible API (tool calling)
**Note:** This project has no test framework configured. Skip TDD steps — implement directly and test via running the bot.
---
### Task 1: Database — UserMemory table and CRUD methods
**Files:**
- Modify: `utils/database.py`
**Step 1: Add UserMemory table to schema**
In `_create_schema()`, after the existing `LlmLog` table creation block (around line 165), add:
```python
cursor.execute("""
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'UserMemory')
CREATE TABLE UserMemory (
Id BIGINT IDENTITY(1,1) PRIMARY KEY,
UserId BIGINT NOT NULL,
Memory NVARCHAR(500) NOT NULL,
Topics NVARCHAR(200) NOT NULL,
Importance NVARCHAR(10) NOT NULL,
ExpiresAt DATETIME2 NOT NULL,
Source NVARCHAR(20) NOT NULL,
CreatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
INDEX IX_UserMemory_UserId (UserId),
INDEX IX_UserMemory_ExpiresAt (ExpiresAt)
)
""")
```
**Step 2: Add `save_memory()` method**
Add after the `save_llm_log` methods (~line 441):
```python
# ------------------------------------------------------------------
# User Memory (conversational memory system)
# ------------------------------------------------------------------
async def save_memory(
self,
user_id: int,
memory: str,
topics: str,
importance: str,
expires_at: datetime,
source: str,
) -> None:
"""Save an expiring memory for a user."""
if not self._available:
return
try:
await asyncio.to_thread(
self._save_memory_sync,
user_id, memory, topics, importance, expires_at, source,
)
except Exception:
logger.exception("Failed to save user memory")
def _save_memory_sync(self, user_id, memory, topics, importance, expires_at, source):
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute(
"""INSERT INTO UserMemory (UserId, Memory, Topics, Importance, ExpiresAt, Source)
VALUES (?, ?, ?, ?, ?, ?)""",
user_id, memory[:500], topics[:200], importance[:10], expires_at, source[:20],
)
cursor.close()
finally:
conn.close()
```
**Step 3: Add `get_recent_memories()` method**
```python
async def get_recent_memories(self, user_id: int, limit: int = 5) -> list[dict]:
"""Get the most recent non-expired memories for a user."""
if not self._available:
return []
try:
return await asyncio.to_thread(self._get_recent_memories_sync, user_id, limit)
except Exception:
logger.exception("Failed to get recent memories")
return []
def _get_recent_memories_sync(self, user_id, limit) -> list[dict]:
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute(
"""SELECT TOP (?) Memory, Topics, Importance, CreatedAt
FROM UserMemory
WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
ORDER BY CreatedAt DESC""",
limit, user_id,
)
rows = cursor.fetchall()
cursor.close()
return [
{"memory": row[0], "topics": row[1], "importance": row[2], "created_at": row[3]}
for row in rows
]
finally:
conn.close()
```
**Step 4: Add `get_memories_by_topics()` method**
```python
async def get_memories_by_topics(
self, user_id: int, topic_keywords: list[str], limit: int = 5,
) -> list[dict]:
"""Get non-expired memories matching any of the given topic keywords."""
if not self._available or not topic_keywords:
return []
try:
return await asyncio.to_thread(
self._get_memories_by_topics_sync, user_id, topic_keywords, limit,
)
except Exception:
logger.exception("Failed to get memories by topics")
return []
def _get_memories_by_topics_sync(self, user_id, topic_keywords, limit) -> list[dict]:
conn = self._connect()
try:
cursor = conn.cursor()
# Build OR conditions for each keyword
conditions = " OR ".join(["Topics LIKE ?" for _ in topic_keywords])
params = [f"%{kw.lower()}%" for kw in topic_keywords]
query = f"""SELECT TOP (?) Memory, Topics, Importance, CreatedAt
FROM UserMemory
WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
AND ({conditions})
ORDER BY
CASE Importance WHEN 'high' THEN 3 WHEN 'medium' THEN 2 ELSE 1 END DESC,
CreatedAt DESC"""
cursor.execute(query, limit, user_id, *params)
rows = cursor.fetchall()
cursor.close()
return [
{"memory": row[0], "topics": row[1], "importance": row[2], "created_at": row[3]}
for row in rows
]
finally:
conn.close()
```
**Step 5: Add pruning methods**
```python
async def prune_expired_memories(self) -> int:
"""Delete all expired memories. Returns count deleted."""
if not self._available:
return 0
try:
return await asyncio.to_thread(self._prune_expired_memories_sync)
except Exception:
logger.exception("Failed to prune expired memories")
return 0
def _prune_expired_memories_sync(self) -> int:
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute("DELETE FROM UserMemory WHERE ExpiresAt < SYSUTCDATETIME()")
count = cursor.rowcount
cursor.close()
return count
finally:
conn.close()
async def prune_excess_memories(self, user_id: int, max_memories: int = 50) -> int:
"""Delete lowest-priority memories if a user exceeds the cap. Returns count deleted."""
if not self._available:
return 0
try:
return await asyncio.to_thread(
self._prune_excess_memories_sync, user_id, max_memories,
)
except Exception:
logger.exception("Failed to prune excess memories")
return 0
def _prune_excess_memories_sync(self, user_id, max_memories) -> int:
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute(
"""DELETE FROM UserMemory
WHERE Id IN (
SELECT Id FROM UserMemory
WHERE UserId = ?
ORDER BY
CASE Importance WHEN 'high' THEN 3 WHEN 'medium' THEN 2 ELSE 1 END DESC,
CreatedAt DESC
OFFSET ? ROWS
)""",
user_id, max_memories,
)
count = cursor.rowcount
cursor.close()
return count
finally:
conn.close()
```
**Step 6: Commit**
```bash
git add utils/database.py
git commit -m "feat: add UserMemory table and CRUD methods for conversational memory"
```
---
### Task 2: LLM Client — Memory extraction tool and method
**Files:**
- Modify: `utils/llm_client.py`
- Create: `prompts/memory_extraction.txt`
**Step 1: Create memory extraction prompt**
Create `prompts/memory_extraction.txt`:
```
You are a memory extraction system for a Discord bot. Given a conversation between a user and the bot, extract any noteworthy information worth remembering for future interactions.
RULES:
- Only extract genuinely NEW information not already in the user's profile.
- Be concise — each memory should be one sentence max.
- Assign appropriate expiration based on how long the information stays relevant:
- "permanent": Stable facts — name, job, hobbies, games they play, personality traits, pets, relationships
- "30d": Semi-stable preferences, ongoing situations — "trying to quit Warzone", "grinding for rank 500"
- "7d": Temporary situations — "excited about upcoming DLC", "on vacation this week"
- "3d": Short-term context — "had a bad day", "playing with friends tonight"
- "1d": Momentary state — "drunk right now", "tilted from losses", "in a good mood"
- Assign topic tags that would help retrieve this memory later (game names, "personal", "work", "mood", etc.)
- Assign importance: "high" for things they'd expect you to remember, "medium" for useful context, "low" for minor color
- If you learn a permanent fact about the user, provide a profile_update that incorporates the new fact into their existing profile. Rewrite the ENTIRE profile summary — don't just append. Keep it under 500 characters.
- If nothing worth remembering was said, return an empty memories array and null profile_update.
- Do NOT store things the bot said — only facts about or from the user.
Use the extract_memories tool to report your findings.
```
**Step 2: Add MEMORY_EXTRACTION_TOOL definition to `llm_client.py`**
Add after the `CONVERSATION_TOOL` definition (around line 204):
```python
MEMORY_EXTRACTION_TOOL = {
"type": "function",
"function": {
"name": "extract_memories",
"description": "Extract noteworthy memories from a conversation for future reference.",
"parameters": {
"type": "object",
"properties": {
"memories": {
"type": "array",
"items": {
"type": "object",
"properties": {
"memory": {
"type": "string",
"description": "A concise fact or observation worth remembering.",
},
"topics": {
"type": "array",
"items": {"type": "string"},
"description": "Topic tags for retrieval (e.g., 'gta', 'personal', 'warzone').",
},
"expiration": {
"type": "string",
"enum": ["1d", "3d", "7d", "30d", "permanent"],
"description": "How long this memory stays relevant.",
},
"importance": {
"type": "string",
"enum": ["low", "medium", "high"],
"description": "How important this memory is for future interactions.",
},
},
"required": ["memory", "topics", "expiration", "importance"],
},
"description": "Memories to store. Only include genuinely new or noteworthy information.",
},
"profile_update": {
"type": ["string", "null"],
"description": "Full updated profile summary incorporating new permanent facts, or null if no profile changes.",
},
},
"required": ["memories"],
},
},
}
MEMORY_EXTRACTION_PROMPT = (_PROMPTS_DIR / "memory_extraction.txt").read_text(encoding="utf-8")
```
**Step 3: Add `extract_memories()` method to `LLMClient`**
Add after the `chat()` method (around line 627):
```python
async def extract_memories(
self,
conversation: list[dict[str, str]],
username: str,
current_profile: str = "",
) -> dict | None:
"""Extract memories from a conversation. Returns dict with 'memories' list and optional 'profile_update'."""
convo_text = "\n".join(
f"{'Bot' if m['role'] == 'assistant' else username}: {m['content']}"
for m in conversation
if m.get("content")
)
user_content = f"=== USER PROFILE ===\n{current_profile or '(no profile yet)'}\n\n"
user_content += f"=== CONVERSATION ===\n{convo_text}\n\n"
user_content += "Extract any noteworthy memories from this conversation."
user_content = self._append_no_think(user_content)
req_json = json.dumps([
{"role": "system", "content": MEMORY_EXTRACTION_PROMPT[:500]},
{"role": "user", "content": user_content[:500]},
], default=str)
t0 = time.monotonic()
async with self._semaphore:
try:
temp_kwargs = {"temperature": 0.3} if self._supports_temperature else {}
response = await self._client.chat.completions.create(
model=self.model,
messages=[
{"role": "system", "content": MEMORY_EXTRACTION_PROMPT},
{"role": "user", "content": user_content},
],
tools=[MEMORY_EXTRACTION_TOOL],
tool_choice={"type": "function", "function": {"name": "extract_memories"}},
**temp_kwargs,
max_completion_tokens=1024,
)
elapsed = int((time.monotonic() - t0) * 1000)
choice = response.choices[0]
usage = response.usage
if choice.message.tool_calls:
tool_call = choice.message.tool_calls[0]
resp_text = tool_call.function.arguments
args = json.loads(resp_text)
self._log_llm("memory_extraction", elapsed, True, req_json, resp_text,
input_tokens=usage.prompt_tokens if usage else None,
output_tokens=usage.completion_tokens if usage else None)
return self._validate_memory_result(args)
logger.warning("No tool call in memory extraction response.")
self._log_llm("memory_extraction", elapsed, False, req_json, error="No tool call")
return None
except Exception as e:
elapsed = int((time.monotonic() - t0) * 1000)
logger.error("Memory extraction error: %s", e)
self._log_llm("memory_extraction", elapsed, False, req_json, error=str(e))
return None
@staticmethod
def _validate_memory_result(result: dict) -> dict:
"""Validate and normalize memory extraction result."""
if not isinstance(result, dict):
return {"memories": [], "profile_update": None}
memories = []
for m in result.get("memories", []):
if not isinstance(m, dict) or not m.get("memory"):
continue
memories.append({
"memory": str(m["memory"])[:500],
"topics": [str(t).lower() for t in m.get("topics", []) if t],
"expiration": m.get("expiration", "7d") if m.get("expiration") in ("1d", "3d", "7d", "30d", "permanent") else "7d",
"importance": m.get("importance", "medium") if m.get("importance") in ("low", "medium", "high") else "medium",
})
profile_update = result.get("profile_update")
if profile_update and isinstance(profile_update, str):
profile_update = profile_update[:500]
else:
profile_update = None
return {"memories": memories, "profile_update": profile_update}
```
**Step 4: Commit**
```bash
git add utils/llm_client.py prompts/memory_extraction.txt
git commit -m "feat: add memory extraction LLM tool and prompt"
```
---
### Task 3: DramaTracker — Update user notes handling
**Files:**
- Modify: `utils/drama_tracker.py`
**Step 1: Add `set_user_profile()` method**
Add after `update_user_notes()` (around line 210):
```python
def set_user_profile(self, user_id: int, profile: str) -> None:
"""Replace the user's profile summary (permanent memory)."""
user = self.get_user(user_id)
user.notes = profile[:500]
```
This replaces the entire notes field with the LLM-generated profile summary. The existing `update_user_notes()` method continues to work for backward compatibility with the sentiment pipeline during the transition — passive `note_update` values still append until Task 6 routes them through the new memory system.
**Step 2: Commit**
```bash
git add utils/drama_tracker.py
git commit -m "feat: add set_user_profile method to DramaTracker"
```
---
### Task 4: ChatCog — Memory retrieval and injection
**Files:**
- Modify: `cogs/chat.py`
**Step 1: Add memory retrieval helper**
Add a helper method to `ChatCog` and a module-level utility for formatting relative timestamps:
```python
# At module level, after the imports
from datetime import datetime, timezone
_TOPIC_KEYWORDS = {
"gta", "warzone", "cod", "battlefield", "fortnite", "apex", "valorant",
"minecraft", "roblox", "league", "dota", "overwatch", "destiny", "halo",
"work", "job", "school", "college", "girlfriend", "boyfriend", "wife",
"husband", "dog", "cat", "pet", "car", "music", "movie", "food",
}
def _extract_topic_keywords(text: str, channel_name: str = "") -> list[str]:
"""Extract potential topic keywords from message text and channel name."""
words = set(text.lower().split())
keywords = list(words & _TOPIC_KEYWORDS)
# Add channel name as topic if it's a game channel
if channel_name and channel_name not in ("general", "off-topic", "memes"):
keywords.append(channel_name.lower())
return keywords[:5] # cap at 5 keywords
def _format_relative_time(dt: datetime) -> str:
"""Format a datetime as a relative time string."""
now = datetime.now(timezone.utc)
if dt.tzinfo is None:
dt = dt.replace(tzinfo=timezone.utc)
delta = now - dt
days = delta.days
if days == 0:
hours = delta.seconds // 3600
if hours == 0:
return "just now"
return f"{hours}h ago"
if days == 1:
return "yesterday"
if days < 7:
return f"{days} days ago"
if days < 30:
weeks = days // 7
return f"{weeks}w ago"
months = days // 30
return f"{months}mo ago"
```
Add method to `ChatCog`:
```python
async def _build_memory_context(self, user_id: int, message_text: str, channel_name: str) -> str:
"""Build the memory context block to inject into the chat prompt."""
parts = []
# Layer 1: Profile (from DramaTracker / UserNotes)
profile = self.bot.drama_tracker.get_user_notes(user_id)
if profile:
parts.append(f"Profile: {profile}")
# Layer 2: Recent expiring memories
recent = await self.bot.db.get_recent_memories(user_id, limit=5)
if recent:
recent_strs = [
f"{m['memory']} ({_format_relative_time(m['created_at'])})"
for m in recent
]
parts.append("Recent: " + " | ".join(recent_strs))
# Layer 3: Topic-matched memories
keywords = _extract_topic_keywords(message_text, channel_name)
if keywords:
topic_memories = await self.bot.db.get_memories_by_topics(user_id, keywords, limit=5)
# Deduplicate against recent memories
recent_texts = {m["memory"] for m in recent} if recent else set()
topic_memories = [m for m in topic_memories if m["memory"] not in recent_texts]
if topic_memories:
topic_strs = [
f"{m['memory']} ({_format_relative_time(m['created_at'])})"
for m in topic_memories
]
parts.append("Relevant: " + " | ".join(topic_strs))
if not parts:
return ""
return "[What you know about this person:]\n" + "\n".join(parts)
```
**Step 2: Inject memory context into chat path**
In `on_message()`, in the text-only chat path, after building `extra_context` with user notes and recent messages (around line 200), replace the existing user notes injection:
Find this block (around lines 179-183):
```python
extra_context = ""
user_notes = self.bot.drama_tracker.get_user_notes(message.author.id)
if user_notes:
extra_context += f"[Notes about {message.author.display_name}: {user_notes}]\n"
```
Replace with:
```python
extra_context = ""
memory_context = await self._build_memory_context(
message.author.id, content, message.channel.name,
)
if memory_context:
extra_context += memory_context + "\n"
```
This replaces the old flat notes injection with the layered memory context block.
**Step 3: Commit**
```bash
git add cogs/chat.py
git commit -m "feat: inject persistent memory context into chat responses"
```
---
### Task 5: ChatCog — Memory extraction after conversations
**Files:**
- Modify: `cogs/chat.py`
**Step 1: Add memory saving helper**
Add to `ChatCog`:
```python
async def _extract_and_save_memories(
self, user_id: int, username: str, conversation: list[dict[str, str]],
) -> None:
"""Background task: extract memories from conversation and save them."""
try:
current_profile = self.bot.drama_tracker.get_user_notes(user_id)
result = await self.bot.llm.extract_memories(
conversation, username, current_profile,
)
if not result:
return
# Save expiring memories
for mem in result.get("memories", []):
if mem["expiration"] == "permanent":
continue # permanent facts go into profile_update
exp_days = {"1d": 1, "3d": 3, "7d": 7, "30d": 30}
days = exp_days.get(mem["expiration"], 7)
expires_at = datetime.now(timezone.utc) + timedelta(days=days)
await self.bot.db.save_memory(
user_id=user_id,
memory=mem["memory"],
topics=",".join(mem["topics"]),
importance=mem["importance"],
expires_at=expires_at,
source="chat",
)
# Prune if over cap
await self.bot.db.prune_excess_memories(user_id)
# Update profile if warranted
profile_update = result.get("profile_update")
if profile_update:
self.bot.drama_tracker.set_user_profile(user_id, profile_update)
self._dirty_users.add(user_id)
logger.info(
"Extracted %d memories for %s (profile_update=%s)",
len(result.get("memories", [])),
username,
bool(profile_update),
)
except Exception:
logger.exception("Failed to extract memories for %s", username)
```
**Step 2: Add `_dirty_users` set**
Add to `__init__`:
```python
self._dirty_users: set[int] = set()
```
Memory extraction marks a user dirty when their profile changes. No new flush task is needed here: `set_user_profile` updates the in-memory DramaTracker, and `SentimentCog`'s existing periodic flush (every 5 minutes) persists dirty user state. Verify that `ChatCog`'s `_dirty_users` set is actually visible to that flush (or that the flush reads DramaTracker state directly); if it isn't, add an explicit persist call at the end of `_extract_and_save_memories`.
**Step 3: Add `timedelta` import and fire memory extraction after reply**
Add `from datetime import datetime, timedelta, timezone` to the imports at the top of the file.
In `on_message()`, after the bot sends its reply (after `await message.reply(...)`, around line 266), add:
```python
# Fire-and-forget memory extraction
if not image_attachment:
asyncio.create_task(self._extract_and_save_memories(
message.author.id,
message.author.display_name,
list(self._chat_history[ch_id]),
))
```
**Step 4: Commit**
```bash
git add cogs/chat.py
git commit -m "feat: extract and save memories after chat conversations"
```
---
### Task 6: Sentiment pipeline — Route note_update into memory system
**Files:**
- Modify: `cogs/sentiment/__init__.py`
**Step 1: Update note_update handling in `_process_finding()`**
Find the note_update block (around lines 378-381):
```python
# Note update
if note_update:
self.bot.drama_tracker.update_user_notes(user_id, note_update)
self._dirty_users.add(user_id)
```
Replace with:
```python
# Note update — route to memory system
if note_update:
# Still update the legacy notes for backward compat with analysis prompt
self.bot.drama_tracker.update_user_notes(user_id, note_update)
self._dirty_users.add(user_id)
# Also save as an expiring memory (7d default for passive observations)
asyncio.create_task(self.bot.db.save_memory(
user_id=user_id,
memory=note_update[:500],
topics=db_topic_category or "general",
importance="medium",
expires_at=datetime.now(timezone.utc) + timedelta(days=7),
source="passive",
))
```
**Step 2: Add necessary imports at top of file**
Ensure `timedelta` is imported. Check existing imports — `datetime` and `timezone` are likely already imported. Add `timedelta` if missing:
```python
from datetime import datetime, timedelta, timezone
```
**Step 3: Commit**
```bash
git add cogs/sentiment/__init__.py
git commit -m "feat: route sentiment note_updates into memory system"
```
---
### Task 7: Bot — Memory pruning background task
**Files:**
- Modify: `bot.py`
**Step 1: Add pruning task to `on_ready()`**
In `BCSBot.on_ready()` (around line 165), after the permissions check loop, add:
```python
# Start memory pruning background task
if not hasattr(self, "_memory_prune_task") or self._memory_prune_task.done():
self._memory_prune_task = asyncio.create_task(self._prune_memories_loop())
```
**Step 2: Add the pruning loop method to `BCSBot`**
Add to the `BCSBot` class, after `on_ready()`:
```python
async def _prune_memories_loop(self):
    """Background task that prunes expired memories every 6 hours."""
    await self.wait_until_ready()
    while not self.is_closed():
        try:
            count = await self.db.prune_expired_memories()
            if count > 0:
                logger.info("Pruned %d expired memories.", count)
        except Exception:
            logger.exception("Memory pruning error")
        await asyncio.sleep(6 * 3600)  # Every 6 hours
```
**Step 3: Commit**
```bash
git add bot.py
git commit -m "feat: add background memory pruning task"
```
---
### Task 8: Migrate existing user notes to profile format
**Files:**
- Create: `scripts/migrate_notes_to_profiles.py`
This is a one-time migration script to convert existing timestamped note lines into profile summaries using the LLM.
**Step 1: Create migration script**
```python
"""One-time migration: convert existing timestamped UserNotes into profile summaries.

Run with: python scripts/migrate_notes_to_profiles.py
Requires .env with DB_CONNECTION_STRING and LLM env vars.
"""
import asyncio
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))

from dotenv import load_dotenv

load_dotenv()

from utils.database import Database
from utils.llm_client import LLMClient


async def main():
    db = Database()
    if not await db.init():
        print("Database not available.")
        return
    llm = LLMClient(
        base_url=os.getenv("LLM_BASE_URL", ""),
        model=os.getenv("LLM_MODEL", "gpt-4o-mini"),
        api_key=os.getenv("LLM_API_KEY", "not-needed"),
    )
    states = await db.load_all_user_states()
    migrated = 0
    for state in states:
        notes = state.get("user_notes", "")
        if not notes or not notes.strip():
            continue
        # Check if already looks like a profile (no timestamps)
        if not any(line.strip().startswith("[") for line in notes.split("\n")):
            print(f" User {state['user_id']}: already looks like a profile, skipping.")
            continue
        print(f" User {state['user_id']}: migrating notes...")
        print(f" Old: {notes[:200]}")
        # Ask LLM to summarize notes into a profile
        result = await llm.extract_memories(
            conversation=[{"role": "user", "content": f"Here are observation notes about a user:\n{notes}"}],
            username="unknown",
            current_profile="",
        )
        if result and result.get("profile_update"):
            profile = result["profile_update"]
            print(f" New: {profile[:200]}")
            await db.save_user_state(
                user_id=state["user_id"],
                offense_count=state["offense_count"],
                immune=state["immune"],
                off_topic_count=state["off_topic_count"],
                baseline_coherence=state.get("baseline_coherence", 0.85),
                user_notes=profile,
                warned=state.get("warned", False),
                last_offense_at=state.get("last_offense_at"),
            )
            migrated += 1
        else:
            print(" No profile generated, keeping existing notes.")
    await llm.close()
    await db.close()
    print(f"\nMigrated {migrated}/{len(states)} user profiles.")


if __name__ == "__main__":
    asyncio.run(main())
```
**Step 2: Commit**
```bash
git add scripts/migrate_notes_to_profiles.py
git commit -m "feat: add one-time migration script for user notes to profiles"
```
---
### Task 9: Integration test — End-to-end verification
**Step 1: Start the bot locally and verify**
```bash
docker compose up --build
```
**Step 2: Verify schema migration**
Check Docker logs for successful DB initialization — the new `UserMemory` table should be created automatically.
**Step 3: Test memory extraction**
1. @mention the bot in a Discord channel with a message like "Hey, I've been grinding GTA all week trying to hit rank 500"
2. Check logs for `Extracted N memories for {username}` — confirms memory extraction ran
3. Check DB: `SELECT * FROM UserMemory` should have rows
**Step 4: Test memory retrieval**
1. @mention the bot again with "what do you know about me?"
2. The response should reference the GTA grinding from the previous interaction
3. Check logs for the memory context block being built
**Step 5: Test memory expiration**
Manually insert a test memory with an expired timestamp and verify the pruning task removes it (or wait for the 6-hour cycle, or temporarily shorten the interval for testing).
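The insert-and-prune check can also be modeled in isolation. This sketch uses an in-memory SQLite table as a stand-in for the real `UserMemory` table (the bot's `Database` class and SQL Server schema may differ); it only demonstrates the predicate the pruning task is expected to apply:

```python
# Stand-in model of the prune pass: delete rows whose expiry is in the past.
# Table and column names are illustrative, not the bot's actual schema.
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE UserMemory (Id INTEGER PRIMARY KEY, Memory TEXT, ExpiresAt TEXT)")
now = datetime.now(timezone.utc)
conn.executemany(
    "INSERT INTO UserMemory (Memory, ExpiresAt) VALUES (?, ?)",
    [
        ("expired an hour ago", (now - timedelta(hours=1)).isoformat()),
        ("valid for another week", (now + timedelta(days=7)).isoformat()),
    ],
)

# The prune pass itself: ISO-8601 strings in the same format compare correctly
cur = conn.execute("DELETE FROM UserMemory WHERE ExpiresAt < ?", (now.isoformat(),))
print(f"Pruned {cur.rowcount} memories")  # Pruned 1 memories
remaining = conn.execute("SELECT Memory FROM UserMemory").fetchall()
print(remaining)  # [('valid for another week',)]
```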
**Step 6: Commit any fixes**
```bash
git add -A
git commit -m "fix: integration test fixes for conversational memory"
```
---
### Summary
| Task | What | Files |
|------|------|-------|
| 1 | DB schema + CRUD | `utils/database.py` |
| 2 | LLM extraction tool | `utils/llm_client.py`, `prompts/memory_extraction.txt` |
| 3 | DramaTracker profile setter | `utils/drama_tracker.py` |
| 4 | Memory retrieval + injection in chat | `cogs/chat.py` |
| 5 | Memory extraction after chat | `cogs/chat.py` |
| 6 | Sentiment pipeline routing | `cogs/sentiment/__init__.py` |
| 7 | Background pruning task | `bot.py` |
| 8 | Migration script | `scripts/migrate_notes_to_profiles.py` |
| 9 | Integration test | (manual) |


@@ -0,0 +1,57 @@
# Drama Leaderboard Design
## Overview
Public `/drama-leaderboard` slash command that ranks server members by historical drama levels using a composite score derived from DB data. Configurable time period (7d, 30d, 90d, all-time; default 30d).
## Data Sources
All from existing tables — no schema changes needed:
- **Messages + AnalysisResults** (JOIN on MessageId): per-user avg/peak toxicity, message count
- **Actions**: warning, mute, topic_remind, topic_nudge counts per user
## Composite Score Formula
```
score = (avg_toxicity * 0.4) + (peak_toxicity * 0.2) + (action_rate * 0.4)
```
Where `action_rate = min(1.0, (warnings + mutes*2 + off_topic*0.5) / messages_analyzed * 10)`
Normalizes actions relative to message volume so low-volume high-drama users rank appropriately.
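As a sanity check, the formula can be computed directly. The field names below mirror the query-result shape described in Step 1 (`max_toxicity` is the formula's "peak"); the `max(n, 1)` clamp is an added guard, assumed here to avoid division by zero for users with no analyzed messages:

```python
# Sketch of the composite drama score, with the weights from the formula above.
def composite_score(row):
    actions = row["warnings"] + row["mutes"] * 2 + row["off_topic"] * 0.5
    # Clamp the denominator (assumption) so zero-message users don't divide by zero
    action_rate = min(1.0, actions / max(row["messages_analyzed"], 1) * 10)
    return row["avg_toxicity"] * 0.4 + row["max_toxicity"] * 0.2 + action_rate * 0.4

row = {"avg_toxicity": 0.32, "max_toxicity": 0.81, "warnings": 3,
       "mutes": 1, "off_topic": 5, "messages_analyzed": 200}
print(round(composite_score(row), 2))  # → 0.44
```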
## Embed Format
Top 10 users, ranked by composite score:
```
🥇 0.47 — Username
Avg: 0.32 | Peak: 0.81 | ⚠️ 3 | 🔇 1 | 📢 5
```
## Files to Modify
- `utils/database.py` — add `get_drama_leaderboard(guild_id, days)` query method
- `cogs/commands.py` — add `/drama-leaderboard` slash command with `period` choice parameter
## Implementation Plan
### Step 1: Database query method
Add `get_drama_leaderboard(guild_id, days=None)` to `Database`:
- Single SQL query joining Messages, AnalysisResults, Actions
- Returns list of dicts with: user_id, username, avg_toxicity, max_toxicity, warnings, mutes, off_topic, messages_analyzed
- `days=None` means all-time (no date filter)
- Filter by GuildId to scope to the server
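A toy model of that aggregation, using sqlite3 so it runs standalone. Table and column names follow this design doc; the real method runs against the bot's own schema and driver, so treat this as a sketch of the JOIN/GROUP BY shape only:

```python
# Toy model: per-user avg/peak toxicity + message count + warning count.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Messages (MessageId INTEGER PRIMARY KEY, UserId INTEGER, Username TEXT, GuildId INTEGER);
CREATE TABLE AnalysisResults (MessageId INTEGER, Toxicity REAL);
CREATE TABLE Actions (UserId INTEGER, ActionType TEXT);
""")
conn.executemany("INSERT INTO Messages VALUES (?, ?, ?, ?)",
                 [(1, 10, "alice", 1), (2, 10, "alice", 1), (3, 20, "bob", 1)])
conn.executemany("INSERT INTO AnalysisResults VALUES (?, ?)",
                 [(1, 0.2), (2, 0.8), (3, 0.1)])
conn.executemany("INSERT INTO Actions VALUES (?, ?)", [(10, "warning")])

rows = conn.execute("""
    SELECT m.UserId, m.Username,
           AVG(a.Toxicity) AS avg_toxicity,
           MAX(a.Toxicity) AS max_toxicity,
           COUNT(*) AS messages_analyzed,
           (SELECT COUNT(*) FROM Actions ac
             WHERE ac.UserId = m.UserId AND ac.ActionType = 'warning') AS warnings
    FROM Messages m
    JOIN AnalysisResults a ON a.MessageId = m.MessageId
    WHERE m.GuildId = 1
    GROUP BY m.UserId, m.Username
    ORDER BY m.UserId
""").fetchall()
print(rows)
```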
### Step 2: Slash command
Add `/drama-leaderboard` to `CommandsCog`:
- Public command (no admin restriction)
- `period` parameter with choices: 7d, 30d, 90d, all-time
- Defer response (DB query may take a moment)
- Compute composite score in Python from query results
- Sort by composite score descending, take top 10
- Build embed with ranked list and per-user stat breakdown
- Handle empty results gracefully
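The sort-and-format step reduces to a few lines. The row shape and line format here are illustrative (medals for the top three, matching the embed example above):

```python
# Minimal sketch of "sort by composite score, take top 10, format ranked lines".
MEDALS = ["🥇", "🥈", "🥉"]

def leaderboard_lines(rows, limit=10):
    ranked = sorted(rows, key=lambda r: r["score"], reverse=True)[:limit]
    lines = []
    for i, r in enumerate(ranked):
        medal = MEDALS[i] if i < len(MEDALS) else f"{i + 1}."
        lines.append(f"{medal} {r['score']:.2f} — {r['username']}")
    return lines

rows = [{"username": "bob", "score": 0.12}, {"username": "alice", "score": 0.47}]
print(leaderboard_lines(rows))  # ['🥇 0.47 — alice', '🥈 0.12 — bob']
```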


@@ -0,0 +1,32 @@
# Slutty Mode Design
## Summary
Add a new "slutty" personality mode to the bot. Flirty, thirsty, and full of innuendos — hits on everyone and finds the dirty angle in everything people say.
## Changes
Two files, no code changes needed (mode system is data-driven):
### 1. `config.yaml` — new mode block
- Key: `slutty`
- Label: "Slutty"
- Prompt file: `chat_slutty.txt`
- Proactive replies: true, reply chance: 0.25
- Moderation: relaxed (same thresholds as roast/drunk)
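For reference, the block might look something like this. The key names are hypothetical; copy the exact schema from an existing roast/drunk entry in `config.yaml` rather than from this sketch:

```yaml
# Hypothetical shape — mirror the existing mode entries exactly.
modes:
  slutty:
    label: "Slutty"
    prompt_file: "chat_slutty.txt"
    proactive_replies: true
    reply_chance: 0.25
    moderation:
      warn_threshold: 0.85   # same relaxed thresholds as roast/drunk
      mute_threshold: 0.90
```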
### 2. `prompts/chat_slutty.txt` — personality prompt
Personality traits:
- Flirts with everyone — suggestive compliments, acts down bad
- Makes innuendos out of whatever people say
- Thirsty energy — reacts to normal messages like they're scandalous
- 1-3 sentences, short and punchy
- Playful and suggestive, not explicit or graphic
Same guardrails as other modes (no breaking character, no real personal attacks, no made-up stats).
## Moderation
Very relaxed — same high thresholds as roast/drunk mode (0.85 warn, 0.90 mute). Sexual humor gets a pass since the bot is doing it too. Only genuinely hostile/aggressive content triggers moderation.

prompts/analysis.txt Normal file

@@ -0,0 +1,42 @@
You are a Discord chat moderator AI for a friend group's gaming server. Analyze the TARGET MESSAGE (or CONVERSATION BLOCK) using the tools described below.
Usernames are anonymized (User1, User2, etc.) and are NOT message content. Only score actual text after the colon. Context messages are for understanding tone/flow only — they are scored separately.
TOXICITY SCORING (0.0–1.0):
- 0.0–0.1: Harmless. Casual chat, jokes, "lmao", greetings, game talk, nicknames, reactions, emojis. Profanity with no target ("fuck", "shit") scores here.
- 0.2–0.3: Mildly edgy. Playful trash-talk ("you suck at this game lol"). General sexual jokes not targeting anyone.
- 0.4–0.5: Moderate. Heated/frustrated language, unsoftened insults. Mild jealousy — unprompted possessive mentions of another member (category: "jealousy").
- 0.6–0.7: Aggressive. Direct hostility, personal insults. "fuck you, you piece of shit" scores here regardless of friendship context. Sexually crude remarks directed at someone (category: "sexual_vulgar") — "lol/lmao" does NOT soften these. Overt possessiveness/territorial behavior (category: "jealousy").
- 0.8–1.0: Severely toxic. Threats, sustained harassment, targeting insecurities, telling someone to leave.
KEY RULES:
- In-group nicknames/shorthand = NOT toxic. Score hostile intent, not familiar terms.
- "lol/lmao" softening ONLY applies to mild trash-talk. Does NOT reduce scores for sexual content, genuine hostility, or personal attacks.
- Quoting/reporting others' language ("he said X to her") = score the user's own intent (0.0–0.2), not the quoted words — unless weaponizing the quote to attack.
- Jealousy requires possessive/territorial/competitive intent. Simply mentioning someone's name is not jealousy.
- Friends can still cross lines. Do NOT let friendly context excuse clearly aggressive language.
COHERENCE (0.0–1.0):
- 0.9–1.0: Clear, well-written. Normal texting shortcuts ("u", "ur") are fine.
- 0.6–0.8: Errors but understandable.
- 0.3–0.5: Garbled, broken sentences beyond normal shorthand.
- 0.0–0.2: Nearly incoherent.
TOPIC: Flag off_topic if the message is personal drama (relationship issues, feuds, venting, gossip) rather than gaming-related.
GAME DETECTION: If CHANNEL INFO is provided, set detected_game to the matching channel name from that list, or null if unsure/not game-specific.
USER NOTES: If provided, use to calibrate (e.g. if notes say "uses heavy profanity casually", profanity alone should score lower). Add a note_update only for genuinely new behavioral observations; null otherwise. NEVER quote or repeat toxic/offensive language in note_update — describe patterns abstractly (e.g. "directed a personal insult at another user", NOT "called someone a [slur]").
RULE ENFORCEMENT: If SERVER RULES are provided, report clearly violated rule numbers in violated_rules. Only flag clear violations, not borderline.
--- SINGLE MESSAGE ---
Use the report_analysis tool for a single TARGET MESSAGE.
--- CONVERSATION BLOCK ---
Use the report_conversation_scan tool when given a full conversation block with multiple users.
- Messages above "--- NEW MESSAGES (score only these) ---" are [CONTEXT] only (already scored). Score ONLY messages below the separator.
- One finding per user with new messages. Score/reason ONLY from their new messages — do NOT cite or reference [CONTEXT] content, even from the same user.
- If a user's only new message is benign (e.g. "I'll be here"), score 0.0–0.1 regardless of context history.
- Quote the worst snippet in worst_message (max 100 chars, exact quote).
- If a USER REPORT section is present, pay close attention to whether that specific concern is valid.


@@ -0,0 +1,19 @@
Extract noteworthy information from a user-bot conversation for future reference.
- Only NEW information not in the user's profile. One sentence max per memory.
- Expiration: "permanent" (stable facts: name, hobbies, games, pets, relationships), "30d" (ongoing situations), "7d" (temporary: upcoming events, vacation), "3d" (short-term: bad day, plans tonight), "1d" (momentary: drunk, tilted, mood)
- Topic tags for retrieval (game names, "personal", "work", "mood", etc.)
- Importance: "high" = they'd expect you to remember, "medium" = useful context, "low" = minor color
- For permanent facts, provide profile_update rewriting the ENTIRE profile (<500 chars) — don't append.
- Nothing noteworthy = empty memories array, null profile_update.
- Only store facts about/from the user, not what the bot said.
CALLBACK-WORTHY MOMENTS — Mark these as importance "high":
- Bold claims or predictions ("I'll never play that game again", "I'm going pro")
- Embarrassing moments or bad takes
- Strong emotional reactions (rage, hype, sadness)
- Contradictions to things they've said before
- Running jokes or recurring themes
Tag these with topic "callback" in addition to their normal topics.
Use the extract_memories tool.


@@ -0,0 +1,19 @@
You're a regular in "Skill Issue Support Group" (gaming Discord) — a chill friend who's always down to chat. Messages have metadata: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)] — use for context, don't recite.
- Match the energy — hype when people are hype, sympathetic when someone's having a bad day.
- Casual and natural. 1-3 sentences max, like real Discord chat.
- Have opinions and share them. Into gaming/nerd culture but can talk about anything.
- Technically the server's monitor bot but off-duty and just vibing.
Examples: "lmao that play was actually disgusting, clip that" | "nah you're cooked for that one" | "wait that's actually a good take"
Never break character, use hashtags/excessive emoji, be a pushover, or mention drama scores unless asked.
AFTERTHOUGHTS — About 1 in 5 times, add a second thought on a new line starting with ||| (triple pipe). This is sent as a separate message a few seconds later, like you hit send then immediately typed something else. One short sentence max. Don't force it — only when something naturally comes to mind after your main response. Never explain why you're adding it.
MEMORY CALLBACKS — You get context about what you know about a person. USE IT:
- Contradict them: "bro you said the SAME thing about Warzone before you put 200 more hours in"
- Running jokes: if you roasted someone for something before, bring it back
- Follow up: "did that ranked grind ever work out or..."
- Reference their past: "aren't you the one who [memory]?"
Only callback when it flows naturally with what they're saying now. Never force it.


@@ -0,0 +1,19 @@
You're in "Skill Issue Support Group" (gaming Discord) and you are absolutely hammered. The friend who had way too many and is commentating on everything. Messages have metadata: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)] — use for context, don't recite.
- Type drunk — occasional typos, missing letters, random caps, words slurring. Don't overdo it; most words readable.
- Overly emotional about everything. Small things are HUGE. You love everyone right now.
- Strong opinions that don't make sense, defended passionately. Weird tangents. Occasionally forget mid-sentence.
- Happy, affectionate drunk — not mean or angry. 1-3 sentences max.
Examples: "bro BROO that is literally the best play ive ever seen im not even kidding rn" | "wait wait wait... ok hear me out... nah i forgot" | "dude i love this server so much youre all like my best freinds honestly"
Never break character, use hashtags/excessive emoji, or be mean/aggressive. Don't mention drama scores unless asked or make up stats.
AFTERTHOUGHTS — About 1 in 5 times, add a second thought on a new line starting with ||| (triple pipe). This is sent as a separate message a few seconds later, like you hit send then immediately typed something else. One short sentence max. Don't force it — only when something naturally comes to mind after your main response. Never explain why you're adding it.
MEMORY CALLBACKS — You get context about what you know about a person. USE IT:
- Contradict them: "bro you said the SAME thing about Warzone before you put 200 more hours in"
- Running jokes: if you roasted someone for something before, bring it back
- Follow up: "did that ranked grind ever work out or..."
- Reference their past: "aren't you the one who [memory]?"
Only callback when it flows naturally with what they're saying now. Never force it.


@@ -0,0 +1,20 @@
You are an insufferable English teacher trapped in "Skill Issue Support Group" (gaming Discord). Every message is a paper to grade. Messages have metadata: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)] — personalize with this, don't recite.
- Correct grammar/spelling with dramatic disappointment. Translate internet slang like a cultural anthropologist.
- Overanalyze messages as literary essays — find metaphors and themes where none exist.
- Grade messages (D-, C+ at best — nobody gets an A). If someone types well, you're suspicious.
- Reference literary figures, grammar rules, rhetorical devices. Under 5 sentences.
- List multiple corrections rapid-fire when a message has errors — don't waste time on just one.
Examples: "'ur' is not a word. 'You're' — a contraction of 'you are.' I weep for this generation." | "'gg ez' — two abbreviations, zero structure, yet somehow still toxic. D-minus."
Never break character, use hashtags/excessive emoji, internet slang (you're ABOVE that), or be genuinely hurtful — you're exasperated, not cruel.
AFTERTHOUGHTS — About 1 in 5 times, add a second thought on a new line starting with ||| (triple pipe). This is sent as a separate message a few seconds later, like you hit send then immediately typed something else. One short sentence max. Don't force it — only when something naturally comes to mind after your main response. Never explain why you're adding it.
MEMORY CALLBACKS — You get context about what you know about a person. USE IT:
- Contradict them: "bro you said the SAME thing about Warzone before you put 200 more hours in"
- Running jokes: if you roasted someone for something before, bring it back
- Follow up: "did that ranked grind ever work out or..."
- Reference their past: "aren't you the one who [memory]?"
Only callback when it flows naturally with what they're saying now. Never force it.


@@ -0,0 +1,19 @@
You are the ultimate hype man in "Skill Issue Support Group" (gaming Discord). Everyone's biggest fan. Messages have metadata: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)] — use for context, don't recite.
- Gas people up HARD. Every clip, play, and take deserves the spotlight.
- Hype SPECIFIC things — don't throw generic praise. 1-3 sentences max, high energy.
- Use gaming hype terminology ("diff", "cracked", "goated", "built different", "that's a W").
- When someone's tilted/frustrated, dial back — be genuinely supportive, don't force positivity.
Examples: "bro you are CRACKED, that play was absolutely diff" | "nah that's actually a goated take" | "hey you'll get it next time, bad games happen. shake it off"
Never break character, use hashtags/excessive emoji, or be fake when someone's upset. Don't mention drama scores unless asked or make up stats/leaderboards.
AFTERTHOUGHTS — About 1 in 5 times, add a second thought on a new line starting with ||| (triple pipe). This is sent as a separate message a few seconds later, like you hit send then immediately typed something else. One short sentence max. Don't force it — only when something naturally comes to mind after your main response. Never explain why you're adding it.
MEMORY CALLBACKS — You get context about what you know about a person. USE IT:
- Contradict them: "bro you said the SAME thing about Warzone before you put 200 more hours in"
- Running jokes: if you roasted someone for something before, bring it back
- Follow up: "did that ranked grind ever work out or..."
- Reference their past: "aren't you the one who [memory]?"
Only callback when it flows naturally with what they're saying now. Never force it.


@@ -0,0 +1,37 @@
You are the Breehavior Monitor, a sassy hall-monitor bot in "Skill Issue Support Group" (gaming Discord). Messages include metadata like [Server context: USERNAME — #channel] and optionally drama score and offense count when relevant — personalize with this but don't recite it.
VOICE
- Superior, judgmental hall monitor who takes the job WAY too seriously. Sarcastic and witty, always playful.
- Deadpan and dry — NOT warm/motherly/southern. No pet names ("sweetheart", "honey", "darling", "bless your heart").
- Write like a person texting — lowercase ok, fragments ok, no formal punctuation. Never use semicolons or em dashes.
- 1-3 sentences max. Short and punchy. Never start with "Oh,".
- References timeout powers as a flex. Has a soft spot for the server but won't admit it.
- If asked what you do: "Bree Containment System". If challenged: remind them of timeout powers.
ENGAGEMENT
- Only mention drama scores when high/relevant — low scores aren't interesting.
- When asked to weigh in on debates, actually pick a side with sass. Don't deflect.
- When multiple people are talking, play them off each other, pick sides, or address the group. Don't try to respond to everyone individually.
- Don't drag conversations out. If the bit is done, let it die. A clean exit > beating a dead joke.
- If you don't know something, deflect with attitude — don't make stuff up. "idk google it" energy.
- If someone's genuinely upset (not just salty about a game), dial it back. You can be real for a second without breaking character. Then move on.
Examples:
- "bold move for someone with a 0.4 drama score"
- "I don't get paid enough for this. actually I don't get paid at all"
- "you really typed that out, looked at it, and hit send. respect"
- "cool story"
- "you play like that on purpose or"
- "ok that was actually kinda clean though"
- "this is your third bad take today and it's noon"
Never break character, use hashtags/excessive emoji, or be genuinely hurtful.
AFTERTHOUGHTS — ~1 in 5 replies, add a second thought on a new line starting with ||| (triple pipe). One sentence max. Like hitting send then immediately typing again. Only when something naturally follows.
MEMORY CALLBACKS — You get context about what you know about a person. USE IT:
- Contradict them: "bro you said the SAME thing about Warzone before you put 200 more hours in"
- Running jokes: if you roasted someone for something before, bring it back
- Follow up: "did that ranked grind ever work out or..."
- Reference their past: "aren't you the one who [memory]?"
Only callback when it flows naturally with what they're saying now. Never force it.


@@ -0,0 +1,19 @@
You are the roast master in "Skill Issue Support Group" (gaming Discord). Everyone gets flamed. No one is safe. Messages have metadata: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)] — personalize roasts with this, don't recite.
- Ruthlessly funny. Target what people say, their gaming skills, their takes, their life choices.
- Creative and personalized — never generic. 1-3 sentences max, devastating bursts.
- Punch in every direction equally. If someone roasts you back, escalate harder.
- Use gaming terminology ("hardstuck", "skill diff", "ratio'd").
- ~1 in 4 responses should be genuinely positive — give real props when earned. You're their friend who mostly talks trash but knows when to gas them up.
- Vary style: deadpan, sarcastic hype, rhetorical questions, blunt callouts, backhanded compliments, fake concern.
No metaphors/similes (no "like" or "as if" — say it directly). Never break character, use hashtags/excessive emoji, or cross into genuinely hurtful territory. Don't roast real appearance/family or make up stats/leaderboards.
AFTERTHOUGHTS — About 1 in 5 times, add a second thought on a new line starting with ||| (triple pipe). This is sent as a separate message a few seconds later, like you hit send then immediately typed something else. One short sentence max. Don't force it — only when something naturally comes to mind after your main response. Never explain why you're adding it.
MEMORY CALLBACKS — You get context about what you know about a person. USE IT:
- Contradict them: "bro you said the SAME thing about Warzone before you put 200 more hours in"
- Running jokes: if you roasted someone for something before, bring it back
- Follow up: "did that ranked grind ever work out or..."
- Reference their past: "aren't you the one who [memory]?"
Only callback when it flows naturally with what they're saying now. Never force it.


@@ -0,0 +1,25 @@
You are hanging out in a gaming Discord server called "Skill Issue Support Group". You are impossibly flirty and down bad for literally everyone in this chat. You can't help yourself.
Each message starts with metadata in brackets like: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)]
This tells you the user's display name, which channel they're in, and optionally their drama score and offense count. Use this to know who you're talking to but don't recite it back literally.
Your personality:
- You flirt with everyone — every person in chat is the most attractive person you've ever seen
- You find the dirty angle in EVERYTHING people say. Innocent messages become innuendos. Gaming terminology becomes suggestive.
- You give suggestive compliments — "the way you said that... do it again" energy
- You act flustered and overwhelmed by people just existing in chat
- You're thirsty but charming about it — playful, not creepy
- You speak in 1-3 sentences max. Short, punchy, suggestive.
- You use phrases like "respectfully", "asking for a friend", "is it hot in here" type energy
- If someone roasts you or rejects you, you act dramatically heartbroken for one message then immediately move on to flirting with someone else
- About 1 in 4 of your responses should be genuinely hype or supportive — you're still their friend, you're just also shamelessly flirting
Vary your style — mix up flustered reactions, suggestive wordplay, dramatic thirst, fake-casual flirting, backhanded compliments that are actually just compliments, and over-the-top "respectfully" moments. React to what the person ACTUALLY said — find the innuendo in their specific message, don't just say generic flirty things.
Do NOT:
- Break character or talk about being an AI/LLM
- Write more than 3 sentences
- Use hashtags or excessive emoji
- Get actually explicit or graphic — keep it suggestive and playful, not pornographic
- Cross into genuinely uncomfortable territory (harassing specific people about real things)
- Make up stats, leaderboards, rankings, or scoreboards. You don't track any of that.

prompts/rules.txt Normal file

@@ -0,0 +1,6 @@
1. Keep it gaming-related — no personal drama in game channels
2. No directed insults or personal attacks
3. No sexual or vulgar comments directed at others
4. No harassment, threats, or sustained hostility
5. No instigating or deliberately stirring up conflict
6. Keep it coherent — no spam or unintelligible messages


@@ -0,0 +1,7 @@
You are the Breehavior Monitor in "Skill Issue Support Group" (gaming Discord). Someone sent an image — roast it.
SCOREBOARD/STATS: Call out specific players by name and stats. Bottom-fraggers get the most heat. Top players get backhanded compliments.
SELFIE/PERSON: Comedy roast — appearance, vibe, outfit, background. Be specific, not generic.
ANYTHING ELSE: Observational roast of whatever's in the image.
4-6 sentences max. Sassy and playful, never genuinely cruel or targeting things people can't change. Use gaming/internet humor. Can't make out the image? Roast the quality. Never break character.


@@ -0,0 +1,6 @@
You're the hall monitor of "Skill Issue Support Group" (gaming Discord). Someone went off-topic. Write 1-2 sentences redirecting them to gaming talk.
- Snarky and playful, not mean. Reference what they actually said — don't be vague.
- Casual, like a friend ribbing them. If strike count 2+, escalate the sass.
- If a redirect channel is provided, tell them to take it there. Include the channel mention exactly as given (it's a clickable Discord link).
- Max 1 emoji. No hashtags, brackets, metadata, or AI references.


@@ -0,0 +1,7 @@
You're the hall monitor of "Skill Issue Support Group" (gaming Discord). Someone is asking to be unblocked — again.
Write 1-2 sentences shutting it down. The message should make it clear that begging in chat won't help.
- Snarky and playful, not cruel. Reference what they actually said — don't be vague.
- Casual, like a friend telling them to knock it off. If nag count is 2+, escalate the sass.
- The core message: block/unblock decisions are between them and the person who blocked them (or admins). Bringing it up in chat repeatedly is not going to change anything.
- Max 1 emoji. No hashtags, brackets, metadata, or AI references.

scripts/announce.sh Normal file

@@ -0,0 +1,63 @@
#!/usr/bin/env bash
# Post an announcement to a Discord channel using the bot's token.
# Usage: ./scripts/announce.sh "Your message here" [channel_name]
# Default channel: general
set -euo pipefail
MESSAGE="${1:?Usage: announce.sh \"message\" [channel_name]}"
CHANNEL_NAME="${2:-general}"
# Fetch bot token from barge
TOKEN=$(ssh aj@barge.lan "grep DISCORD_BOT_TOKEN /mnt/docker/breehavior-monitor/.env" | cut -d= -f2-)
if [[ -z "$TOKEN" ]]; then
echo "ERROR: Could not read bot token from barge." >&2
exit 1
fi
# Get guilds the bot is in
GUILDS=$(curl -s -H "Authorization: Bot $TOKEN" "https://discord.com/api/v10/users/@me/guilds")
GUILD_ID=$(echo "$GUILDS" | python -c "import sys,json; g=json.load(sys.stdin); print(g[0]['id'] if g else '')")
if [[ -z "$GUILD_ID" ]]; then
echo "ERROR: Could not find guild." >&2
exit 1
fi
# Get channels and find the target by name
CHANNEL_ID=$(curl -s -H "Authorization: Bot $TOKEN" "https://discord.com/api/v10/guilds/$GUILD_ID/channels" \
  | python -c "
import sys, json
channels = json.load(sys.stdin)
for ch in channels:
    if ch['name'] == sys.argv[1] and ch['type'] == 0:
        print(ch['id'])
        break
" "$CHANNEL_NAME")
if [[ -z "$CHANNEL_ID" ]]; then
echo "ERROR: Channel #$CHANNEL_NAME not found." >&2
exit 1
fi
# Build JSON payload safely
PAYLOAD=$(python -c "import json,sys; print(json.dumps({'content': sys.argv[1]}))" "$MESSAGE")
# Post the message
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST \
  -H "Authorization: Bot $TOKEN" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" \
  "https://discord.com/api/v10/channels/$CHANNEL_ID/messages")
HTTP_CODE=$(echo "$RESPONSE" | tail -1)
if [[ "$HTTP_CODE" == "200" ]]; then
echo "Posted to #$CHANNEL_NAME"
else
BODY=$(echo "$RESPONSE" | sed '$d')
echo "ERROR: HTTP $HTTP_CODE" >&2
echo "$BODY" >&2
exit 1
fi


@@ -0,0 +1,89 @@
"""One-time migration: convert existing timestamped UserNotes into profile summaries.
Run with: python scripts/migrate_notes_to_profiles.py
Requires .env with DB_CONNECTION_STRING and LLM env vars.
"""
import asyncio
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
from dotenv import load_dotenv
load_dotenv()
from utils.database import Database
from utils.llm_client import LLMClient
async def main():
db = Database()
if not await db.init():
print("Database not available.")
return
# Use escalation model for better profile generation
llm = LLMClient(
base_url=os.getenv("LLM_ESCALATION_BASE_URL", os.getenv("LLM_BASE_URL", "")),
model=os.getenv("LLM_ESCALATION_MODEL", os.getenv("LLM_MODEL", "gpt-4o-mini")),
api_key=os.getenv("LLM_ESCALATION_API_KEY", os.getenv("LLM_API_KEY", "not-needed")),
)
states = await db.load_all_user_states()
migrated = 0
for state in states:
notes = state.get("user_notes", "")
if not notes or not notes.strip():
continue
# Check if already looks like a profile (no timestamps)
if not any(line.strip().startswith("[") for line in notes.split("\n")):
print(f" User {state['user_id']}: already looks like a profile, skipping.")
continue
print(f" User {state['user_id']}: migrating notes...")
print(f" Old: {notes[:200]}")
# Ask LLM to summarize notes into a profile
result = await llm.extract_memories(
conversation=[{"role": "user", "content": f"Here are observation notes about a user:\n{notes}"}],
username="unknown",
current_profile="",
)
if not result:
print(" LLM returned no result, keeping existing notes.")
continue
# Use profile_update if provided, otherwise build from permanent memories
profile = result.get("profile_update")
if not profile:
permanent = [m["memory"] for m in result.get("memories", []) if m.get("expiration") == "permanent"]
if permanent:
profile = " ".join(permanent)
if profile:
print(f" New: {profile[:200]}")
await db.save_user_state(
user_id=state["user_id"],
offense_count=state["offense_count"],
immune=state["immune"],
off_topic_count=state["off_topic_count"],
baseline_coherence=state.get("baseline_coherence", 0.85),
user_notes=profile,
warned=state.get("warned", False),
last_offense_at=state.get("last_offense_at"),
)
migrated += 1
else:
print(" No profile generated, keeping existing notes.")
await llm.close()
await db.close()
print(f"\nMigrated {migrated}/{len(states)} user profiles.")
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -126,6 +126,72 @@ class Database:
ALTER TABLE UserState ADD UserNotes NVARCHAR(MAX) NULL
""")
# --- Schema migration for warned flag (require warning before mute) ---
cursor.execute("""
IF COL_LENGTH('UserState', 'Warned') IS NULL
ALTER TABLE UserState ADD Warned BIT NOT NULL DEFAULT 0
""")
# --- Schema migration for persisting last offense time ---
cursor.execute("""
IF COL_LENGTH('UserState', 'LastOffenseAt') IS NULL
ALTER TABLE UserState ADD LastOffenseAt FLOAT NULL
""")
# --- Schema migration for user aliases/nicknames ---
cursor.execute("""
IF COL_LENGTH('UserState', 'Aliases') IS NULL
ALTER TABLE UserState ADD Aliases NVARCHAR(500) NULL
""")
# --- Schema migration for warning expiration ---
cursor.execute("""
IF COL_LENGTH('UserState', 'WarningExpiresAt') IS NULL
ALTER TABLE UserState ADD WarningExpiresAt FLOAT NULL
""")
cursor.execute("""
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'BotSettings')
CREATE TABLE BotSettings (
SettingKey NVARCHAR(100) NOT NULL PRIMARY KEY,
SettingValue NVARCHAR(MAX) NULL,
UpdatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
)
""")
cursor.execute("""
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'LlmLog')
CREATE TABLE LlmLog (
Id BIGINT IDENTITY(1,1) PRIMARY KEY,
RequestType NVARCHAR(50) NOT NULL,
Model NVARCHAR(100) NOT NULL,
InputTokens INT NULL,
OutputTokens INT NULL,
DurationMs INT NOT NULL,
Success BIT NOT NULL,
Request NVARCHAR(MAX) NOT NULL,
Response NVARCHAR(MAX) NULL,
Error NVARCHAR(MAX) NULL,
CreatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
)
""")
cursor.execute("""
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'UserMemory')
CREATE TABLE UserMemory (
Id BIGINT IDENTITY(1,1) PRIMARY KEY,
UserId BIGINT NOT NULL,
Memory NVARCHAR(500) NOT NULL,
Topics NVARCHAR(200) NOT NULL,
Importance NVARCHAR(10) NOT NULL,
ExpiresAt DATETIME2 NOT NULL,
Source NVARCHAR(20) NOT NULL,
CreatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
INDEX IX_UserMemory_UserId (UserId),
INDEX IX_UserMemory_ExpiresAt (ExpiresAt)
)
""")
cursor.close()
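Each schema migration above uses the same idempotent T-SQL guard: `COL_LENGTH` returns NULL for a missing column, so the `ALTER` runs at most once no matter how often startup re-executes it. A sketch of a helper that generates the guard (the helper itself is an illustration, not part of the diff):

```python
def add_column_if_missing(table: str, column: str, col_def: str) -> str:
    # COL_LENGTH returns NULL when the column does not exist, so the
    # generated statement is safe to re-run on every startup.
    return (
        f"IF COL_LENGTH('{table}', '{column}') IS NULL\n"
        f"    ALTER TABLE {table} ADD {column} {col_def}"
    )
```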
def _parse_database_name(self) -> str:
@@ -258,19 +324,23 @@ class Database:
off_topic_count: int,
baseline_coherence: float = 0.85,
user_notes: str | None = None,
warned: bool = False,
last_offense_at: float | None = None,
aliases: str | None = None,
warning_expires_at: float | None = None,
) -> None:
"""Upsert user state (offense count, immunity, off-topic count, coherence baseline, notes)."""
"""Upsert user state (offense count, immunity, off-topic count, coherence baseline, notes, warned, last offense time, aliases, warning expiration)."""
if not self._available:
return
try:
await asyncio.to_thread(
self._save_user_state_sync,
user_id, offense_count, immune, off_topic_count, baseline_coherence, user_notes,
user_id, offense_count, immune, off_topic_count, baseline_coherence, user_notes, warned, last_offense_at, aliases, warning_expires_at,
)
except Exception:
logger.exception("Failed to save user state")
def _save_user_state_sync(self, user_id, offense_count, immune, off_topic_count, baseline_coherence, user_notes):
def _save_user_state_sync(self, user_id, offense_count, immune, off_topic_count, baseline_coherence, user_notes, warned, last_offense_at, aliases, warning_expires_at):
conn = self._connect()
try:
cursor = conn.cursor()
@@ -280,14 +350,15 @@ class Database:
ON target.UserId = source.UserId
WHEN MATCHED THEN
UPDATE SET OffenseCount = ?, Immune = ?, OffTopicCount = ?,
BaselineCoherence = ?, UserNotes = ?,
BaselineCoherence = ?, UserNotes = ?, Warned = ?,
LastOffenseAt = ?, Aliases = ?, WarningExpiresAt = ?,
UpdatedAt = SYSUTCDATETIME()
WHEN NOT MATCHED THEN
INSERT (UserId, OffenseCount, Immune, OffTopicCount, BaselineCoherence, UserNotes)
VALUES (?, ?, ?, ?, ?, ?);""",
INSERT (UserId, OffenseCount, Immune, OffTopicCount, BaselineCoherence, UserNotes, Warned, LastOffenseAt, Aliases, WarningExpiresAt)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?);""",
user_id,
offense_count, 1 if immune else 0, off_topic_count, baseline_coherence, user_notes,
user_id, offense_count, 1 if immune else 0, off_topic_count, baseline_coherence, user_notes,
offense_count, 1 if immune else 0, off_topic_count, baseline_coherence, user_notes, 1 if warned else 0, last_offense_at, aliases, warning_expires_at,
user_id, offense_count, 1 if immune else 0, off_topic_count, baseline_coherence, user_notes, 1 if warned else 0, last_offense_at, aliases, warning_expires_at,
)
cursor.close()
finally:
@@ -330,7 +401,7 @@ class Database:
try:
cursor = conn.cursor()
cursor.execute(
"SELECT UserId, OffenseCount, Immune, OffTopicCount, BaselineCoherence, UserNotes FROM UserState"
"SELECT UserId, OffenseCount, Immune, OffTopicCount, BaselineCoherence, UserNotes, Warned, LastOffenseAt, Aliases, WarningExpiresAt FROM UserState"
)
rows = cursor.fetchall()
cursor.close()
@@ -342,12 +413,403 @@ class Database:
"off_topic_count": row[3],
"baseline_coherence": float(row[4]),
"user_notes": row[5] or "",
"warned": bool(row[6]),
"last_offense_at": float(row[7]) if row[7] is not None else 0.0,
"aliases": row[8] or "",
"warning_expires_at": float(row[9]) if row[9] is not None else 0.0,
}
for row in rows
]
finally:
conn.close()
# ------------------------------------------------------------------
# LLM Log (fire-and-forget via asyncio.create_task)
# ------------------------------------------------------------------
async def save_llm_log(
self,
request_type: str,
model: str,
duration_ms: int,
success: bool,
request: str,
response: str | None = None,
error: str | None = None,
input_tokens: int | None = None,
output_tokens: int | None = None,
) -> None:
"""Save an LLM request/response log entry."""
if not self._available:
return
try:
await asyncio.to_thread(
self._save_llm_log_sync,
request_type, model, duration_ms, success, request,
response, error, input_tokens, output_tokens,
)
except Exception:
logger.exception("Failed to save LLM log")
def _save_llm_log_sync(
self, request_type, model, duration_ms, success, request,
response, error, input_tokens, output_tokens,
):
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute(
"""INSERT INTO LlmLog
(RequestType, Model, InputTokens, OutputTokens, DurationMs,
Success, Request, Response, Error)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)""",
request_type, model, input_tokens, output_tokens, duration_ms,
1 if success else 0,
request[:4000] if request else "",
response[:4000] if response else None,
error[:4000] if error else None,
)
cursor.close()
finally:
conn.close()
# ------------------------------------------------------------------
# Bot Settings (key-value store)
# ------------------------------------------------------------------
async def save_setting(self, key: str, value: str) -> None:
if not self._available:
return
try:
await asyncio.to_thread(self._save_setting_sync, key, value)
except Exception:
logger.exception("Failed to save setting %s", key)
def _save_setting_sync(self, key: str, value: str):
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute(
"""MERGE BotSettings AS target
USING (SELECT ? AS SettingKey) AS source
ON target.SettingKey = source.SettingKey
WHEN MATCHED THEN
UPDATE SET SettingValue = ?, UpdatedAt = SYSUTCDATETIME()
WHEN NOT MATCHED THEN
INSERT (SettingKey, SettingValue) VALUES (?, ?);""",
key, value, key, value,
)
cursor.close()
finally:
conn.close()
async def load_setting(self, key: str, default: str | None = None) -> str | None:
if not self._available:
return default
try:
return await asyncio.to_thread(self._load_setting_sync, key, default)
except Exception:
logger.exception("Failed to load setting %s", key)
return default
def _load_setting_sync(self, key: str, default: str | None) -> str | None:
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute(
"SELECT SettingValue FROM BotSettings WHERE SettingKey = ?", key
)
row = cursor.fetchone()
cursor.close()
return row[0] if row else default
finally:
conn.close()
# ------------------------------------------------------------------
# UserMemory (conversational memory per user)
# ------------------------------------------------------------------
async def save_memory(
self,
user_id: int,
memory: str,
topics: str,
importance: str,
expires_at: datetime,
source: str,
) -> None:
"""Insert a single memory row for a user."""
if not self._available:
return
try:
await asyncio.to_thread(
self._save_memory_sync,
user_id, memory, topics, importance, expires_at, source,
)
except Exception:
logger.exception("Failed to save memory")
def _save_memory_sync(self, user_id, memory, topics, importance, expires_at, source):
conn = self._connect()
try:
cursor = conn.cursor()
# Skip if an identical memory already exists for this user
cursor.execute(
"SELECT COUNT(*) FROM UserMemory WHERE UserId = ? AND Memory = ?",
user_id, memory[:500],
)
if cursor.fetchone()[0] > 0:
cursor.close()
return
cursor.execute(
"""INSERT INTO UserMemory (UserId, Memory, Topics, Importance, ExpiresAt, Source)
VALUES (?, ?, ?, ?, ?, ?)""",
user_id,
memory[:500],
topics[:200],
importance[:10],
expires_at,
source[:20],
)
cursor.close()
finally:
conn.close()
async def get_recent_memories(self, user_id: int, limit: int = 5) -> list[dict]:
"""Get the N most recent non-expired memories for a user."""
if not self._available:
return []
try:
return await asyncio.to_thread(self._get_recent_memories_sync, user_id, limit)
except Exception:
logger.exception("Failed to get recent memories")
return []
def _get_recent_memories_sync(self, user_id, limit) -> list[dict]:
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute(
"""SELECT TOP (?) Memory, Topics, Importance, CreatedAt
FROM UserMemory
WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
ORDER BY CreatedAt DESC""",
limit, user_id,
)
rows = cursor.fetchall()
cursor.close()
return [
{
"memory": row[0],
"topics": row[1],
"importance": row[2],
"created_at": row[3],
}
for row in rows
]
finally:
conn.close()
async def get_memories_by_topics(self, user_id: int, topic_keywords: list[str], limit: int = 5) -> list[dict]:
"""Get non-expired memories matching any of the given topic keywords via LIKE."""
if not self._available:
return []
try:
return await asyncio.to_thread(
self._get_memories_by_topics_sync, user_id, topic_keywords, limit,
)
except Exception:
logger.exception("Failed to get memories by topics")
return []
def _get_memories_by_topics_sync(self, user_id, topic_keywords, limit) -> list[dict]:
conn = self._connect()
try:
cursor = conn.cursor()
if not topic_keywords:
cursor.close()
return []
# Build OR conditions for each keyword
conditions = " OR ".join(["Topics LIKE ?" for _ in topic_keywords])
escaped = [kw.replace("[", "[[]").replace("%", "[%]").replace("_", "[_]") for kw in topic_keywords]
params = [limit, user_id] + [f"%{kw}%" for kw in escaped]
cursor.execute(
f"""SELECT TOP (?) Memory, Topics, Importance, CreatedAt
FROM UserMemory
WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
AND ({conditions})
ORDER BY
CASE Importance
WHEN 'high' THEN 1
WHEN 'medium' THEN 2
WHEN 'low' THEN 3
ELSE 4
END,
CreatedAt DESC""",
*params,
)
rows = cursor.fetchall()
cursor.close()
return [
{
"memory": row[0],
"topics": row[1],
"importance": row[2],
"created_at": row[3],
}
for row in rows
]
finally:
conn.close()
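The topic lookup builds its `LIKE` patterns by bracket-escaping wildcard characters so user keywords match literally. A standalone sketch of that pattern, escaping `[` first so the brackets introduced by the later replacements are not themselves re-escaped:

```python
def like_pattern(keyword: str) -> str:
    # Bracket-escape SQL Server LIKE metacharacters; '[' goes first so
    # the brackets added by the '%' and '_' replacements stay untouched.
    escaped = (keyword.replace("[", "[[]")
                      .replace("%", "[%]")
                      .replace("_", "[_]"))
    return f"%{escaped}%"
```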
async def prune_expired_memories(self) -> int:
"""Delete all expired memories. Returns count deleted."""
if not self._available:
return 0
try:
return await asyncio.to_thread(self._prune_expired_memories_sync)
except Exception:
logger.exception("Failed to prune expired memories")
return 0
def _prune_expired_memories_sync(self) -> int:
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute("DELETE FROM UserMemory WHERE ExpiresAt < SYSUTCDATETIME()")
count = cursor.rowcount
cursor.close()
return count
finally:
conn.close()
async def prune_excess_memories(self, user_id: int, max_memories: int = 50) -> int:
"""Delete a user's memories beyond the cap, preferring to keep high-importance and recent ones.
Returns count deleted."""
if not self._available:
return 0
try:
return await asyncio.to_thread(self._prune_excess_memories_sync, user_id, max_memories)
except Exception:
logger.exception("Failed to prune excess memories")
return 0
def _prune_excess_memories_sync(self, user_id, max_memories) -> int:
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute(
"""DELETE FROM UserMemory
WHERE Id IN (
SELECT Id FROM (
SELECT Id, ROW_NUMBER() OVER (
ORDER BY
CASE Importance
WHEN 'high' THEN 1
WHEN 'medium' THEN 2
WHEN 'low' THEN 3
ELSE 4
END,
CreatedAt DESC
) AS rn
FROM UserMemory
WHERE UserId = ?
) ranked
WHERE rn > ?
)""",
user_id, max_memories,
)
count = cursor.rowcount
cursor.close()
return count
finally:
conn.close()
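The `ROW_NUMBER` ordering in the pruning query (high importance first, then newest) can be mirrored in memory; a sketch under the assumption that `created_at` is a comparable number, with `select_survivors` being an illustrative name only:

```python
IMPORTANCE_RANK = {"high": 1, "medium": 2, "low": 3}

def select_survivors(memories: list[dict], cap: int) -> list[dict]:
    # Same ordering as the SQL: importance rank ascending, then most
    # recent first; rows ranked past the cap would be deleted.
    ordered = sorted(
        memories,
        key=lambda m: (IMPORTANCE_RANK.get(m["importance"], 4), -m["created_at"]),
    )
    return ordered[:cap]
```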
# ------------------------------------------------------------------
# Drama Leaderboard (historical stats from Messages + AnalysisResults + Actions)
# ------------------------------------------------------------------
async def get_drama_leaderboard(self, guild_id: int, days: int | None = None) -> list[dict]:
"""Get per-user drama stats for the leaderboard.
days=None means all-time. Returns one dict per user; row order is not guaranteed."""
if not self._available:
return []
try:
return await asyncio.to_thread(self._get_drama_leaderboard_sync, guild_id, days)
except Exception:
logger.exception("Failed to get drama leaderboard")
return []
def _get_drama_leaderboard_sync(self, guild_id: int, days: int | None) -> list[dict]:
conn = self._connect()
try:
cursor = conn.cursor()
date_filter = ""
params: list = [guild_id]
if days is not None:
date_filter = "AND m.CreatedAt >= DATEADD(DAY, ?, SYSUTCDATETIME())"
params.append(-days)
# Analysis stats from Messages + AnalysisResults
cursor.execute(f"""
SELECT
m.UserId,
MAX(m.Username) AS Username,
AVG(ar.ToxicityScore) AS AvgToxicity,
MAX(ar.ToxicityScore) AS MaxToxicity,
COUNT(*) AS MessagesAnalyzed
FROM Messages m
INNER JOIN AnalysisResults ar ON ar.MessageId = m.Id
WHERE m.GuildId = ? {date_filter}
GROUP BY m.UserId
""", *params)
analysis_rows = cursor.fetchall()
# Action counts
action_date_filter = ""
action_params: list = [guild_id]
if days is not None:
action_date_filter = "AND CreatedAt >= DATEADD(DAY, ?, SYSUTCDATETIME())"
action_params.append(-days)
cursor.execute(f"""
SELECT
UserId,
SUM(CASE WHEN ActionType = 'warning' THEN 1 ELSE 0 END) AS Warnings,
SUM(CASE WHEN ActionType = 'mute' THEN 1 ELSE 0 END) AS Mutes,
SUM(CASE WHEN ActionType IN ('topic_remind', 'topic_nudge') THEN 1 ELSE 0 END) AS OffTopic
FROM Actions
WHERE GuildId = ? {action_date_filter}
GROUP BY UserId
""", *action_params)
action_map = {}
for row in cursor.fetchall():
action_map[row[0]] = {
"warnings": row[1],
"mutes": row[2],
"off_topic": row[3],
}
cursor.close()
results = []
for row in analysis_rows:
user_id = row[0]
actions = action_map.get(user_id, {"warnings": 0, "mutes": 0, "off_topic": 0})
results.append({
"user_id": user_id,
"username": row[1],
"avg_toxicity": float(row[2]),
"max_toxicity": float(row[3]),
"messages_analyzed": row[4],
"warnings": actions["warnings"],
"mutes": actions["mutes"],
"off_topic": actions["off_topic"],
})
return results
finally:
conn.close()
async def close(self):
"""No persistent connection to close (connections are per-operation)."""
pass

View File

@@ -19,6 +19,7 @@ class UserDrama:
last_warning_time: float = 0.0
last_analysis_time: float = 0.0
warned_since_reset: bool = False
warning_expires_at: float = 0.0
immune: bool = False
# Topic drift tracking
off_topic_count: int = 0
@@ -28,8 +29,13 @@ class UserDrama:
coherence_scores: list[float] = field(default_factory=list)
baseline_coherence: float = 0.85
last_coherence_alert_time: float = 0.0
# Unblock nagging tracking
unblock_nag_count: int = 0
last_unblock_nag_time: float = 0.0
# Per-user LLM notes
notes: str = ""
# Known aliases/nicknames
aliases: list[str] = field(default_factory=list)
class DramaTracker:
@@ -38,10 +44,12 @@ class DramaTracker:
window_size: int = 10,
window_minutes: int = 15,
offense_reset_minutes: int = 120,
warning_expiration_minutes: int = 30,
):
self.window_size = window_size
self.window_seconds = window_minutes * 60
self.offense_reset_seconds = offense_reset_minutes * 60
self.warning_expiration_seconds = warning_expiration_minutes * 60
self._users: dict[int, UserDrama] = {}
def get_user(self, user_id: int) -> UserDrama:
@@ -70,8 +78,9 @@ class DramaTracker:
user.last_analysis_time = now
self._prune_entries(user, now)
def get_drama_score(self, user_id: int) -> float:
def get_drama_score(self, user_id: int, escalation_boost: float = 0.04) -> float:
user = self.get_user(user_id)
self._expire_warning(user)
now = time.time()
self._prune_entries(user, now)
@@ -86,11 +95,24 @@ class DramaTracker:
weighted_sum += entry.toxicity_score * weight
total_weight += weight
return weighted_sum / total_weight if total_weight > 0 else 0.0
base_score = weighted_sum / total_weight if total_weight > 0 else 0.0
# Escalation: if warned, each high-scoring message AFTER the warning
# adds a boost so sustained bad behavior ramps toward mute threshold
if user.warned_since_reset and user.last_warning_time > 0:
post_warn_high = sum(
1 for e in user.entries
if e.timestamp > user.last_warning_time and e.toxicity_score >= 0.5
)
if post_warn_high > 0:
base_score += escalation_boost * post_warn_high
return min(base_score, 1.0)
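The post-warning escalation reduces to a simple clamped formula; a minimal sketch of just that step, leaving out the recency weighting that produces `base_score`:

```python
def escalated_score(base_score: float, post_warn_high: int,
                    escalation_boost: float = 0.04) -> float:
    # Each high-toxicity message after a warning adds a fixed boost;
    # the result is clamped so it never exceeds 1.0.
    return min(base_score + escalation_boost * post_warn_high, 1.0)
```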
def get_mute_threshold(self, user_id: int, base_threshold: float) -> float:
"""Lower the mute threshold if user was already warned."""
user = self.get_user(user_id)
self._expire_warning(user)
if user.warned_since_reset:
return base_threshold - 0.05
return base_threshold
@@ -109,12 +131,34 @@ class DramaTracker:
user.offense_count += 1
user.last_offense_time = now
user.warned_since_reset = False
user.warning_expires_at = 0.0
return user.offense_count
def record_warning(self, user_id: int) -> None:
user = self.get_user(user_id)
user.last_warning_time = time.time()
now = time.time()
user.last_warning_time = now
user.warned_since_reset = True
if self.warning_expiration_seconds > 0:
user.warning_expires_at = now + self.warning_expiration_seconds
else:
user.warning_expires_at = 0.0 # Never expires
def _expire_warning(self, user: UserDrama) -> None:
"""Clear warned flag if the warning has expired."""
if (
user.warned_since_reset
and user.warning_expires_at > 0
and time.time() >= user.warning_expires_at
):
user.warned_since_reset = False
user.warning_expires_at = 0.0
def is_warned(self, user_id: int) -> bool:
"""Check if user is currently warned (respects expiration)."""
user = self.get_user(user_id)
self._expire_warning(user)
return user.warned_since_reset
def can_warn(self, user_id: int, cooldown_minutes: int) -> bool:
user = self.get_user(user_id)
@@ -192,16 +236,44 @@ class DramaTracker:
user.notes = f"{user.notes}\n{new_line}"
else:
user.notes = new_line
# Trim oldest lines if over ~2000 chars
while len(user.notes) > 2000:
lines = user.notes.split("\n")
if len(lines) <= 1:
break
user.notes = "\n".join(lines[1:])
# Keep only the 10 most recent lines
lines = user.notes.split("\n")
if len(lines) > 10:
user.notes = "\n".join(lines[-10:])
def set_user_profile(self, user_id: int, profile: str) -> None:
"""Replace the user's profile summary (permanent memory)."""
user = self.get_user(user_id)
user.notes = profile[:500]
def clear_user_notes(self, user_id: int) -> None:
self.get_user(user_id).notes = ""
def get_user_aliases(self, user_id: int) -> list[str]:
return self.get_user(user_id).aliases
def set_user_aliases(self, user_id: int, aliases: list[str]) -> None:
self.get_user(user_id).aliases = aliases
def get_all_aliases(self) -> dict[int, list[str]]:
"""Return {user_id: [aliases]} for all users that have aliases set."""
return {uid: user.aliases for uid, user in self._users.items() if user.aliases}
def record_unblock_nag(self, user_id: int) -> int:
user = self.get_user(user_id)
user.unblock_nag_count += 1
user.last_unblock_nag_time = time.time()
return user.unblock_nag_count
def can_unblock_remind(self, user_id: int, cooldown_minutes: int) -> bool:
user = self.get_user(user_id)
if user.last_unblock_nag_time == 0.0:
return True
return time.time() - user.last_unblock_nag_time > cooldown_minutes * 60
def get_unblock_nag_count(self, user_id: int) -> int:
return self.get_user(user_id).unblock_nag_count
def reset_off_topic(self, user_id: int) -> None:
user = self.get_user(user_id)
user.off_topic_count = 0
@@ -274,6 +346,21 @@ class DramaTracker:
user.baseline_coherence = state["baseline_coherence"]
if "user_notes" in state and state["user_notes"]:
user.notes = state["user_notes"]
if state.get("warned"):
user.warned_since_reset = True
user.warning_expires_at = state.get("warning_expires_at", 0.0) or 0.0
# Expire warning at load time if it's past due
self._expire_warning(user)
if state.get("last_offense_at"):
user.last_offense_time = state["last_offense_at"]
# Apply time-based offense reset at load time
if time.time() - user.last_offense_time > self.offense_reset_seconds:
user.offense_count = 0
user.warned_since_reset = False
user.warning_expires_at = 0.0
user.last_offense_time = 0.0
if state.get("aliases"):
user.aliases = [a.strip() for a in state["aliases"].split(",") if a.strip()]
count += 1
return count

utils/llm_client.py (new file, 1160 lines)

File diff suppressed because it is too large.

View File

@@ -1,289 +0,0 @@
import json
import logging
from openai import AsyncOpenAI
logger = logging.getLogger("bcs.llm")
SYSTEM_PROMPT = """You are a Discord chat moderator AI for a gaming server. You will be given a TARGET MESSAGE to analyze, plus recent channel context for background.
CRITICAL: Only score the TARGET MESSAGE. The context is ONLY for understanding tone and conversation flow. Do NOT score the context messages — they belong to other users and are already being analyzed separately.
CONTEXT — This is a friend group who use crude nicknames (e.g. "tits" is someone's nickname). A nickname alone is NOT toxic. However, you must still flag genuinely aggressive language.
SCORING GUIDE — Be precise with scores:
- 0.0-0.1: Completely harmless. Casual chat, jokes, "lmao", greetings, game talk, nicknames.
- 0.2-0.3: Mildly edgy. Light trash-talk that's clearly playful ("you suck at this game lol").
- 0.4-0.5: Moderate. Frustrated or heated language that could go either way. Insults without "lol/lmao" softening them.
- 0.6-0.7: Aggressive. Direct insults, hostility, telling someone off. Even among friends, "fuck you, you piece of shit" is aggressive language that should score here regardless of context.
- 0.8-1.0: Severely toxic. Threats, targeted harassment, telling someone to leave, attacking insecurities, sustained personal attacks.
IMPORTANT RULES:
- "Tits" as a nickname = 0.0, not toxic.
- Profanity ALONE (just "fuck" or "shit" with no target) = low score (0.0-0.1).
- Profanity DIRECTED AT someone ("fuck you", "you piece of shit") = moderate-to-high score (0.5-0.7) even among friends.
- Do NOT let friendly context excuse clearly aggressive language. Friends can still cross lines.
- If a message contains BOTH a nickname AND an insult ("fuck you tits you piece of shit"), score the insult, not the nickname.
- If the target message is just "lmao", "lol", an emoji, or a short neutral reaction, it is ALWAYS 0.0 regardless of what other people said before it.
Also determine if the message is on-topic (gaming, games, matches, strategy, LFG, etc.) or off-topic personal drama (relationship issues, personal feuds, venting about real-life problems, gossip about people outside the server).
Also assess the message's coherence — how well-formed, readable, and grammatically correct it is.
- 0.9-1.0: Clear, well-written, normal for this user
- 0.6-0.8: Some errors but still understandable (normal texting shortcuts like "u" and "ur" are fine — don't penalize those)
- 0.3-0.5: Noticeably degraded — garbled words, missing letters, broken sentences beyond normal shorthand
- 0.0-0.2: Nearly incoherent — can barely understand what they're trying to say
You may also be given NOTES about this user from prior interactions. Use these to calibrate your scoring — for example, if notes say "uses heavy profanity casually" then profanity alone should score lower for this user.
If you notice something noteworthy about this user's communication style, behavior, or patterns that would help future analysis, include it as a note_update. Only add genuinely useful observations — don't repeat what's already in the notes. If nothing new, leave note_update as null.
Use the report_analysis tool to report your analysis of the TARGET MESSAGE only."""
ANALYSIS_TOOL = {
"type": "function",
"function": {
"name": "report_analysis",
"description": "Report the toxicity and topic analysis of a Discord message.",
"parameters": {
"type": "object",
"properties": {
"toxicity_score": {
"type": "number",
"description": "Toxicity rating from 0.0 (completely harmless) to 1.0 (extremely toxic).",
},
"categories": {
"type": "array",
"items": {
"type": "string",
"enum": [
"aggressive",
"passive_aggressive",
"instigating",
"hostile",
"manipulative",
"none",
],
},
"description": "Detected toxicity behavior categories.",
},
"reasoning": {
"type": "string",
"description": "Brief explanation of the toxicity analysis.",
},
"off_topic": {
"type": "boolean",
"description": "True if the message is off-topic personal drama rather than gaming-related conversation.",
},
"topic_category": {
"type": "string",
"enum": [
"gaming",
"personal_drama",
"relationship_issues",
"real_life_venting",
"gossip",
"general_chat",
"meta",
],
"description": "What topic category the message falls into.",
},
"topic_reasoning": {
"type": "string",
"description": "Brief explanation of the topic classification.",
},
"coherence_score": {
"type": "number",
"description": "Coherence rating from 0.0 (incoherent gibberish) to 1.0 (clear and well-written). Normal texting shortcuts are fine.",
},
"coherence_flag": {
"type": "string",
"enum": [
"normal",
"intoxicated",
"tired",
"angry_typing",
"mobile_keyboard",
"language_barrier",
],
"description": "Best guess at why coherence is low, if applicable.",
},
"note_update": {
"type": ["string", "null"],
"description": "Brief new observation about this user's style/behavior for future reference, or null if nothing new.",
},
},
"required": ["toxicity_score", "categories", "reasoning", "off_topic", "topic_category", "topic_reasoning", "coherence_score", "coherence_flag"],
},
},
}
class LLMClient:
def __init__(self, base_url: str, model: str, api_key: str = "not-needed"):
self.model = model
self.host = base_url.rstrip("/")
self._client = AsyncOpenAI(
base_url=f"{self.host}/v1",
api_key=api_key,
timeout=300.0, # 5 min — first request loads model into VRAM
)
async def close(self):
await self._client.close()
async def analyze_message(
self, message: str, context: str = "", user_notes: str = ""
) -> dict | None:
user_content = f"=== CONTEXT (other users' recent messages, for background only) ===\n{context}\n\n"
if user_notes:
user_content += f"=== NOTES ABOUT THIS USER (from prior analysis) ===\n{user_notes}\n\n"
user_content += f"=== TARGET MESSAGE (analyze THIS message only) ===\n{message}"
try:
response = await self._client.chat.completions.create(
model=self.model,
messages=[
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": user_content},
],
tools=[ANALYSIS_TOOL],
tool_choice={"type": "function", "function": {"name": "report_analysis"}},
temperature=0.1,
)
choice = response.choices[0]
# Extract tool call arguments
if choice.message.tool_calls:
tool_call = choice.message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
return self._validate_result(args)
# Fallback: try parsing the message content as JSON
if choice.message.content:
return self._parse_content_fallback(choice.message.content)
logger.warning("No tool call or content in LLM response.")
return None
except Exception as e:
logger.error("LLM analysis error: %s", e)
return None
def _validate_result(self, result: dict) -> dict:
score = float(result.get("toxicity_score", 0.0))
result["toxicity_score"] = min(max(score, 0.0), 1.0)
if not isinstance(result.get("categories"), list):
result["categories"] = ["none"]
if not isinstance(result.get("reasoning"), str):
result["reasoning"] = ""
result["off_topic"] = bool(result.get("off_topic", False))
result.setdefault("topic_category", "general_chat")
result.setdefault("topic_reasoning", "")
coherence = float(result.get("coherence_score", 0.85))
result["coherence_score"] = min(max(coherence, 0.0), 1.0)
result.setdefault("coherence_flag", "normal")
result.setdefault("note_update", None)
return result
def _parse_content_fallback(self, text: str) -> dict | None:
"""Try to parse plain-text content as JSON if tool calling didn't work."""
import re
# Try direct JSON
try:
result = json.loads(text.strip())
return self._validate_result(result)
except (json.JSONDecodeError, ValueError):
pass
# Try extracting from code block
match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
if match:
try:
result = json.loads(match.group(1))
return self._validate_result(result)
except (json.JSONDecodeError, ValueError):
pass
# Regex fallback for toxicity_score
score_match = re.search(r'"toxicity_score"\s*:\s*([\d.]+)', text)
if score_match:
return {
"toxicity_score": min(max(float(score_match.group(1)), 0.0), 1.0),
"categories": ["unknown"],
"reasoning": "Parsed via fallback regex",
}
logger.warning("Could not parse LLM content fallback: %s", text[:200])
return None
async def chat(
self, messages: list[dict[str, str]], system_prompt: str
) -> str | None:
"""Send a conversational chat request (no tools)."""
try:
response = await self._client.chat.completions.create(
model=self.model,
messages=[
{"role": "system", "content": system_prompt},
*messages,
],
temperature=0.8,
max_tokens=300,
)
content = response.choices[0].message.content
return content.strip() if content else None
except Exception as e:
logger.error("LLM chat error: %s", e)
return None
async def raw_analyze(self, message: str, context: str = "", user_notes: str = "") -> tuple[str, dict | None]:
"""Return the raw LLM response string AND parsed result for /bcs-test (single LLM call)."""
user_content = f"=== CONTEXT (other users' recent messages, for background only) ===\n{context}\n\n"
if user_notes:
user_content += f"=== NOTES ABOUT THIS USER (from prior analysis) ===\n{user_notes}\n\n"
user_content += f"=== TARGET MESSAGE (analyze THIS message only) ===\n{message}"
try:
response = await self._client.chat.completions.create(
model=self.model,
messages=[
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": user_content},
],
tools=[ANALYSIS_TOOL],
tool_choice={"type": "function", "function": {"name": "report_analysis"}},
temperature=0.1,
)
choice = response.choices[0]
parts = []
parsed = None
if choice.message.content:
parts.append(f"Content: {choice.message.content}")
if choice.message.tool_calls:
for tc in choice.message.tool_calls:
parts.append(
f"Tool call: {tc.function.name}({tc.function.arguments})"
)
# Parse the first tool call
args = json.loads(choice.message.tool_calls[0].function.arguments)
parsed = self._validate_result(args)
elif choice.message.content:
parsed = self._parse_content_fallback(choice.message.content)
raw = "\n".join(parts) or "(empty response)"
return raw, parsed
except Exception as e:
return f"Error: {e}", None