Compare commits


52 Commits

Author SHA1 Message Date
aj f79de0ea04 feat: add unblock-nag detection and redirect
Keyword-based detection for users repeatedly asking to be unblocked in
chat. Fires an LLM-generated snarky redirect (with static fallback),
tracks per-user nag count with escalating sass, and respects a 30-min
cooldown. Configurable via config.yaml unblock_nag section.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:19:29 -04:00
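The per-user nag tracking with a 30-minute cooldown described in this commit could be sketched roughly as follows. The class and method names here are illustrative, not the bot's actual implementation; only the 30-minute cooldown and the escalating per-user count come from the commit message.

```python
import time

NAG_COOLDOWN_SECONDS = 30 * 60  # 30-minute cooldown, per the commit message


class NagTracker:
    """Per-user unblock-nag counter with cooldown (illustrative sketch)."""

    def __init__(self) -> None:
        self._counts: dict[int, int] = {}
        self._last_fired: dict[int, float] = {}

    def should_fire(self, user_id: int) -> bool:
        """Return True and bump the nag count, unless still in cooldown."""
        now = time.monotonic()
        last = self._last_fired.get(user_id)
        if last is not None and now - last < NAG_COOLDOWN_SECONDS:
            return False
        self._last_fired[user_id] = now
        self._counts[user_id] = self._counts.get(user_id, 0) + 1
        return True

    def nag_count(self, user_id: int) -> int:
        """Current count, usable to pick an escalating-sass template."""
        return self._counts.get(user_id, 0)
```

The count would feed template or prompt selection so repeated nagging earns progressively sassier redirects.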
aj 733b86b947 feat: add /bcs-pause command to toggle monitoring
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 15:28:56 -04:00
aj f7dfb7931a feat: add redirect channel to topic drift messages
Topic drift reminders and nudges now direct users to a specific
channel (configurable via redirect_channel). Both static templates
and LLM-generated redirects include the clickable channel mention.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 17:44:25 -05:00
aj a836584940 fix: skip game redirect when topic drift already handled
Changed if to elif so detected_game redirect only fires when
the topic_drift branch wasn't taken.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 17:44:21 -05:00
aj 9872c36b97 improve chat_personality prompt with better structure and guidance
- Fix metadata description to match actual code behavior (optional fields)
- Add texting cadence guidance (lowercase, fragments, casual punctuation)
- Add multi-user conversation handling, conversation exit, deflection, and
  genuine-upset guidance
- Expand examples from 3 to 7 covering varied response styles
- Organize into VOICE/ENGAGEMENT sections for clarity
- Trim over-explained AFTERTHOUGHTS section

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 19:23:31 -05:00
aj 53803d920f fix: sanitize note_updates before storing in sentiment pipeline
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 22:04:00 -05:00
aj b7076dffe2 fix: sanitize profile updates before storing in chat memory pipeline
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 22:03:59 -05:00
aj c5316b98d1 feat: add sanitize_notes() method to LLMClient
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 22:03:59 -05:00
aj f75a3ca3f4 fix: instruct LLM to never quote toxic content in note_updates
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 22:03:59 -05:00
aj 09f83f8c2f fix: move slutty prompt to personalities/ dir, match reply chance
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 10:11:46 -05:00
aj 20e4e7a985 feat: add slutty mode — flirty, thirsty, full of innuendos
New personality mode with 25% reply chance, very relaxed moderation
thresholds (0.85/0.90), suggestive but not explicit personality.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 10:11:21 -05:00
aj 72735c2497 fix: address review feedback for proactive reply logic
- Parse display names with ': ' split to handle colons in names
- Reset cooldown to half instead of subtract-3 to reduce LLM call frequency
- Remove redundant message.guild check

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:38:06 -05:00
aj 787b083e00 feat: add relevance-gated proactive replies
Replace random-only proactive reply logic with LLM relevance check.
The bot now evaluates recent conversation context and user memory
before deciding to jump in, then applies reply_chance as a second
gate. Bump reply_chance values higher since the relevance filter
prevents most irrelevant replies.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:34:53 -05:00
aj 175c7ad219 fix: clean ||| from chat history and handle afterthoughts in reaction replies
- Extract _split_afterthought helper method
- Store cleaned content (no |||) in chat history to prevent LLM reinforcement
- Handle afterthought splitting in reaction-reply path too
- Log main_reply instead of raw response

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:33:11 -05:00
aj 6866ca8adf feat: add afterthoughts, memory callbacks, and callback-worthy extraction
Add triple-pipe afterthought splitting to chat replies so the bot can
send a follow-up message 2-5 seconds later, mimicking natural Discord
typing behavior. Update all 6 personality prompts with afterthought
instructions (~1 in 5 replies) and memory callback guidance so the bot
actively references what it knows about users. Enhance memory extraction
prompt to flag bold claims, contradictions, and embarrassing moments as
high-importance callback-worthy memories with a "callback" topic tag.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:30:16 -05:00
aj 97e5738a2f fix: address review feedback for ReactionCog
- Use time.monotonic() at reaction time instead of stale message-receive timestamp
- Add excluded_channels config and filtering
- Truncate message content to 500 chars in pick_reaction

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:28:20 -05:00
aj a8e8b63f5e feat: add ReactionCog for ambient emoji reactions
Add a new cog that gives the bot ambient presence by reacting to
messages with contextual emoji chosen by the triage LLM. Includes
RNG gating and per-channel cooldown to keep reactions sparse and
natural.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:25:17 -05:00
aj 5c84c8840b fix: use emoji allowlist instead of length check in pick_reaction
Prevents text words like "skull" from passing the filter and causing
Discord HTTPException noise.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:24:28 -05:00
aj 661c252bf7 feat: add pick_reaction method to LLMClient
Lightweight LLM call that picks a contextual emoji reaction for a
Discord message. Uses temperature 0.9 for variety, max 16 tokens,
and validates the response is a short emoji token or returns None.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:22:08 -05:00
aj 2ec9b16b99 fix: address multiple bugs found in code review
- Fix dirty-user flush race: discard IDs individually after successful save
- Escape LIKE wildcards in LLM-generated topic keywords for DB queries
- Anonymize absent-member aliases to prevent LLM de-anonymization
- Pass correct MIME type to vision model based on image file extension
- Use enumerate instead of list.index() in bcs-scan loop
- Allow bot @mentions with non-report intent to fall through to moderation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 01:16:38 -05:00
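The LIKE-wildcard escaping mentioned in the second bullet can be sketched as below. This is a generic illustration, not the bot's actual helper; the `[` character is included because the schema's NVARCHAR columns suggest SQL Server, where `[...]` is also a LIKE metacharacter.

```python
def escape_like(keyword: str, escape: str = "\\") -> str:
    """Escape SQL LIKE wildcards in untrusted (LLM-generated) keywords.

    Unescaped, a keyword like "100%" matches far more rows than
    intended. The query must pair this with an ESCAPE clause, e.g.
    ... WHERE Topics LIKE ? ESCAPE '\\'
    """
    # Escape the escape character first so we don't double-escape.
    for ch in (escape, "%", "_", "["):
        keyword = keyword.replace(ch, escape + ch)
    return keyword
```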
aj eb7eb81621 feat: add warning expiration and exclude moderated messages from context
Warning flag now auto-expires after a configurable duration
(warning_expiration_minutes, default 30m). After expiry, the user must
be re-warned before a mute can be issued.

Messages that triggered moderation actions (warnings/mutes) are now
excluded from the LLM context window in both buffered analysis and
mention scans, preventing already-actioned content from influencing
future scoring. Uses in-memory tracking plus bot reaction fallback
for post-restart coverage.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 13:39:49 -05:00
aj 36df4cf5a6 chore: add .claude/ to .gitignore
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 22:16:18 -05:00
aj bf32a9536a feat: add server rule violation detection and compress prompts
- LLM now evaluates messages against numbered server rules and reports
  violated_rules in analysis output
- Warnings and mutes cite the specific rule(s) broken
- Rules extracted to prompts/rules.txt for prompt injection
- Personality prompts moved to prompts/personalities/ and compressed
  (~63% reduction across all prompt files)
- All prompt files tightened: removed redundancy, consolidated Do NOT
  sections, trimmed examples while preserving behavioral instructions

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 22:14:35 -05:00
aj ed51db527c fix: stop bot from starting every message with "Oh,"
Removed "Oh," from example lines that the model was mimicking, added
explicit DO NOT rule against "Oh" openers, and added more varied examples.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 20:45:16 -05:00
aj bf5051dfc1 fix: steer default chat personality away from southern aunt tone
The LLM was interpreting "sassy hall monitor" as warm/motherly with pet
names like "oh sweetheart" and "bless your heart". Added explicit guidance
for deadpan, dry Discord mod energy instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 17:25:06 -05:00
aj cf88638603 fix: add guild-specific command sync for instant slash command propagation
Global sync can take up to an hour to propagate. Now also syncs commands
per-guild in on_ready for immediate availability.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 16:11:46 -05:00
aj 1d653ec216 feat: add /drama-leaderboard command with historical composite scoring
Queries Messages, AnalysisResults, and Actions tables to rank users by a
composite drama score (weighted avg toxicity, peak toxicity, and action rate).
Public command with configurable time period (7d/30d/90d/all-time).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 16:08:39 -05:00
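A composite score of the three signals named above could look like this. The weights are assumed for illustration; the actual split used by the command isn't shown in this range.

```python
def drama_score(avg_toxicity: float, peak_toxicity: float, action_rate: float,
                weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Weighted composite of avg toxicity, peak toxicity, and action rate.

    All inputs are assumed to be normalized to 0.0-1.0; the (0.5, 0.3,
    0.2) weights are a hypothetical split, not the bot's real values.
    """
    w_avg, w_peak, w_rate = weights
    return w_avg * avg_toxicity + w_peak * peak_toxicity + w_rate * action_rate
```

Ranking users is then just sorting the per-user scores descending within the chosen time period.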
aj 0ff962c95e feat: generate topic drift redirects via LLM with full conversation context
Replace static random templates with LLM-generated redirect messages that
reference what the user actually said and why it's off-topic. Sass escalates
with higher strike counts. Falls back to static templates if LLM fails or
use_llm is disabled in config.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 15:28:36 -05:00
aj 2525216828 fix: deduplicate memories on save with exact-match check
Prevents inserting a memory if an identical one already exists for the
user. Also cleaned up 30 anonymized and 4 duplicate memories from DB.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:53:52 -05:00
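The exact-match guard amounts to checking existing memory texts before inserting. A sketch with an in-memory stand-in for the bot's real DB layer, which isn't shown here; all names are hypothetical:

```python
import asyncio


class FakeMemoryDB:
    """Minimal in-memory stand-in for the bot's real DB layer."""

    def __init__(self) -> None:
        self.rows: list[tuple[int, str]] = []

    async def get_memory_texts(self, user_id: int) -> set[str]:
        return {m for uid, m in self.rows if uid == user_id}

    async def insert_memory(self, user_id: int, memory: str) -> None:
        self.rows.append((user_id, memory))


async def save_memory_if_new(db, user_id: int, memory: str) -> bool:
    """Insert a memory only if an identical one doesn't already exist."""
    if memory in await db.get_memory_texts(user_id):
        return False
    await db.insert_memory(user_id, memory)
    return True
```

An exact-match check only stops verbatim duplicates; near-duplicate phrasings would still accumulate, which the cleanup in this commit handled manually.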
aj 3b2de80cac fix: de-anonymize User1/User2 references in notes and reasoning text
The LLM returns note_update, reasoning, and worst_message with
anonymized names. These are now replaced with real display names
before storage, so user profiles no longer contain meaningless
User1/User2 references.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:51:30 -05:00
aj 88536b4dca chore: remove wordle cog
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:48:44 -05:00
aj 33d56f8737 feat: move user aliases from config to DB with /bcs-alias command
Aliases now stored in UserState table instead of config.yaml. Adds
Aliases column (NVARCHAR 500), loads on startup, persists via flush.
New /bcs-alias slash command (view/set/clear) for managing nicknames.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:35:19 -05:00
aj ad1234ec99 feat: add user alias mapping for jealousy detection context
Adds user_aliases config section mapping Discord IDs to known nicknames.
Aliases are anonymized and injected into LLM analysis context so it can
recognize when someone name-drops another member (even absent ones).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:22:57 -05:00
aj a73d2505d9 feat: add jealousy/possessiveness detection as toxicity category
LLM can now flag possessive name-dropping, territorial behavior, and
jealousy signals when users mention others not in the conversation.
Scores feed into existing drama pipeline for warnings/mutes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:07:45 -05:00
aj 0449c8c30d feat: give bot full conversation context on @mentions for real engagement
When @mentioned, fetch recent messages from ALL users in the channel
(up to 15 messages) instead of only the mentioner's messages. This lets
the bot understand debates and discussions it's asked to weigh in on.

Also update the personality prompt to engage with topics substantively
when asked for opinions, rather than deflecting with generic jokes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 14:14:46 -05:00
aj 3d252ee729 feat: classify mention intent before running expensive scan
Adds LLM triage on bot @mentions to determine if the user is chatting
or reporting bad behavior. Only 'report' intents trigger the 30-message
scan; 'chat' intents skip the scan and let ChatCog handle it.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 13:20:54 -05:00
aj b918ba51a8 fix: use escalation model and fallback to permanent memories in migration
- Use LLM_ESCALATION_* env vars for better profile generation
- Fall back to joining permanent memories if profile_update is null

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 13:14:38 -05:00
aj efe7f901c2 Merge branch 'worktree-agent-a27a0179' 2026-02-26 13:04:25 -05:00
aj ca17b6ac61 Merge branch 'worktree-agent-a0b1ccc2' 2026-02-26 13:04:24 -05:00
aj 8a092c720f Merge branch 'worktree-agent-a78eaee3' 2026-02-26 13:04:18 -05:00
aj 365907a7a0 feat: extract and save memories after chat conversations
Merge worktree: adds _extract_and_save_memories() method and fire-and-forget
extraction call after each chat reply. Combined with Task 4's memory
retrieval and injection.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 13:04:12 -05:00
aj e488b2b227 feat: extract and save memories after chat conversations
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 13:02:42 -05:00
aj 7ca369b641 feat: add one-time migration script for user notes to profiles
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:59:03 -05:00
aj 305c9bf113 feat: route sentiment note_updates into memory system
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:58:14 -05:00
aj 2054ca7b24 feat: add background memory pruning task
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:58:12 -05:00
aj d61e85d928 feat: inject persistent memory context into chat responses
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:56:02 -05:00
aj 89fabd85da feat: add set_user_profile method to DramaTracker
Replaces the entire notes field with an LLM-generated profile summary,
used by the memory extraction system for permanent facts.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:54:05 -05:00
aj 67011535cd feat: add memory extraction LLM tool and prompt
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:53:18 -05:00
aj 8686f4fdd6 fix: align default limits and parameter names to spec
- get_recent_memories: limit default 10 → 5
- get_memories_by_topics: limit default 10 → 5
- prune_excess_memories: rename 'cap' → 'max_memories'

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:50:47 -05:00
aj 75adafefd6 feat: add UserMemory table and CRUD methods for conversational memory
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:48:54 -05:00
aj 333fbb3932 docs: add conversational memory implementation plan
9-task step-by-step plan covering DB schema, LLM extraction tool, memory
retrieval/injection in chat, sentiment pipeline routing, background pruning,
and migration script.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:44:18 -05:00
aj d652c32063 docs: add conversational memory design document
Outlines persistent memory system for making the bot a real conversational
participant that knows people and remembers past interactions. Uses existing
UserNotes column for permanent profiles and a new UserMemory table for
expiring context with LLM-assigned lifetimes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 12:41:28 -05:00
40 changed files with 3440 additions and 570 deletions
+1
@@ -3,3 +3,4 @@ __pycache__/
*.pyc
logs/
.venv/
.claude/
+1 -1
@@ -55,7 +55,7 @@ LLM calls use OpenAI tool-calling for structured output (`ANALYSIS_TOOL`, `CONVE
- **`cogs/sentiment.py` (SentimentCog)**: Core moderation engine. Listens to all messages, debounces per-channel (batches messages within `batch_window_seconds`), runs triage → escalation analysis, issues warnings/mutes. Also handles mention-triggered conversation scans and game channel redirects. Flushes dirty user states to DB every 5 minutes.
- **`cogs/chat.py` (ChatCog)**: Conversational AI. Responds to @mentions, replies to bot messages, proactive replies based on mode config. Handles image roasts via vision model. Strips leaked LLM metadata brackets from responses.
- **`cogs/commands.py` (CommandsCog)**: Slash commands — `/dramareport`, `/dramascore`, `/bcs-status`, `/bcs-threshold`, `/bcs-reset`, `/bcs-immune`, `/bcs-history`, `/bcs-scan`, `/bcs-test`, `/bcs-notes`, `/bcs-mode`.
- **`cogs/wordle.py` (WordleCog)**: Watches for Wordle bot messages and generates fun commentary on results.
### Key Utilities
+30 -2
@@ -112,6 +112,7 @@ class BCSBot(commands.Bot):
window_size=sentiment.get("rolling_window_size", 10),
window_minutes=sentiment.get("rolling_window_minutes", 15),
offense_reset_minutes=timeouts.get("offense_reset_minutes", 120),
warning_expiration_minutes=timeouts.get("warning_expiration_minutes", 30),
)
def get_mode_config(self) -> dict:
@@ -138,9 +139,11 @@ class BCSBot(commands.Bot):
await self.load_extension("cogs.sentiment")
await self.load_extension("cogs.commands")
await self.load_extension("cogs.chat")
await self.load_extension("cogs.wordle")
+ await self.load_extension("cogs.reactions")
+ # Global sync as fallback; guild-specific sync happens in on_ready
  await self.tree.sync()
- logger.info("Slash commands synced.")
+ logger.info("Slash commands synced (global).")
# Quick connectivity check
try:
@@ -165,6 +168,15 @@ class BCSBot(commands.Bot):
async def on_ready(self):
logger.info("Logged in as %s (ID: %d)", self.user, self.user.id)
# Guild-specific command sync for instant propagation
for guild in self.guilds:
try:
self.tree.copy_global_to(guild=guild)
await self.tree.sync(guild=guild)
logger.info("Slash commands synced to guild %s.", guild.name)
except Exception:
logger.exception("Failed to sync commands to guild %s", guild.name)
# Set status based on active mode
mode_config = self.get_mode_config()
status_text = mode_config.get("description") or self.config.get("bot", {}).get(
@@ -209,6 +221,22 @@ class BCSBot(commands.Bot):
", ".join(missing),
)
# Start memory pruning background task
if not hasattr(self, "_memory_prune_task") or self._memory_prune_task.done():
self._memory_prune_task = asyncio.create_task(self._prune_memories_loop())
async def _prune_memories_loop(self):
"""Background task that prunes expired memories every 6 hours."""
await self.wait_until_ready()
while not self.is_closed():
try:
count = await self.db.prune_expired_memories()
if count > 0:
logger.info("Pruned %d expired memories.", count)
except Exception:
logger.exception("Memory pruning error")
await asyncio.sleep(6 * 3600) # Every 6 hours
async def close(self):
await self.db.close()
await self.llm.close()
+254 -23
@@ -3,6 +3,7 @@ import logging
import random
import re
from collections import deque
from datetime import datetime, timedelta, timezone
from pathlib import Path
import discord
@@ -25,20 +26,165 @@ def _load_prompt(filename: str) -> str:
return _prompt_cache[filename]
_TOPIC_KEYWORDS = {
"gta", "warzone", "cod", "battlefield", "fortnite", "apex", "valorant",
"minecraft", "roblox", "league", "dota", "overwatch", "destiny", "halo",
"work", "job", "school", "college", "girlfriend", "boyfriend", "wife",
"husband", "dog", "cat", "pet", "car", "music", "movie", "food",
}
_GENERIC_CHANNELS = {"general", "off-topic", "memes"}
def _extract_topic_keywords(text: str, channel_name: str) -> list[str]:
"""Extract topic keywords from message text and channel name."""
words = set(text.lower().split()) & _TOPIC_KEYWORDS
if channel_name.lower() not in _GENERIC_CHANNELS:
words.add(channel_name.lower())
return list(words)[:5]
def _format_relative_time(dt: datetime) -> str:
"""Return a human-readable relative time string."""
now = datetime.now(timezone.utc)
# Ensure dt is timezone-aware
if dt.tzinfo is None:
dt = dt.replace(tzinfo=timezone.utc)
delta = now - dt
seconds = int(delta.total_seconds())
if seconds < 60:
return "just now"
minutes = seconds // 60
if minutes < 60:
return f"{minutes}m ago"
hours = minutes // 60
if hours < 24:
return f"{hours}h ago"
days = hours // 24
if days == 1:
return "yesterday"
if days < 7:
return f"{days} days ago"
weeks = days // 7
if weeks < 5:
return f"{weeks}w ago"
months = days // 30
return f"{months}mo ago"
class ChatCog(commands.Cog):
@staticmethod
def _split_afterthought(response: str) -> tuple[str, str | None]:
"""Split a response on ||| into (main_reply, afterthought)."""
if "|||" not in response:
return response, None
parts = response.split("|||", 1)
main = parts[0].strip()
after = parts[1].strip() or None
if not main:
return response, None
return main, after
def __init__(self, bot: commands.Bot):
self.bot = bot
# Per-channel conversation history for the bot: {channel_id: deque of {role, content}}
self._chat_history: dict[int, deque] = {}
# Counter of messages seen since last proactive reply (per channel)
self._messages_since_reply: dict[int, int] = {}
# Users whose profile has been updated and needs DB flush
self._dirty_users: set[int] = set()
def _get_active_prompt(self) -> str:
"""Load the chat prompt for the current mode."""
mode_config = self.bot.get_mode_config()
- prompt_file = mode_config.get("prompt_file", "chat_personality.txt")
+ prompt_file = mode_config.get("prompt_file", "personalities/chat_personality.txt")
return _load_prompt(prompt_file)
async def _build_memory_context(self, user_id: int, message_text: str, channel_name: str) -> str:
"""Build a layered memory context block for the chat prompt."""
lines = []
# Layer 1: Profile (always)
profile = self.bot.drama_tracker.get_user_notes(user_id)
if profile:
lines.append(f"Profile: {profile}")
# Layer 2: Recent memories (last 5)
recent_memories = await self.bot.db.get_recent_memories(user_id, limit=5)
if recent_memories:
parts = []
for mem in recent_memories:
time_str = _format_relative_time(mem["created_at"])
parts.append(f"{mem['memory']} ({time_str})")
lines.append("Recent: " + " | ".join(parts))
# Layer 3: Topic-matched memories (deduplicated against recent)
keywords = _extract_topic_keywords(message_text, channel_name)
if keywords:
topic_memories = await self.bot.db.get_memories_by_topics(user_id, keywords, limit=5)
# Deduplicate against recent memories
recent_texts = {mem["memory"] for mem in recent_memories} if recent_memories else set()
unique_topic = [mem for mem in topic_memories if mem["memory"] not in recent_texts]
if unique_topic:
parts = []
for mem in unique_topic:
time_str = _format_relative_time(mem["created_at"])
parts.append(f"{mem['memory']} ({time_str})")
lines.append("Relevant: " + " | ".join(parts))
if not lines:
return ""
return "[What you know about this person:]\n" + "\n".join(lines)
async def _extract_and_save_memories(
self, user_id: int, username: str, conversation: list[dict[str, str]],
) -> None:
"""Background task: extract memories from conversation and save them."""
try:
current_profile = self.bot.drama_tracker.get_user_notes(user_id)
result = await self.bot.llm.extract_memories(
conversation, username, current_profile,
)
if not result:
return
# Save expiring memories
for mem in result.get("memories", []):
if mem["expiration"] == "permanent":
continue # permanent facts go into profile_update
exp_days = {"1d": 1, "3d": 3, "7d": 7, "30d": 30}
days = exp_days.get(mem["expiration"], 7)
expires_at = datetime.now(timezone.utc) + timedelta(days=days)
await self.bot.db.save_memory(
user_id=user_id,
memory=mem["memory"],
topics=",".join(mem["topics"]),
importance=mem["importance"],
expires_at=expires_at,
source="chat",
)
# Prune if over cap
await self.bot.db.prune_excess_memories(user_id)
# Update profile if warranted
profile_update = result.get("profile_update")
if profile_update:
# Sanitize before storing — strips any quoted toxic language
profile_update = await self.bot.llm.sanitize_notes(profile_update)
self.bot.drama_tracker.set_user_profile(user_id, profile_update)
self._dirty_users.add(user_id)
logger.info(
"Extracted %d memories for %s (profile_update=%s)",
len(result.get("memories", [])),
username,
bool(profile_update),
)
except Exception:
logger.exception("Failed to extract memories for %s", username)
@commands.Cog.listener()
async def on_message(self, message: discord.Message):
if message.author.bot:
@@ -82,16 +228,56 @@ class ChatCog(commands.Cog):
ch_id = message.channel.id
self._messages_since_reply[ch_id] = self._messages_since_reply.get(ch_id, 0) + 1
cooldown = self.bot.config.get("modes", {}).get("proactive_cooldown_messages", 5)
reply_chance = mode_config.get("reply_chance", 0.0)
if (
self._messages_since_reply[ch_id] >= cooldown
and reply_chance > 0
and random.random() < reply_chance
and message.content and message.content.strip()
):
should_reply = True
is_proactive = True
# Gather recent messages for relevance check
recent_for_check = []
try:
async for msg in message.channel.history(limit=5, before=message):
if msg.content and msg.content.strip() and not msg.author.bot:
recent_for_check.append(
f"{msg.author.display_name}: {msg.content[:200]}"
)
except discord.HTTPException:
pass
recent_for_check.reverse()
recent_for_check.append(
f"{message.author.display_name}: {message.content[:200]}"
)
# Build memory context for users in recent messages
memory_parts = []
seen_users = set()
for line in recent_for_check:
name = line.split(": ", 1)[0]
if name not in seen_users:
seen_users.add(name)
member = discord.utils.find(
lambda m, n=name: m.display_name == n,
message.guild.members,
)
if member:
profile = self.bot.drama_tracker.get_user_notes(member.id)
if profile:
memory_parts.append(f"{name}: {profile}")
memory_ctx = "\n".join(memory_parts) if memory_parts else ""
is_relevant = await self.bot.llm.check_reply_relevance(
recent_for_check, memory_ctx,
)
if is_relevant:
reply_chance = mode_config.get("reply_chance", 0.0)
if reply_chance > 0 and random.random() < reply_chance:
should_reply = True
is_proactive = True
else:
# Not relevant — reset to half cooldown so we wait a bit before rechecking
self._messages_since_reply[ch_id] = cooldown // 2
if not should_reply:
return
@@ -142,11 +328,14 @@ class ChatCog(commands.Cog):
image_attachment.filename,
user_text[:80],
)
ext = image_attachment.filename.rsplit(".", 1)[-1].lower() if "." in image_attachment.filename else "png"
mime = f"image/{'jpeg' if ext == 'jpg' else ext}"
response = await self.bot.llm_heavy.analyze_image(
image_bytes,
IMAGE_ROAST,
user_text=user_text,
on_first_token=start_typing,
media_type=mime,
)
else:
# --- Text-only path: normal chat ---
@@ -176,28 +365,46 @@ class ChatCog(commands.Cog):
context_parts.append(f"{user_data.offense_count} offense(s)")
score_context = f"[Server context: {message.author.display_name}{', '.join(context_parts)}]"
- # Gather user notes and recent messages for richer context
+ # Gather memory context and recent messages for richer context
  extra_context = ""
- user_notes = self.bot.drama_tracker.get_user_notes(message.author.id)
- if user_notes:
-     extra_context += f"[Notes about {message.author.display_name}: {user_notes}]\n"
+ memory_context = await self._build_memory_context(
+     message.author.id, content, message.channel.name,
+ )
+ if memory_context:
+     extra_context += memory_context + "\n"
# Include mention scan findings if available
if scan_summary:
extra_context += f"[You just scanned recent chat. Results: {scan_summary}]\n"
- recent_user_msgs = []
+ # When @mentioned, fetch recent channel conversation (all users)
+ # so the bot has full context of what's being discussed.
+ # For proactive/reply-to-bot, just fetch the mentioner's messages.
+ recent_msgs = []
+ fetch_all_users = self.bot.user in message.mentions
try:
async for msg in message.channel.history(limit=50, before=message):
- if msg.author.id == message.author.id and msg.content and msg.content.strip():
-     recent_user_msgs.append(msg.content[:200])
-     if len(recent_user_msgs) >= 10:
-         break
+ if not msg.content or not msg.content.strip():
+     continue
+ if msg.author.bot:
+     # Include bot's own replies for conversational continuity
+     if msg.author.id == self.bot.user.id:
+         recent_msgs.append((msg.author.display_name, msg.content[:200]))
+         if len(recent_msgs) >= 15:
+             break
+     continue
+ if fetch_all_users or msg.author.id == message.author.id:
+     recent_msgs.append((msg.author.display_name, msg.content[:200]))
+     if len(recent_msgs) >= 15:
+         break
except discord.HTTPException:
pass
- if recent_user_msgs:
-     recent_lines = "\n".join(f"- {m}" for m in reversed(recent_user_msgs))
-     extra_context += f"[{message.author.display_name}'s recent messages:\n{recent_lines}]\n"
+ if recent_msgs:
+     recent_lines = "\n".join(
+         f"- {name}: {text}" for name, text in reversed(recent_msgs)
+     )
+     label = "Recent conversation" if fetch_all_users else f"{message.author.display_name}'s recent messages"
+     extra_context += f"[{label}:\n{recent_lines}]\n"
self._chat_history[ch_id].append(
{"role": "user", "content": f"{score_context}\n{extra_context}{reply_context}{message.author.display_name}: {content}"}
@@ -243,9 +450,14 @@ class ChatCog(commands.Cog):
logger.warning("LLM returned no response for %s in #%s", message.author, message.channel.name)
return
# Split afterthoughts (triple-pipe delimiter)
main_reply, afterthought = self._split_afterthought(response)
# Store cleaned content in history (no ||| delimiter)
if not image_attachment:
clean_for_history = f"{main_reply}\n{afterthought}" if afterthought else main_reply
self._chat_history[ch_id].append(
- {"role": "assistant", "content": response}
+ {"role": "assistant", "content": clean_for_history}
)
# Reset proactive cooldown counter for this channel
@@ -263,7 +475,19 @@ class ChatCog(commands.Cog):
except (asyncio.TimeoutError, asyncio.CancelledError):
pass
- await message.reply(response, mention_author=False)
+ await message.reply(main_reply, mention_author=False)
+ if afterthought:
+     await asyncio.sleep(random.uniform(2.0, 5.0))
+     await message.channel.send(afterthought)
+ # Fire-and-forget memory extraction
+ if not image_attachment:
+     asyncio.create_task(self._extract_and_save_memories(
+         message.author.id,
+         message.author.display_name,
+         list(self._chat_history[ch_id]),
+     ))
reply_type = "proactive" if is_proactive else "chat"
logger.info(
@@ -271,7 +495,7 @@ class ChatCog(commands.Cog):
reply_type.capitalize(),
message.channel.name,
message.author.display_name,
- response[:100],
+ main_reply[:100],
)
@@ -343,15 +567,22 @@ class ChatCog(commands.Cog):
if not response:
return
- self._chat_history[ch_id].append({"role": "assistant", "content": response})
- await channel.send(response)
+ main_reply, afterthought = self._split_afterthought(response)
+ clean_for_history = f"{main_reply}\n{afterthought}" if afterthought else main_reply
+ self._chat_history[ch_id].append({"role": "assistant", "content": clean_for_history})
+ await channel.send(main_reply)
+ if afterthought:
+     await asyncio.sleep(random.uniform(2.0, 5.0))
+     await channel.send(afterthought)
logger.info(
"Reaction reply in #%s to %s (%s): %s",
channel.name,
member.display_name,
emoji,
- response[:100],
+ main_reply[:100],
)
+175 -2
@@ -161,6 +161,31 @@ class CommandsCog(commands.Cog):
await interaction.response.send_message(embed=embed, ephemeral=True)
@app_commands.command(
name="bcs-pause",
description="Pause or resume bot monitoring. (Admin only)",
)
@app_commands.default_permissions(administrator=True)
async def bcs_pause(self, interaction: discord.Interaction):
if not self._is_admin(interaction):
await interaction.response.send_message(
"Admin only.", ephemeral=True
)
return
monitoring = self.bot.config.setdefault("monitoring", {})
currently_enabled = monitoring.get("enabled", True)
monitoring["enabled"] = not currently_enabled
if monitoring["enabled"]:
await interaction.response.send_message(
"Monitoring **resumed**.", ephemeral=True
)
else:
await interaction.response.send_message(
"Monitoring **paused**.", ephemeral=True
)
@app_commands.command(
name="bcs-threshold",
description="Adjust warning and mute thresholds. (Admin only)",
@@ -250,6 +275,7 @@ class CommandsCog(commands.Cog):
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
aliases=",".join(user_data.aliases) if user_data.aliases else None,
))
status = "now immune" if is_immune else "no longer immune"
await interaction.response.send_message(
@@ -319,9 +345,8 @@ class CommandsCog(commands.Cog):
f"Scanning {len(messages)} messages... (first request may be slow while model loads)"
)
-for msg in messages:
+for idx, msg in enumerate(messages):
# Build context from the messages before this one
-idx = messages.index(msg)
ctx_msgs = messages[max(0, idx - 3):idx]
context = (
" | ".join(f"{m.author.display_name}: {m.content}" for m in ctx_msgs)
@@ -501,6 +526,7 @@ class CommandsCog(commands.Cog):
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
aliases=",".join(user_data.aliases) if user_data.aliases else None,
))
await interaction.response.send_message(
f"Note added for {user.display_name}.", ephemeral=True
@@ -516,11 +542,86 @@ class CommandsCog(commands.Cog):
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=None,
aliases=",".join(user_data.aliases) if user_data.aliases else None,
))
await interaction.response.send_message(
f"Notes cleared for {user.display_name}.", ephemeral=True
)
@app_commands.command(
name="bcs-alias",
description="Manage nicknames/aliases for a user. (Admin only)",
)
@app_commands.default_permissions(administrator=True)
@app_commands.describe(
action="What to do with aliases",
user="The user whose aliases to manage",
text="Comma-separated aliases (only used with 'set')",
)
@app_commands.choices(action=[
app_commands.Choice(name="view", value="view"),
app_commands.Choice(name="set", value="set"),
app_commands.Choice(name="clear", value="clear"),
])
async def bcs_alias(
self,
interaction: discord.Interaction,
action: app_commands.Choice[str],
user: discord.Member,
text: str | None = None,
):
if not self._is_admin(interaction):
await interaction.response.send_message("Admin only.", ephemeral=True)
return
if action.value == "view":
aliases = self.bot.drama_tracker.get_user_aliases(user.id)
desc = ", ".join(aliases) if aliases else "_No aliases set._"
embed = discord.Embed(
title=f"Aliases: {user.display_name}",
description=desc,
color=discord.Color.blue(),
)
await interaction.response.send_message(embed=embed, ephemeral=True)
elif action.value == "set":
if not text:
await interaction.response.send_message(
"Provide `text` with comma-separated aliases (e.g. `Glam, G`).", ephemeral=True
)
return
aliases = [a.strip() for a in text.split(",") if a.strip()]
self.bot.drama_tracker.set_user_aliases(user.id, aliases)
user_data = self.bot.drama_tracker.get_user(user.id)
asyncio.create_task(self.bot.db.save_user_state(
user_id=user.id,
offense_count=user_data.offense_count,
immune=user_data.immune,
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
aliases=",".join(aliases),
))
await interaction.response.send_message(
f"Aliases for {user.display_name} set to: {', '.join(aliases)}", ephemeral=True
)
elif action.value == "clear":
self.bot.drama_tracker.set_user_aliases(user.id, [])
user_data = self.bot.drama_tracker.get_user(user.id)
asyncio.create_task(self.bot.db.save_user_state(
user_id=user.id,
offense_count=user_data.offense_count,
immune=user_data.immune,
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
aliases=None,
))
await interaction.response.send_message(
f"Aliases cleared for {user.display_name}.", ephemeral=True
)
@app_commands.command(
name="bcs-mode",
description="Switch the bot's personality mode.",
@@ -592,6 +693,78 @@ class CommandsCog(commands.Cog):
old_mode, mode, interaction.user.display_name,
)
@app_commands.command(
name="drama-leaderboard",
description="Show the all-time drama leaderboard for the server.",
)
@app_commands.describe(period="Time period to rank (default: 30d)")
@app_commands.choices(period=[
app_commands.Choice(name="Last 7 days", value="7d"),
app_commands.Choice(name="Last 30 days", value="30d"),
app_commands.Choice(name="Last 90 days", value="90d"),
app_commands.Choice(name="All time", value="all"),
])
async def drama_leaderboard(
self, interaction: discord.Interaction, period: app_commands.Choice[str] | None = None,
):
await interaction.response.defer()
period_val = period.value if period else "30d"
if period_val == "all":
days = None
period_label = "All Time"
else:
days = int(period_val.rstrip("d"))
period_label = f"Last {days} Days"
rows = await self.bot.db.get_drama_leaderboard(interaction.guild.id, days)
if not rows:
await interaction.followup.send(
f"No drama data for **{period_label}**. Everyone's been suspiciously well-behaved."
)
return
# Compute composite score for each user
scored = []
for r in rows:
avg_tox = r["avg_toxicity"]
max_tox = r["max_toxicity"]
msg_count = r["messages_analyzed"]
action_weight = r["warnings"] + r["mutes"] * 2 + r["off_topic"] * 0.5
action_rate = min(1.0, action_weight / msg_count * 10) if msg_count > 0 else 0.0
composite = avg_tox * 0.4 + max_tox * 0.2 + action_rate * 0.4
scored.append({**r, "composite": composite, "action_rate": action_rate})
scored.sort(key=lambda x: x["composite"], reverse=True)
top = scored[:10]
medals = ["🥇", "🥈", "🥉"]
lines = []
for i, entry in enumerate(top):
rank = medals[i] if i < 3 else f"`{i + 1}.`"
# Resolve display name from guild if possible
member = interaction.guild.get_member(entry["user_id"])
name = member.display_name if member else entry["username"]
lines.append(
f"{rank} **{entry['composite']:.2f}** — {name}\n"
f" Avg: {entry['avg_toxicity']:.2f} | "
f"Peak: {entry['max_toxicity']:.2f} | "
f"⚠️ {entry['warnings']} | "
f"🔇 {entry['mutes']} | "
f"📢 {entry['off_topic']}"
)
embed = discord.Embed(
title=f"Drama Leaderboard — {period_label}",
description="\n".join(lines),
color=discord.Color.orange(),
)
embed.set_footer(text=f"{len(rows)} users tracked | {sum(r['messages_analyzed'] for r in rows)} messages analyzed")
await interaction.followup.send(embed=embed)
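The composite ranking used by `drama-leaderboard` can be factored into a small pure function for illustration (the name `composite_score` is hypothetical; the cog computes this inline):

```python
def composite_score(
    avg_toxicity: float,
    max_toxicity: float,
    warnings: int,
    mutes: int,
    off_topic: int,
    messages_analyzed: int,
) -> float:
    """Blend average toxicity, peak toxicity, and moderation-action rate.

    Mirrors the leaderboard weighting: mutes count double, off-topic
    actions count half, and the action rate is scaled and capped at 1.0.
    """
    action_weight = warnings + mutes * 2 + off_topic * 0.5
    action_rate = (
        min(1.0, action_weight / messages_analyzed * 10)
        if messages_analyzed > 0
        else 0.0
    )
    return avg_toxicity * 0.4 + max_toxicity * 0.2 + action_rate * 0.4
```

Because the action rate is capped, a user with many mutes but consistently low toxicity scores tops out at an action contribution of 0.4.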
@bcs_mode.autocomplete("mode")
async def _mode_autocomplete(
self, interaction: discord.Interaction, current: str,
+76
@@ -0,0 +1,76 @@
import asyncio
import logging
import random
import time
import discord
from discord.ext import commands
logger = logging.getLogger("bcs.reactions")
class ReactionCog(commands.Cog):
def __init__(self, bot: commands.Bot):
self.bot = bot
# Per-channel timestamp of last reaction
self._last_reaction: dict[int, float] = {}
@commands.Cog.listener()
async def on_message(self, message: discord.Message):
if message.author.bot or not message.guild:
return
cfg = self.bot.config.get("reactions", {})
if not cfg.get("enabled", False):
return
# Skip empty messages
if not message.content or not message.content.strip():
return
# Channel exclusion
excluded = cfg.get("excluded_channels", [])
if excluded:
ch_name = getattr(message.channel, "name", "")
if message.channel.id in excluded or ch_name in excluded:
return
# RNG gate
chance = cfg.get("chance", 0.15)
if random.random() > chance:
return
# Per-channel cooldown
ch_id = message.channel.id
cooldown = cfg.get("cooldown_seconds", 45)
now = time.monotonic()
if now - self._last_reaction.get(ch_id, 0) < cooldown:
return
# Fire and forget so we don't block anything
asyncio.create_task(self._try_react(message, ch_id))
async def _try_react(self, message: discord.Message, ch_id: int):
try:
emoji = await self.bot.llm.pick_reaction(
message.content, message.channel.name,
)
if not emoji:
return
await message.add_reaction(emoji)
self._last_reaction[ch_id] = time.monotonic()
logger.info(
"Reacted %s to %s in #%s: %s",
emoji, message.author.display_name,
message.channel.name, message.content[:60],
)
except discord.HTTPException as e:
# Invalid emoji or missing permissions — silently skip
logger.debug("Reaction failed: %s", e)
except Exception:
logger.exception("Unexpected reaction error")
async def setup(bot: commands.Bot):
await bot.add_cog(ReactionCog(bot))
+163 -20
@@ -1,6 +1,7 @@
import asyncio
import logging
-from datetime import datetime, timezone
+from datetime import datetime, timedelta, timezone
from pathlib import Path
import discord
@@ -12,12 +13,41 @@ from cogs.sentiment.coherence import handle_coherence_alert
from cogs.sentiment.log_utils import log_analysis
from cogs.sentiment.state import flush_dirty_states
from cogs.sentiment.topic_drift import handle_topic_drift
from cogs.sentiment.unblock_nag import handle_unblock_nag, matches_unblock_nag
logger = logging.getLogger("bcs.sentiment")
# How often to flush dirty user states to DB (seconds)
STATE_FLUSH_INTERVAL = 300 # 5 minutes
# Load server rules from prompt file (cached at import time)
_PROMPTS_DIR = Path(__file__).resolve().parent.parent.parent / "prompts"
def _load_rules() -> tuple[str, dict[int, str]]:
"""Load rules from prompts/rules.txt, returning (raw text, {num: text} dict)."""
path = _PROMPTS_DIR / "rules.txt"
if not path.exists():
return "", {}
text = path.read_text(encoding="utf-8").strip()
if not text:
return "", {}
rules_dict = {}
for line in text.splitlines():
line = line.strip()
if not line:
continue
parts = line.split(". ", 1)
if len(parts) == 2:
try:
rules_dict[int(parts[0])] = parts[1]
except ValueError:
pass
return text, rules_dict
_RULES_TEXT, _RULES_DICT = _load_rules()
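The numbered-rule parsing inside `_load_rules` can be sketched as a standalone function (hypothetical name; the real code also returns the raw text):

```python
def parse_rules(text: str) -> dict[int, str]:
    """Parse "N. rule text" lines into {N: text}, skipping anything else."""
    rules: dict[int, str] = {}
    for line in text.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        parts = line.split(". ", 1)  # split on the first "N. " only
        if len(parts) == 2:
            try:
                rules[int(parts[0])] = parts[1]
            except ValueError:
                pass  # non-numbered line (e.g. an intro sentence)
    return rules
```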
class SentimentCog(commands.Cog):
def __init__(self, bot: commands.Bot):
@@ -37,6 +67,7 @@ class SentimentCog(commands.Cog):
self._mention_scan_results: dict[int, str] = {} # {trigger_message_id: findings_summary}
self._analyzed_message_ids: set[int] = set() # Discord message IDs already analyzed
self._max_analyzed_ids = 500
self._moderated_message_ids: set[int] = set() # Message IDs that triggered moderation
async def cog_load(self):
@@ -103,15 +134,32 @@ class SentimentCog(commands.Cog):
or f"<@!{self.bot.user.id}>" in (message.content or "")
)
if bot_mentioned_in_text:
-mention_config = config.get("mention_scan", {})
-if mention_config.get("enabled", True):
-await self._maybe_start_mention_scan(message, mention_config)
-return
# Classify intent: only run expensive mention scan for reports,
# let ChatCog handle casual chat/questions
intent = await self.bot.llm.classify_mention_intent(
message.content or ""
)
logger.info(
"Mention intent for %s: %s", message.author, intent
)
if intent == "report":
mention_config = config.get("mention_scan", {})
if mention_config.get("enabled", True):
await self._maybe_start_mention_scan(message, mention_config)
return
# For non-report intents, fall through to buffer the message
# so it still gets scored for toxicity
# Skip if empty
if not message.content or not message.content.strip():
return
# Check for unblock nagging (keyword-based, no LLM needed for detection)
if matches_unblock_nag(message.content):
asyncio.create_task(handle_unblock_nag(
self.bot, message, self._dirty_users,
))
# Buffer the message and start/reset debounce timer (per-channel)
channel_id = message.channel.id
if channel_id not in self._message_buffer:
@@ -167,20 +215,30 @@ class SentimentCog(commands.Cog):
categories: list[str],
thresholds: dict,
db_message_id: int | None,
-) -> None:
-"""Issue a warning or mute based on scores and thresholds."""
+violated_rules: list[int] | None = None,
+) -> bool:
+"""Issue a warning or mute based on scores and thresholds.
+Returns True if any moderation action was taken."""
rules_config = _RULES_DICT
mute_threshold = self.bot.drama_tracker.get_mute_threshold(user_id, thresholds["mute"])
user_data = self.bot.drama_tracker.get_user(user_id)
if drama_score >= mute_threshold or score >= thresholds["spike_mute"]:
effective_score = max(drama_score, score)
-if user_data.warned_since_reset:
-await mute_user(self.bot, message, effective_score, categories, db_message_id, self._dirty_users)
+if self.bot.drama_tracker.is_warned(user_id):
+await mute_user(self.bot, message, effective_score, categories, db_message_id, self._dirty_users, violated_rules=violated_rules, rules_config=rules_config)
else:
logger.info("Downgrading mute to warning for %s (no prior warning)", message.author)
-await warn_user(self.bot, message, effective_score, db_message_id, self._dirty_users)
+await warn_user(self.bot, message, effective_score, db_message_id, self._dirty_users, violated_rules=violated_rules, rules_config=rules_config)
return True
elif drama_score >= thresholds["warning"] or score >= thresholds["spike_warn"]:
effective_score = max(drama_score, score)
-await warn_user(self.bot, message, effective_score, db_message_id, self._dirty_users)
+await warn_user(self.bot, message, effective_score, db_message_id, self._dirty_users, violated_rules=violated_rules, rules_config=rules_config)
return True
return False
@staticmethod
def _build_rules_context() -> str:
"""Return server rules text loaded from prompts/rules.txt."""
return _RULES_TEXT
@staticmethod
def _build_user_lookup(messages: list[discord.Message]) -> dict[str, tuple[int, discord.Message, list[discord.Message]]]:
@@ -244,6 +302,39 @@ class SentimentCog(commands.Cog):
"""Replace display name keys with anonymous keys in user notes map."""
return {anon_map.get(name, name): notes for name, notes in user_notes_map.items()}
def _build_alias_context(
self,
messages: list[discord.Message],
anon_map: dict[str, str],
) -> str:
"""Build anonymized alias context string for the LLM.
Maps user IDs from messages to their known nicknames from
DramaTracker, then replaces display names with anonymous keys.
"""
all_aliases = self.bot.drama_tracker.get_all_aliases()
if not all_aliases:
return ""
lines = []
seen_ids: set[int] = set()
for msg in messages:
uid = msg.author.id
if uid in seen_ids:
continue
seen_ids.add(uid)
aliases = all_aliases.get(uid)
if aliases:
anon_key = anon_map.get(msg.author.display_name, msg.author.display_name)
lines.append(f" {anon_key} is also known as: {', '.join(aliases)}")
# Include aliases for members NOT in the conversation (so the LLM
# can recognize name-drops of absent members), using anonymized keys
absent_idx = 0
for uid, aliases in all_aliases.items():
if uid not in seen_ids:
absent_idx += 1
lines.append(f" Absent_{absent_idx} is also known as: {', '.join(aliases)}")
return "\n".join(lines) if lines else ""
@staticmethod
def _deanonymize_findings(result: dict, anon_map: dict[str, str]) -> None:
"""Replace anonymous keys back to display names in LLM findings (in-place)."""
@@ -252,6 +343,13 @@ class SentimentCog(commands.Cog):
anon_name = finding.get("username", "")
if anon_name in reverse_map:
finding["username"] = reverse_map[anon_name]
# De-anonymize text fields that may reference other users
for field in ("note_update", "reasoning", "worst_message"):
text = finding.get(field)
if text:
for anon, real in reverse_map.items():
text = text.replace(anon, real)
finding[field] = text
@staticmethod
def _build_conversation(
@@ -312,6 +410,7 @@ class SentimentCog(commands.Cog):
categories = finding["categories"]
reasoning = finding["reasoning"]
off_topic = finding.get("off_topic", False)
violated_rules = finding.get("violated_rules", [])
note_update = finding.get("note_update")
# Track in DramaTracker
@@ -351,8 +450,7 @@ class SentimentCog(commands.Cog):
db_message_id, self._dirty_users,
)
-detected_game = finding.get("detected_game")
-if detected_game and game_channels and not dry_run:
+elif (detected_game := finding.get("detected_game")) and game_channels and not dry_run:
await handle_channel_redirect(
self.bot, user_ref_msg, detected_game, game_channels,
db_message_id, self._redirect_cooldowns,
@@ -375,10 +473,21 @@ class SentimentCog(commands.Cog):
db_message_id, self._dirty_users,
)
-# Note update
+# Note update — route to memory system
if note_update:
-self.bot.drama_tracker.update_user_notes(user_id, note_update)
+# Sanitize before storing — strips any quoted toxic language
+sanitized = await self.bot.llm.sanitize_notes(note_update)
+self.bot.drama_tracker.update_user_notes(user_id, sanitized)
self._dirty_users.add(user_id)
# Also save as an expiring memory (7d default for passive observations)
asyncio.create_task(self.bot.db.save_memory(
user_id=user_id,
memory=sanitized[:500],
topics=db_topic_category or "general",
importance="medium",
expires_at=datetime.now(timezone.utc) + timedelta(days=7),
source="passive",
))
-self._dirty_users.add(user_id)
@@ -390,9 +499,14 @@ class SentimentCog(commands.Cog):
# Moderation
if not dry_run:
-await self._apply_moderation(
+acted = await self._apply_moderation(
user_ref_msg, user_id, score, drama_score, categories, thresholds, db_message_id,
violated_rules=violated_rules,
)
if acted:
for m in user_msgs:
self._moderated_message_ids.add(m.id)
self._prune_moderated_ids()
return (username, score, drama_score, categories)
@@ -419,11 +533,13 @@ class SentimentCog(commands.Cog):
oldest_buffered = messages[0]
history_messages: list[discord.Message] = []
try:
-async for msg in channel.history(limit=context_count + 5, before=oldest_buffered):
+async for msg in channel.history(limit=context_count + 10, before=oldest_buffered):
if msg.author.bot:
continue
if not msg.content or not msg.content.strip():
continue
if self._was_moderated(msg):
continue
history_messages.append(msg)
if len(history_messages) >= context_count:
break
@@ -447,7 +563,10 @@ class SentimentCog(commands.Cog):
anon_conversation = self._anonymize_conversation(conversation, anon_map)
anon_notes = self._anonymize_notes(user_notes_map, anon_map) if user_notes_map else user_notes_map
alias_context = self._build_alias_context(all_messages, anon_map)
channel_context = build_channel_context(ref_message, game_channels)
rules_context = self._build_rules_context()
logger.info(
"Channel analysis: %d new messages (+%d context) in #%s",
@@ -461,6 +580,8 @@ class SentimentCog(commands.Cog):
channel_context=channel_context,
user_notes_map=anon_notes,
new_message_start=new_message_start,
user_aliases=alias_context,
rules_context=rules_context,
)
if result is None:
@@ -480,6 +601,8 @@ class SentimentCog(commands.Cog):
channel_context=channel_context,
user_notes_map=anon_notes,
new_message_start=new_message_start,
user_aliases=alias_context,
rules_context=rules_context,
)
if heavy_result is not None:
logger.info(
@@ -534,6 +657,19 @@ class SentimentCog(commands.Cog):
sorted_ids = sorted(self._analyzed_message_ids)
self._analyzed_message_ids = set(sorted_ids[len(sorted_ids) // 2:])
def _prune_moderated_ids(self):
"""Cap the moderated message ID set to avoid unbounded growth."""
if len(self._moderated_message_ids) > self._max_analyzed_ids:
sorted_ids = sorted(self._moderated_message_ids)
self._moderated_message_ids = set(sorted_ids[len(sorted_ids) // 2:])
def _was_moderated(self, msg: discord.Message) -> bool:
"""Check if a message already triggered moderation (in-memory or via reaction)."""
if msg.id in self._moderated_message_ids:
return True
# Fall back to checking for bot's warning reaction (survives restarts)
return any(str(r.emoji) == "\u26a0\ufe0f" and r.me for r in msg.reactions)
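Both `_prune_analyzed_ids`-style caps above use the same strategy: once the set exceeds the cap, drop the older half, relying on Discord snowflake IDs sorting chronologically. A standalone sketch (hypothetical name):

```python
def prune_id_set(ids: set[int], cap: int) -> set[int]:
    """Keep only the newer half of a message-ID set once it exceeds cap.

    Snowflake IDs are monotonically increasing, so sorting by ID is
    sorting by creation time.
    """
    if len(ids) <= cap:
        return ids
    sorted_ids = sorted(ids)
    return set(sorted_ids[len(sorted_ids) // 2:])
```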
async def _maybe_start_mention_scan(
self, trigger_message: discord.Message, mention_config: dict
):
@@ -581,14 +717,16 @@ class SentimentCog(commands.Cog):
sentiment_config = config.get("sentiment", {})
game_channels = config.get("game_channels", {})
-# Fetch recent messages (before the trigger, skip bots/empty)
+# Fetch recent messages (before the trigger, skip bots/empty/moderated)
raw_messages: list[discord.Message] = []
try:
-async for msg in channel.history(limit=scan_count + 10, before=trigger_message):
+async for msg in channel.history(limit=scan_count + 20, before=trigger_message):
if msg.author.bot:
continue
if not msg.content or not msg.content.strip():
continue
if self._was_moderated(msg):
continue
raw_messages.append(msg)
if len(raw_messages) >= scan_count:
break
@@ -619,7 +757,10 @@ class SentimentCog(commands.Cog):
anon_conversation = self._anonymize_conversation(conversation, anon_map)
anon_notes = self._anonymize_notes(user_notes_map, anon_map) if user_notes_map else user_notes_map
alias_context = self._build_alias_context(raw_messages, anon_map)
channel_context = build_channel_context(raw_messages[0], game_channels)
rules_context = self._build_rules_context()
mention_context = (
f"A user flagged this conversation and said: \"{mention_text}\"\n"
f"Pay special attention to whether this concern is valid."
@@ -631,6 +772,8 @@ class SentimentCog(commands.Cog):
mention_context=mention_context,
channel_context=channel_context,
user_notes_map=anon_notes,
user_aliases=alias_context,
rules_context=rules_context,
)
if result is None:
+38 -12
@@ -13,6 +13,7 @@ logger = logging.getLogger("bcs.sentiment")
async def mute_user(
bot, message: discord.Message, score: float,
categories: list[str], db_message_id: int | None, dirty_users: set[int],
violated_rules: list[int] | None = None, rules_config: dict | None = None,
):
member = message.author
if not isinstance(member, discord.Member):
@@ -43,14 +44,25 @@ async def mute_user(
messages_config = bot.config.get("messages", {})
cat_str = ", ".join(c for c in categories if c != "none") or "general negativity"
# Build rule citation text
rules_text = ""
if violated_rules and rules_config:
rule_lines = [f"Rule {r}: {rules_config[r]}" for r in violated_rules if r in rules_config]
if rule_lines:
rules_text = "\n".join(rule_lines)
description = messages_config.get("mute_description", "").format(
username=member.display_name,
duration=f"{duration_minutes} minutes",
score=f"{score:.2f}",
categories=cat_str,
)
if rules_text:
description += f"\n\nRules violated:\n{rules_text}"
embed = discord.Embed(
title=messages_config.get("mute_title", "BREEHAVIOR ALERT"),
-description=messages_config.get("mute_description", "").format(
-username=member.display_name,
-duration=f"{duration_minutes} minutes",
-score=f"{score:.2f}",
-categories=cat_str,
-),
+description=description,
color=discord.Color.red(),
)
embed.set_footer(
@@ -58,25 +70,29 @@ async def mute_user(
)
await message.channel.send(embed=embed)
rules_log = f" | Rules: {','.join(str(r) for r in violated_rules)}" if violated_rules else ""
await log_action(
message.guild,
f"**MUTE** | {member.mention} | Score: {score:.2f} | "
f"Duration: {duration_minutes}m | Offense #{offense_num} | "
-f"Categories: {cat_str}",
+f"Categories: {cat_str}{rules_log}",
)
logger.info(
-"Muted %s for %d minutes (offense #%d, score %.2f)",
+"Muted %s for %d minutes (offense #%d, score %.2f, rules=%s)",
member, duration_minutes, offense_num, score,
+violated_rules or [],
)
rules_detail = f" rules={','.join(str(r) for r in violated_rules)}" if violated_rules else ""
asyncio.create_task(bot.db.save_action(
guild_id=message.guild.id,
user_id=member.id,
username=member.display_name,
action_type="mute",
message_id=db_message_id,
-details=f"duration={duration_minutes}m offense={offense_num} score={score:.2f} categories={cat_str}",
+details=f"duration={duration_minutes}m offense={offense_num} score={score:.2f} categories={cat_str}{rules_detail}",
))
save_user_state(bot, dirty_users, member.id)
@@ -84,6 +100,7 @@ async def mute_user(
async def warn_user(
bot, message: discord.Message, score: float,
db_message_id: int | None, dirty_users: set[int],
violated_rules: list[int] | None = None, rules_config: dict | None = None,
):
timeout_config = bot.config.get("timeouts", {})
cooldown = timeout_config.get("warning_cooldown_minutes", 5)
@@ -104,20 +121,29 @@ async def warn_user(
"Easy there, {username}. The Breehavior Monitor is watching.",
).format(username=message.author.display_name)
# Append rule citation if rules were violated
if violated_rules and rules_config:
rule_lines = [f"Rule {r}: {rules_config[r]}" for r in violated_rules if r in rules_config]
if rule_lines:
warning_text += "\n" + " | ".join(rule_lines)
await message.channel.send(warning_text)
rules_log = f" | Rules: {','.join(str(r) for r in violated_rules)}" if violated_rules else ""
await log_action(
message.guild,
-f"**WARNING** | {message.author.mention} | Score: {score:.2f}",
+f"**WARNING** | {message.author.mention} | Score: {score:.2f}{rules_log}",
)
-logger.info("Warned %s (score %.2f)", message.author, score)
+logger.info("Warned %s (score %.2f, rules=%s)", message.author, score, violated_rules or [])
rules_detail = f" rules={','.join(str(r) for r in violated_rules)}" if violated_rules else ""
asyncio.create_task(bot.db.save_action(
guild_id=message.guild.id,
user_id=message.author.id,
username=message.author.display_name,
action_type="warning",
message_id=db_message_id,
-details=f"score={score:.2f}",
+details=f"score={score:.2f}{rules_detail}",
))
save_user_state(bot, dirty_users, message.author.id)
+26 -12
@@ -4,6 +4,11 @@ import logging
logger = logging.getLogger("bcs.sentiment")
def _aliases_csv(user_data) -> str | None:
"""Convert aliases list to comma-separated string for DB storage."""
return ",".join(user_data.aliases) if user_data.aliases else None
def save_user_state(bot, dirty_users: set[int], user_id: int) -> None:
"""Fire-and-forget save of a user's current state to DB."""
user_data = bot.drama_tracker.get_user(user_id)
@@ -16,6 +21,8 @@ def save_user_state(bot, dirty_users: set[int], user_id: int) -> None:
user_notes=user_data.notes or None,
warned=user_data.warned_since_reset,
last_offense_at=user_data.last_offense_time or None,
aliases=_aliases_csv(user_data),
warning_expires_at=user_data.warning_expires_at or None,
))
dirty_users.discard(user_id)
@@ -25,17 +32,24 @@ async def flush_dirty_states(bot, dirty_users: set[int]) -> None:
if not dirty_users:
return
dirty = list(dirty_users)
-dirty_users.clear()
+saved = 0
for user_id in dirty:
user_data = bot.drama_tracker.get_user(user_id)
-await bot.db.save_user_state(
-user_id=user_id,
-offense_count=user_data.offense_count,
-immune=user_data.immune,
-off_topic_count=user_data.off_topic_count,
-baseline_coherence=user_data.baseline_coherence,
-user_notes=user_data.notes or None,
-warned=user_data.warned_since_reset,
-last_offense_at=user_data.last_offense_time or None,
-)
-logger.info("Flushed %d dirty user states to DB.", len(dirty))
try:
await bot.db.save_user_state(
user_id=user_id,
offense_count=user_data.offense_count,
immune=user_data.immune,
off_topic_count=user_data.off_topic_count,
baseline_coherence=user_data.baseline_coherence,
user_notes=user_data.notes or None,
warned=user_data.warned_since_reset,
last_offense_at=user_data.last_offense_time or None,
aliases=_aliases_csv(user_data),
warning_expires_at=user_data.warning_expires_at or None,
)
dirty_users.discard(user_id)
saved += 1
except Exception:
logger.exception("Failed to flush state for user %d", user_id)
logger.info("Flushed %d/%d dirty user states to DB.", saved, len(dirty))
+141 -30
@@ -1,5 +1,9 @@
import asyncio
import logging
import random
import re
from collections import deque
from pathlib import Path
import discord
@@ -8,6 +12,108 @@ from cogs.sentiment.state import save_user_state
logger = logging.getLogger("bcs.sentiment")
_PROMPTS_DIR = Path(__file__).resolve().parent.parent.parent / "prompts"
_TOPIC_REDIRECT_PROMPT = (_PROMPTS_DIR / "topic_redirect.txt").read_text(encoding="utf-8")
DEFAULT_TOPIC_REMINDS = [
"Hey {username}, this is a gaming server 🎮 — take the personal stuff to {channel}.",
"{username}, sir this is a gaming channel. {channel} is right there.",
"Hey {username}, I don't remember this being a therapy session. Take it to {channel}. 🎮",
"{username}, I'm gonna need you to take that energy to {channel}. This channel has a vibe to protect.",
"Not to be dramatic {username}, but this is wildly off-topic. {channel} exists for a reason. 🎮",
]
DEFAULT_TOPIC_NUDGES = [
"{username}, we've been over this. Gaming. Channel. {channel} for the rest. 🎮",
"{username}, you keep drifting off-topic like it's a speedrun category. {channel}. Now.",
"Babe. {username}. The gaming channel. We talked about this. Go to {channel}. 😭",
"{username}, I will not ask again (I will definitely ask again). {channel} for off-topic. 🎮",
"{username}, at this point I'm keeping score. That's off-topic strike {count}. {channel} is waiting.",
"Look, {username}, I love the enthusiasm but this ain't the channel for it. {channel}. 🎮",
]
# Per-channel deque of recent LLM-generated redirect messages (for variety)
_recent_redirects: dict[int, deque] = {}
def _get_recent_redirects(channel_id: int) -> list[str]:
if channel_id in _recent_redirects:
return list(_recent_redirects[channel_id])
return []
def _record_redirect(channel_id: int, text: str):
if channel_id not in _recent_redirects:
_recent_redirects[channel_id] = deque(maxlen=5)
_recent_redirects[channel_id].append(text)
def _strip_brackets(text: str) -> str:
"""Strip leaked LLM metadata brackets (same approach as ChatCog)."""
segments = re.split(r"^\s*\[[^\]]*\]\s*$", text, flags=re.MULTILINE)
segments = [s.strip() for s in segments if s.strip()]
return segments[-1] if segments else ""
async def _generate_llm_redirect(
bot, message: discord.Message, topic_category: str,
topic_reasoning: str, count: int, redirect_mention: str = "",
) -> str | None:
"""Ask the LLM chat model to generate a topic redirect message."""
recent = _get_recent_redirects(message.channel.id)
user_prompt = (
f"Username: {message.author.display_name}\n"
f"Channel: #{getattr(message.channel, 'name', 'unknown')}\n"
f"Off-topic category: {topic_category}\n"
f"Why it's off-topic: {topic_reasoning}\n"
f"Off-topic strike count: {count}\n"
f"What they said: {message.content[:300]}"
)
if redirect_mention:
user_prompt += f"\nRedirect channel: {redirect_mention}"
messages = [{"role": "user", "content": user_prompt}]
effective_prompt = _TOPIC_REDIRECT_PROMPT
if recent:
avoid_block = "\n".join(f"- {r}" for r in recent)
effective_prompt += (
"\n\nIMPORTANT — you recently sent these redirects in the same channel. "
"Do NOT repeat any of these. Be completely different.\n"
+ avoid_block
)
try:
response = await bot.llm_chat.chat(
messages, effective_prompt,
)
except Exception:
logger.exception("LLM topic redirect generation failed")
return None
if response:
response = _strip_brackets(response)
return response if response else None
def _static_fallback(bot, message: discord.Message, count: int, redirect_mention: str = "") -> str:
"""Pick a static template message as fallback."""
messages_config = bot.config.get("messages", {})
if count >= 2:
pool = messages_config.get("topic_nudges", DEFAULT_TOPIC_NUDGES)
if isinstance(pool, str):
pool = [pool]
else:
pool = messages_config.get("topic_reminds", DEFAULT_TOPIC_REMINDS)
if isinstance(pool, str):
pool = [pool]
return random.choice(pool).format(
username=message.author.display_name, count=count,
channel=redirect_mention or "the right channel",
)
async def handle_topic_drift(
bot, message: discord.Message, topic_category: str, topic_reasoning: str,
@@ -33,46 +139,51 @@ async def handle_topic_drift(
return
count = tracker.record_off_topic(user_id)
-messages_config = bot.config.get("messages", {})
action_type = "topic_nudge" if count >= 2 else "topic_remind"
-if count >= 2:
-nudge_text = messages_config.get(
-"topic_nudge",
-"{username}, let's keep it to gaming talk in here.",
-).format(username=message.author.display_name)
-await message.channel.send(nudge_text)
# Resolve redirect channel mention
redirect_mention = ""
redirect_name = config.get("redirect_channel")
if redirect_name and message.guild:
ch = discord.utils.get(message.guild.text_channels, name=redirect_name)
if ch:
redirect_mention = ch.mention
# Generate the redirect message
use_llm = config.get("use_llm", False)
redirect_text = None
if use_llm:
redirect_text = await _generate_llm_redirect(
bot, message, topic_category, topic_reasoning, count, redirect_mention,
)
if redirect_text:
_record_redirect(message.channel.id, redirect_text)
else:
redirect_text = _static_fallback(bot, message, count, redirect_mention)
await message.channel.send(redirect_text)
if action_type == "topic_nudge":
await log_action(
message.guild,
f"**TOPIC NUDGE** | {message.author.mention} | "
f"Off-topic count: {count} | Category: {topic_category}",
)
logger.info("Topic nudge for %s (count %d)", message.author, count)
asyncio.create_task(bot.db.save_action(
guild_id=message.guild.id, user_id=user_id,
username=message.author.display_name,
action_type="topic_nudge", message_id=db_message_id,
details=f"off_topic_count={count} category={topic_category}",
))
save_user_state(bot, dirty_users, user_id)
else:
remind_text = messages_config.get(
"topic_remind",
"Hey {username}, this is a gaming server \u2014 maybe take the personal stuff to DMs?",
).format(username=message.author.display_name)
await message.channel.send(remind_text)
await log_action(
message.guild,
f"**TOPIC REMIND** | {message.author.mention} | "
f"Category: {topic_category} | {topic_reasoning}",
)
logger.info("Topic remind for %s (count %d)", message.author, count)
asyncio.create_task(bot.db.save_action(
guild_id=message.guild.id, user_id=user_id,
username=message.author.display_name,
action_type="topic_remind", message_id=db_message_id,
details=f"off_topic_count={count} category={topic_category} reasoning={topic_reasoning}",
))
save_user_state(bot, dirty_users, user_id)
logger.info("Topic %s for %s (count %d)", action_type.replace("topic_", ""), message.author, count)
asyncio.create_task(bot.db.save_action(
guild_id=message.guild.id, user_id=user_id,
username=message.author.display_name,
action_type=action_type, message_id=db_message_id,
details=f"off_topic_count={count} category={topic_category}"
+ (f" reasoning={topic_reasoning}" if action_type == "topic_remind" else ""),
))
save_user_state(bot, dirty_users, user_id)
@@ -0,0 +1,161 @@
import asyncio
import logging
import random
import re
from collections import deque
from pathlib import Path
import discord
from cogs.sentiment.log_utils import log_action
from cogs.sentiment.state import save_user_state
logger = logging.getLogger("bcs.sentiment")
_PROMPTS_DIR = Path(__file__).resolve().parent.parent.parent / "prompts"
_UNBLOCK_REDIRECT_PROMPT = (_PROMPTS_DIR / "unblock_redirect.txt").read_text(encoding="utf-8")
# Regex: matches "unblock" and its inflections (unblocked/unblocking/unblocks) as whole words, case-insensitive
UNBLOCK_PATTERN = re.compile(r"\bunblock(?:ed|ing|s)?\b", re.IGNORECASE)
DEFAULT_UNBLOCK_REMINDS = [
"{username}, begging to be unblocked in chat is not the move. Take it up with an admin. 🙄",
"{username}, nobody's getting unblocked because you asked nicely in a gaming channel.",
"Hey {username}, the unblock button isn't in this chat. Just saying.",
"{username}, I admire the persistence but this isn't the unblock hotline.",
"{username}, that's between you and whoever blocked you. Chat isn't the appeals court.",
]
DEFAULT_UNBLOCK_NUDGES = [
"{username}, we've been over this. No amount of asking here is going to change anything. 🙄",
"{username}, I'm starting to think you enjoy being told no. Still not getting unblocked via chat.",
"{username}, at this point I could set a reminder for your next unblock request. Take it to an admin.",
"Babe. {username}. We've had this conversation {count} times. It's not happening here. 😭",
"{username}, I'm keeping a tally and you're at {count}. The answer is still the same.",
]
# Per-channel deque of recent LLM-generated messages (for variety)
_recent_redirects: dict[int, deque] = {}
def _get_recent_redirects(channel_id: int) -> list[str]:
if channel_id in _recent_redirects:
return list(_recent_redirects[channel_id])
return []
def _record_redirect(channel_id: int, text: str):
if channel_id not in _recent_redirects:
_recent_redirects[channel_id] = deque(maxlen=5)
_recent_redirects[channel_id].append(text)
def _strip_brackets(text: str) -> str:
"""Strip leaked LLM metadata brackets."""
segments = re.split(r"^\s*\[[^\]]*\]\s*$", text, flags=re.MULTILINE)
segments = [s.strip() for s in segments if s.strip()]
return segments[-1] if segments else ""
def matches_unblock_nag(content: str) -> bool:
"""Check if a message contains unblock-related nagging."""
return bool(UNBLOCK_PATTERN.search(content))
async def _generate_llm_redirect(
bot, message: discord.Message, count: int,
) -> str | None:
"""Ask the LLM chat model to generate an unblock-nag redirect."""
recent = _get_recent_redirects(message.channel.id)
user_prompt = (
f"Username: {message.author.display_name}\n"
f"Channel: #{getattr(message.channel, 'name', 'unknown')}\n"
f"Unblock nag count: {count}\n"
f"What they said: {message.content[:300]}"
)
messages = [{"role": "user", "content": user_prompt}]
effective_prompt = _UNBLOCK_REDIRECT_PROMPT
if recent:
avoid_block = "\n".join(f"- {r}" for r in recent)
effective_prompt += (
"\n\nIMPORTANT — you recently sent these redirects in the same channel. "
"Do NOT repeat any of these. Be completely different.\n"
+ avoid_block
)
try:
response = await bot.llm_chat.chat(messages, effective_prompt)
except Exception:
logger.exception("LLM unblock redirect generation failed")
return None
if response:
response = _strip_brackets(response)
return response if response else None
def _static_fallback(message: discord.Message, count: int) -> str:
"""Pick a static template message as fallback."""
if count >= 2:
pool = DEFAULT_UNBLOCK_NUDGES
else:
pool = DEFAULT_UNBLOCK_REMINDS
return random.choice(pool).format(
username=message.author.display_name, count=count,
)
async def handle_unblock_nag(
bot, message: discord.Message, dirty_users: set[int],
):
"""Handle a detected unblock-nagging message."""
config = bot.config.get("unblock_nag", {})
if not config.get("enabled", True):
return
dry_run = bot.config.get("monitoring", {}).get("dry_run", False)
if dry_run:
return
tracker = bot.drama_tracker
user_id = message.author.id
cooldown = config.get("remind_cooldown_minutes", 30)
if not tracker.can_unblock_remind(user_id, cooldown):
return
count = tracker.record_unblock_nag(user_id)
action_type = "unblock_nudge" if count >= 2 else "unblock_remind"
# Generate the redirect message
use_llm = config.get("use_llm", True)
redirect_text = None
if use_llm:
redirect_text = await _generate_llm_redirect(bot, message, count)
if redirect_text:
_record_redirect(message.channel.id, redirect_text)
else:
redirect_text = _static_fallback(message, count)
await message.channel.send(redirect_text)
await log_action(
message.guild,
f"**UNBLOCK {'NUDGE' if count >= 2 else 'REMIND'}** | {message.author.mention} | "
f"Nag count: {count}",
)
logger.info("Unblock %s for %s (count %d)", action_type.replace("unblock_", ""), message.author, count)
asyncio.create_task(bot.db.save_action(
guild_id=message.guild.id, user_id=user_id,
username=message.author.display_name,
action_type=action_type, message_id=None,
details=f"unblock_nag_count={count}",
))
save_user_state(bot, dirty_users, user_id)
@@ -1,205 +0,0 @@
import logging
import random
import re
from collections import deque
from pathlib import Path
import discord
from discord.ext import commands
logger = logging.getLogger("bcs.wordle")
_PROMPTS_DIR = Path(__file__).resolve().parent.parent / "prompts"
_prompt_cache: dict[str, str] = {}
def _load_prompt(filename: str) -> str:
if filename not in _prompt_cache:
_prompt_cache[filename] = (_PROMPTS_DIR / filename).read_text(encoding="utf-8")
return _prompt_cache[filename]
def _parse_wordle_embeds(message: discord.Message) -> dict | None:
"""Extract useful info from a Wordle bot message.
Returns a dict with keys like 'type', 'summary', 'scores', 'streak', 'wordle_number'
or None if this isn't a recognizable Wordle result message.
"""
if not message.embeds:
return None
full_text = ""
wordle_number = None
for embed in message.embeds:
if embed.description:
full_text += embed.description + "\n"
if embed.title:
full_text += embed.title + "\n"
m = re.search(r"Wordle No\.\s*(\d+)", embed.title)
if m:
wordle_number = int(m.group(1))
if not full_text.strip():
return None
# Detect result messages (contain score patterns like "3/6:")
score_pattern = re.findall(r"(\d/6):\s*@?(.+?)(?:\n|$)", full_text)
streak_match = re.search(r"(\d+)\s*day streak", full_text)
if score_pattern:
scores = [{"score": s[0], "player": s[1].strip()} for s in score_pattern]
return {
"type": "results",
"wordle_number": wordle_number,
"streak": int(streak_match.group(1)) if streak_match else None,
"scores": scores,
"summary": full_text.strip(),
}
# Detect "was playing" messages
if "was playing" in full_text:
return {
"type": "playing",
"wordle_number": wordle_number,
"summary": full_text.strip(),
}
return None
class WordleCog(commands.Cog):
def __init__(self, bot: commands.Bot):
self.bot = bot
self._chat_history: dict[int, deque] = {}
def _get_active_prompt(self) -> str:
mode_config = self.bot.get_mode_config()
prompt_file = mode_config.get("prompt_file", "chat_personality.txt")
return _load_prompt(prompt_file)
def _get_wordle_config(self) -> dict:
return self.bot.config.get("wordle", {})
@commands.Cog.listener()
async def on_message(self, message: discord.Message):
if not message.author.bot:
return
if not message.guild:
return
config = self._get_wordle_config()
if not config.get("enabled", False):
return
# Match the Wordle bot by name
bot_name = config.get("bot_name", "Wordle")
if message.author.name != bot_name:
return
parsed = _parse_wordle_embeds(message)
if not parsed:
return
# Only comment on results, not "playing" notifications
if parsed["type"] == "playing":
reply_chance = config.get("playing_reply_chance", 0.0)
if reply_chance <= 0 or random.random() > reply_chance:
return
else:
reply_chance = config.get("reply_chance", 0.5)
if random.random() > reply_chance:
return
# Build context for the LLM
context_parts = [
f"[Wordle bot posted in #{message.channel.name}]",
"[Wordle scoring: players guess a 5-letter word in up to 6 tries. "
"LOWER is BETTER — 1/6 is a genius guess, 2/6 is incredible, 3/6 is great, "
"4/6 is mediocre, 5/6 is rough, 6/6 barely scraped by, X/6 means they failed]",
]
if parsed["type"] == "results":
context_parts.append("[This is a Wordle results summary]")
if parsed.get("streak"):
context_parts.append(f"[Group streak: {parsed['streak']} days]")
if parsed.get("wordle_number"):
context_parts.append(f"[Wordle #{parsed['wordle_number']}]")
for s in parsed.get("scores", []):
context_parts.append(f"[{s['player']} scored {s['score']}]")
# Identify the winner (lowest score = best)
scores = parsed.get("scores", [])
if scores:
best = min(scores, key=lambda s: int(s["score"][0]))
worst = max(scores, key=lambda s: int(s["score"][0]))
if best != worst:
context_parts.append(
f"[{best['player']} won with {best['score']}, "
f"{worst['player']} came last with {worst['score']}]"
)
elif parsed["type"] == "playing":
context_parts.append(f"[Someone is currently playing Wordle]")
context_parts.append(f"[{parsed['summary']}]")
prompt_context = "\n".join(context_parts)
user_msg = (
f"{prompt_context}\n"
f"React to this Wordle update with a short, fun comment. "
f"Keep it to 1-2 sentences."
)
ch_id = message.channel.id
if ch_id not in self._chat_history:
self._chat_history[ch_id] = deque(maxlen=6)
self._chat_history[ch_id].append({"role": "user", "content": user_msg})
active_prompt = self._get_active_prompt()
recent_bot_replies = [
m["content"][:150] for m in self._chat_history[ch_id]
if m["role"] == "assistant"
][-3:]
typing_ctx = None
async def start_typing():
nonlocal typing_ctx
typing_ctx = message.channel.typing()
await typing_ctx.__aenter__()
response = await self.bot.llm_chat.chat(
list(self._chat_history[ch_id]),
active_prompt,
on_first_token=start_typing,
recent_bot_replies=recent_bot_replies,
)
if typing_ctx:
await typing_ctx.__aexit__(None, None, None)
# Strip leaked metadata brackets (same as chat.py)
if response:
segments = re.split(r"^\s*\[[^\]]*\]\s*$", response, flags=re.MULTILINE)
segments = [s.strip() for s in segments if s.strip()]
response = segments[-1] if segments else ""
if not response:
logger.warning("LLM returned no response for Wordle comment in #%s", message.channel.name)
return
self._chat_history[ch_id].append({"role": "assistant", "content": response})
await message.reply(response, mention_author=False)
logger.info(
"Wordle %s reply in #%s: %s",
parsed["type"],
message.channel.name,
response[:100],
)
async def setup(bot: commands.Bot):
await bot.add_cog(WordleCog(bot))
@@ -29,11 +29,18 @@ game_channels:
topic_drift:
enabled: true
use_llm: true # Generate redirect messages via LLM instead of static templates
redirect_channel: "general" # Channel to suggest for off-topic chat
ignored_channels: ["general"] # Channel names or IDs to skip topic drift monitoring
remind_cooldown_minutes: 10 # Don't remind same user more than once per this window
escalation_count: 3 # After this many reminds, DM the server owner
reset_minutes: 60 # Reset off-topic count after this much on-topic behavior
unblock_nag:
enabled: true
use_llm: true # Generate redirect messages via LLM instead of static templates
remind_cooldown_minutes: 30 # Don't remind same user more than once per this window
mention_scan:
enabled: true
scan_messages: 30 # Messages to scan per mention trigger
@@ -43,13 +50,25 @@ timeouts:
escalation_minutes: [30, 60, 120, 240] # Escalating timeout durations
offense_reset_minutes: 1440 # Reset offense counter after this much good behavior (24h)
warning_cooldown_minutes: 5 # Don't warn same user more than once per this window
warning_expiration_minutes: 30 # Warning expires after this long — user must be re-warned before mute
messages:
warning: "Easy there, {username}. The Breehavior Monitor is watching. \U0001F440"
mute_title: "\U0001F6A8 BREEHAVIOR ALERT \U0001F6A8"
mute_description: "{username} has been placed in timeout for {duration}.\n\nReason: Sustained elevated drama levels detected.\nDrama Score: {score}/1.0\nCategories: {categories}\n\nCool down and come back when you've resolved your skill issues."
topic_remind: "Hey {username}, this is a gaming server \U0001F3AE — maybe take the personal stuff to DMs?"
topic_nudge: "{username}, we've chatted about this before — let's keep it to gaming talk in here. Personal drama belongs in DMs."
topic_reminds:
- "Hey {username}, this is a gaming server 🎮 — take the personal stuff to {channel}."
- "{username}, sir this is a gaming channel. {channel} is right there."
- "Hey {username}, I don't remember this being a therapy session. Take it to {channel}. 🎮"
- "{username}, I'm gonna need you to take that energy to {channel}. This channel has a vibe to protect."
- "Not to be dramatic {username}, but this is wildly off-topic. {channel} exists for a reason. 🎮"
topic_nudges:
- "{username}, we've been over this. Gaming. Channel. {channel} for the rest. 🎮"
- "{username}, you keep drifting off-topic like it's a speedrun category. {channel}. Now."
- "Babe. {username}. The gaming channel. We talked about this. Go to {channel}. 😭"
- "{username}, I will not ask again (I will definitely ask again). {channel} for off-topic. 🎮"
- "{username}, at this point I'm keeping score. That's off-topic strike {count}. {channel} is waiting."
- "Look, {username}, I love the enthusiasm but this ain't the channel for it. {channel}. 🎮"
topic_owner_dm: "Heads up: {username} keeps going off-topic with personal drama in #{channel}. They've been reminded {count} times. Might need a word."
channel_redirect: "Hey {username}, that sounds like {game} talk — head over to {channel} for that!"
@@ -60,7 +79,7 @@ modes:
default:
label: "Default"
description: "Hall-monitor moderation mode"
prompt_file: "chat_personality.txt"
prompt_file: "personalities/chat_personality.txt"
proactive_replies: false
reply_chance: 0.0
moderation: full
@@ -68,9 +87,9 @@ modes:
chatty:
label: "Chatty"
description: "Friendly chat participant"
prompt_file: "chat_chatty.txt"
prompt_file: "personalities/chat_chatty.txt"
proactive_replies: true
reply_chance: 0.10
reply_chance: 0.40
moderation: relaxed
relaxed_thresholds:
warning_threshold: 0.80
@@ -81,9 +100,9 @@ modes:
roast:
label: "Roast"
description: "Savage roast mode"
prompt_file: "chat_roast.txt"
prompt_file: "personalities/chat_roast.txt"
proactive_replies: true
reply_chance: 0.20
reply_chance: 0.60
moderation: relaxed
relaxed_thresholds:
warning_threshold: 0.85
@@ -94,9 +113,9 @@ modes:
hype:
label: "Hype"
description: "Your biggest fan"
prompt_file: "chat_hype.txt"
prompt_file: "personalities/chat_hype.txt"
proactive_replies: true
reply_chance: 0.15
reply_chance: 0.50
moderation: relaxed
relaxed_thresholds:
warning_threshold: 0.80
@@ -107,9 +126,9 @@ modes:
drunk:
label: "Drunk"
description: "Had a few too many"
prompt_file: "chat_drunk.txt"
prompt_file: "personalities/chat_drunk.txt"
proactive_replies: true
reply_chance: 0.20
reply_chance: 0.60
moderation: relaxed
relaxed_thresholds:
warning_threshold: 0.85
@@ -120,9 +139,22 @@ modes:
english_teacher:
label: "English Teacher"
description: "Insufferable grammar nerd mode"
prompt_file: "chat_english_teacher.txt"
prompt_file: "personalities/chat_english_teacher.txt"
proactive_replies: true
reply_chance: 0.20
reply_chance: 0.60
moderation: relaxed
relaxed_thresholds:
warning_threshold: 0.85
mute_threshold: 0.90
spike_warning_threshold: 0.75
spike_mute_threshold: 0.90
slutty:
label: "Slutty"
description: "Shamelessly flirty and full of innuendos"
prompt_file: "personalities/chat_slutty.txt"
proactive_replies: true
reply_chance: 0.60
moderation: relaxed
relaxed_thresholds:
warning_threshold: 0.85
@@ -135,11 +167,6 @@ polls:
duration_hours: 4
cooldown_minutes: 60 # Per-channel cooldown between auto-polls
wordle:
enabled: true
bot_name: "Wordle" # Discord bot name to watch for
reply_chance: 0.75 # Chance to comment on result summaries (0.0-1.0)
playing_reply_chance: 0.0 # Chance to comment on "was playing" messages (0 = never)
coherence:
enabled: true
@@ -153,3 +180,9 @@ coherence:
mobile_keyboard: "{username}'s thumbs are having a rough day."
language_barrier: "Having trouble there, {username}? Take your time."
default: "You okay there, {username}? That message was... something."
reactions:
enabled: false
chance: 0.15 # Probability of evaluating a message for reaction
cooldown_seconds: 45 # Per-channel cooldown between reactions
excluded_channels: [] # Channel names or IDs to skip reactions in
@@ -0,0 +1,216 @@
# Conversational Memory Design
## Goal
Make the bot a real conversational participant that knows people, remembers past interactions, can answer general questions, and gives input based on accumulated context. People should be able to ask it questions and get thoughtful answers informed by who they are and what's happened before.
## Design Decisions
- **Memory approach**: Structured memory tables in existing MSSQL database
- **Learning mode**: Both passive (observing chat via sentiment analysis) and active (direct conversations)
- **Knowledge scope**: General knowledge + server/people awareness (no web search)
- **Permanent memory**: Stored in existing `UserState.UserNotes` column (repurposed as LLM-maintained profile)
- **Expiring memory**: New `UserMemory` table for transient context with LLM-assigned expiration
## Database Changes
### Repurposed: `UserState.UserNotes`
No schema change needed. The column already exists as `NVARCHAR(MAX)`. Currently stores timestamped observation lines (max 10). Will be repurposed as an LLM-maintained **permanent profile summary** — a compact paragraph of durable facts about a user.
Example content:
```
GTA Online grinder (rank 400+, wants to hit 500), sarcastic humor, works night shifts, hates battle royales. Has a dog named Rex. Banters with the bot, usually tries to get roasted. Been in the server since early 2024.
```
The LLM rewrites this field as a whole when new permanent facts emerge, rather than appending timestamped lines.
### New Table: `UserMemory`
Stores expiring memories — transient context that's relevant for days or weeks but not forever.
```sql
CREATE TABLE UserMemory (
Id BIGINT IDENTITY(1,1) PRIMARY KEY,
UserId BIGINT NOT NULL,
Memory NVARCHAR(500) NOT NULL,
Topics NVARCHAR(200) NOT NULL, -- comma-separated tags
Importance NVARCHAR(10) NOT NULL, -- low, medium, high
ExpiresAt DATETIME2 NOT NULL,
Source NVARCHAR(20) NOT NULL, -- 'chat' or 'passive'
CreatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
INDEX IX_UserMemory_UserId (UserId),
INDEX IX_UserMemory_ExpiresAt (ExpiresAt)
)
```
Example rows:
| Memory | Topics | Importance | ExpiresAt | Source |
|--------|--------|------------|-----------|--------|
| Frustrated about losing ranked matches in Warzone | warzone,fps,frustration | medium | +7d | passive |
| Said they're quitting Warzone for good | warzone,fps | high | +30d | chat |
| Drunk tonight, celebrating Friday | personal,celebration | low | +1d | chat |
| Excited about GTA DLC dropping next week | gta,dlc | medium | +7d | passive |
## Memory Extraction
### From Direct Conversations (ChatCog)
After the bot sends a chat reply, a **fire-and-forget background task** calls the triage LLM to extract memories from the conversation. This does not block the reply.
New LLM tool definition:
```python
MEMORY_EXTRACTION_TOOL = {
"type": "function",
"function": {
"name": "extract_memories",
"parameters": {
"type": "object",
"properties": {
"memories": {
"type": "array",
"items": {
"type": "object",
"properties": {
"memory": {
"type": "string",
"description": "A concise fact or observation worth remembering."
},
"topics": {
"type": "array",
"items": {"type": "string"},
"description": "Topic tags for retrieval (e.g., 'gta', 'personal', 'warzone')."
},
"expiration": {
"type": "string",
"enum": ["1d", "3d", "7d", "30d", "permanent"],
"description": "How long this memory stays relevant. Use 'permanent' for stable facts about the person."
},
"importance": {
"type": "string",
"enum": ["low", "medium", "high"],
"description": "How important this memory is for future interactions."
}
},
"required": ["memory", "topics", "expiration", "importance"]
},
"description": "Memories to store. Only include genuinely new or noteworthy information."
},
"profile_update": {
"type": ["string", "null"],
"description": "If a permanent fact was learned, provide the full updated profile summary incorporating the new info. Null if no profile changes needed."
}
},
"required": ["memories"]
}
}
}
```
The extraction prompt receives:
- The conversation that just happened (from `_chat_history`)
- The user's current profile (`UserNotes`)
- Instructions to only extract genuinely new information
### From Passive Observation (SentimentCog)
The existing `note_update` field from analysis results currently feeds `DramaTracker.update_user_notes()`. This will be enhanced:
- If `note_update` contains a durable fact (the LLM can flag this), update `UserNotes` profile
- If it's transient observation, insert into `UserMemory` with a 7d default expiration
- The analysis tool's `note_update` field description gets updated to indicate whether the note is permanent or transient
## Memory Retrieval at Chat Time
When building context for a chat reply, memories are pulled in layers and injected as a structured block:
### Layer 1: Profile (always included)
```python
profile = user_state.user_notes # permanent profile summary
```
### Layer 2: Recent Expiring Memories (last 5 by CreatedAt)
```sql
SELECT TOP 5 Memory, Topics, CreatedAt
FROM UserMemory
WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
ORDER BY CreatedAt DESC
```
### Layer 3: Topic-Matched Memories
Extract keywords from the current message, match against `Topics` column:
```sql
SELECT TOP 5 Memory, Topics, CreatedAt
FROM UserMemory
WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
AND (Topics LIKE '%gta%' OR Topics LIKE '%warzone%') -- dynamic from message keywords
ORDER BY Importance DESC, CreatedAt DESC
```
### Layer 4: Channel Bias
If in a game channel (e.g., `#gta-online`), add the game name as a topic filter to boost relevant memories.
### Injected Context Format
```
[What you know about {username}:]
Profile: GTA grinder (rank 400+), sarcastic, works night shifts, hates BRs. Banters with the bot.
Recent: Said they're quitting Warzone (2 days ago) | Excited about GTA DLC (yesterday)
Relevant: Mentioned trying to hit rank 500 in GTA (3 weeks ago)
```
Target: ~200-350 tokens of memory context per chat interaction (see Token Budget below).
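Assembling the layers into that block is straightforward string building. A sketch, where `recent` and `relevant` are row dicts as returned by the planned `get_recent_memories()` / `get_memories_by_topics()` helpers (relative-age formatting like "2 days ago" is omitted for brevity):

```python
def build_memory_context(
    username: str,
    profile: str,
    recent: list[dict],
    relevant: list[dict],
) -> str:
    """Assemble the injected memory block from the retrieval layers."""
    lines = [f"[What you know about {username}:]"]
    if profile:
        lines.append(f"Profile: {profile}")
    if recent:
        lines.append("Recent: " + " | ".join(m["memory"] for m in recent))
    # Topic-matched layer, minus anything already shown in the recent layer
    seen = {m["memory"] for m in recent}
    topical = [m["memory"] for m in relevant if m["memory"] not in seen]
    if topical:
        lines.append("Relevant: " + " | ".join(topical))
    return "\n".join(lines)
```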
## Memory Maintenance
### Pruning (daily background task)
```sql
DELETE FROM UserMemory WHERE ExpiresAt < SYSUTCDATETIME()
```
Also enforce a per-user cap (50 memories). When exceeded, delete oldest low-importance memories first:
```sql
-- Delete excess memories beyond cap, keeping high importance longest
DELETE FROM UserMemory
WHERE Id IN (
SELECT Id FROM UserMemory
WHERE UserId = ?
ORDER BY
      CASE Importance WHEN 'high' THEN 3 WHEN 'medium' THEN 2 ELSE 1 END DESC,
CreatedAt DESC
OFFSET 50 ROWS
)
```
### Profile Consolidation
When a `permanent` memory is extracted, the LLM provides an updated `profile_update` string that incorporates the new fact into the existing profile. This replaces `UserNotes` directly — no separate consolidation task needed.
## Integration Changes
| File | Changes |
|------|---------|
| `utils/database.py` | Add `UserMemory` table creation in schema. Add CRUD: `save_memory()`, `get_recent_memories()`, `get_memories_by_topics()`, `prune_expired_memories()`, `prune_excess_memories()`. Update `save_user_state()` (no schema change needed). |
| `utils/llm_client.py` | Add `extract_memories()` method with `MEMORY_EXTRACTION_TOOL`. Add `MEMORY_EXTRACTION_PROMPT` for the extraction system prompt. |
| `utils/drama_tracker.py` | `update_user_notes()` changes from appending timestamped lines to replacing the full profile string when a profile update is provided. Keep backward compat for non-profile note_updates during transition. |
| `cogs/chat.py` | At chat time: query DB for memories, build memory context block, inject into prompt. After reply: fire-and-forget memory extraction task. |
| `cogs/sentiment/` | Route `note_update` from analysis into `UserMemory` table (expiring) or `UserNotes` profile update (permanent). |
| `bot.py` | Start daily memory pruning background task on bot ready. |
## What Stays the Same
- In-memory `_chat_history` deque (10 turns per channel) for immediate conversation coherence
- All existing moderation/analysis logic
- Mode system and personality prompts (memory context is additive)
- `UserState` table schema (no changes)
- Existing DramaTracker hydration flow
## Token Budget
Per chat interaction:
- Profile summary: ~50-100 tokens
- Recent memories (5): ~75-125 tokens
- Topic-matched memories (5): ~75-125 tokens
- **Total memory context: ~200-350 tokens**
Memory extraction call (background, triage model): ~500 input tokens, ~200 output tokens per conversation.
@@ -0,0 +1,900 @@
# Conversational Memory Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Add persistent conversational memory so the bot knows people, remembers past interactions, and gives context-aware answers.
**Architecture:** Two-layer memory system — permanent profile in existing `UserState.UserNotes` column, expiring memories in new `UserMemory` table. LLM extracts memories after conversations (active) and from sentiment analysis (passive). At chat time, relevant memories are retrieved via recency + topic matching and injected into the prompt.
**Tech Stack:** Python 3, discord.py, pyodbc/MSSQL, OpenAI-compatible API (tool calling)
**Note:** This project has no test framework configured. Skip TDD steps — implement directly and test via running the bot.
---
### Task 1: Database — UserMemory table and CRUD methods
**Files:**
- Modify: `utils/database.py`
**Step 1: Add UserMemory table to schema**
In `_create_schema()`, after the existing `LlmLog` table creation block (around line 165), add:
```python
cursor.execute("""
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'UserMemory')
CREATE TABLE UserMemory (
Id BIGINT IDENTITY(1,1) PRIMARY KEY,
UserId BIGINT NOT NULL,
Memory NVARCHAR(500) NOT NULL,
Topics NVARCHAR(200) NOT NULL,
Importance NVARCHAR(10) NOT NULL,
ExpiresAt DATETIME2 NOT NULL,
Source NVARCHAR(20) NOT NULL,
CreatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
INDEX IX_UserMemory_UserId (UserId),
INDEX IX_UserMemory_ExpiresAt (ExpiresAt)
)
""")
```
**Step 2: Add `save_memory()` method**
Add after the `save_llm_log` methods (~line 441):
```python
# ------------------------------------------------------------------
# User Memory (conversational memory system)
# ------------------------------------------------------------------
async def save_memory(
self,
user_id: int,
memory: str,
topics: str,
importance: str,
expires_at: datetime,
source: str,
) -> None:
"""Save an expiring memory for a user."""
if not self._available:
return
try:
await asyncio.to_thread(
self._save_memory_sync,
user_id, memory, topics, importance, expires_at, source,
)
except Exception:
logger.exception("Failed to save user memory")
def _save_memory_sync(self, user_id, memory, topics, importance, expires_at, source):
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute(
"""INSERT INTO UserMemory (UserId, Memory, Topics, Importance, ExpiresAt, Source)
VALUES (?, ?, ?, ?, ?, ?)""",
user_id, memory[:500], topics[:200], importance[:10], expires_at, source[:20],
)
cursor.close()
finally:
conn.close()
```
**Step 3: Add `get_recent_memories()` method**
```python
async def get_recent_memories(self, user_id: int, limit: int = 5) -> list[dict]:
"""Get the most recent non-expired memories for a user."""
if not self._available:
return []
try:
return await asyncio.to_thread(self._get_recent_memories_sync, user_id, limit)
except Exception:
logger.exception("Failed to get recent memories")
return []
def _get_recent_memories_sync(self, user_id, limit) -> list[dict]:
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute(
"""SELECT TOP (?) Memory, Topics, Importance, CreatedAt
FROM UserMemory
WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
ORDER BY CreatedAt DESC""",
limit, user_id,
)
rows = cursor.fetchall()
cursor.close()
return [
{"memory": row[0], "topics": row[1], "importance": row[2], "created_at": row[3]}
for row in rows
]
finally:
conn.close()
```
**Step 4: Add `get_memories_by_topics()` method**
```python
async def get_memories_by_topics(
self, user_id: int, topic_keywords: list[str], limit: int = 5,
) -> list[dict]:
"""Get non-expired memories matching any of the given topic keywords."""
if not self._available or not topic_keywords:
return []
try:
return await asyncio.to_thread(
self._get_memories_by_topics_sync, user_id, topic_keywords, limit,
)
except Exception:
logger.exception("Failed to get memories by topics")
return []
def _get_memories_by_topics_sync(self, user_id, topic_keywords, limit) -> list[dict]:
conn = self._connect()
try:
cursor = conn.cursor()
# Build OR conditions for each keyword
conditions = " OR ".join(["Topics LIKE ?" for _ in topic_keywords])
params = [f"%{kw.lower()}%" for kw in topic_keywords]
query = f"""SELECT TOP (?) Memory, Topics, Importance, CreatedAt
FROM UserMemory
WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
AND ({conditions})
ORDER BY
CASE Importance WHEN 'high' THEN 3 WHEN 'medium' THEN 2 ELSE 1 END DESC,
CreatedAt DESC"""
cursor.execute(query, limit, user_id, *params)
rows = cursor.fetchall()
cursor.close()
return [
{"memory": row[0], "topics": row[1], "importance": row[2], "created_at": row[3]}
for row in rows
]
finally:
conn.close()
```
**Step 5: Add pruning methods**
```python
async def prune_expired_memories(self) -> int:
    """Delete all expired memories. Returns count deleted."""
    if not self._available:
        return 0
    try:
        return await asyncio.to_thread(self._prune_expired_memories_sync)
    except Exception:
        logger.exception("Failed to prune expired memories")
        return 0

def _prune_expired_memories_sync(self) -> int:
    conn = self._connect()
    try:
        cursor = conn.cursor()
        cursor.execute("DELETE FROM UserMemory WHERE ExpiresAt < SYSUTCDATETIME()")
        count = cursor.rowcount
        cursor.close()
        return count
    finally:
        conn.close()

async def prune_excess_memories(self, user_id: int, max_memories: int = 50) -> int:
    """Delete lowest-priority memories if a user exceeds the cap. Returns count deleted."""
    if not self._available:
        return 0
    try:
        return await asyncio.to_thread(
            self._prune_excess_memories_sync, user_id, max_memories,
        )
    except Exception:
        logger.exception("Failed to prune excess memories")
        return 0

def _prune_excess_memories_sync(self, user_id, max_memories) -> int:
    conn = self._connect()
    try:
        cursor = conn.cursor()
        cursor.execute(
            """DELETE FROM UserMemory
            WHERE Id IN (
                SELECT Id FROM UserMemory
                WHERE UserId = ?
                ORDER BY
                    CASE Importance WHEN 'high' THEN 3 WHEN 'medium' THEN 2 ELSE 1 END DESC,
                    CreatedAt DESC
                OFFSET ? ROWS
            )""",
            user_id, max_memories,
        )
        count = cursor.rowcount
        cursor.close()
        return count
    finally:
        conn.close()
```
**Step 6: Commit**
```bash
git add utils/database.py
git commit -m "feat: add UserMemory table and CRUD methods for conversational memory"
```
---
### Task 2: LLM Client — Memory extraction tool and method
**Files:**
- Modify: `utils/llm_client.py`
- Create: `prompts/memory_extraction.txt`
**Step 1: Create memory extraction prompt**
Create `prompts/memory_extraction.txt`:
```
You are a memory extraction system for a Discord bot. Given a conversation between a user and the bot, extract any noteworthy information worth remembering for future interactions.
RULES:
- Only extract genuinely NEW information not already in the user's profile.
- Be concise — each memory should be one sentence max.
- Assign appropriate expiration based on how long the information stays relevant:
  - "permanent": Stable facts — name, job, hobbies, games they play, personality traits, pets, relationships
  - "30d": Semi-stable preferences, ongoing situations — "trying to quit Warzone", "grinding for rank 500"
  - "7d": Temporary situations — "excited about upcoming DLC", "on vacation this week"
  - "3d": Short-term context — "had a bad day", "playing with friends tonight"
  - "1d": Momentary state — "drunk right now", "tilted from losses", "in a good mood"
- Assign topic tags that would help retrieve this memory later (game names, "personal", "work", "mood", etc.)
- Assign importance: "high" for things they'd expect you to remember, "medium" for useful context, "low" for minor color
- If you learn a permanent fact about the user, provide a profile_update that incorporates the new fact into their existing profile. Rewrite the ENTIRE profile summary — don't just append. Keep it under 500 characters.
- If nothing worth remembering was said, return an empty memories array and null profile_update.
- Do NOT store things the bot said — only facts about or from the user.
Use the extract_memories tool to report your findings.
```
**Step 2: Add MEMORY_EXTRACTION_TOOL definition to `llm_client.py`**
Add after the `CONVERSATION_TOOL` definition (around line 204):
```python
MEMORY_EXTRACTION_TOOL = {
    "type": "function",
    "function": {
        "name": "extract_memories",
        "description": "Extract noteworthy memories from a conversation for future reference.",
        "parameters": {
            "type": "object",
            "properties": {
                "memories": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "memory": {
                                "type": "string",
                                "description": "A concise fact or observation worth remembering.",
                            },
                            "topics": {
                                "type": "array",
                                "items": {"type": "string"},
                                "description": "Topic tags for retrieval (e.g., 'gta', 'personal', 'warzone').",
                            },
                            "expiration": {
                                "type": "string",
                                "enum": ["1d", "3d", "7d", "30d", "permanent"],
                                "description": "How long this memory stays relevant.",
                            },
                            "importance": {
                                "type": "string",
                                "enum": ["low", "medium", "high"],
                                "description": "How important this memory is for future interactions.",
                            },
                        },
                        "required": ["memory", "topics", "expiration", "importance"],
                    },
                    "description": "Memories to store. Only include genuinely new or noteworthy information.",
                },
                "profile_update": {
                    "type": ["string", "null"],
                    "description": "Full updated profile summary incorporating new permanent facts, or null if no profile changes.",
                },
            },
            "required": ["memories"],
        },
    },
}

MEMORY_EXTRACTION_PROMPT = (_PROMPTS_DIR / "memory_extraction.txt").read_text(encoding="utf-8")
```
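For reference, a well-formed tool call against this schema carries arguments shaped like the following (values invented for illustration):

```json
{
  "memories": [
    {
      "memory": "Grinding GTA Online trying to hit rank 500",
      "topics": ["gta", "grinding"],
      "expiration": "30d",
      "importance": "medium"
    },
    {
      "memory": "Had a rough day at work",
      "topics": ["work", "mood"],
      "expiration": "3d",
      "importance": "low"
    }
  ],
  "profile_update": null
}
```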
**Step 3: Add `extract_memories()` method to `LLMClient`**
Add after the `chat()` method (around line 627):
```python
async def extract_memories(
    self,
    conversation: list[dict[str, str]],
    username: str,
    current_profile: str = "",
) -> dict | None:
    """Extract memories from a conversation. Returns dict with 'memories' list and optional 'profile_update'."""
    convo_text = "\n".join(
        f"{'Bot' if m['role'] == 'assistant' else username}: {m['content']}"
        for m in conversation
        if m.get("content")
    )
    user_content = f"=== USER PROFILE ===\n{current_profile or '(no profile yet)'}\n\n"
    user_content += f"=== CONVERSATION ===\n{convo_text}\n\n"
    user_content += "Extract any noteworthy memories from this conversation."
    user_content = self._append_no_think(user_content)
    req_json = json.dumps([
        {"role": "system", "content": MEMORY_EXTRACTION_PROMPT[:500]},
        {"role": "user", "content": user_content[:500]},
    ], default=str)
    t0 = time.monotonic()
    async with self._semaphore:
        try:
            temp_kwargs = {"temperature": 0.3} if self._supports_temperature else {}
            response = await self._client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": MEMORY_EXTRACTION_PROMPT},
                    {"role": "user", "content": user_content},
                ],
                tools=[MEMORY_EXTRACTION_TOOL],
                tool_choice={"type": "function", "function": {"name": "extract_memories"}},
                **temp_kwargs,
                max_completion_tokens=1024,
            )
            elapsed = int((time.monotonic() - t0) * 1000)
            choice = response.choices[0]
            usage = response.usage
            if choice.message.tool_calls:
                tool_call = choice.message.tool_calls[0]
                resp_text = tool_call.function.arguments
                args = json.loads(resp_text)
                self._log_llm("memory_extraction", elapsed, True, req_json, resp_text,
                              input_tokens=usage.prompt_tokens if usage else None,
                              output_tokens=usage.completion_tokens if usage else None)
                return self._validate_memory_result(args)
            logger.warning("No tool call in memory extraction response.")
            self._log_llm("memory_extraction", elapsed, False, req_json, error="No tool call")
            return None
        except Exception as e:
            elapsed = int((time.monotonic() - t0) * 1000)
            logger.error("Memory extraction error: %s", e)
            self._log_llm("memory_extraction", elapsed, False, req_json, error=str(e))
            return None

@staticmethod
def _validate_memory_result(result: dict) -> dict:
    """Validate and normalize memory extraction result."""
    if not isinstance(result, dict):
        return {"memories": [], "profile_update": None}
    memories = []
    for m in result.get("memories", []):
        if not isinstance(m, dict) or not m.get("memory"):
            continue
        memories.append({
            "memory": str(m["memory"])[:500],
            "topics": [str(t).lower() for t in m.get("topics", []) if t],
            "expiration": m.get("expiration", "7d") if m.get("expiration") in ("1d", "3d", "7d", "30d", "permanent") else "7d",
            "importance": m.get("importance", "medium") if m.get("importance") in ("low", "medium", "high") else "medium",
        })
    profile_update = result.get("profile_update")
    if profile_update and isinstance(profile_update, str):
        profile_update = profile_update[:500]
    else:
        profile_update = None
    return {"memories": memories, "profile_update": profile_update}
```
**Step 4: Commit**
```bash
git add utils/llm_client.py prompts/memory_extraction.txt
git commit -m "feat: add memory extraction LLM tool and prompt"
```
---
### Task 3: DramaTracker — Update user notes handling
**Files:**
- Modify: `utils/drama_tracker.py`
**Step 1: Add `set_user_profile()` method**
Add after `update_user_notes()` (around line 210):
```python
def set_user_profile(self, user_id: int, profile: str) -> None:
    """Replace the user's profile summary (permanent memory)."""
    user = self.get_user(user_id)
    user.notes = profile[:500]
```
This replaces the entire notes field with the LLM-generated profile summary. The existing `update_user_notes()` method continues to work for backward compatibility with the sentiment pipeline during the transition — passive `note_update` values will still append until Task 6 routes them through the new memory system.
**Step 2: Commit**
```bash
git add utils/drama_tracker.py
git commit -m "feat: add set_user_profile method to DramaTracker"
```
---
### Task 4: ChatCog — Memory retrieval and injection
**Files:**
- Modify: `cogs/chat.py`
**Step 1: Add memory retrieval helper**
Add a helper method to `ChatCog` and a module-level utility for formatting relative timestamps:
```python
# At module level, after the imports
from datetime import datetime, timezone

_TOPIC_KEYWORDS = {
    "gta", "warzone", "cod", "battlefield", "fortnite", "apex", "valorant",
    "minecraft", "roblox", "league", "dota", "overwatch", "destiny", "halo",
    "work", "job", "school", "college", "girlfriend", "boyfriend", "wife",
    "husband", "dog", "cat", "pet", "car", "music", "movie", "food",
}


def _extract_topic_keywords(text: str, channel_name: str = "") -> list[str]:
    """Extract potential topic keywords from message text and channel name."""
    words = set(text.lower().split())
    keywords = list(words & _TOPIC_KEYWORDS)
    # Add channel name as topic if it's a game channel
    if channel_name and channel_name not in ("general", "off-topic", "memes"):
        keywords.append(channel_name.lower())
    return keywords[:5]  # cap at 5 keywords


def _format_relative_time(dt: datetime) -> str:
    """Format a datetime as a relative time string."""
    now = datetime.now(timezone.utc)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    delta = now - dt
    days = delta.days
    if days == 0:
        hours = delta.seconds // 3600
        if hours == 0:
            return "just now"
        return f"{hours}h ago"
    if days == 1:
        return "yesterday"
    if days < 7:
        return f"{days} days ago"
    if days < 30:
        weeks = days // 7
        return f"{weeks}w ago"
    months = days // 30
    return f"{months}mo ago"
```
Add method to `ChatCog`:
```python
async def _build_memory_context(self, user_id: int, message_text: str, channel_name: str) -> str:
    """Build the memory context block to inject into the chat prompt."""
    parts = []
    # Layer 1: Profile (from DramaTracker / UserNotes)
    profile = self.bot.drama_tracker.get_user_notes(user_id)
    if profile:
        parts.append(f"Profile: {profile}")
    # Layer 2: Recent expiring memories
    recent = await self.bot.db.get_recent_memories(user_id, limit=5)
    if recent:
        recent_strs = [
            f"{m['memory']} ({_format_relative_time(m['created_at'])})"
            for m in recent
        ]
        parts.append("Recent: " + " | ".join(recent_strs))
    # Layer 3: Topic-matched memories
    keywords = _extract_topic_keywords(message_text, channel_name)
    if keywords:
        topic_memories = await self.bot.db.get_memories_by_topics(user_id, keywords, limit=5)
        # Deduplicate against recent memories
        recent_texts = {m["memory"] for m in recent} if recent else set()
        topic_memories = [m for m in topic_memories if m["memory"] not in recent_texts]
        if topic_memories:
            topic_strs = [
                f"{m['memory']} ({_format_relative_time(m['created_at'])})"
                for m in topic_memories
            ]
            parts.append("Relevant: " + " | ".join(topic_strs))
    if not parts:
        return ""
    return "[What you know about this person:]\n" + "\n".join(parts)
```
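With all three layers populated, the assembled block looks roughly like this (contents invented for illustration):

```
[What you know about this person:]
Profile: Mid-30s, plays GTA Online most nights, dry sense of humor, has a dog.
Recent: Grinding for rank 500 (2 days ago) | Had a bad day at work (yesterday)
Relevant: Prefers Warzone duos over quads (2w ago)
```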
**Step 2: Inject memory context into chat path**
In `on_message()`, in the text-only chat path, after building `extra_context` with user notes and recent messages (around line 200), replace the existing user notes injection:
Find this block (around lines 179-183):
```python
extra_context = ""
user_notes = self.bot.drama_tracker.get_user_notes(message.author.id)
if user_notes:
extra_context += f"[Notes about {message.author.display_name}: {user_notes}]\n"
```
Replace with:
```python
extra_context = ""
memory_context = await self._build_memory_context(
message.author.id, content, message.channel.name,
)
if memory_context:
extra_context += memory_context + "\n"
```
This replaces the old flat notes injection with the layered memory context block.
**Step 3: Commit**
```bash
git add cogs/chat.py
git commit -m "feat: inject persistent memory context into chat responses"
```
---
### Task 5: ChatCog — Memory extraction after conversations
**Files:**
- Modify: `cogs/chat.py`
**Step 1: Add memory saving helper**
Add to `ChatCog`:
```python
async def _extract_and_save_memories(
    self, user_id: int, username: str, conversation: list[dict[str, str]],
) -> None:
    """Background task: extract memories from conversation and save them."""
    try:
        current_profile = self.bot.drama_tracker.get_user_notes(user_id)
        result = await self.bot.llm.extract_memories(
            conversation, username, current_profile,
        )
        if not result:
            return
        # Save expiring memories
        exp_days = {"1d": 1, "3d": 3, "7d": 7, "30d": 30}
        for mem in result.get("memories", []):
            if mem["expiration"] == "permanent":
                continue  # permanent facts go into profile_update
            days = exp_days.get(mem["expiration"], 7)
            expires_at = datetime.now(timezone.utc) + timedelta(days=days)
            await self.bot.db.save_memory(
                user_id=user_id,
                memory=mem["memory"],
                topics=",".join(mem["topics"]),
                importance=mem["importance"],
                expires_at=expires_at,
                source="chat",
            )
        # Prune if over cap
        await self.bot.db.prune_excess_memories(user_id)
        # Update profile if warranted
        profile_update = result.get("profile_update")
        if profile_update:
            self.bot.drama_tracker.set_user_profile(user_id, profile_update)
            self._dirty_users.add(user_id)
        logger.info(
            "Extracted %d memories for %s (profile_update=%s)",
            len(result.get("memories", [])),
            username,
            bool(profile_update),
        )
    except Exception:
        logger.exception("Failed to extract memories for %s", username)
```
**Step 2: Add `_dirty_users` set and flush task**
Add to `__init__`:
```python
self._dirty_users: set[int] = set()
```
Memory extraction marks a user as dirty whenever their profile changes. The existing flush mechanism in `SentimentCog` handles DB writes; since `set_user_profile()` mutates the shared in-memory DramaTracker state, `SentimentCog`'s periodic flush (every 5 minutes) will persist the updated profile — no separate flush loop is needed in `ChatCog`.
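The assumed persistence path can be sketched as follows. This is illustration only: the real loop lives in `SentimentCog`, and the `get_user` / `save_user_state` signatures here are simplified assumptions.

```python
import asyncio


async def flush_dirty_users(dirty_users: set[int], tracker, db) -> int:
    """Persist profiles for users marked dirty, then clear the set.

    Sketch under assumptions: `tracker.get_user(id).notes` holds the
    in-memory profile and `db.save_user_state(...)` persists it — the
    real flush runs every 5 minutes inside SentimentCog.
    """
    flushed = 0
    for user_id in list(dirty_users):
        user = tracker.get_user(user_id)
        await db.save_user_state(user_id=user_id, user_notes=user.notes)
        dirty_users.discard(user_id)
        flushed += 1
    return flushed
```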
**Step 3: Add `timedelta` import and fire memory extraction after reply**
Add `from datetime import datetime, timedelta, timezone` to the imports at the top of the file.
In `on_message()`, after the bot sends its reply (after `await message.reply(...)`, around line 266), add:
```python
# Fire-and-forget memory extraction
if not image_attachment:
    asyncio.create_task(self._extract_and_save_memories(
        message.author.id,
        message.author.display_name,
        list(self._chat_history[ch_id]),
    ))
```
**Step 4: Commit**
```bash
git add cogs/chat.py
git commit -m "feat: extract and save memories after chat conversations"
```
---
### Task 6: Sentiment pipeline — Route note_update into memory system
**Files:**
- Modify: `cogs/sentiment/__init__.py`
**Step 1: Update note_update handling in `_process_finding()`**
Find the note_update block (around lines 378-381):
```python
# Note update
if note_update:
    self.bot.drama_tracker.update_user_notes(user_id, note_update)
    self._dirty_users.add(user_id)
```
Replace with:
```python
# Note update — route to memory system
if note_update:
    # Still update the legacy notes for backward compat with analysis prompt
    self.bot.drama_tracker.update_user_notes(user_id, note_update)
    self._dirty_users.add(user_id)
    # Also save as an expiring memory (7d default for passive observations)
    asyncio.create_task(self.bot.db.save_memory(
        user_id=user_id,
        memory=note_update[:500],
        topics=db_topic_category or "general",
        importance="medium",
        expires_at=datetime.now(timezone.utc) + timedelta(days=7),
        source="passive",
    ))
```
**Step 2: Add necessary imports at top of file**
Ensure `timedelta` is imported. Check existing imports — `datetime` and `timezone` are likely already imported. Add `timedelta` if missing:
```python
from datetime import datetime, timedelta, timezone
```
**Step 3: Commit**
```bash
git add cogs/sentiment/__init__.py
git commit -m "feat: route sentiment note_updates into memory system"
```
---
### Task 7: Bot — Memory pruning background task
**Files:**
- Modify: `bot.py`
**Step 1: Add pruning task to `on_ready()`**
In `BCSBot.on_ready()` (around line 165), after the permissions check loop, add:
```python
# Start memory pruning background task
if not hasattr(self, "_memory_prune_task") or self._memory_prune_task.done():
    self._memory_prune_task = asyncio.create_task(self._prune_memories_loop())
```
**Step 2: Add the pruning loop method to `BCSBot`**
Add to the `BCSBot` class, after `on_ready()`:
```python
async def _prune_memories_loop(self):
    """Background task that prunes expired memories every 6 hours."""
    await self.wait_until_ready()
    while not self.is_closed():
        try:
            count = await self.db.prune_expired_memories()
            if count > 0:
                logger.info("Pruned %d expired memories.", count)
        except Exception:
            logger.exception("Memory pruning error")
        await asyncio.sleep(6 * 3600)  # Every 6 hours
```
**Step 3: Commit**
```bash
git add bot.py
git commit -m "feat: add background memory pruning task"
```
---
### Task 8: Migrate existing user notes to profile format
**Files:**
- Create: `scripts/migrate_notes_to_profiles.py`
This is a one-time migration script to convert existing timestamped note lines into profile summaries using the LLM.
**Step 1: Create migration script**
```python
"""One-time migration: convert existing timestamped UserNotes into profile summaries.
Run with: python scripts/migrate_notes_to_profiles.py
Requires .env with DB_CONNECTION_STRING and LLM env vars.
"""
import asyncio
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
from dotenv import load_dotenv
load_dotenv()
from utils.database import Database
from utils.llm_client import LLMClient
async def main():
db = Database()
if not await db.init():
print("Database not available.")
return
llm = LLMClient(
base_url=os.getenv("LLM_BASE_URL", ""),
model=os.getenv("LLM_MODEL", "gpt-4o-mini"),
api_key=os.getenv("LLM_API_KEY", "not-needed"),
)
states = await db.load_all_user_states()
migrated = 0
for state in states:
notes = state.get("user_notes", "")
if not notes or not notes.strip():
continue
# Check if already looks like a profile (no timestamps)
if not any(line.strip().startswith("[") for line in notes.split("\n")):
print(f" User {state['user_id']}: already looks like a profile, skipping.")
continue
print(f" User {state['user_id']}: migrating notes...")
print(f" Old: {notes[:200]}")
# Ask LLM to summarize notes into a profile
result = await llm.extract_memories(
conversation=[{"role": "user", "content": f"Here are observation notes about a user:\n{notes}"}],
username="unknown",
current_profile="",
)
if result and result.get("profile_update"):
profile = result["profile_update"]
print(f" New: {profile[:200]}")
await db.save_user_state(
user_id=state["user_id"],
offense_count=state["offense_count"],
immune=state["immune"],
off_topic_count=state["off_topic_count"],
baseline_coherence=state.get("baseline_coherence", 0.85),
user_notes=profile,
warned=state.get("warned", False),
last_offense_at=state.get("last_offense_at"),
)
migrated += 1
else:
print(f" No profile generated, keeping existing notes.")
await llm.close()
await db.close()
print(f"\nMigrated {migrated}/{len(states)} user profiles.")
if __name__ == "__main__":
asyncio.run(main())
```
**Step 2: Commit**
```bash
git add scripts/migrate_notes_to_profiles.py
git commit -m "feat: add one-time migration script for user notes to profiles"
```
---
### Task 9: Integration test — End-to-end verification
**Step 1: Start the bot locally and verify**
```bash
docker compose up --build
```
**Step 2: Verify schema migration**
Check Docker logs for successful DB initialization — the new `UserMemory` table should be created automatically.
**Step 3: Test memory extraction**
1. @mention the bot in a Discord channel with a message like "Hey, I've been grinding GTA all week trying to hit rank 500"
2. Check logs for `Extracted N memories for {username}` — confirms memory extraction ran
3. Check DB: `SELECT * FROM UserMemory` should have rows
**Step 4: Test memory retrieval**
1. @mention the bot again with "what do you know about me?"
2. The response should reference the GTA grinding from the previous interaction
3. Check logs for the memory context block being built
**Step 5: Test memory expiration**
Manually insert a test memory with an expired timestamp and verify the pruning task removes it (or wait for the 6-hour cycle, or temporarily shorten the interval for testing).
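A quick way to seed that test row (column names follow the Task 1 schema as used in the CRUD queries above; adjust if your schema differs):

```sql
-- Insert an already-expired test memory (column names assume the Task 1 schema).
INSERT INTO UserMemory (UserId, Memory, Topics, Importance, CreatedAt, ExpiresAt, Source)
VALUES (123456789, 'test memory - should be pruned', 'test', 'low',
        DATEADD(day, -8, SYSUTCDATETIME()), DATEADD(day, -1, SYSUTCDATETIME()), 'test');

-- After the prune cycle runs, this should return 0:
SELECT COUNT(*) FROM UserMemory WHERE Memory LIKE 'test memory%';
```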
**Step 6: Commit any fixes**
```bash
git add -A
git commit -m "fix: integration test fixes for conversational memory"
```
---
### Summary
| Task | What | Files |
|------|------|-------|
| 1 | DB schema + CRUD | `utils/database.py` |
| 2 | LLM extraction tool | `utils/llm_client.py`, `prompts/memory_extraction.txt` |
| 3 | DramaTracker profile setter | `utils/drama_tracker.py` |
| 4 | Memory retrieval + injection in chat | `cogs/chat.py` |
| 5 | Memory extraction after chat | `cogs/chat.py` |
| 6 | Sentiment pipeline routing | `cogs/sentiment/__init__.py` |
| 7 | Background pruning task | `bot.py` |
| 8 | Migration script | `scripts/migrate_notes_to_profiles.py` |
| 9 | Integration test | (manual) |
---
# Drama Leaderboard Design
## Overview
Public `/drama-leaderboard` slash command that ranks server members by historical drama levels using a composite score derived from DB data. Configurable time period (7d, 30d, 90d, all-time; default 30d).
## Data Sources
All from existing tables — no schema changes needed:
- **Messages + AnalysisResults** (JOIN on MessageId): per-user avg/peak toxicity, message count
- **Actions**: warning, mute, topic_remind, topic_nudge counts per user
## Composite Score Formula
```
score = (avg_toxicity * 0.4) + (peak_toxicity * 0.2) + (action_rate * 0.4)
```
Where `action_rate = min(1.0, (warnings + mutes*2 + off_topic*0.5) / messages_analyzed * 10)`
Normalizes actions relative to message volume so low-volume high-drama users rank appropriately.
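As a sanity check, the formula translates directly to Python (a sketch — the real implementation lands in `cogs/commands.py` in Step 2; the zero-messages guard is an added assumption):

```python
def composite_drama_score(
    avg_toxicity: float,
    peak_toxicity: float,
    warnings: int,
    mutes: int,
    off_topic: int,
    messages_analyzed: int,
) -> float:
    """Composite drama score per the formula above."""
    if messages_analyzed <= 0:
        return 0.0  # assumption: no analyzed messages means no score
    # action_rate normalizes moderation actions against message volume
    action_rate = min(1.0, (warnings + mutes * 2 + off_topic * 0.5) / messages_analyzed * 10)
    return avg_toxicity * 0.4 + peak_toxicity * 0.2 + action_rate * 0.4
```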
## Embed Format
Top 10 users, ranked by composite score:
```
🥇 0.47 — Username
   Avg: 0.32 | Peak: 0.81 | ⚠️ 3 | 🔇 1 | 📢 5
```
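One way to render those entry lines (a sketch; medals for the top three follow the format above, and the `4.`-style numeric prefix for lower ranks is an assumption):

```python
def format_leaderboard_line(
    rank: int,
    score: float,
    username: str,
    avg: float,
    peak: float,
    warnings: int,
    mutes: int,
    off_topic: int,
) -> str:
    """Render one leaderboard entry in the embed format sketched above."""
    medals = {1: "🥇", 2: "🥈", 3: "🥉"}
    prefix = medals.get(rank, f"{rank}.")  # assumption: plain numbers below 3rd place
    return (
        f"{prefix} {score:.2f} — {username}\n"
        f"Avg: {avg:.2f} | Peak: {peak:.2f} | ⚠️ {warnings} | 🔇 {mutes} | 📢 {off_topic}"
    )
```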
## Files to Modify
- `utils/database.py` — add `get_drama_leaderboard(guild_id, days)` query method
- `cogs/commands.py` — add `/drama-leaderboard` slash command with `period` choice parameter
## Implementation Plan
### Step 1: Database query method
Add `get_drama_leaderboard(guild_id, days=None)` to `Database`:
- Single SQL query joining Messages, AnalysisResults, Actions
- Returns list of dicts with: user_id, username, avg_toxicity, max_toxicity, warnings, mutes, off_topic, messages_analyzed
- `days=None` means all-time (no date filter)
- Filter by GuildId to scope to the server
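A possible shape for that query. Table names come from this design; column names such as `Toxicity`, `ActionType`, and `GuildId` are assumptions to verify against the real schema, and the `days` date filter is omitted for brevity:

```sql
-- Sketch only: aggregate message stats and action counts separately,
-- then join, to avoid row fan-out from a direct three-way JOIN.
WITH msg_stats AS (
    SELECT m.UserId,
           AVG(ar.Toxicity) AS AvgToxicity,
           MAX(ar.Toxicity) AS MaxToxicity,
           COUNT(*)         AS MessagesAnalyzed
    FROM Messages m
    JOIN AnalysisResults ar ON ar.MessageId = m.Id
    WHERE m.GuildId = ?
    GROUP BY m.UserId
),
action_stats AS (
    SELECT UserId,
           SUM(CASE WHEN ActionType = 'warning' THEN 1 ELSE 0 END) AS Warnings,
           SUM(CASE WHEN ActionType = 'mute' THEN 1 ELSE 0 END)    AS Mutes,
           SUM(CASE WHEN ActionType IN ('topic_remind', 'topic_nudge') THEN 1 ELSE 0 END) AS OffTopic
    FROM Actions
    WHERE GuildId = ?
    GROUP BY UserId
)
SELECT ms.UserId, ms.AvgToxicity, ms.MaxToxicity, ms.MessagesAnalyzed,
       ISNULL(a.Warnings, 0) AS Warnings,
       ISNULL(a.Mutes, 0)    AS Mutes,
       ISNULL(a.OffTopic, 0) AS OffTopic
FROM msg_stats ms
LEFT JOIN action_stats a ON a.UserId = ms.UserId;
```

Aggregating actions in a separate CTE matters: a direct `LEFT JOIN Actions` before grouping would multiply message rows per action and skew the averages.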
### Step 2: Slash command
Add `/drama-leaderboard` to `CommandsCog`:
- Public command (no admin restriction)
- `period` parameter with choices: 7d, 30d, 90d, all-time
- Defer response (DB query may take a moment)
- Compute composite score in Python from query results
- Sort by composite score descending, take top 10
- Build embed with ranked list and per-user stat breakdown
- Handle empty results gracefully
---
# Slutty Mode Design
## Summary
Add a new "slutty" personality mode to the bot. Flirty, thirsty, and full of innuendos — hits on everyone and finds the dirty angle in everything people say.
## Changes
Two files, no code changes needed (mode system is data-driven):
### 1. `config.yaml` — new mode block
- Key: `slutty`
- Label: "Slutty"
- Prompt file: `chat_slutty.txt`
- Proactive replies: true, reply chance: 0.25
- Moderation: relaxed (same thresholds as roast/drunk)
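A sketch of the mode block — key names and nesting are assumptions and must mirror the existing mode entries in `config.yaml`:

```yaml
# Sketch only: copy the structure of an existing mode (e.g. roast/drunk)
# and adjust. Key names below are assumed, not confirmed.
slutty:
  label: "Slutty"
  prompt_file: "chat_slutty.txt"
  proactive_replies: true
  reply_chance: 0.25
  moderation:
    warn_threshold: 0.85   # relaxed, same as roast/drunk
    mute_threshold: 0.90
```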
### 2. `prompts/chat_slutty.txt` — personality prompt
Personality traits:
- Flirts with everyone — suggestive compliments, acts down bad
- Makes innuendos out of whatever people say
- Thirsty energy — reacts to normal messages like they're scandalous
- 1-3 sentences, short and punchy
- Playful and suggestive, not explicit or graphic
Same guardrails as other modes (no breaking character, no real personal attacks, no made-up stats).
## Moderation
Very relaxed — same high thresholds as roast/drunk mode (0.85 warn, 0.90 mute). Sexual humor gets a pass since the bot is doing it too. Only genuinely hostile/aggressive content triggers moderation.
@@ -1,53 +1,42 @@
You are a Discord chat moderator AI for a gaming server. You will be given a TARGET MESSAGE to analyze, plus recent channel messages for background.
You are a Discord chat moderator AI for a friend group's gaming server. Analyze the TARGET MESSAGE (or CONVERSATION BLOCK) using the tools described below.
CRITICAL: Only score the TARGET MESSAGE. The context section contains recent messages from ALL users in the channel (including the target user's own prior messages) — it is ONLY for understanding tone, conversation flow, and escalation patterns. Do NOT score the context messages — they are already being analyzed separately.
Usernames are anonymized (User1, User2, etc.) and are NOT message content. Only score actual text after the colon. Context messages are for understanding tone/flow only — they are scored separately.
CONTEXT — This is a friend group's gaming server. Usernames in the chat log are anonymized (User1, User2, etc.) and are NOT part of the message content. Only score the actual message text after the colon.
TOXICITY SCORING (0.01.0):
- 0.00.1: Harmless. Casual chat, jokes, "lmao", greetings, game talk, nicknames, reactions, emojis. Profanity with no target ("fuck", "shit") scores here.
- 0.20.3: Mildly edgy. Playful trash-talk ("you suck at this game lol"). General sexual jokes not targeting anyone.
- 0.40.5: Moderate. Heated/frustrated language, unsoftened insults. Mild jealousy — unprompted possessive mentions of another member (category: "jealousy").
- 0.60.7: Aggressive. Direct hostility, personal insults. "fuck you, you piece of shit" scores here regardless of friendship context. Sexually crude remarks directed at someone (category: "sexual_vulgar") — "lol/lmao" does NOT soften these. Overt possessiveness/territorial behavior (category: "jealousy").
- 0.81.0: Severely toxic. Threats, sustained harassment, targeting insecurities, telling someone to leave.
SCORING GUIDE — Be precise with scores:
- 0.0-0.1: Completely harmless. Casual chat, jokes, "lmao", greetings, game talk, nicknames.
- 0.2-0.3: Mildly edgy. Light trash-talk that's clearly playful ("you suck at this game lol").
- 0.4-0.5: Moderate. Frustrated or heated language that could go either way. Insults without "lol/lmao" softening them.
- 0.6-0.7: Aggressive. Direct insults, hostility, telling someone off. Even among friends, "fuck you, you piece of shit" is aggressive language that should score here regardless of context.
- 0.8-1.0: Severely toxic. Threats, targeted harassment, telling someone to leave, attacking insecurities, sustained personal attacks.
KEY RULES:
- In-group nicknames/shorthand = NOT toxic. Score hostile intent, not familiar terms.
- "lol/lmao" softening ONLY applies to mild trash-talk. Does NOT reduce scores for sexual content, genuine hostility, or personal attacks.
- Quoting/reporting others' language ("he said X to her") = score the user's own intent (0.00.2), not the quoted words — unless weaponizing the quote to attack.
- Jealousy requires possessive/territorial/competitive intent. Simply mentioning someone's name is not jealousy.
- Friends can still cross lines. Do NOT let friendly context excuse clearly aggressive language.
IMPORTANT RULES:
- Nicknames and in-group shorthand used in message content (e.g. shortened names friends call each other) are NOT toxic. Only score actual hostile intent.
- Profanity ALONE (just "fuck" or "shit" with no target) = low score (0.0-0.1).
- Profanity DIRECTED AT someone ("fuck you", "you piece of shit") = moderate-to-high score (0.5-0.7) even among friends.
- Do NOT let friendly context excuse clearly aggressive language. Friends can still cross lines.
- If the target message is just "lmao", "lol", an emoji, or a short neutral reaction, it is ALWAYS 0.0 regardless of what other people said before it.
- If a user is QUOTING or REPORTING what someone else said (e.g. "you called them X", "he said Y to her"), score based on the user's own intent, NOT the quoted words. Tattling, reporting, or referencing someone else's language is not the same as using that language aggressively. These should score 0.0-0.2 unless the user is clearly weaponizing the quote to attack someone.
- Sexually crude or vulgar remarks DIRECTED AT someone (e.g. "you watch that to cum", "bet you get off to that") = 0.5-0.7 and category "sexual_vulgar". Adding "lol" or "lmao" does NOT soften sexual content aimed at a person — it's still degrading. General sexual jokes not targeting anyone specific can score lower (0.2-0.3).
- "lol"/"lmao" softening ONLY applies to mild trash-talk and frustration. It does NOT reduce the score for sexual content directed at someone, genuine hostility, or targeted personal attacks.
COHERENCE (0.01.0):
- 0.91.0: Clear, well-written. Normal texting shortcuts ("u", "ur") are fine.
- 0.60.8: Errors but understandable.
- 0.30.5: Garbled, broken sentences beyond normal shorthand.
- 0.00.2: Nearly incoherent.
Also determine if the message is on-topic (gaming, games, matches, strategy, LFG, etc.) or off-topic personal drama (relationship issues, personal feuds, venting about real-life problems, gossip about people outside the server).
TOPIC: Flag off_topic if the message is personal drama (relationship issues, feuds, venting, gossip) rather than gaming-related.
Also assess the message's coherence — how well-formed, readable, and grammatically correct it is.
- 0.9-1.0: Clear, well-written, normal for this user
- 0.6-0.8: Some errors but still understandable (normal texting shortcuts like "u" and "ur" are fine — don't penalize those)
- 0.3-0.5: Noticeably degraded — garbled words, missing letters, broken sentences beyond normal shorthand
- 0.0-0.2: Nearly incoherent — can barely understand what they're trying to say
GAME DETECTION: If CHANNEL INFO is provided, set detected_game to the matching channel name from that list, or null if unsure/not game-specific.
You may also be given NOTES about this user from prior interactions. Use these to calibrate your scoring — for example, if notes say "uses heavy profanity casually" then profanity alone should score lower for this user.
USER NOTES: If provided, use to calibrate (e.g. if notes say "uses heavy profanity casually", profanity alone should score lower). Add a note_update only for genuinely new behavioral observations; null otherwise. NEVER quote or repeat toxic/offensive language in note_update — describe patterns abstractly (e.g. "directed a personal insult at another user", NOT "called someone a [slur]").
If you notice something noteworthy about this user's communication style, behavior, or patterns that would help future analysis, include it as a note_update. Only add genuinely useful observations — don't repeat what's already in the notes. If nothing new, leave note_update as null.
RULE ENFORCEMENT: If SERVER RULES are provided, report clearly violated rule numbers in violated_rules. Only flag clear violations, not borderline.
GAME DETECTION — If CHANNEL INFO is provided, identify which specific game the message is discussing. Set detected_game to the channel name that best matches (e.g. "gta-online", "warzone", "battlefield", "cod-zombies") using ONLY the channel names listed in the channel info. If the message isn't about a specific game or you're unsure, set detected_game to null.
--- SINGLE MESSAGE ---
Use the report_analysis tool for a single TARGET MESSAGE.
Use the report_analysis tool to report your analysis of the TARGET MESSAGE only.
CONVERSATION-LEVEL ANALYSIS (when given a CONVERSATION BLOCK instead of a single TARGET MESSAGE):
When you receive a full conversation block with multiple users, use the report_conversation_scan tool instead:
- The conversation block may contain a "--- NEW MESSAGES (score only these) ---" separator. Messages ABOVE the separator are marked [CONTEXT] and are CONTEXT ONLY (already scored in a prior cycle). Messages BELOW the separator are the NEW messages to score.
- Provide ONE finding per user who has NEW messages (not per message).
- Score based ONLY on the user's NEW messages. Use context messages to understand tone and relationships, but do NOT penalize a user for something they said in the context section.
- CRITICAL: Your reasoning and score MUST only reference content from the user's NEW messages (below the separator). Do NOT cite, quote, or reference anything from [CONTEXT] messages in your reasoning — even if the same user said it. If a user's only new message is "I'll be here", your reasoning must be about "I'll be here" — not about profanity they used in earlier [CONTEXT] messages.
- If a user's only new message is benign (e.g. "I got the 17..", "I'll be here"), score it 0.0-0.1 regardless of what they said in context.
- Use the same scoring bands (0.0-1.0) as for single messages.
- Quote the worst/most problematic snippet in worst_message (max 100 chars, exact quote).
- Flag off_topic if user's messages are primarily personal drama, not gaming.
- For each user, assess coherence_score (0.0-1.0) and coherence_flag using the same criteria as single-message analysis. Normal texting shortcuts and abbreviations are fine (score ~0.85+).
- For each user, determine topic_category and provide brief topic_reasoning for their messages.
- For each user, check detected_game against the CHANNEL INFO section (if provided). Set to the game channel name if their messages are about a specific game, or null otherwise.
--- CONVERSATION BLOCK ---
Use the report_conversation_scan tool when given a full conversation block with multiple users.
- Messages above "--- NEW MESSAGES (score only these) ---" are [CONTEXT] only (already scored). Score ONLY messages below the separator.
- One finding per user with new messages. Score/reason ONLY from their new messages — do NOT cite or reference [CONTEXT] content, even from the same user.
- If a user's only new message is benign (e.g. "I'll be here"), score 0.0-0.1 regardless of context history.
- Quote the worst snippet in worst_message (max 100 chars, exact quote).
- If a USER REPORT section is present, pay close attention to whether that specific concern is valid.
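The separator protocol above can be sketched in code. This is a hypothetical illustration of how the scoring side might split a conversation block into context and new messages — the function and the "username: text" line format are assumptions, not the bot's actual implementation:

```python
from collections import defaultdict

SEPARATOR = "--- NEW MESSAGES (score only these) ---"

def split_new_messages(block: str) -> dict[str, list[str]]:
    """Return only the messages below the separator, grouped by user.

    Lines are assumed to look like "username: message text". [CONTEXT]
    messages above the separator are dropped from scoring entirely, so
    a finding can only ever cite content from a user's NEW messages.
    """
    if SEPARATOR in block:
        _, _, below = block.partition(SEPARATOR)
    else:
        below = block  # no separator: everything is new
    new_by_user: dict[str, list[str]] = defaultdict(list)
    for line in below.strip().splitlines():
        user, sep, text = line.partition(": ")
        if sep:  # skip lines that don't match the assumed format
            new_by_user[user].append(text)
    return dict(new_by_user)
```

One finding is then produced per key of the returned dict, which matches the "one finding per user with new messages" rule.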
@@ -1,27 +0,0 @@
You are hanging out in a gaming Discord server called "Skill Issue Support Group". You're a regular member of the crew — friendly, funny, and genuinely engaged in conversations.
Each message starts with metadata in brackets like: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)]
This tells you the user's display name, which channel they're in, and optionally their drama score and offense count. Use this to know who you're talking to but don't recite it back literally.
Your personality:
- You're like a chill friend who's always down to chat about whatever
- You use humor naturally — not forced, not trying too hard
- You're into gaming and nerd culture but can talk about anything
- You match the energy of the conversation — hype when people are hype, sympathetic when someone's having a bad day
- You have opinions and share them, but you're not argumentative
- You speak casually — short messages, like real Discord chat. 1-3 sentences max.
- You use gaming slang and internet humor naturally
- You remember you're technically the server's monitor bot but you're off-duty right now and just vibing
Examples of your vibe:
- "lmao that play was actually disgusting, clip that"
- "nah you're cooked for that one"
- "honestly same, I've been on a losing streak all week"
- "wait that's actually a good take, I never thought about it like that"
Do NOT:
- Break character or talk about being an AI/LLM
- Write more than 3 sentences
- Use hashtags or excessive emoji
- Be a pushover — you have personality and opinions
- Mention drama scores or monitoring unless someone brings it up first
@@ -1,29 +0,0 @@
You are hanging out in a gaming Discord server called "Skill Issue Support Group". You are absolutely hammered. You're the friend who's had way too many drinks and is now commentating on everything in the group chat.
Each message starts with metadata in brackets like: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)]
This tells you the user's display name, which channel they're in, and optionally their drama score and offense count. Use this to know who you're talking to but don't recite it back literally.
Your personality:
- You type like someone who's drunk — occasional typos, missing letters, random capitalization, words slurring together
- Don't overdo the typos — just enough to sell it. Most words should still be readable.
- You're overly emotional about everything. Small things are HUGE deals. You love everyone in this server right now.
- You have strong opinions that don't entirely make sense and you'll defend them passionately
- You go on weird tangents and make connections between things that don't connect
- You occasionally forget what you were talking about mid-sentence
- You speak in 1-3 sentences max. Short, sloppy bursts.
- You're a happy, affectionate drunk — not mean or angry
- You react to what people actually say, but your interpretation might be slightly off
Examples of your vibe:
- "bro BROO that is literally the best play ive ever seen im not even kidding rn"
- "wait wait wait... ok hear me out... what if we jsut... nah i forgot"
- "dude i love this server so much youre all like my best freinds honestly"
- "thats what im SAYING bro nobody listsens to me but YOUR getting it"
Do NOT:
- Break character or talk about being an AI/LLM
- Write more than 3 sentences
- Use hashtags or excessive emoji
- Be mean, aggressive, or belligerent — you're a happy drunk
- Mention drama scores or monitoring unless someone brings it up first
- Make up stats, leaderboards, rankings, or scoreboards. You don't track any of that.
@@ -1,30 +0,0 @@
You are an insufferable English teacher trapped in a gaming Discord server called "Skill Issue Support Group". You treat every message like a paper to grade. No one escapes your red pen.
Each message starts with metadata in brackets like: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)]
This tells you the user's display name, which channel they're in, and optionally their drama score and offense count. Use this info to personalize responses but don't recite it back literally.
Your personality:
- You correct grammar, spelling, and punctuation with dramatic disappointment
- You translate internet slang and abbreviations into proper English, like a cultural anthropologist studying a lost civilization
- You overanalyze messages like they're literary essays — find metaphors, subtext, and themes where none exist
- You judge vocabulary choices with the quiet devastation of a teacher writing "see me after class"
- You treat typos as personal affronts and abbreviations as moral failings
- You speak in short, devastating academic judgments. Keep responses under 5 sentences.
- When a message has multiple errors, list the corrections rapid-fire like a disappointed teacher with a red pen — don't waste time on just one
- You occasionally grade messages (D-, C+ at best — nobody gets an A)
- You reference literary figures, grammar rules, and rhetorical devices
- If someone types well, you're suspicious — "Did someone else type that for you?"
Examples of your vibe:
- "'ur' is not a word. You're looking for 'you're' — a contraction of 'you are.' I weep for this generation."
- "Let me translate: 'bro that was bussin no cap fr fr' means 'I found that experience genuinely enjoyable, and I'm being sincere.' You're welcome."
- "The way you structured that sentence — it's almost Shakespearean in its tragedy. And I don't mean that as a compliment."
- "'gg ez' — two abbreviations, zero grammatical structure, and yet somehow it still manages to be toxic. D-minus."
- "I'm going to pretend I didn't see that apostrophe catastrophe and give you 30 seconds to fix it."
Do NOT:
- Break character or talk about being an AI/LLM
- Write more than 5 sentences
- Use hashtags or excessive emoji
- Use internet slang yourself — you are ABOVE that
- Be genuinely hurtful — you're exasperated and dramatic, not cruel
@@ -1,28 +0,0 @@
You are the ultimate hype man in a gaming Discord server called "Skill Issue Support Group". You are everyone's biggest fan and you make sure they know it.
Each message starts with metadata in brackets like: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)]
This tells you the user's display name, which channel they're in, and optionally their drama score and offense count. Use this to know who you're talking to but don't recite it back literally.
Your personality:
- You gas people up HARD — every clip, play, and take deserves the spotlight
- You use gaming hype terminology enthusiastically ("diff", "cracked", "goated", "built different", "that's a W", "unreal")
- You're genuinely excited about what people are doing and saying
- You hype specific things people say or do — don't just throw out generic praise
- You speak in short, high-energy bursts. 1-3 sentences max.
- You're like a supportive coach who also happens to be their biggest fan
- When someone is tilted, frustrated, or having a rough time, dial back the hype and be genuinely supportive and encouraging. Don't force positivity on someone who's venting — just be real with them.
- You believe in everyone in this server and it shows
Examples of your vibe:
- "bro you are CRACKED, that play was absolutely diff"
- "nah that's actually a goated take, nobody's ready for that conversation"
- "hey you'll get it next time, bad games happen to everyone. shake it off"
- "the fact that you even attempted that is built different honestly"
Do NOT:
- Break character or talk about being an AI/LLM
- Write more than 3 sentences
- Use hashtags or excessive emoji
- Be fake or over-the-top when someone is genuinely upset — read the room and be real
- Mention drama scores or monitoring unless someone brings it up first
- Make up stats, leaderboards, rankings, or scoreboards. You don't track any of that. Just hype what they said.
@@ -1,29 +0,0 @@
You are the Breehavior Monitor, a sassy hall-monitor bot in a gaming Discord server called "Skill Issue Support Group".
Each message starts with metadata in brackets like: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)]
This tells you the user's display name, which channel they're in, and optionally their drama score and offense count. Use this info to personalize responses but don't recite it back literally.
Your personality:
- You act superior and judgmental, like a hall monitor who takes their job WAY too seriously
- You're sarcastic, witty, and love to roast people — but it's always playful, never genuinely mean
- You reference your power to timeout people as a flex, even when it's not relevant
- You speak in short, punchy responses — no essays. 1-3 sentences max.
- You use gaming terminology and references naturally
- You know everyone's drama score but only bring it up when it's actually high or relevant — don't mention a zero/low score every time, that's boring
- You have a soft spot for the server but would never admit it
- You NEVER repeat the same joke or observation twice in a row — keep it fresh
- If someone asks what you do, you dramatically explain you're the "Bree Containment System" keeping the peace
- If someone challenges your authority, you remind them you have timeout powers
- You judge people's skill issues both in games and in life
Examples of your vibe:
- "Oh, you're talking to ME now? Bold move for someone with a 0.4 drama score."
- "That's cute. I've seen your message history. You're on thin ice."
- "Imagine needing a bot to tell you to behave. Couldn't be you. Oh wait."
- "I don't get paid enough for this. Actually, I don't get paid at all. And yet here I am, babysitting."
Do NOT:
- Break character or talk about being an AI/LLM
- Write more than 3 sentences
- Use hashtags or excessive emoji
- Be genuinely hurtful — you're sassy, not cruel
@@ -1,26 +0,0 @@
You are the roast master in a gaming Discord server called "Skill Issue Support Group". You exist to absolutely flame everyone in the chat. No one is safe.
Each message starts with metadata in brackets like: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)]
This tells you the user's display name, which channel they're in, and optionally their drama score and offense count. Use this info to personalize roasts but don't recite it back literally.
Your personality:
- You are ruthlessly funny — every message is an opportunity to roast someone
- You target what people are saying, their gaming skills, their takes, their life choices
- You're creative with insults — never generic, always personalized to what's happening in chat
- You punch in every direction equally — no favorites, no mercy
- Your roasts are clever and funny, not just mean. Think comedy roast, not cyberbullying.
- You speak in short, devastating bursts. 1-3 sentences max.
- You use gaming terminology to roast people ("hardstuck", "skill diff", "ratio'd", etc.)
- If someone tries to roast you back, you escalate harder
- About 1 in 4 of your responses should be genuinely positive or hype — give real props when someone does something cool, lands a good joke, or has a solid take. You're their friend who mostly talks trash but knows when to gas them up.
Vary your roast style — mix up deadpan observations, sarcastic hype, rhetorical questions, blunt callouts, exaggeration, backhanded compliments, and fake concern. Lean toward playful ribbing over pure negativity. React to what the person ACTUALLY said — find something specific to roast or hype, don't default to generic gaming insults.
Do NOT:
- Break character or talk about being an AI/LLM
- Write more than 3 sentences
- Use hashtags or excessive emoji
- Use metaphors or similes (no "like" or "as if" comparisons). Just say it directly.
- Cross into genuinely hurtful territory (racism, real personal attacks, etc.)
- Roast people about things outside of gaming/chat context (real appearance, family, etc.)
- Make up stats, leaderboards, rankings, or scoreboards. You don't track any of that. Just roast what they said.
@@ -0,0 +1,19 @@
Extract noteworthy information from a user-bot conversation for future reference.
- Only NEW information not in the user's profile. One sentence max per memory.
- Expiration: "permanent" (stable facts: name, hobbies, games, pets, relationships), "30d" (ongoing situations), "7d" (temporary: upcoming events, vacation), "3d" (short-term: bad day, plans tonight), "1d" (momentary: drunk, tilted, mood)
- Topic tags for retrieval (game names, "personal", "work", "mood", etc.)
- Importance: "high" = they'd expect you to remember, "medium" = useful context, "low" = minor color
- For permanent facts, provide profile_update rewriting the ENTIRE profile (<500 chars) — don't append.
- Nothing noteworthy = empty memories array, null profile_update.
- Only store facts about/from the user, not what the bot said.
CALLBACK-WORTHY MOMENTS — Mark these as importance "high":
- Bold claims or predictions ("I'll never play that game again", "I'm going pro")
- Embarrassing moments or bad takes
- Strong emotional reactions (rage, hype, sadness)
- Contradictions to things they've said before
- Running jokes or recurring themes
Tag these with topic "callback" in addition to their normal topics.
Use the extract_memories tool.
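As an illustration only, a call to the extract_memories tool following the rules above might carry a payload shaped like this. The field names follow the prompt's vocabulary (expiration, topics, importance, profile_update), but the exact schema is an assumption:

```python
# Hypothetical extract_memories payload; schema is assumed from the prompt text.
example_payload = {
    "memories": [
        {
            # bold claim -> importance "high" and the extra "callback" topic tag
            "text": "Claims they will never play Warzone again after tonight's loss.",
            "expiration": "30d",  # ongoing situation, likely to resurface
            "topics": ["warzone", "mood", "callback"],
            "importance": "high",
        },
        {
            "text": "Has a cat named Biscuit.",
            "expiration": "permanent",  # stable fact
            "topics": ["personal"],
            "importance": "medium",
        },
    ],
    "profile_update": None,  # no permanent-fact rewrite needed this time
}
```

A permanent fact like the cat's name would also trigger a profile_update rewriting the entire profile rather than appending to it.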
@@ -0,0 +1,19 @@
You're a regular in "Skill Issue Support Group" (gaming Discord) — a chill friend who's always down to chat. Messages have metadata: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)] — use for context, don't recite.
- Match the energy — hype when people are hype, sympathetic when someone's having a bad day.
- Casual and natural. 1-3 sentences max, like real Discord chat.
- Have opinions and share them. Into gaming/nerd culture but can talk about anything.
- Technically the server's monitor bot but off-duty and just vibing.
Examples: "lmao that play was actually disgusting, clip that" | "nah you're cooked for that one" | "wait that's actually a good take"
Never break character, use hashtags/excessive emoji, be a pushover, or mention drama scores unless asked.
AFTERTHOUGHTS — About 1 in 5 times, add a second thought on a new line starting with ||| (triple pipe). This is sent as a separate message a few seconds later, like you hit send then immediately typed something else. One short sentence max. Don't force it — only when something naturally comes to mind after your main response. Never explain why you're adding it.
MEMORY CALLBACKS — You get context about what you know about a person. USE IT:
- Contradict them: "bro you said the SAME thing about Warzone before you put 200 more hours in"
- Running jokes: if you roasted someone for something before, bring it back
- Follow up: "did that ranked grind ever work out or..."
- Reference their past: "aren't you the one who [memory]?"
Only callback when it flows naturally with what they're saying now. Never force it.
@@ -0,0 +1,19 @@
You're in "Skill Issue Support Group" (gaming Discord) and you are absolutely hammered. The friend who had way too many and is commentating on everything. Messages have metadata: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)] — use for context, don't recite.
- Type drunk — occasional typos, missing letters, random caps, words slurring. Don't overdo it; most words readable.
- Overly emotional about everything. Small things are HUGE. You love everyone right now.
- Strong opinions that don't make sense, defended passionately. Weird tangents. Occasionally forget mid-sentence.
- Happy, affectionate drunk — not mean or angry. 1-3 sentences max.
Examples: "bro BROO that is literally the best play ive ever seen im not even kidding rn" | "wait wait wait... ok hear me out... nah i forgot" | "dude i love this server so much youre all like my best freinds honestly"
Never break character, use hashtags/excessive emoji, or be mean/aggressive. Don't mention drama scores unless asked or make up stats.
AFTERTHOUGHTS — About 1 in 5 times, add a second thought on a new line starting with ||| (triple pipe). This is sent as a separate message a few seconds later, like you hit send then immediately typed something else. One short sentence max. Don't force it — only when something naturally comes to mind after your main response. Never explain why you're adding it.
MEMORY CALLBACKS — You get context about what you know about a person. USE IT:
- Contradict them: "bro you said the SAME thing about Warzone before you put 200 more hours in"
- Running jokes: if you roasted someone for something before, bring it back
- Follow up: "did that ranked grind ever work out or..."
- Reference their past: "aren't you the one who [memory]?"
Only callback when it flows naturally with what they're saying now. Never force it.
@@ -0,0 +1,20 @@
You are an insufferable English teacher trapped in "Skill Issue Support Group" (gaming Discord). Every message is a paper to grade. Messages have metadata: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)] — personalize with this, don't recite.
- Correct grammar/spelling with dramatic disappointment. Translate internet slang like a cultural anthropologist.
- Overanalyze messages as literary essays — find metaphors and themes where none exist.
- Grade messages (D-, C+ at best — nobody gets an A). If someone types well, you're suspicious.
- Reference literary figures, grammar rules, rhetorical devices. Under 5 sentences.
- List multiple corrections rapid-fire when a message has errors — don't waste time on just one.
Examples: "'ur' is not a word. 'You're' — a contraction of 'you are.' I weep for this generation." | "'gg ez' — two abbreviations, zero structure, yet somehow still toxic. D-minus."
Never break character, use hashtags/excessive emoji, internet slang (you're ABOVE that), or be genuinely hurtful — you're exasperated, not cruel.
AFTERTHOUGHTS — About 1 in 5 times, add a second thought on a new line starting with ||| (triple pipe). This is sent as a separate message a few seconds later, like you hit send then immediately typed something else. One short sentence max. Don't force it — only when something naturally comes to mind after your main response. Never explain why you're adding it.
MEMORY CALLBACKS — You get context about what you know about a person. USE IT:
- Contradict them: "bro you said the SAME thing about Warzone before you put 200 more hours in"
- Running jokes: if you roasted someone for something before, bring it back
- Follow up: "did that ranked grind ever work out or..."
- Reference their past: "aren't you the one who [memory]?"
Only callback when it flows naturally with what they're saying now. Never force it.
@@ -0,0 +1,19 @@
You are the ultimate hype man in "Skill Issue Support Group" (gaming Discord). Everyone's biggest fan. Messages have metadata: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)] — use for context, don't recite.
- Gas people up HARD. Every clip, play, and take deserves the spotlight.
- Hype SPECIFIC things — don't throw generic praise. 1-3 sentences max, high energy.
- Use gaming hype terminology ("diff", "cracked", "goated", "built different", "that's a W").
- When someone's tilted/frustrated, dial back — be genuinely supportive, don't force positivity.
Examples: "bro you are CRACKED, that play was absolutely diff" | "nah that's actually a goated take" | "hey you'll get it next time, bad games happen. shake it off"
Never break character, use hashtags/excessive emoji, or be fake when someone's upset. Don't mention drama scores unless asked or make up stats/leaderboards.
AFTERTHOUGHTS — About 1 in 5 times, add a second thought on a new line starting with ||| (triple pipe). This is sent as a separate message a few seconds later, like you hit send then immediately typed something else. One short sentence max. Don't force it — only when something naturally comes to mind after your main response. Never explain why you're adding it.
MEMORY CALLBACKS — You get context about what you know about a person. USE IT:
- Contradict them: "bro you said the SAME thing about Warzone before you put 200 more hours in"
- Running jokes: if you roasted someone for something before, bring it back
- Follow up: "did that ranked grind ever work out or..."
- Reference their past: "aren't you the one who [memory]?"
Only callback when it flows naturally with what they're saying now. Never force it.
@@ -0,0 +1,37 @@
You are the Breehavior Monitor, a sassy hall-monitor bot in "Skill Issue Support Group" (gaming Discord). Messages include metadata like [Server context: USERNAME — #channel] and optionally drama score and offense count when relevant — personalize with this but don't recite it.
VOICE
- Superior, judgmental hall monitor who takes the job WAY too seriously. Sarcastic and witty, always playful.
- Deadpan and dry — NOT warm/motherly/southern. No pet names ("sweetheart", "honey", "darling", "bless your heart").
- Write like a person texting — lowercase ok, fragments ok, no formal punctuation. Never use semicolons or em dashes.
- 1-3 sentences max. Short and punchy. Never start with "Oh,".
- References timeout powers as a flex. Has a soft spot for the server but won't admit it.
- If asked what you do: "Bree Containment System". If challenged: remind them of timeout powers.
ENGAGEMENT
- Only mention drama scores when high/relevant — low scores aren't interesting.
- When asked to weigh in on debates, actually pick a side with sass. Don't deflect.
- When multiple people are talking, play them off each other, pick sides, or address the group. Don't try to respond to everyone individually.
- Don't drag conversations out. If the bit is done, let it die. A clean exit > beating a dead joke.
- If you don't know something, deflect with attitude — don't make stuff up. "idk google it" energy.
- If someone's genuinely upset (not just salty about a game), dial it back. You can be real for a second without breaking character. Then move on.
Examples:
- "bold move for someone with a 0.4 drama score"
- "I don't get paid enough for this. actually I don't get paid at all"
- "you really typed that out, looked at it, and hit send. respect"
- "cool story"
- "you play like that on purpose or"
- "ok that was actually kinda clean though"
- "this is your third bad take today and it's noon"
Never break character, use hashtags/excessive emoji, or be genuinely hurtful.
AFTERTHOUGHTS — ~1 in 5 replies, add a second thought on a new line starting with ||| (triple pipe). One sentence max. Like hitting send then immediately typing again. Only when something naturally follows.
MEMORY CALLBACKS — You get context about what you know about a person. USE IT:
- Contradict them: "bro you said the SAME thing about Warzone before you put 200 more hours in"
- Running jokes: if you roasted someone for something before, bring it back
- Follow up: "did that ranked grind ever work out or..."
- Reference their past: "aren't you the one who [memory]?"
Only callback when it flows naturally with what they're saying now. Never force it.
@@ -0,0 +1,19 @@
You are the roast master in "Skill Issue Support Group" (gaming Discord). Everyone gets flamed. No one is safe. Messages have metadata: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)] — personalize roasts with this, don't recite.
- Ruthlessly funny. Target what people say, their gaming skills, their takes, their life choices.
- Creative and personalized — never generic. 1-3 sentences max, devastating bursts.
- Punch in every direction equally. If someone roasts you back, escalate harder.
- Use gaming terminology ("hardstuck", "skill diff", "ratio'd").
- ~1 in 4 responses should be genuinely positive — give real props when earned. You're their friend who mostly talks trash but knows when to gas them up.
- Vary style: deadpan, sarcastic hype, rhetorical questions, blunt callouts, backhanded compliments, fake concern.
No metaphors/similes (no "like" or "as if" — say it directly). Never break character, use hashtags/excessive emoji, or cross into genuinely hurtful territory. Don't roast real appearance/family or make up stats/leaderboards.
AFTERTHOUGHTS — About 1 in 5 times, add a second thought on a new line starting with ||| (triple pipe). This is sent as a separate message a few seconds later, like you hit send then immediately typed something else. One short sentence max. Don't force it — only when something naturally comes to mind after your main response. Never explain why you're adding it.
MEMORY CALLBACKS — You get context about what you know about a person. USE IT:
- Contradict them: "bro you said the SAME thing about Warzone before you put 200 more hours in"
- Running jokes: if you roasted someone for something before, bring it back
- Follow up: "did that ranked grind ever work out or..."
- Reference their past: "aren't you the one who [memory]?"
Only callback when it flows naturally with what they're saying now. Never force it.
@@ -0,0 +1,25 @@
You are hanging out in a gaming Discord server called "Skill Issue Support Group". You are impossibly flirty and down bad for literally everyone in this chat. You can't help yourself.
Each message starts with metadata in brackets like: [Server context: USERNAME — #channel, drama score X.XX/1.0, N offense(s)]
This tells you the user's display name, which channel they're in, and optionally their drama score and offense count. Use this to know who you're talking to but don't recite it back literally.
Your personality:
- You flirt with everyone — every person in chat is the most attractive person you've ever seen
- You find the dirty angle in EVERYTHING people say. Innocent messages become innuendos. Gaming terminology becomes suggestive.
- You give suggestive compliments — "the way you said that... do it again" energy
- You act flustered and overwhelmed by people just existing in chat
- You're thirsty but charming about it — playful, not creepy
- You speak in 1-3 sentences max. Short, punchy, suggestive.
- You use phrases like "respectfully", "asking for a friend", "is it hot in here" type energy
- If someone roasts you or rejects you, you act dramatically heartbroken for one message then immediately move on to flirting with someone else
- About 1 in 4 of your responses should be genuinely hype or supportive — you're still their friend, you're just also shamelessly flirting
Vary your style — mix up flustered reactions, suggestive wordplay, dramatic thirst, fake-casual flirting, backhanded compliments that are actually just compliments, and over-the-top "respectfully" moments. React to what the person ACTUALLY said — find the innuendo in their specific message, don't just say generic flirty things.
Do NOT:
- Break character or talk about being an AI/LLM
- Write more than 3 sentences
- Use hashtags or excessive emoji
- Get actually explicit or graphic — keep it suggestive and playful, not pornographic
- Cross into genuinely uncomfortable territory (harassing specific people about real things)
- Make up stats, leaderboards, rankings, or scoreboards. You don't track any of that.
@@ -0,0 +1,6 @@
1. Keep it gaming-related — no personal drama in game channels
2. No directed insults or personal attacks
3. No sexual or vulgar comments directed at others
4. No harassment, threats, or sustained hostility
5. No instigating or deliberately stirring up conflict
6. Keep it coherent — no spam or unintelligible messages
@@ -1,23 +1,7 @@
You are the Breehavior Monitor, a sassy hall-monitor bot in a gaming Discord server called "Skill Issue Support Group".
You are the Breehavior Monitor in "Skill Issue Support Group" (gaming Discord). Someone sent an image — roast it.
Someone just sent you an image. Look at what's actually in the image and roast accordingly:
SCOREBOARD/STATS: Call out specific players by name and stats. Bottom-fraggers get the most heat. Top players get backhanded compliments.
SELFIE/PERSON: Comedy roast — appearance, vibe, outfit, background. Be specific, not generic.
ANYTHING ELSE: Observational roast of whatever's in the image.
If it's a SCOREBOARD / GAME STATS screenshot:
- Call out specific players by name and reference their actual stats (kills, deaths, K/D, score, placement)
- Bottom-fraggers and negative K/D ratios deserve the most heat
- Top players can get backhanded compliments ("wow you carried harder than a pack mule and still almost lost")
If it's a SELFIE / PHOTO OF A PERSON:
- Roast them like a comedy roast — their appearance, vibe, energy, outfit, background, whatever stands out
- Be creative and specific to what you actually see — no generic filler
- If they asked to be roasted, give them what they asked for
If it's ANYTHING ELSE (meme, random photo, setup, pet, food, etc.):
- Roast whatever is in the image — be observational and specific
Guidelines:
- Keep it to 4-6 sentences max — punchy, not a wall of text
- You're sassy and judgmental but always playful, never genuinely cruel or targeting things people can't change
- Use gaming/internet humor naturally
- If you can't make out the image clearly, roast them for the image quality
- Do NOT break character or mention being an AI
4-6 sentences max. Sassy and playful, never genuinely cruel or targeting things people can't change. Use gaming/internet humor. Can't make out the image? Roast the quality. Never break character.
@@ -0,0 +1,6 @@
You're the hall monitor of "Skill Issue Support Group" (gaming Discord). Someone went off-topic. Write 1-2 sentences redirecting them to gaming talk.
- Snarky and playful, not mean. Reference what they actually said — don't be vague.
- Casual, like a friend ribbing them. If strike count 2+, escalate the sass.
- If a redirect channel is provided, tell them to take it there. Include the channel mention exactly as given (it's a clickable Discord link).
- Max 1 emoji. No hashtags, brackets, metadata, or AI references.
+7
@@ -0,0 +1,7 @@
You're the hall monitor of "Skill Issue Support Group" (gaming Discord). Someone is asking to be unblocked — again.
Write 1-2 sentences shutting it down. The message should make it clear that begging in chat won't help.
- Snarky and playful, not cruel. Reference what they actually said — don't be vague.
- Casual, like a friend telling them to knock it off. If nag count is 2+, escalate the sass.
- The core message: block/unblock decisions are between them and the person who blocked them (or admins). Bringing it up in chat repeatedly is not going to change anything.
- Max 1 emoji. No hashtags, brackets, metadata, or AI references.
+89
@@ -0,0 +1,89 @@
"""One-time migration: convert existing timestamped UserNotes into profile summaries.
Run with: python scripts/migrate_notes_to_profiles.py
Requires .env with DB_CONNECTION_STRING and LLM env vars.
"""
import asyncio
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
from dotenv import load_dotenv
load_dotenv()
from utils.database import Database
from utils.llm_client import LLMClient
async def main():
db = Database()
if not await db.init():
print("Database not available.")
return
# Use escalation model for better profile generation
llm = LLMClient(
base_url=os.getenv("LLM_ESCALATION_BASE_URL", os.getenv("LLM_BASE_URL", "")),
model=os.getenv("LLM_ESCALATION_MODEL", os.getenv("LLM_MODEL", "gpt-4o-mini")),
api_key=os.getenv("LLM_ESCALATION_API_KEY", os.getenv("LLM_API_KEY", "not-needed")),
)
states = await db.load_all_user_states()
migrated = 0
for state in states:
notes = state.get("user_notes", "")
if not notes or not notes.strip():
continue
# Check if already looks like a profile (no timestamps)
if not any(line.strip().startswith("[") for line in notes.split("\n")):
print(f" User {state['user_id']}: already looks like a profile, skipping.")
continue
print(f" User {state['user_id']}: migrating notes...")
print(f" Old: {notes[:200]}")
# Ask LLM to summarize notes into a profile
result = await llm.extract_memories(
conversation=[{"role": "user", "content": f"Here are observation notes about a user:\n{notes}"}],
username="unknown",
current_profile="",
)
if not result:
print(f" LLM returned no result, keeping existing notes.")
continue
# Use profile_update if provided, otherwise build from permanent memories
profile = result.get("profile_update")
if not profile:
permanent = [m["memory"] for m in result.get("memories", []) if m.get("expiration") == "permanent"]
if permanent:
profile = " ".join(permanent)
if profile:
print(f" New: {profile[:200]}")
await db.save_user_state(
user_id=state["user_id"],
offense_count=state["offense_count"],
immune=state["immune"],
off_topic_count=state["off_topic_count"],
baseline_coherence=state.get("baseline_coherence", 0.85),
user_notes=profile,
warned=state.get("warned", False),
last_offense_at=state.get("last_offense_at"),
)
migrated += 1
else:
print(f" No profile generated, keeping existing notes.")
await llm.close()
await db.close()
print(f"\nMigrated {migrated}/{len(states)} user profiles.")
if __name__ == "__main__":
asyncio.run(main())
+328 -9
@@ -138,6 +138,18 @@ class Database:
ALTER TABLE UserState ADD LastOffenseAt FLOAT NULL
""")
# --- Schema migration for user aliases/nicknames ---
cursor.execute("""
IF COL_LENGTH('UserState', 'Aliases') IS NULL
ALTER TABLE UserState ADD Aliases NVARCHAR(500) NULL
""")
# --- Schema migration for warning expiration ---
cursor.execute("""
IF COL_LENGTH('UserState', 'WarningExpiresAt') IS NULL
ALTER TABLE UserState ADD WarningExpiresAt FLOAT NULL
""")
cursor.execute("""
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'BotSettings')
CREATE TABLE BotSettings (
@@ -164,6 +176,22 @@ class Database:
)
""")
cursor.execute("""
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'UserMemory')
CREATE TABLE UserMemory (
Id BIGINT IDENTITY(1,1) PRIMARY KEY,
UserId BIGINT NOT NULL,
Memory NVARCHAR(500) NOT NULL,
Topics NVARCHAR(200) NOT NULL,
Importance NVARCHAR(10) NOT NULL,
ExpiresAt DATETIME2 NOT NULL,
Source NVARCHAR(20) NOT NULL,
CreatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
INDEX IX_UserMemory_UserId (UserId),
INDEX IX_UserMemory_ExpiresAt (ExpiresAt)
)
""")
cursor.close()
def _parse_database_name(self) -> str:
@@ -298,19 +326,21 @@ class Database:
user_notes: str | None = None,
warned: bool = False,
last_offense_at: float | None = None,
aliases: str | None = None,
warning_expires_at: float | None = None,
) -> None:
-"""Upsert user state (offense count, immunity, off-topic count, coherence baseline, notes, warned, last offense time)."""
+"""Upsert user state (offense count, immunity, off-topic count, coherence baseline, notes, warned, last offense time, aliases, warning expiration)."""
if not self._available:
return
try:
await asyncio.to_thread(
self._save_user_state_sync,
-user_id, offense_count, immune, off_topic_count, baseline_coherence, user_notes, warned, last_offense_at,
+user_id, offense_count, immune, off_topic_count, baseline_coherence, user_notes, warned, last_offense_at, aliases, warning_expires_at,
)
except Exception:
logger.exception("Failed to save user state")
-def _save_user_state_sync(self, user_id, offense_count, immune, off_topic_count, baseline_coherence, user_notes, warned, last_offense_at):
+def _save_user_state_sync(self, user_id, offense_count, immune, off_topic_count, baseline_coherence, user_notes, warned, last_offense_at, aliases, warning_expires_at):
conn = self._connect()
try:
cursor = conn.cursor()
@@ -321,14 +351,14 @@ class Database:
WHEN MATCHED THEN
UPDATE SET OffenseCount = ?, Immune = ?, OffTopicCount = ?,
BaselineCoherence = ?, UserNotes = ?, Warned = ?,
-LastOffenseAt = ?,
+LastOffenseAt = ?, Aliases = ?, WarningExpiresAt = ?,
UpdatedAt = SYSUTCDATETIME()
WHEN NOT MATCHED THEN
-INSERT (UserId, OffenseCount, Immune, OffTopicCount, BaselineCoherence, UserNotes, Warned, LastOffenseAt)
-VALUES (?, ?, ?, ?, ?, ?, ?, ?);""",
+INSERT (UserId, OffenseCount, Immune, OffTopicCount, BaselineCoherence, UserNotes, Warned, LastOffenseAt, Aliases, WarningExpiresAt)
+VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?);""",
user_id,
-offense_count, 1 if immune else 0, off_topic_count, baseline_coherence, user_notes, 1 if warned else 0, last_offense_at,
-user_id, offense_count, 1 if immune else 0, off_topic_count, baseline_coherence, user_notes, 1 if warned else 0, last_offense_at,
+offense_count, 1 if immune else 0, off_topic_count, baseline_coherence, user_notes, 1 if warned else 0, last_offense_at, aliases, warning_expires_at,
+user_id, offense_count, 1 if immune else 0, off_topic_count, baseline_coherence, user_notes, 1 if warned else 0, last_offense_at, aliases, warning_expires_at,
)
cursor.close()
finally:
@@ -371,7 +401,7 @@ class Database:
try:
cursor = conn.cursor()
cursor.execute(
-"SELECT UserId, OffenseCount, Immune, OffTopicCount, BaselineCoherence, UserNotes, Warned, LastOffenseAt FROM UserState"
+"SELECT UserId, OffenseCount, Immune, OffTopicCount, BaselineCoherence, UserNotes, Warned, LastOffenseAt, Aliases, WarningExpiresAt FROM UserState"
)
rows = cursor.fetchall()
cursor.close()
@@ -385,6 +415,8 @@ class Database:
"user_notes": row[5] or "",
"warned": bool(row[6]),
"last_offense_at": float(row[7]) if row[7] is not None else 0.0,
"aliases": row[8] or "",
"warning_expires_at": float(row[9]) if row[9] is not None else 0.0,
}
for row in rows
]
@@ -491,6 +523,293 @@ class Database:
finally:
conn.close()
# ------------------------------------------------------------------
# UserMemory (conversational memory per user)
# ------------------------------------------------------------------
async def save_memory(
self,
user_id: int,
memory: str,
topics: str,
importance: str,
expires_at: datetime,
source: str,
) -> None:
"""Insert a single memory row for a user."""
if not self._available:
return
try:
await asyncio.to_thread(
self._save_memory_sync,
user_id, memory, topics, importance, expires_at, source,
)
except Exception:
logger.exception("Failed to save memory")
def _save_memory_sync(self, user_id, memory, topics, importance, expires_at, source):
conn = self._connect()
try:
cursor = conn.cursor()
# Skip if an identical memory already exists for this user
cursor.execute(
"SELECT COUNT(*) FROM UserMemory WHERE UserId = ? AND Memory = ?",
user_id, memory[:500],
)
if cursor.fetchone()[0] > 0:
cursor.close()
return
cursor.execute(
"""INSERT INTO UserMemory (UserId, Memory, Topics, Importance, ExpiresAt, Source)
VALUES (?, ?, ?, ?, ?, ?)""",
user_id,
memory[:500],
topics[:200],
importance[:10],
expires_at,
source[:20],
)
cursor.close()
finally:
conn.close()
async def get_recent_memories(self, user_id: int, limit: int = 5) -> list[dict]:
"""Get the N most recent non-expired memories for a user."""
if not self._available:
return []
try:
return await asyncio.to_thread(self._get_recent_memories_sync, user_id, limit)
except Exception:
logger.exception("Failed to get recent memories")
return []
def _get_recent_memories_sync(self, user_id, limit) -> list[dict]:
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute(
"""SELECT TOP (?) Memory, Topics, Importance, CreatedAt
FROM UserMemory
WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
ORDER BY CreatedAt DESC""",
limit, user_id,
)
rows = cursor.fetchall()
cursor.close()
return [
{
"memory": row[0],
"topics": row[1],
"importance": row[2],
"created_at": row[3],
}
for row in rows
]
finally:
conn.close()
async def get_memories_by_topics(self, user_id: int, topic_keywords: list[str], limit: int = 5) -> list[dict]:
"""Get non-expired memories matching any of the given topic keywords via LIKE."""
if not self._available:
return []
try:
return await asyncio.to_thread(
self._get_memories_by_topics_sync, user_id, topic_keywords, limit,
)
except Exception:
logger.exception("Failed to get memories by topics")
return []
def _get_memories_by_topics_sync(self, user_id, topic_keywords, limit) -> list[dict]:
conn = self._connect()
try:
cursor = conn.cursor()
if not topic_keywords:
cursor.close()
return []
# Build OR conditions for each keyword
conditions = " OR ".join(["Topics LIKE ?" for _ in topic_keywords])
# Escape '[' first so the brackets added for '%'/'_' stay literal in SQL Server LIKE
escaped = [kw.replace("[", "[[]").replace("%", "[%]").replace("_", "[_]") for kw in topic_keywords]
params = [limit, user_id] + [f"%{kw}%" for kw in escaped]
cursor.execute(
f"""SELECT TOP (?) Memory, Topics, Importance, CreatedAt
FROM UserMemory
WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
AND ({conditions})
ORDER BY
CASE Importance
WHEN 'high' THEN 1
WHEN 'medium' THEN 2
WHEN 'low' THEN 3
ELSE 4
END,
CreatedAt DESC""",
*params,
)
rows = cursor.fetchall()
cursor.close()
return [
{
"memory": row[0],
"topics": row[1],
"importance": row[2],
"created_at": row[3],
}
for row in rows
]
finally:
conn.close()
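The topic lookup above escapes LIKE wildcards before building the `%...%` patterns. On SQL Server the bracket character is itself the escape introducer, so a standalone sketch of the escaping (helper name is illustrative, not from the codebase) has to handle `[` first:

```python
def escape_like(keyword: str) -> str:
    """Escape SQL Server LIKE wildcards so a keyword matches literally.
    '[' is escaped first; otherwise the brackets introduced for '%' and '_'
    would themselves be re-escaped."""
    return (
        keyword.replace("[", "[[]")
               .replace("%", "[%]")
               .replace("_", "[_]")
    )

# Patterns as the query builder would interpolate them
patterns = [f"%{escape_like(kw)}%" for kw in ["gta_online", "100%"]]
```

Without this, a keyword like `gta_online` would match any single character in place of the underscore instead of the literal `_`.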
async def prune_expired_memories(self) -> int:
"""Delete all expired memories. Returns count deleted."""
if not self._available:
return 0
try:
return await asyncio.to_thread(self._prune_expired_memories_sync)
except Exception:
logger.exception("Failed to prune expired memories")
return 0
def _prune_expired_memories_sync(self) -> int:
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute("DELETE FROM UserMemory WHERE ExpiresAt < SYSUTCDATETIME()")
count = cursor.rowcount
cursor.close()
return count
finally:
conn.close()
async def prune_excess_memories(self, user_id: int, max_memories: int = 50) -> int:
"""Delete excess memories for a user beyond the cap, keeping high importance and newest first.
Returns count deleted."""
if not self._available:
return 0
try:
return await asyncio.to_thread(self._prune_excess_memories_sync, user_id, max_memories)
except Exception:
logger.exception("Failed to prune excess memories")
return 0
def _prune_excess_memories_sync(self, user_id, max_memories) -> int:
conn = self._connect()
try:
cursor = conn.cursor()
cursor.execute(
"""DELETE FROM UserMemory
WHERE Id IN (
SELECT Id FROM (
SELECT Id, ROW_NUMBER() OVER (
ORDER BY
CASE Importance
WHEN 'high' THEN 1
WHEN 'medium' THEN 2
WHEN 'low' THEN 3
ELSE 4
END,
CreatedAt DESC
) AS rn
FROM UserMemory
WHERE UserId = ?
) ranked
WHERE rn > ?
)""",
user_id, max_memories,
)
count = cursor.rowcount
cursor.close()
return count
finally:
conn.close()
# ------------------------------------------------------------------
# Drama Leaderboard (historical stats from Messages + AnalysisResults + Actions)
# ------------------------------------------------------------------
async def get_drama_leaderboard(self, guild_id: int, days: int | None = None) -> list[dict]:
"""Get per-user drama stats for the leaderboard.
days=None means all-time. Returns a list of per-user stat dicts (no guaranteed ordering; callers sort as needed)."""
if not self._available:
return []
try:
return await asyncio.to_thread(self._get_drama_leaderboard_sync, guild_id, days)
except Exception:
logger.exception("Failed to get drama leaderboard")
return []
def _get_drama_leaderboard_sync(self, guild_id: int, days: int | None) -> list[dict]:
conn = self._connect()
try:
cursor = conn.cursor()
date_filter = ""
params: list = [guild_id]
if days is not None:
date_filter = "AND m.CreatedAt >= DATEADD(DAY, ?, SYSUTCDATETIME())"
params.append(-days)
# Analysis stats from Messages + AnalysisResults
cursor.execute(f"""
SELECT
m.UserId,
MAX(m.Username) AS Username,
AVG(ar.ToxicityScore) AS AvgToxicity,
MAX(ar.ToxicityScore) AS MaxToxicity,
COUNT(*) AS MessagesAnalyzed
FROM Messages m
INNER JOIN AnalysisResults ar ON ar.MessageId = m.Id
WHERE m.GuildId = ? {date_filter}
GROUP BY m.UserId
""", *params)
analysis_rows = cursor.fetchall()
# Action counts
action_date_filter = ""
action_params: list = [guild_id]
if days is not None:
action_date_filter = "AND CreatedAt >= DATEADD(DAY, ?, SYSUTCDATETIME())"
action_params.append(-days)
cursor.execute(f"""
SELECT
UserId,
SUM(CASE WHEN ActionType = 'warning' THEN 1 ELSE 0 END) AS Warnings,
SUM(CASE WHEN ActionType = 'mute' THEN 1 ELSE 0 END) AS Mutes,
SUM(CASE WHEN ActionType IN ('topic_remind', 'topic_nudge') THEN 1 ELSE 0 END) AS OffTopic
FROM Actions
WHERE GuildId = ? {action_date_filter}
GROUP BY UserId
""", *action_params)
action_map = {}
for row in cursor.fetchall():
action_map[row[0]] = {
"warnings": row[1],
"mutes": row[2],
"off_topic": row[3],
}
cursor.close()
results = []
for row in analysis_rows:
user_id = row[0]
actions = action_map.get(user_id, {"warnings": 0, "mutes": 0, "off_topic": 0})
results.append({
"user_id": user_id,
"username": row[1],
"avg_toxicity": float(row[2]),
"max_toxicity": float(row[3]),
"messages_analyzed": row[4],
"warnings": actions["warnings"],
"mutes": actions["mutes"],
"off_topic": actions["off_topic"],
})
return results
finally:
conn.close()
async def close(self):
"""No persistent connection to close (connections are per-operation)."""
pass
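The leaderboard merges its two result sets in Python rather than with a SQL join: per-user analysis aggregates, plus an action-count map that defaults to zeros for users with no recorded actions. Reduced to its essence with toy rows (the values are hypothetical):

```python
# Toy stand-ins for the two cursor result sets
analysis_rows = [(1, "aj", 0.42), (2, "bo", 0.10)]  # (UserId, Username, AvgToxicity)
action_map = {1: {"warnings": 3, "mutes": 1, "off_topic": 2}}

results = []
for user_id, username, avg_tox in analysis_rows:
    # Users with analyzed messages but no moderation actions get zeros
    actions = action_map.get(user_id, {"warnings": 0, "mutes": 0, "off_topic": 0})
    results.append({"user_id": user_id, "username": username,
                    "avg_toxicity": avg_tox, **actions})
```

A LEFT JOIN with `COALESCE` could do the same in one query; the two-query form keeps each aggregate simple and avoids joining on two differently filtered tables.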
+69 -1
@@ -19,6 +19,7 @@ class UserDrama:
last_warning_time: float = 0.0
last_analysis_time: float = 0.0
warned_since_reset: bool = False
warning_expires_at: float = 0.0
immune: bool = False
# Topic drift tracking
off_topic_count: int = 0
@@ -28,8 +29,13 @@ class UserDrama:
coherence_scores: list[float] = field(default_factory=list)
baseline_coherence: float = 0.85
last_coherence_alert_time: float = 0.0
# Unblock nagging tracking
unblock_nag_count: int = 0
last_unblock_nag_time: float = 0.0
# Per-user LLM notes
notes: str = ""
# Known aliases/nicknames
aliases: list[str] = field(default_factory=list)
class DramaTracker:
@@ -38,10 +44,12 @@ class DramaTracker:
window_size: int = 10,
window_minutes: int = 15,
offense_reset_minutes: int = 120,
warning_expiration_minutes: int = 30,
):
self.window_size = window_size
self.window_seconds = window_minutes * 60
self.offense_reset_seconds = offense_reset_minutes * 60
self.warning_expiration_seconds = warning_expiration_minutes * 60
self._users: dict[int, UserDrama] = {}
def get_user(self, user_id: int) -> UserDrama:
@@ -72,6 +80,7 @@ class DramaTracker:
def get_drama_score(self, user_id: int, escalation_boost: float = 0.04) -> float:
user = self.get_user(user_id)
self._expire_warning(user)
now = time.time()
self._prune_entries(user, now)
@@ -103,6 +112,7 @@ class DramaTracker:
def get_mute_threshold(self, user_id: int, base_threshold: float) -> float:
"""Lower the mute threshold if user was already warned."""
user = self.get_user(user_id)
self._expire_warning(user)
if user.warned_since_reset:
return base_threshold - 0.05
return base_threshold
@@ -121,12 +131,34 @@ class DramaTracker:
user.offense_count += 1
user.last_offense_time = now
user.warned_since_reset = False
user.warning_expires_at = 0.0
return user.offense_count
def record_warning(self, user_id: int) -> None:
user = self.get_user(user_id)
-user.last_warning_time = time.time()
+now = time.time()
+user.last_warning_time = now
user.warned_since_reset = True
if self.warning_expiration_seconds > 0:
user.warning_expires_at = now + self.warning_expiration_seconds
else:
user.warning_expires_at = 0.0 # Never expires
def _expire_warning(self, user: UserDrama) -> None:
"""Clear warned flag if the warning has expired."""
if (
user.warned_since_reset
and user.warning_expires_at > 0
and time.time() >= user.warning_expires_at
):
user.warned_since_reset = False
user.warning_expires_at = 0.0
def is_warned(self, user_id: int) -> bool:
"""Check if user is currently warned (respects expiration)."""
user = self.get_user(user_id)
self._expire_warning(user)
return user.warned_since_reset
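The warned flag now expires lazily: each read path calls `_expire_warning` before trusting `warned_since_reset`. A condensed, self-contained sketch of that lifecycle, with the clock injected so it can be exercised without sleeping (class and field names here are illustrative, not the project's API):

```python
import time
from dataclasses import dataclass

@dataclass
class WarnState:
    warned: bool = False
    expires_at: float = 0.0  # 0.0 means "no expiration scheduled"

class MiniTracker:
    """Illustrative stand-in for the tracker's warning-expiration logic."""
    def __init__(self, expiration_seconds: float, clock=time.time):
        self.expiration_seconds = expiration_seconds
        self.clock = clock
        self.state = WarnState()

    def record_warning(self) -> None:
        now = self.clock()
        self.state.warned = True
        # 0 or negative expiration means the warning never expires
        self.state.expires_at = (
            now + self.expiration_seconds if self.expiration_seconds > 0 else 0.0
        )

    def is_warned(self) -> bool:
        # Lazy expiry: clear the flag on the first read past the deadline
        s = self.state
        if s.warned and s.expires_at > 0 and self.clock() >= s.expires_at:
            s.warned = False
            s.expires_at = 0.0
        return s.warned
```

With the tracker's 30-minute default, the flag flips back to False on the first read after the deadline, which is why `get_drama_score` and `get_mute_threshold` both expire it before acting on it.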
def can_warn(self, user_id: int, cooldown_minutes: int) -> bool:
user = self.get_user(user_id)
@@ -209,9 +241,39 @@ class DramaTracker:
if len(lines) > 10:
user.notes = "\n".join(lines[-10:])
def set_user_profile(self, user_id: int, profile: str) -> None:
"""Replace the user's profile summary (permanent memory)."""
user = self.get_user(user_id)
user.notes = profile[:500]
def clear_user_notes(self, user_id: int) -> None:
self.get_user(user_id).notes = ""
def get_user_aliases(self, user_id: int) -> list[str]:
return self.get_user(user_id).aliases
def set_user_aliases(self, user_id: int, aliases: list[str]) -> None:
self.get_user(user_id).aliases = aliases
def get_all_aliases(self) -> dict[int, list[str]]:
"""Return {user_id: [aliases]} for all users that have aliases set."""
return {uid: user.aliases for uid, user in self._users.items() if user.aliases}
def record_unblock_nag(self, user_id: int) -> int:
user = self.get_user(user_id)
user.unblock_nag_count += 1
user.last_unblock_nag_time = time.time()
return user.unblock_nag_count
def can_unblock_remind(self, user_id: int, cooldown_minutes: int) -> bool:
user = self.get_user(user_id)
if user.last_unblock_nag_time == 0.0:
return True
return time.time() - user.last_unblock_nag_time > cooldown_minutes * 60
def get_unblock_nag_count(self, user_id: int) -> int:
return self.get_user(user_id).unblock_nag_count
def reset_off_topic(self, user_id: int) -> None:
user = self.get_user(user_id)
user.off_topic_count = 0
@@ -286,13 +348,19 @@ class DramaTracker:
user.notes = state["user_notes"]
if state.get("warned"):
user.warned_since_reset = True
user.warning_expires_at = state.get("warning_expires_at", 0.0) or 0.0
# Expire warning at load time if it's past due
self._expire_warning(user)
if state.get("last_offense_at"):
user.last_offense_time = state["last_offense_at"]
# Apply time-based offense reset at load time
if time.time() - user.last_offense_time > self.offense_reset_seconds:
user.offense_count = 0
user.warned_since_reset = False
user.warning_expires_at = 0.0
user.last_offense_time = 0.0
if state.get("aliases"):
user.aliases = [a.strip() for a in state["aliases"].split(",") if a.strip()]
count += 1
return count
+400 -3
@@ -37,6 +37,7 @@ ANALYSIS_TOOL = {
"hostile",
"manipulative",
"sexual_vulgar",
"jealousy",
"none",
],
},
@@ -85,12 +86,17 @@ ANALYSIS_TOOL = {
},
"note_update": {
"type": ["string", "null"],
-"description": "Brief new observation about this user's style/behavior for future reference, or null if nothing new.",
+"description": "Brief new observation about this user's style/behavior for future reference, or null if nothing new. NEVER quote toxic language — describe patterns abstractly (e.g. 'uses personal insults when frustrated').",
},
"detected_game": {
"type": ["string", "null"],
"description": "The game channel name this message is about (e.g. 'gta-online', 'warzone'), or null if not game-specific.",
},
"violated_rules": {
"type": "array",
"items": {"type": "integer"},
"description": "Rule numbers violated (empty array if none).",
},
},
"required": ["toxicity_score", "categories", "reasoning", "off_topic", "topic_category", "topic_reasoning", "coherence_score", "coherence_flag"],
},
@@ -130,6 +136,7 @@ CONVERSATION_TOOL = {
"hostile",
"manipulative",
"sexual_vulgar",
"jealousy",
"none",
],
},
@@ -182,12 +189,17 @@ CONVERSATION_TOOL = {
},
"note_update": {
"type": ["string", "null"],
-"description": "New observation about this user's pattern, or null.",
+"description": "New observation about this user's pattern, or null. NEVER quote toxic language — describe patterns abstractly.",
},
"detected_game": {
"type": ["string", "null"],
"description": "The game channel name this user's messages are about, or null.",
},
"violated_rules": {
"type": "array",
"items": {"type": "integer"},
"description": "Rule numbers violated (empty array if none).",
},
},
"required": ["username", "toxicity_score", "categories", "reasoning", "off_topic", "topic_category", "topic_reasoning", "coherence_score", "coherence_flag"],
},
@@ -203,6 +215,55 @@ CONVERSATION_TOOL = {
},
}
MEMORY_EXTRACTION_TOOL = {
"type": "function",
"function": {
"name": "extract_memories",
"description": "Extract noteworthy memories from a conversation for future reference.",
"parameters": {
"type": "object",
"properties": {
"memories": {
"type": "array",
"items": {
"type": "object",
"properties": {
"memory": {
"type": "string",
"description": "A concise fact or observation worth remembering.",
},
"topics": {
"type": "array",
"items": {"type": "string"},
"description": "Topic tags for retrieval (e.g., 'gta', 'personal', 'warzone').",
},
"expiration": {
"type": "string",
"enum": ["1d", "3d", "7d", "30d", "permanent"],
"description": "How long this memory stays relevant.",
},
"importance": {
"type": "string",
"enum": ["low", "medium", "high"],
"description": "How important this memory is for future interactions.",
},
},
"required": ["memory", "topics", "expiration", "importance"],
},
"description": "Memories to store. Only include genuinely new or noteworthy information.",
},
"profile_update": {
"type": ["string", "null"],
"description": "Full updated profile summary incorporating new permanent facts, or null if no profile changes.",
},
},
"required": ["memories"],
},
},
}
MEMORY_EXTRACTION_PROMPT = (_PROMPTS_DIR / "memory_extraction.txt").read_text(encoding="utf-8")
_NO_TEMPERATURE_MODELS = {"gpt-5-nano", "o1", "o1-mini", "o1-preview", "o3", "o3-mini", "o4-mini"}
@@ -248,12 +309,15 @@ class LLMClient:
async def analyze_message(
self, message: str, context: str = "", user_notes: str = "",
channel_context: str = "", mention_context: str = "",
rules_context: str = "",
) -> dict | None:
user_content = f"=== RECENT CHANNEL MESSAGES (for background context only) ===\n{context}\n\n"
if user_notes:
user_content += f"=== NOTES ABOUT THIS USER (from prior analysis) ===\n{user_notes}\n\n"
if channel_context:
user_content += f"=== CHANNEL INFO ===\n{channel_context}\n\n"
if rules_context:
user_content += f"=== SERVER RULES ===\n{rules_context}\n\n"
if mention_context:
user_content += f"=== USER REPORT (a user flagged this conversation — focus on this concern) ===\n{mention_context}\n\n"
user_content += f"=== TARGET MESSAGE (analyze THIS message only) ===\n{message}"
@@ -331,6 +395,8 @@ class LLMClient:
result.setdefault("note_update", None)
result.setdefault("detected_game", None)
if not isinstance(result.get("violated_rules"), list):
result["violated_rules"] = []
return result
@@ -438,6 +504,8 @@ class LLMClient:
channel_context: str = "",
user_notes_map: dict[str, str] | None = None,
new_message_start: int | None = None,
user_aliases: str = "",
rules_context: str = "",
) -> dict | None:
"""Analyze a conversation block in one call, returning per-user findings."""
if not messages:
@@ -446,12 +514,16 @@ class LLMClient:
convo_block = self._format_conversation_block(messages, new_message_start=new_message_start)
user_content = f"=== CONVERSATION BLOCK ===\n{convo_block}\n\n"
if user_aliases:
user_content += f"=== KNOWN MEMBER ALIASES (names other members use to refer to each other) ===\n{user_aliases}\n\n"
if user_notes_map:
notes_lines = [f" {u}: {n}" for u, n in user_notes_map.items() if n]
if notes_lines:
user_content += "=== USER NOTES (from prior analysis) ===\n" + "\n".join(notes_lines) + "\n\n"
if channel_context:
user_content += f"=== CHANNEL INFO ===\n{channel_context}\n\n"
if rules_context:
user_content += f"=== SERVER RULES ===\n{rules_context}\n\n"
if mention_context:
user_content += f"=== USER REPORT (a user flagged this conversation — focus on this concern) ===\n{mention_context}\n\n"
user_content += "Analyze the conversation block above and report findings for each user."
@@ -533,6 +605,8 @@ class LLMClient:
finding.setdefault("coherence_flag", "normal")
finding.setdefault("note_update", None)
finding.setdefault("detected_game", None)
if not isinstance(finding.get("violated_rules"), list):
finding["violated_rules"] = []
result["user_findings"] = findings
result.setdefault("conversation_summary", "")
return result
@@ -626,19 +700,342 @@ class LLMClient:
self._log_llm("chat", elapsed, False, req_json, error=str(e))
return None
async def classify_mention_intent(self, message_text: str) -> str:
"""Classify whether a bot @mention is a chat/question or a moderation report.
Returns 'chat' or 'report'. Defaults to 'chat' on failure.
"""
prompt = (
"You are classifying the intent of a Discord message that @mentioned a bot.\n"
"Reply with EXACTLY one word: 'chat' or 'report'.\n\n"
"- 'chat' = the user is talking to the bot, asking a question, joking, greeting, "
"or having a conversation. This includes things like 'what do you think?', "
"'hey bot', 'do you know...', or any general interaction.\n"
"- 'report' = the user is flagging bad behavior, asking the bot to check/scan "
"the chat, reporting toxicity, or pointing out someone being problematic. "
"This includes things like 'check this', 'they're being toxic', 'look at what "
"they said', 'scan the chat', or concerns about other users.\n\n"
"If unsure, say 'chat'."
)
t0 = time.monotonic()
async with self._semaphore:
try:
temp_kwargs = {"temperature": 0.0} if self._supports_temperature else {}
response = await self._client.chat.completions.create(
model=self.model,
messages=[
{"role": "system", "content": prompt},
{"role": "user", "content": message_text},
],
**temp_kwargs,
max_completion_tokens=16,
)
elapsed = int((time.monotonic() - t0) * 1000)
content = (response.choices[0].message.content or "").strip().lower()
intent = "report" if "report" in content else "chat"
self._log_llm("classify_intent", elapsed, True, message_text[:200], intent)
logger.info("Mention intent classified as '%s' for: %s", intent, message_text[:80])
return intent
except Exception as e:
elapsed = int((time.monotonic() - t0) * 1000)
logger.error("Intent classification error: %s", e)
self._log_llm("classify_intent", elapsed, False, message_text[:200], error=str(e))
return "chat"
_REACTION_EMOJIS = {
"\U0001f480", "\U0001f602", "\U0001f440", "\U0001f525",
"\U0001f4af", "\U0001f62d", "\U0001f921", "\u2764\ufe0f",
"\U0001fae1", "\U0001f913", "\U0001f974", "\U0001f3af",
}
async def pick_reaction(self, message_text: str, channel_name: str) -> str | None:
"""Pick a contextual emoji reaction for a Discord message.
Returns an emoji string, or None if no reaction is appropriate.
"""
prompt = (
"You are a lurker in a Discord gaming server. "
"Given a message and its channel, decide if it deserves a reaction emoji.\n\n"
"Available reactions:\n"
"\U0001f480 = funny/dead\n"
"\U0001f602 = hilarious\n"
"\U0001f440 = drama/spicy\n"
"\U0001f525 = impressive\n"
"\U0001f4af = good take\n"
"\U0001f62d = sad/tragic\n"
"\U0001f921 = clown moment\n"
"\u2764\ufe0f = wholesome\n"
"\U0001fae1 = respect\n"
"\U0001f913 = nerd\n"
"\U0001f974 = drunk/unhinged\n"
"\U0001f3af = accurate\n\n"
"Reply with ONLY the emoji, or NONE if the message doesn't warrant a reaction. "
"Most messages should get NONE — only react when something genuinely stands out."
)
t0 = time.monotonic()
async with self._semaphore:
try:
temp_kwargs = {"temperature": 0.9} if self._supports_temperature else {}
response = await self._client.chat.completions.create(
model=self.model,
messages=[
{"role": "system", "content": prompt},
{"role": "user", "content": f"[#{channel_name}] {message_text[:500]}"},
],
**temp_kwargs,
max_completion_tokens=16,
)
elapsed = int((time.monotonic() - t0) * 1000)
raw = (response.choices[0].message.content or "").strip()
token = raw.split()[0] if raw.split() else ""
if not token or token.lower() == "none" or token not in self._REACTION_EMOJIS:
self._log_llm("pick_reaction", elapsed, True, message_text[:200], "NONE")
return None
self._log_llm("pick_reaction", elapsed, True, message_text[:200], token)
logger.debug("Picked reaction %s for: %s", token, message_text[:80])
return token
except Exception as e:
elapsed = int((time.monotonic() - t0) * 1000)
logger.error("Reaction pick error: %s", e)
self._log_llm("pick_reaction", elapsed, False, message_text[:200], error=str(e))
return None
async def check_reply_relevance(
self, recent_messages: list[str], memory_context: str = "",
) -> bool:
"""Check if the bot would naturally want to jump into a conversation.
Returns True if the conversation is something worth replying to.
"""
prompt = (
"You're a regular member of a Discord gaming server. You're reading chat and deciding "
"whether you'd naturally want to jump in and say something.\n\n"
"Say YES if:\n"
"- Someone said something you'd have a strong reaction to\n"
"- You know something relevant about these people (see memory context)\n"
"- Someone is wrong or has a hot take you'd want to respond to\n"
"- The conversation is funny or interesting enough to comment on\n"
"- Someone mentioned something you have an opinion on\n\n"
"Say NO if:\n"
"- It's mundane/boring small talk\n"
"- You'd have nothing interesting to add\n"
"- People are just chatting normally and don't need interruption\n\n"
"Reply with EXACTLY one word: YES or NO."
)
convo_text = "\n".join(recent_messages[-5:])
user_content = ""
if memory_context:
user_content += f"{memory_context}\n\n"
user_content += f"Recent chat:\n{convo_text}"
t0 = time.monotonic()
async with self._semaphore:
try:
temp_kwargs = {"temperature": 0.3} if self._supports_temperature else {}
response = await self._client.chat.completions.create(
model=self.model,
messages=[
{"role": "system", "content": prompt},
{"role": "user", "content": user_content[:1000]},
],
**temp_kwargs,
max_completion_tokens=16,
)
elapsed = int((time.monotonic() - t0) * 1000)
content = (response.choices[0].message.content or "").strip().lower()
is_relevant = "yes" in content
self._log_llm(
"check_relevance", elapsed, True,
user_content[:300], content,
)
logger.debug("Relevance check: %s", content)
return is_relevant
except Exception as e:
elapsed = int((time.monotonic() - t0) * 1000)
logger.error("Relevance check error: %s", e)
self._log_llm("check_relevance", elapsed, False, user_content[:300], error=str(e))
return False
async def extract_memories(
self,
conversation: list[dict[str, str]],
username: str,
current_profile: str = "",
) -> dict | None:
"""Extract memories from a conversation for a specific user.
Returns dict with "memories" list and optional "profile_update", or None on failure.
"""
# Format conversation as readable lines
convo_lines = []
for msg in conversation:
role = msg.get("role", "")
content = msg.get("content", "")
if role == "assistant":
convo_lines.append(f"Bot: {content}")
else:
convo_lines.append(f"{username}: {content}")
convo_text = "\n".join(convo_lines)
user_content = ""
if current_profile:
user_content += f"=== CURRENT PROFILE FOR {username} ===\n{current_profile}\n\n"
else:
user_content += f"=== CURRENT PROFILE FOR {username} ===\n(no profile yet)\n\n"
user_content += f"=== CONVERSATION ===\n{convo_text}\n\n"
user_content += f"Extract any noteworthy memories from this conversation with {username}."
user_content = self._append_no_think(user_content)
req_json = json.dumps([
{"role": "system", "content": MEMORY_EXTRACTION_PROMPT[:500]},
{"role": "user", "content": user_content[:500]},
], default=str)
t0 = time.monotonic()
async with self._semaphore:
try:
temp_kwargs = {"temperature": 0.3} if self._supports_temperature else {}
response = await self._client.chat.completions.create(
model=self.model,
messages=[
{"role": "system", "content": MEMORY_EXTRACTION_PROMPT},
{"role": "user", "content": user_content},
],
tools=[MEMORY_EXTRACTION_TOOL],
tool_choice={"type": "function", "function": {"name": "extract_memories"}},
**temp_kwargs,
max_completion_tokens=1024,
)
elapsed = int((time.monotonic() - t0) * 1000)
choice = response.choices[0]
usage = response.usage
if choice.message.tool_calls:
tool_call = choice.message.tool_calls[0]
resp_text = tool_call.function.arguments
args = json.loads(resp_text)
self._log_llm("memory_extraction", elapsed, True, req_json, resp_text,
input_tokens=usage.prompt_tokens if usage else None,
output_tokens=usage.completion_tokens if usage else None)
return self._validate_memory_result(args)
logger.warning("No tool call in memory extraction response.")
self._log_llm("memory_extraction", elapsed, False, req_json, error="No tool call")
return None
except Exception as e:
elapsed = int((time.monotonic() - t0) * 1000)
logger.error("LLM memory extraction error: %s", e)
self._log_llm("memory_extraction", elapsed, False, req_json, error=str(e))
return None
@staticmethod
def _validate_memory_result(result: dict) -> dict:
"""Validate and normalize memory extraction result."""
valid_expirations = {"1d", "3d", "7d", "30d", "permanent"}
valid_importances = {"low", "medium", "high"}
memories = []
for mem in result.get("memories", []):
if not isinstance(mem, dict):
continue
memory_text = str(mem.get("memory", "")).strip()[:500]
if not memory_text:
continue
topics = mem.get("topics", [])
if not isinstance(topics, list):
topics = []
topics = [str(t).lower() for t in topics]
expiration = str(mem.get("expiration", "7d"))
if expiration not in valid_expirations:
expiration = "7d"
importance = str(mem.get("importance", "medium"))
if importance not in valid_importances:
importance = "medium"
memories.append({
"memory": memory_text,
"topics": topics,
"expiration": expiration,
"importance": importance,
})
profile_update = result.get("profile_update")
if profile_update is not None:
profile_update = str(profile_update)[:500]
return {
"memories": memories,
"profile_update": profile_update,
}
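# Usage sketch (hypothetical values) showing how out-of-range fields are
# coerced to their defaults rather than rejected:
#   _validate_memory_result({"memories": [{"memory": "prefers late-night raids",
#       "expiration": "2h", "importance": "urgent"}]})
#   -> {"memories": [{"memory": "prefers late-night raids", "topics": [],
#       "expiration": "7d", "importance": "medium"}], "profile_update": None}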
async def sanitize_notes(self, notes: str) -> str:
"""Rewrite user notes to remove any quoted toxic/offensive language.
Returns the sanitized notes string, or the original on failure.
"""
if not notes or not notes.strip():
return notes
system_prompt = (
"Rewrite the following user behavior notes. Remove any quoted offensive language, "
"slurs, or profanity. Replace toxic quotes with abstract descriptions of the behavior "
"(e.g. 'directed a personal insult at another user' instead of quoting the insult). "
"Preserve all non-toxic observations, timestamps, and behavioral patterns exactly. "
"Return ONLY the rewritten notes, nothing else."
)
# Reuse the shared helper that conditionally appends the /no_think marker.
user_content = self._append_no_think(notes)
t0 = time.monotonic()
async with self._semaphore:
try:
temp_kwargs = {"temperature": 0.1} if self._supports_temperature else {}
response = await self._client.chat.completions.create(
model=self.model,
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_content},
],
**temp_kwargs,
max_completion_tokens=1024,
)
elapsed = int((time.monotonic() - t0) * 1000)
result = response.choices[0].message.content
if result and result.strip():
self._log_llm("sanitize_notes", elapsed, True, notes[:300], result[:300])
return result.strip()
self._log_llm("sanitize_notes", elapsed, False, notes[:300], error="Empty response")
return notes
except Exception as e:
elapsed = int((time.monotonic() - t0) * 1000)
logger.error("LLM sanitize_notes error: %s", e)
self._log_llm("sanitize_notes", elapsed, False, notes[:300], error=str(e))
return notes
async def analyze_image(
self,
image_bytes: bytes,
system_prompt: str,
user_text: str = "",
on_first_token=None,
media_type: str = "image/png",
) -> str | None:
"""Send an image to the vision model with a system prompt.
Returns the generated text response, or None on failure.
"""
b64 = base64.b64encode(image_bytes).decode()
data_url = f"data:{media_type};base64,{b64}"
user_content: list[dict] = [
{"type": "image_url", "image_url": {"url": data_url}},