Topic drift reminders and nudges now direct users to a specific
channel (configurable via redirect_channel). Both static templates
and LLM-generated redirects include the clickable channel mention.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Changed if to elif so detected_game redirect only fires when
the topic_drift branch wasn't taken.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix metadata description to match actual code behavior (optional fields)
- Add texting cadence guidance (lowercase, fragments, casual punctuation)
- Add multi-user conversation handling, conversation exit, deflection, and
genuine-upset guidance
- Expand examples from 3 to 7 covering varied response styles
- Organize into VOICE/ENGAGEMENT sections for clarity
- Trim over-explained AFTERTHOUGHTS section
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
New personality mode with a 25% reply chance, very relaxed moderation
thresholds (0.85/0.90), and a suggestive-but-not-explicit voice.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Split display names on ': ' (colon + space) so names containing bare colons parse correctly
- Reset the cooldown to half its value instead of subtracting 3, reducing LLM call frequency
- Remove redundant message.guild check
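A sketch of both changes, assuming the history-line format is `DisplayName: message` and that "half" means halving the current cooldown counter (the real helpers may be shaped differently):

```python
def parse_history_line(line: str) -> tuple[str, str]:
    """Split on the first ': ' (colon + space) so a display name like
    'mr:robot' survives intact; splitting on bare ':' would truncate it."""
    name, sep, message = line.partition(': ')
    return (name, message) if sep else (line, '')

def relax_cooldown(current: int) -> int:
    # was: current - 3; halving ramps down LLM-call frequency more gently
    return current // 2
```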
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace random-only proactive reply logic with LLM relevance check.
The bot now evaluates recent conversation context and user memory
before deciding to jump in, then applies reply_chance as a second
gate. Bump reply_chance values higher since the relevance filter
prevents most irrelevant replies.
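The two-gate decision can be sketched as follows; `is_relevant` stands in for the actual LLM relevance call, which is not shown here:

```python
import random

async def should_reply(is_relevant, context: str, reply_chance: float,
                       rng=random) -> bool:
    """Gate 1: LLM relevance check on recent context.
    Gate 2: reply_chance as a probabilistic filter, applied only
    when the relevance gate passes."""
    if not await is_relevant(context):
        return False
    return rng.random() < reply_chance
```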
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Extract _split_afterthought helper method
- Store cleaned content (no |||) in chat history to prevent LLM reinforcement
- Handle afterthought splitting in reaction-reply path too
- Log main_reply instead of raw response
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add triple-pipe afterthought splitting to chat replies so the bot can
send a follow-up message 2-5 seconds later, mimicking natural Discord
typing behavior. Update all 6 personality prompts with afterthought
instructions (~1 in 5 replies) and memory callback guidance so the bot
actively references what it knows about users. Enhance memory extraction
prompt to flag bold claims, contradictions, and embarrassing moments as
high-importance callback-worthy memories with a "callback" topic tag.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Use time.monotonic() at reaction time instead of stale message-receive timestamp
- Add excluded_channels config and filtering
- Truncate message content to 500 chars in pick_reaction
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add a new cog that gives the bot ambient presence by reacting to
messages with contextual emoji chosen by the triage LLM. Includes
RNG gating and per-channel cooldown to keep reactions sparse and
natural.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Prevents text words like "skull" from passing the filter and causing
Discord HTTPException noise.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Lightweight LLM call that picks a contextual emoji reaction for a
Discord message. Uses temperature 0.9 for variety, max 16 tokens,
and validates the response is a short emoji token or returns None.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix dirty-user flush race: discard IDs individually after successful save
- Escape LIKE wildcards in LLM-generated topic keywords for DB queries
- Anonymize absent-member aliases to prevent LLM de-anonymization
- Pass correct MIME type to vision model based on image file extension
- Use enumerate instead of list.index() in bcs-scan loop
- Allow bot @mentions with non-report intent to fall through to moderation
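The LIKE-escaping fix can be sketched as below; the query must pair the escaped term with a matching `ESCAPE '\'` clause:

```python
def escape_like(term: str, esc: str = '\\') -> str:
    """Escape SQL LIKE wildcards ('%', '_') and the escape character
    itself in an LLM-generated topic keyword, so the keyword matches
    literally instead of acting as a pattern."""
    return (term.replace(esc, esc * 2)
                .replace('%', esc + '%')
                .replace('_', esc + '_'))
```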
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Warning flag now auto-expires after a configurable duration
(warning_expiration_minutes, default 30m). After expiry, the user must
be re-warned before a mute can be issued.
Messages that triggered moderation actions (warnings/mutes) are now
excluded from the LLM context window in both buffered analysis and
mention scans, preventing already-actioned content from influencing
future scoring. Uses in-memory tracking plus bot reaction fallback
for post-restart coverage.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- LLM now evaluates messages against numbered server rules and reports
violated_rules in analysis output
- Warnings and mutes cite the specific rule(s) broken
- Rules extracted to prompts/rules.txt for prompt injection
- Personality prompts moved to prompts/personalities/ and compressed
(~63% reduction across all prompt files)
- All prompt files tightened: removed redundancy, consolidated Do NOT
sections, trimmed examples while preserving behavioral instructions
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Removed "Oh," from example lines that the model was mimicking, added
explicit DO NOT rule against "Oh" openers, and added more varied examples.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The LLM was interpreting "sassy hall monitor" as warm/motherly with pet
names like "oh sweetheart" and "bless your heart". Added explicit guidance
for deadpan, dry Discord mod energy instead.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Global sync can take up to an hour to propagate. Now also syncs commands
per-guild in on_ready for immediate availability.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Queries Messages, AnalysisResults, and Actions tables to rank users by a
composite drama score (weighted avg toxicity, peak toxicity, and action rate).
Public command with configurable time period (7d/30d/90d/all-time).
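The composite score might be computed like this; the 0.5/0.3/0.2 weights are illustrative, since the actual weighting isn't specified here:

```python
def drama_score(avg_toxicity: float, peak_toxicity: float,
                action_rate: float,
                weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Weighted mix of the three signals named in the commit:
    average toxicity, peak toxicity, and moderation-action rate."""
    w_avg, w_peak, w_act = weights
    return w_avg * avg_toxicity + w_peak * peak_toxicity + w_act * action_rate
```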
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace static random templates with LLM-generated redirect messages that
reference what the user actually said and why it's off-topic. Sass escalates
with higher strike counts. Falls back to static templates if LLM fails or
use_llm is disabled in config.
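The fallback chain follows a common pattern; `llm_generate` stands in for the real LLM call in this sketch:

```python
import random

async def build_redirect(llm_generate, static_templates: list[str],
                         use_llm: bool = True) -> str:
    """Try the LLM-generated redirect first; fall back to a random
    static template if the LLM is disabled, fails, or returns nothing."""
    if use_llm:
        try:
            text = await llm_generate()
            if text:
                return text
        except Exception:
            pass  # fall through to static templates
    return random.choice(static_templates)
```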
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Prevents inserting a memory if an identical one already exists for the
user. Also cleaned up 30 anonymized and 4 duplicate memories from DB.
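The duplicate guard, sketched with an in-memory dict standing in for the DB existence check:

```python
def insert_memory(memories: dict[int, list[str]], user_id: int,
                  text: str) -> bool:
    """Skip the insert when an identical memory already exists for
    this user; return True only if a new row was added."""
    existing = memories.setdefault(user_id, [])
    if text in existing:
        return False
    existing.append(text)
    return True
```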
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The LLM returns note_update, reasoning, and worst_message with
anonymized names. These are now replaced with real display names
before storage, so user profiles no longer contain meaningless
User1/User2 references.
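The replacement step can be sketched as below; replacing longer placeholders first avoids `User1` clobbering the prefix of `User12`:

```python
def restore_names(text: str, alias_map: dict[str, str]) -> str:
    """Replace anonymized placeholders (User1, User2, ...) with real
    display names before storing LLM output."""
    for placeholder in sorted(alias_map, key=len, reverse=True):
        text = text.replace(placeholder, alias_map[placeholder])
    return text
```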
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Aliases now stored in UserState table instead of config.yaml. Adds
Aliases column (NVARCHAR 500), loads on startup, persists via flush.
New /bcs-alias slash command (view/set/clear) for managing nicknames.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds user_aliases config section mapping Discord IDs to known nicknames.
Aliases are anonymized and injected into LLM analysis context so it can
recognize when someone name-drops another member (even absent ones).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
LLM can now flag possessive name-dropping, territorial behavior, and
jealousy signals when users mention others not in the conversation.
Scores feed into existing drama pipeline for warnings/mutes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When @mentioned, fetch recent messages from ALL users in the channel
(up to 15 messages) instead of only the mentioner's messages. This lets
the bot understand debates and discussions it's asked to weigh in on.
Also update the personality prompt to engage with topics substantively
when asked for opinions, rather than deflecting with generic jokes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds LLM triage on bot @mentions to determine if the user is chatting
or reporting bad behavior. Only 'report' intents trigger the 30-message
scan; 'chat' intents skip the scan and let ChatCog handle it.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Use LLM_ESCALATION_* env vars for better profile generation
- Fall back to joining permanent memories if profile_update is null
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Merge worktree: adds _extract_and_save_memories() method and fire-and-forget
extraction call after each chat reply. Combined with Task 4's memory
retrieval and injection.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replaces the entire notes field with an LLM-generated profile summary,
used by the memory extraction system for permanent facts.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
9-task step-by-step plan covering DB schema, LLM extraction tool, memory
retrieval/injection in chat, sentiment pipeline routing, background pruning,
and migration script.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>