feat: add server rule violation detection and compress prompts

- LLM now evaluates messages against numbered server rules and reports
  violated_rules in analysis output
- Warnings and mutes cite the specific rule(s) broken
- Rules extracted to prompts/rules.txt for prompt injection
- Personality prompts moved to prompts/personalities/ and compressed
  (~63% reduction across all prompt files)
- All prompt files tightened: removed redundancy, consolidated Do NOT
  sections, trimmed examples while preserving behavioral instructions

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 22:14:35 -05:00
parent ed51db527c
commit bf32a9536a
22 changed files with 230 additions and 293 deletions


@@ -85,7 +85,7 @@ class ChatCog(commands.Cog):
     def _get_active_prompt(self) -> str:
         """Load the chat prompt for the current mode."""
         mode_config = self.bot.get_mode_config()
-        prompt_file = mode_config.get("prompt_file", "chat_personality.txt")
+        prompt_file = mode_config.get("prompt_file", "personalities/chat_personality.txt")
         return _load_prompt(prompt_file)

     async def _build_memory_context(self, user_id: int, message_text: str, channel_name: str) -> str: