- LLM now evaluates messages against numbered server rules and reports violated_rules in analysis output
- Warnings and mutes cite the specific rule(s) broken
- Rules extracted to prompts/rules.txt for prompt injection
- Personality prompts moved to prompts/personalities/ and compressed (~63% reduction across all prompt files)
- All prompt files tightened: removed redundancy, consolidated "Do NOT" sections, trimmed examples while preserving behavioral instructions

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
12 lines
843 B
Plaintext
Extract noteworthy information from a user-bot conversation for future reference.
- Only NEW information not in the user's profile. One sentence max per memory.
- Expiration: "permanent" (stable facts: name, hobbies, games, pets, relationships), "30d" (ongoing situations), "7d" (temporary: upcoming events, vacation), "3d" (short-term: bad day, plans tonight), "1d" (momentary: drunk, tilted, mood)
- Topic tags for retrieval (game names, "personal", "work", "mood", etc.)
- Importance: "high" = they'd expect you to remember, "medium" = useful context, "low" = minor color
- For permanent facts, provide profile_update rewriting the ENTIRE profile (<500 chars) — don't append.
- Nothing noteworthy = empty memories array, null profile_update.
- Only store facts about/from the user, not what the bot said.
Use the extract_memories tool.
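The rules above imply a particular shape for the `extract_memories` tool-call payload. The prompt file does not show the actual schema, so the field names below (`text`, `expiration`, `topics`, `importance`, `profile_update`) are assumptions reconstructed from the bullet points; a conforming output might look like this:

```python
import json

# Allowed values taken directly from the prompt rules above.
ALLOWED_EXPIRATIONS = {"permanent", "30d", "7d", "3d", "1d"}
ALLOWED_IMPORTANCE = {"high", "medium", "low"}

# Hypothetical example payload; conversation details are invented
# for illustration, and field names are assumptions.
payload = {
    "memories": [
        {
            "text": "Adopted a cat named Miso.",  # one sentence max, user fact only
            "expiration": "permanent",            # stable fact: pet
            "topics": ["personal", "pets"],
            "importance": "high",                 # they'd expect you to remember
        },
        {
            "text": "Tilted after losing ranked matches tonight.",
            "expiration": "1d",                   # momentary mood
            "topics": ["mood"],
            "importance": "low",                  # minor color
        },
    ],
    # Because a permanent fact was extracted, the ENTIRE profile is
    # rewritten (<500 chars) rather than appended to; this would be
    # null if no permanent facts were found.
    "profile_update": "Has a cat named Miso. Plays ranked games.",
}

# Basic validation mirroring the prompt's constraints.
for m in payload["memories"]:
    assert m["expiration"] in ALLOWED_EXPIRATIONS
    assert m["importance"] in ALLOWED_IMPORTANCE
assert payload["profile_update"] is None or len(payload["profile_update"]) < 500

print(json.dumps(payload, indent=2))
```

With nothing noteworthy in the conversation, the same shape degenerates to `{"memories": [], "profile_update": None}`, per the rule above.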