Conversational Memory Implementation Plan
For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
Goal: Add persistent conversational memory so the bot knows people, remembers past interactions, and gives context-aware answers.
Architecture: Two-layer memory system — permanent profile in existing UserState.UserNotes column, expiring memories in new UserMemory table. LLM extracts memories after conversations (active) and from sentiment analysis (passive). At chat time, relevant memories are retrieved via recency + topic matching and injected into the prompt.
Tech Stack: Python 3, discord.py, pyodbc/MSSQL, OpenAI-compatible API (tool calling)
Note: This project has no test framework configured. Skip TDD steps — implement directly and test via running the bot.
Task 1: Database — UserMemory table and CRUD methods
Files:
- Modify: utils/database.py
Step 1: Add UserMemory table to schema
In _create_schema(), after the existing LlmLog table creation block (around line 165), add:
cursor.execute("""
    IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'UserMemory')
    CREATE TABLE UserMemory (
        Id BIGINT IDENTITY(1,1) PRIMARY KEY,
        UserId BIGINT NOT NULL,
        Memory NVARCHAR(500) NOT NULL,
        Topics NVARCHAR(200) NOT NULL,
        Importance NVARCHAR(10) NOT NULL,
        ExpiresAt DATETIME2 NOT NULL,
        Source NVARCHAR(20) NOT NULL,
        CreatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
        INDEX IX_UserMemory_UserId (UserId),
        INDEX IX_UserMemory_ExpiresAt (ExpiresAt)
    )
""")
Step 2: Add save_memory() method
Add after the save_llm_log methods (~line 441):
# ------------------------------------------------------------------
# User Memory (conversational memory system)
# ------------------------------------------------------------------
async def save_memory(
    self,
    user_id: int,
    memory: str,
    topics: str,
    importance: str,
    expires_at: datetime,
    source: str,
) -> None:
    """Save an expiring memory for a user."""
    if not self._available:
        return
    try:
        await asyncio.to_thread(
            self._save_memory_sync,
            user_id, memory, topics, importance, expires_at, source,
        )
    except Exception:
        logger.exception("Failed to save user memory")

def _save_memory_sync(self, user_id, memory, topics, importance, expires_at, source):
    conn = self._connect()
    try:
        cursor = conn.cursor()
        cursor.execute(
            """INSERT INTO UserMemory (UserId, Memory, Topics, Importance, ExpiresAt, Source)
               VALUES (?, ?, ?, ?, ?, ?)""",
            user_id, memory[:500], topics[:200], importance[:10], expires_at, source[:20],
        )
        cursor.close()
    finally:
        conn.close()
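The slices in `_save_memory_sync` clamp each value to its column width from the Task 1 schema, so oversized LLM output can never fail the INSERT. A standalone sketch of that guard (the helper name is illustrative, not part of the plan):

```python
# Column widths from the UserMemory schema in Task 1.
_LIMITS = {"Memory": 500, "Topics": 200, "Importance": 10, "Source": 20}

def clamp_memory_row(memory: str, topics: str, importance: str, source: str) -> tuple:
    """Truncate each value to its NVARCHAR width, mirroring the INSERT above."""
    return (
        memory[:_LIMITS["Memory"]],
        topics[:_LIMITS["Topics"]],
        importance[:_LIMITS["Importance"]],
        source[:_LIMITS["Source"]],
    )
```

Values within the limits pass through unchanged; only overlong strings are cut.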
Step 3: Add get_recent_memories() method
async def get_recent_memories(self, user_id: int, limit: int = 5) -> list[dict]:
    """Get the most recent non-expired memories for a user."""
    if not self._available:
        return []
    try:
        return await asyncio.to_thread(self._get_recent_memories_sync, user_id, limit)
    except Exception:
        logger.exception("Failed to get recent memories")
        return []

def _get_recent_memories_sync(self, user_id, limit) -> list[dict]:
    conn = self._connect()
    try:
        cursor = conn.cursor()
        cursor.execute(
            """SELECT TOP (?) Memory, Topics, Importance, CreatedAt
               FROM UserMemory
               WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
               ORDER BY CreatedAt DESC""",
            limit, user_id,
        )
        rows = cursor.fetchall()
        cursor.close()
        return [
            {"memory": row[0], "topics": row[1], "importance": row[2], "created_at": row[3]}
            for row in rows
        ]
    finally:
        conn.close()
Step 4: Add get_memories_by_topics() method
async def get_memories_by_topics(
    self, user_id: int, topic_keywords: list[str], limit: int = 5,
) -> list[dict]:
    """Get non-expired memories matching any of the given topic keywords."""
    if not self._available or not topic_keywords:
        return []
    try:
        return await asyncio.to_thread(
            self._get_memories_by_topics_sync, user_id, topic_keywords, limit,
        )
    except Exception:
        logger.exception("Failed to get memories by topics")
        return []

def _get_memories_by_topics_sync(self, user_id, topic_keywords, limit) -> list[dict]:
    conn = self._connect()
    try:
        cursor = conn.cursor()
        # Build OR conditions for each keyword (placeholders only; values are bound)
        conditions = " OR ".join(["Topics LIKE ?" for _ in topic_keywords])
        params = [f"%{kw.lower()}%" for kw in topic_keywords]
        query = f"""SELECT TOP (?) Memory, Topics, Importance, CreatedAt
                    FROM UserMemory
                    WHERE UserId = ? AND ExpiresAt > SYSUTCDATETIME()
                      AND ({conditions})
                    ORDER BY
                        CASE Importance WHEN 'high' THEN 3 WHEN 'medium' THEN 2 ELSE 1 END DESC,
                        CreatedAt DESC"""
        cursor.execute(query, limit, user_id, *params)
        rows = cursor.fetchall()
        cursor.close()
        return [
            {"memory": row[0], "topics": row[1], "importance": row[2], "created_at": row[3]}
            for row in rows
        ]
    finally:
        conn.close()
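The dynamic SQL above only interpolates `?` placeholders into the query string; the keyword values themselves are always bound as parameters, so they never touch the SQL text. A standalone sketch of that construction (function name is illustrative):

```python
def build_topic_filter(topic_keywords: list[str]) -> tuple[str, list[str]]:
    """Build the OR'd LIKE clause and its bound parameters, as the sync method does."""
    conditions = " OR ".join(["Topics LIKE ?" for _ in topic_keywords])
    params = [f"%{kw.lower()}%" for kw in topic_keywords]
    return conditions, params
```

Keywords are lowercased to match the lowercase topic tags stored by extraction.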
Step 5: Add pruning methods
async def prune_expired_memories(self) -> int:
    """Delete all expired memories. Returns count deleted."""
    if not self._available:
        return 0
    try:
        return await asyncio.to_thread(self._prune_expired_memories_sync)
    except Exception:
        logger.exception("Failed to prune expired memories")
        return 0

def _prune_expired_memories_sync(self) -> int:
    conn = self._connect()
    try:
        cursor = conn.cursor()
        cursor.execute("DELETE FROM UserMemory WHERE ExpiresAt < SYSUTCDATETIME()")
        count = cursor.rowcount
        cursor.close()
        return count
    finally:
        conn.close()
async def prune_excess_memories(self, user_id: int, max_memories: int = 50) -> int:
    """Delete lowest-priority memories if a user exceeds the cap. Returns count deleted."""
    if not self._available:
        return 0
    try:
        return await asyncio.to_thread(
            self._prune_excess_memories_sync, user_id, max_memories,
        )
    except Exception:
        logger.exception("Failed to prune excess memories")
        return 0

def _prune_excess_memories_sync(self, user_id, max_memories) -> int:
    conn = self._connect()
    try:
        cursor = conn.cursor()
        cursor.execute(
            """DELETE FROM UserMemory
               WHERE Id IN (
                   SELECT Id FROM UserMemory
                   WHERE UserId = ?
                   ORDER BY
                       CASE Importance WHEN 'high' THEN 3 WHEN 'medium' THEN 2 ELSE 1 END DESC,
                       CreatedAt DESC
                   OFFSET ? ROWS
               )""",
            user_id, max_memories,
        )
        count = cursor.rowcount
        cursor.close()
        return count
    finally:
        conn.close()
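The OFFSET-based delete keeps the best `max_memories` rows per user, ranked by importance and then recency. The same ordering in pure Python, useful for sanity-checking the sort key (field names and the integer timestamps are illustrative):

```python
_RANK = {"high": 3, "medium": 2, "low": 1}

def rows_past_cap(rows: list[dict], max_memories: int) -> list[dict]:
    """Mirror the SQL ordering (rank DESC, then created_at DESC); rows beyond
    the cap are the ones the DELETE would remove."""
    ordered = sorted(
        rows,
        key=lambda r: (_RANK.get(r["importance"], 1), r["created_at"]),
        reverse=True,
    )
    return ordered[max_memories:]
```

With a cap of 2, an old low-importance row loses out to newer high- and medium-importance rows.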
Step 6: Commit
git add utils/database.py
git commit -m "feat: add UserMemory table and CRUD methods for conversational memory"
Task 2: LLM Client — Memory extraction tool and method
Files:
- Modify: utils/llm_client.py
- Create: prompts/memory_extraction.txt
Step 1: Create memory extraction prompt
Create prompts/memory_extraction.txt:
You are a memory extraction system for a Discord bot. Given a conversation between a user and the bot, extract any noteworthy information worth remembering for future interactions.
RULES:
- Only extract genuinely NEW information not already in the user's profile.
- Be concise — each memory should be one sentence max.
- Assign appropriate expiration based on how long the information stays relevant:
- "permanent": Stable facts — name, job, hobbies, games they play, personality traits, pets, relationships
- "30d": Semi-stable preferences, ongoing situations — "trying to quit Warzone", "grinding for rank 500"
- "7d": Temporary situations — "excited about upcoming DLC", "on vacation this week"
- "3d": Short-term context — "had a bad day", "playing with friends tonight"
- "1d": Momentary state — "drunk right now", "tilted from losses", "in a good mood"
- Assign topic tags that would help retrieve this memory later (game names, "personal", "work", "mood", etc.)
- Assign importance: "high" for things they'd expect you to remember, "medium" for useful context, "low" for minor color
- If you learn a permanent fact about the user, provide a profile_update that incorporates the new fact into their existing profile. Rewrite the ENTIRE profile summary — don't just append. Keep it under 500 characters.
- If nothing worth remembering was said, return an empty memories array and null profile_update.
- Do NOT store things the bot said — only facts about or from the user.
Use the extract_memories tool to report your findings.
Step 2: Add MEMORY_EXTRACTION_TOOL definition to llm_client.py
Add after the CONVERSATION_TOOL definition (around line 204):
MEMORY_EXTRACTION_TOOL = {
    "type": "function",
    "function": {
        "name": "extract_memories",
        "description": "Extract noteworthy memories from a conversation for future reference.",
        "parameters": {
            "type": "object",
            "properties": {
                "memories": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "memory": {
                                "type": "string",
                                "description": "A concise fact or observation worth remembering.",
                            },
                            "topics": {
                                "type": "array",
                                "items": {"type": "string"},
                                "description": "Topic tags for retrieval (e.g., 'gta', 'personal', 'warzone').",
                            },
                            "expiration": {
                                "type": "string",
                                "enum": ["1d", "3d", "7d", "30d", "permanent"],
                                "description": "How long this memory stays relevant.",
                            },
                            "importance": {
                                "type": "string",
                                "enum": ["low", "medium", "high"],
                                "description": "How important this memory is for future interactions.",
                            },
                        },
                        "required": ["memory", "topics", "expiration", "importance"],
                    },
                    "description": "Memories to store. Only include genuinely new or noteworthy information.",
                },
                "profile_update": {
                    "type": ["string", "null"],
                    "description": "Full updated profile summary incorporating new permanent facts, or null if no profile changes.",
                },
            },
            "required": ["memories"],
        },
    },
}

MEMORY_EXTRACTION_PROMPT = (_PROMPTS_DIR / "memory_extraction.txt").read_text(encoding="utf-8")
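The tool call comes back as a JSON string in `function.arguments`. A quick round-trip with a hypothetical payload shows the shape the extraction code expects from this schema:

```python
import json

# Hypothetical arguments string, shaped like a model response to extract_memories.
raw_arguments = json.dumps({
    "memories": [
        {
            "memory": "Grinding GTA trying to hit rank 500",
            "topics": ["gta"],
            "expiration": "30d",
            "importance": "medium",
        },
    ],
    "profile_update": None,
})

args = json.loads(raw_arguments)
# "memories" is the only required top-level field per the schema above.
assert "memories" in args
for m in args["memories"]:
    assert {"memory", "topics", "expiration", "importance"} <= m.keys()
```

Note that `tool_choice` forcing in Step 3 guarantees a call to this tool, but not that the arguments are well-formed, which is why Step 3 validates them.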
Step 3: Add extract_memories() method to LLMClient
Add after the chat() method (around line 627):
async def extract_memories(
    self,
    conversation: list[dict[str, str]],
    username: str,
    current_profile: str = "",
) -> dict | None:
    """Extract memories from a conversation. Returns dict with 'memories' list and optional 'profile_update'."""
    convo_text = "\n".join(
        f"{'Bot' if m['role'] == 'assistant' else username}: {m['content']}"
        for m in conversation
        if m.get("content")
    )
    user_content = f"=== USER PROFILE ===\n{current_profile or '(no profile yet)'}\n\n"
    user_content += f"=== CONVERSATION ===\n{convo_text}\n\n"
    user_content += "Extract any noteworthy memories from this conversation."
    user_content = self._append_no_think(user_content)
    req_json = json.dumps([
        {"role": "system", "content": MEMORY_EXTRACTION_PROMPT[:500]},
        {"role": "user", "content": user_content[:500]},
    ], default=str)
    t0 = time.monotonic()
    async with self._semaphore:
        try:
            temp_kwargs = {"temperature": 0.3} if self._supports_temperature else {}
            response = await self._client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": MEMORY_EXTRACTION_PROMPT},
                    {"role": "user", "content": user_content},
                ],
                tools=[MEMORY_EXTRACTION_TOOL],
                tool_choice={"type": "function", "function": {"name": "extract_memories"}},
                **temp_kwargs,
                max_completion_tokens=1024,
            )
            elapsed = int((time.monotonic() - t0) * 1000)
            choice = response.choices[0]
            usage = response.usage
            if choice.message.tool_calls:
                tool_call = choice.message.tool_calls[0]
                resp_text = tool_call.function.arguments
                args = json.loads(resp_text)
                self._log_llm("memory_extraction", elapsed, True, req_json, resp_text,
                              input_tokens=usage.prompt_tokens if usage else None,
                              output_tokens=usage.completion_tokens if usage else None)
                return self._validate_memory_result(args)
            logger.warning("No tool call in memory extraction response.")
            self._log_llm("memory_extraction", elapsed, False, req_json, error="No tool call")
            return None
        except Exception as e:
            elapsed = int((time.monotonic() - t0) * 1000)
            logger.error("Memory extraction error: %s", e)
            self._log_llm("memory_extraction", elapsed, False, req_json, error=str(e))
            return None
@staticmethod
def _validate_memory_result(result: dict) -> dict:
    """Validate and normalize memory extraction result."""
    if not isinstance(result, dict):
        return {"memories": [], "profile_update": None}
    memories = []
    for m in result.get("memories", []):
        if not isinstance(m, dict) or not m.get("memory"):
            continue
        memories.append({
            "memory": str(m["memory"])[:500],
            "topics": [str(t).lower() for t in m.get("topics", []) if t],
            "expiration": m.get("expiration") if m.get("expiration") in ("1d", "3d", "7d", "30d", "permanent") else "7d",
            "importance": m.get("importance") if m.get("importance") in ("low", "medium", "high") else "medium",
        })
    profile_update = result.get("profile_update")
    if profile_update and isinstance(profile_update, str):
        profile_update = profile_update[:500]
    else:
        profile_update = None
    return {"memories": memories, "profile_update": profile_update}
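`_validate_memory_result` never raises on malformed model output; it drops garbage entries and clamps unknown enum values to defaults. A module-level copy of the same logic, exercised on a deliberately messy input:

```python
def validate_memory_result(result) -> dict:
    """Standalone copy of LLMClient._validate_memory_result, for illustration."""
    if not isinstance(result, dict):
        return {"memories": [], "profile_update": None}
    memories = []
    for m in result.get("memories", []):
        if not isinstance(m, dict) or not m.get("memory"):
            continue  # skip non-dict entries and entries with no memory text
        memories.append({
            "memory": str(m["memory"])[:500],
            "topics": [str(t).lower() for t in m.get("topics", []) if t],
            "expiration": m.get("expiration") if m.get("expiration") in ("1d", "3d", "7d", "30d", "permanent") else "7d",
            "importance": m.get("importance") if m.get("importance") in ("low", "medium", "high") else "medium",
        })
    profile_update = result.get("profile_update")
    profile_update = profile_update[:500] if isinstance(profile_update, str) and profile_update else None
    return {"memories": memories, "profile_update": profile_update}

messy = {
    "memories": [
        {"memory": "Likes cats", "topics": ["Cats", ""], "expiration": "2w", "importance": "urgent"},
        "not even a dict",
    ],
    "profile_update": 42,
}
cleaned = validate_memory_result(messy)
```

The invalid `"2w"` and `"urgent"` fall back to `"7d"` and `"medium"`, the empty topic is dropped, and the non-string `profile_update` becomes `None`.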
Step 4: Commit
git add utils/llm_client.py prompts/memory_extraction.txt
git commit -m "feat: add memory extraction LLM tool and prompt"
Task 3: DramaTracker — Update user notes handling
Files:
- Modify: utils/drama_tracker.py
Step 1: Add set_user_profile() method
Add after update_user_notes() (around line 210):
def set_user_profile(self, user_id: int, profile: str) -> None:
    """Replace the user's profile summary (permanent memory)."""
    user = self.get_user(user_id)
    user.notes = profile[:500]
This replaces the entire notes field with the LLM-generated profile summary. The existing update_user_notes() method continues to work for backward compatibility with the sentiment pipeline during the transition: passive note_update values will still append until Task 6 routes them through the new memory system.
Step 2: Commit
git add utils/drama_tracker.py
git commit -m "feat: add set_user_profile method to DramaTracker"
Task 4: ChatCog — Memory retrieval and injection
Files:
- Modify: cogs/chat.py
Step 1: Add memory retrieval helper
Add a helper method to ChatCog and a module-level utility for formatting relative timestamps:
# At module level, after the imports
from datetime import datetime, timezone

_TOPIC_KEYWORDS = {
    "gta", "warzone", "cod", "battlefield", "fortnite", "apex", "valorant",
    "minecraft", "roblox", "league", "dota", "overwatch", "destiny", "halo",
    "work", "job", "school", "college", "girlfriend", "boyfriend", "wife",
    "husband", "dog", "cat", "pet", "car", "music", "movie", "food",
}

def _extract_topic_keywords(text: str, channel_name: str = "") -> list[str]:
    """Extract potential topic keywords from message text and channel name."""
    words = set(text.lower().split())
    keywords = sorted(words & _TOPIC_KEYWORDS)  # sorted: set iteration order is arbitrary
    # Add channel name as topic if it's a game channel
    if channel_name and channel_name not in ("general", "off-topic", "memes"):
        keywords.append(channel_name.lower())
    return keywords[:5]  # cap at 5 keywords
def _format_relative_time(dt: datetime) -> str:
    """Format a datetime as a relative time string."""
    now = datetime.now(timezone.utc)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    delta = now - dt
    days = delta.days
    if days == 0:
        hours = delta.seconds // 3600
        if hours == 0:
            return "just now"
        return f"{hours}h ago"
    if days == 1:
        return "yesterday"
    if days < 7:
        return f"{days} days ago"
    if days < 30:
        weeks = days // 7
        return f"{weeks}w ago"
    months = days // 30
    return f"{months}mo ago"
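A standalone check of the keyword extraction above (copied here with an abbreviated keyword set and a `sorted()` call so the output order is deterministic; the full set lives in cogs/chat.py):

```python
# Abbreviated keyword set for the sketch.
_TOPIC_KEYWORDS = {"gta", "warzone", "work", "dog"}

def extract_topic_keywords(text: str, channel_name: str = "") -> list[str]:
    """Match message words against known topics, then add the channel name."""
    words = set(text.lower().split())
    keywords = sorted(words & _TOPIC_KEYWORDS)  # sorted: set order is arbitrary
    if channel_name and channel_name not in ("general", "off-topic", "memes"):
        keywords.append(channel_name.lower())
    return keywords[:5]
```

A message in a game channel yields both word matches and the channel tag; a generic message in a generic channel yields nothing.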
Add method to ChatCog:
async def _build_memory_context(self, user_id: int, message_text: str, channel_name: str) -> str:
    """Build the memory context block to inject into the chat prompt."""
    parts = []
    # Layer 1: Profile (from DramaTracker / UserNotes)
    profile = self.bot.drama_tracker.get_user_notes(user_id)
    if profile:
        parts.append(f"Profile: {profile}")
    # Layer 2: Recent expiring memories
    recent = await self.bot.db.get_recent_memories(user_id, limit=5)
    if recent:
        recent_strs = [
            f"{m['memory']} ({_format_relative_time(m['created_at'])})"
            for m in recent
        ]
        parts.append("Recent: " + " | ".join(recent_strs))
    # Layer 3: Topic-matched memories
    keywords = _extract_topic_keywords(message_text, channel_name)
    if keywords:
        topic_memories = await self.bot.db.get_memories_by_topics(user_id, keywords, limit=5)
        # Deduplicate against recent memories
        recent_texts = {m["memory"] for m in recent} if recent else set()
        topic_memories = [m for m in topic_memories if m["memory"] not in recent_texts]
        if topic_memories:
            topic_strs = [
                f"{m['memory']} ({_format_relative_time(m['created_at'])})"
                for m in topic_memories
            ]
            parts.append("Relevant: " + " | ".join(topic_strs))
    if not parts:
        return ""
    return "[What you know about this person:]\n" + "\n".join(parts)
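The layer-3 dedup step can be isolated for clarity: topic matches whose text already appears in the recent layer are dropped, keyed on the memory text itself (a sketch; the function name is illustrative):

```python
def dedupe_topic_memories(recent: list[dict], topic_matches: list[dict]) -> list[dict]:
    """Drop topic-matched memories whose text already showed up in the recent layer."""
    seen = {m["memory"] for m in recent}
    return [m for m in topic_matches if m["memory"] not in seen]
```

This keeps the prompt block from repeating the same fact under both "Recent:" and "Relevant:".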
Step 2: Inject memory context into chat path
In on_message(), in the text-only chat path, after building extra_context with user notes and recent messages (around line 200), replace the existing user notes injection:
Find this block (around lines 179-183):
extra_context = ""
user_notes = self.bot.drama_tracker.get_user_notes(message.author.id)
if user_notes:
    extra_context += f"[Notes about {message.author.display_name}: {user_notes}]\n"
Replace with:
extra_context = ""
memory_context = await self._build_memory_context(
    message.author.id, content, message.channel.name,
)
if memory_context:
    extra_context += memory_context + "\n"
This replaces the old flat notes injection with the layered memory context block.
Step 3: Commit
git add cogs/chat.py
git commit -m "feat: inject persistent memory context into chat responses"
Task 5: ChatCog — Memory extraction after conversations
Files:
- Modify: cogs/chat.py
Step 1: Add memory saving helper
Add to ChatCog:
async def _extract_and_save_memories(
    self, user_id: int, username: str, conversation: list[dict[str, str]],
) -> None:
    """Background task: extract memories from conversation and save them."""
    try:
        current_profile = self.bot.drama_tracker.get_user_notes(user_id)
        result = await self.bot.llm.extract_memories(
            conversation, username, current_profile,
        )
        if not result:
            return
        # Save expiring memories
        exp_days = {"1d": 1, "3d": 3, "7d": 7, "30d": 30}
        for mem in result.get("memories", []):
            if mem["expiration"] == "permanent":
                continue  # permanent facts go into profile_update
            days = exp_days.get(mem["expiration"], 7)
            expires_at = datetime.now(timezone.utc) + timedelta(days=days)
            await self.bot.db.save_memory(
                user_id=user_id,
                memory=mem["memory"],
                topics=",".join(mem["topics"]),
                importance=mem["importance"],
                expires_at=expires_at,
                source="chat",
            )
        # Prune if over cap
        await self.bot.db.prune_excess_memories(user_id)
        # Update profile if warranted
        profile_update = result.get("profile_update")
        if profile_update:
            self.bot.drama_tracker.set_user_profile(user_id, profile_update)
            self._dirty_users.add(user_id)
        logger.info(
            "Extracted %d memories for %s (profile_update=%s)",
            len(result.get("memories", [])),
            username,
            bool(profile_update),
        )
    except Exception:
        logger.exception("Failed to extract memories for %s", username)
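The expiration-tag mapping in the helper above, pulled out as a small pure function that is easy to unit-check (unknown tags fall back to 7 days, matching the dict `.get` default):

```python
from datetime import datetime, timedelta, timezone

_EXP_DAYS = {"1d": 1, "3d": 3, "7d": 7, "30d": 30}

def expiry_for(expiration: str, now: datetime) -> datetime:
    """Turn an expiration tag into an absolute UTC timestamp."""
    return now + timedelta(days=_EXP_DAYS.get(expiration, 7))

# Fixed reference time so the result is deterministic.
now = datetime(2025, 1, 1, tzinfo=timezone.utc)
```

Passing `now` explicitly (instead of calling `datetime.now` inside) is what makes the mapping testable.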
Step 2: Add _dirty_users set and flush task
Add to __init__:
self._dirty_users: set[int] = set()
Memory extraction marks users as dirty when their profile changes. ChatCog does not need its own flush loop: set_user_profile mutates the in-memory DramaTracker, and SentimentCog's existing periodic flush (every 5 minutes) persists that state to the DB. The _dirty_users set simply records which users have pending profile changes.
Step 3: Add timedelta import and fire memory extraction after reply
Add from datetime import datetime, timedelta, timezone to the imports at the top of the file.
In on_message(), after the bot sends its reply (after await message.reply(...), around line 266), add:
# Fire-and-forget memory extraction
if not image_attachment:
    asyncio.create_task(self._extract_and_save_memories(
        message.author.id,
        message.author.display_name,
        list(self._chat_history[ch_id]),
    ))
Step 4: Commit
git add cogs/chat.py
git commit -m "feat: extract and save memories after chat conversations"
Task 6: Sentiment pipeline — Route note_update into memory system
Files:
- Modify: cogs/sentiment/__init__.py
Step 1: Update note_update handling in _process_finding()
Find the note_update block (around lines 378-381):
# Note update
if note_update:
    self.bot.drama_tracker.update_user_notes(user_id, note_update)
    self._dirty_users.add(user_id)
Replace with:
# Note update — route to memory system
if note_update:
    # Still update the legacy notes for backward compat with analysis prompt
    self.bot.drama_tracker.update_user_notes(user_id, note_update)
    self._dirty_users.add(user_id)
    # Also save as an expiring memory (7d default for passive observations)
    asyncio.create_task(self.bot.db.save_memory(
        user_id=user_id,
        memory=note_update[:500],
        topics=db_topic_category or "general",
        importance="medium",
        expires_at=datetime.now(timezone.utc) + timedelta(days=7),
        source="passive",
    ))
Step 2: Add necessary imports at top of file
Ensure timedelta is imported. Check existing imports — datetime and timezone are likely already imported. Add timedelta if missing:
from datetime import datetime, timedelta, timezone
Step 3: Commit
git add cogs/sentiment/__init__.py
git commit -m "feat: route sentiment note_updates into memory system"
Task 7: Bot — Memory pruning background task
Files:
- Modify: bot.py
Step 1: Add pruning task to on_ready()
In BCSBot.on_ready() (around line 165), after the permissions check loop, add:
# Start memory pruning background task
if not hasattr(self, "_memory_prune_task") or self._memory_prune_task.done():
    self._memory_prune_task = asyncio.create_task(self._prune_memories_loop())
Step 2: Add the pruning loop method to BCSBot
Add to the BCSBot class, after on_ready():
async def _prune_memories_loop(self):
    """Background task that prunes expired memories every 6 hours."""
    await self.wait_until_ready()
    while not self.is_closed():
        try:
            count = await self.db.prune_expired_memories()
            if count > 0:
                logger.info("Pruned %d expired memories.", count)
        except Exception:
            logger.exception("Memory pruning error")
        await asyncio.sleep(6 * 3600)  # Every 6 hours
Step 3: Commit
git add bot.py
git commit -m "feat: add background memory pruning task"
Task 8: Migrate existing user notes to profile format
Files:
- Create: scripts/migrate_notes_to_profiles.py
This is a one-time migration script to convert existing timestamped note lines into profile summaries using the LLM.
Step 1: Create migration script
"""One-time migration: convert existing timestamped UserNotes into profile summaries.

Run with: python scripts/migrate_notes_to_profiles.py
Requires .env with DB_CONNECTION_STRING and LLM env vars.
"""
import asyncio
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))

from dotenv import load_dotenv

load_dotenv()

from utils.database import Database
from utils.llm_client import LLMClient


async def main():
    db = Database()
    if not await db.init():
        print("Database not available.")
        return
    llm = LLMClient(
        base_url=os.getenv("LLM_BASE_URL", ""),
        model=os.getenv("LLM_MODEL", "gpt-4o-mini"),
        api_key=os.getenv("LLM_API_KEY", "not-needed"),
    )
    states = await db.load_all_user_states()
    migrated = 0
    for state in states:
        notes = state.get("user_notes", "")
        if not notes or not notes.strip():
            continue
        # Check if already looks like a profile (no timestamps)
        if not any(line.strip().startswith("[") for line in notes.split("\n")):
            print(f"  User {state['user_id']}: already looks like a profile, skipping.")
            continue
        print(f"  User {state['user_id']}: migrating notes...")
        print(f"    Old: {notes[:200]}")
        # Ask LLM to summarize notes into a profile
        result = await llm.extract_memories(
            conversation=[{"role": "user", "content": f"Here are observation notes about a user:\n{notes}"}],
            username="unknown",
            current_profile="",
        )
        if result and result.get("profile_update"):
            profile = result["profile_update"]
            print(f"    New: {profile[:200]}")
            await db.save_user_state(
                user_id=state["user_id"],
                offense_count=state["offense_count"],
                immune=state["immune"],
                off_topic_count=state["off_topic_count"],
                baseline_coherence=state.get("baseline_coherence", 0.85),
                user_notes=profile,
                warned=state.get("warned", False),
                last_offense_at=state.get("last_offense_at"),
            )
            migrated += 1
        else:
            print("    No profile generated, keeping existing notes.")
    await llm.close()
    await db.close()
    print(f"\nMigrated {migrated}/{len(states)} user profiles.")


if __name__ == "__main__":
    asyncio.run(main())
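The migration's already-a-profile heuristic is worth checking on its own: timestamped observation lines start with "[", while profile summaries do not (standalone copy for illustration):

```python
def looks_like_profile(notes: str) -> bool:
    """True when no line starts with '[' (i.e., no timestamped note lines)."""
    return not any(line.strip().startswith("[") for line in notes.split("\n"))
```

This makes the script safe to re-run: users already migrated are skipped on the second pass.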
Step 2: Commit
git add scripts/migrate_notes_to_profiles.py
git commit -m "feat: add one-time migration script for user notes to profiles"
Task 9: Integration test — End-to-end verification
Step 1: Start the bot locally and verify
docker compose up --build
Step 2: Verify schema migration
Check Docker logs for successful DB initialization — the new UserMemory table should be created automatically.
Step 3: Test memory extraction
- @mention the bot in a Discord channel with a message like "Hey, I've been grinding GTA all week trying to hit rank 500"
- Check logs for "Extracted N memories for {username}", which confirms memory extraction ran
- Check DB: SELECT * FROM UserMemory should return rows
Step 4: Test memory retrieval
- @mention the bot again with "what do you know about me?"
- The response should reference the GTA grinding from the previous interaction
- Check logs for the memory context block being built
Step 5: Test memory expiration
Manually insert a test memory with an expired timestamp and verify the pruning task removes it (or wait for the 6-hour cycle, or temporarily shorten the interval for testing).
Step 6: Commit any fixes
git add -A
git commit -m "fix: integration test fixes for conversational memory"
Summary
| Task | What | Files |
|---|---|---|
| 1 | DB schema + CRUD | utils/database.py |
| 2 | LLM extraction tool | utils/llm_client.py, prompts/memory_extraction.txt |
| 3 | DramaTracker profile setter | utils/drama_tracker.py |
| 4 | Memory retrieval + injection in chat | cogs/chat.py |
| 5 | Memory extraction after chat | cogs/chat.py |
| 6 | Sentiment pipeline routing | cogs/sentiment/__init__.py |
| 7 | Background pruning task | bot.py |
| 8 | Migration script | scripts/migrate_notes_to_profiles.py |
| 9 | Integration test | (manual) |