Send the last ~8 messages from all users (not just others) as a
multi-line chat log with relative timestamps so the LLM can
better understand conversation flow and escalation patterns.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
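
A minimal sketch of how that context window might be assembled, assuming
discord.py-style message objects; the helper names (relative_age,
build_chat_log) and the 8-message default are illustrative, not the bot's
actual code:

```python
from datetime import datetime, timezone

def relative_age(then: datetime, now: datetime) -> str:
    """Render a timestamp as a short relative age like '2m ago'."""
    seconds = max(0, int((now - then).total_seconds()))
    if seconds < 60:
        return f"{seconds}s ago"
    if seconds < 3600:
        return f"{seconds // 60}m ago"
    return f"{seconds // 3600}h ago"

def build_chat_log(messages, limit: int = 8) -> str:
    """Format the last `limit` messages (from all users, oldest first)
    as one multi-line log the LLM can read as a conversation."""
    now = datetime.now(timezone.utc)
    return "\n".join(
        f"[{relative_age(m.created_at, now)}] {m.author.display_name}: {m.content}"
        for m in messages[-limit:]
    )
```
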
Detect when users discuss a game in the wrong channel (e.g. GTA talk
in #warzone) and send a friendly redirect to the correct channel.
Also add a sexual_vulgar category and scoring rules so crude sexual
remarks directed at someone aren't softened by "lmao".
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
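
One way the redirect check could look; the keyword table and channel names
below are made up for illustration (the real mapping presumably lives in
the bot's config):

```python
# Hypothetical game -> channel and game -> keyword tables.
GAME_CHANNELS: dict[str, str] = {
    "gta": "#gta",
    "warzone": "#warzone",
}
GAME_KEYWORDS: dict[str, tuple[str, ...]] = {
    "gta": ("gta", "los santos", "heist"),
    "warzone": ("warzone", "rebirth", "loadout drop"),
}

def detect_wrong_channel(content: str, current_channel: str) -> str | None:
    """Return the channel this message probably belongs in,
    or None if it is already in the right place."""
    text = content.lower()
    for game, keywords in GAME_KEYWORDS.items():
        target = GAME_CHANNELS[game]
        if target != current_channel and any(kw in text for kw in keywords):
            return target
    return None
```

On a hit, the bot would send a friendly nudge pointing at the returned
channel rather than deleting or scolding.
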
When @mentioned with an image attachment, the bot now roasts players
based on scoreboard screenshots using the vision model. Text-only
mentions continue to work as before.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
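
A sketch of the vision path, assuming a standard OpenAI-compatible
/v1/chat/completions endpoint and httpx; the function name and parameters
are illustrative:

```python
import base64
import httpx

async def roast_from_screenshot(base_url: str, model: str,
                                image_bytes: bytes, prompt: str) -> str:
    """Send a scoreboard screenshot plus a roast prompt to an
    OpenAI-compatible vision model and return its reply."""
    b64 = base64.b64encode(image_bytes).decode()
    payload = {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }
    async with httpx.AsyncClient(timeout=600) as client:
        resp = await client.post(f"{base_url}/v1/chat/completions", json=payload)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
```
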
Add a max_tokens cap to analyze_message and raw_analyze. These methods
previously set no limit, so thinking models (Qwen3-VL-32B-Thinking)
could generate unbounded reasoning tokens before responding, taking
5+ minutes per message.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
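
The fix boils down to passing max_tokens in the request body; a sketch
under the same OpenAI-compatible API assumption (the 2048 default is a
guess, not the value the commit chose):

```python
import httpx

async def raw_analyze(base_url: str, model: str, prompt: str,
                      max_tokens: int = 2048) -> str:
    """Sketch of the patched method: max_tokens bounds total generation,
    so a thinking model cannot reason indefinitely before emitting
    its visible answer."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    async with httpx.AsyncClient(timeout=600) as client:
        resp = await client.post(f"{base_url}/v1/chat/completions", json=payload)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
```
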
- Serialize all LLM requests through an asyncio semaphore to prevent
overloading athena with concurrent requests (see the sketch after this list)
- Switch chat() to streaming so the typing indicator only appears once
the model starts generating (not during thinking/loading)
- Increase LLM timeout from 5 to 10 minutes for slow first loads
- Rename ollama_client.py to llm_client.py and self.ollama to self.llm
since the bot uses a generic OpenAI-compatible API
- Update embed labels from "Ollama" to "LLM"
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
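
A combined sketch of the semaphore and streaming changes, again assuming
httpx against an OpenAI-compatible endpoint; the names (LLM_SEMAPHORE,
chat_stream) are illustrative:

```python
import asyncio
import json
import httpx

# Width 1 = fully serialized: at most one request in flight against athena.
LLM_SEMAPHORE = asyncio.Semaphore(1)

async def chat_stream(base_url: str, model: str, messages: list[dict]):
    """Yield content deltas as the model generates them. The caller can
    start Discord's typing indicator on the first yield, so it only
    appears once generation begins, not during thinking/model loading."""
    payload = {"model": model, "messages": messages, "stream": True}
    async with LLM_SEMAPHORE:
        async with httpx.AsyncClient(timeout=600) as client:  # 10-minute timeout
            async with client.stream("POST", f"{base_url}/v1/chat/completions",
                                     json=payload) as resp:
                resp.raise_for_status()
                async for line in resp.aiter_lines():
                    if not line.startswith("data: "):
                        continue
                    data = line[len("data: "):]
                    if data == "[DONE]":
                        break
                    delta = json.loads(data)["choices"][0]["delta"].get("content")
                    if delta:
                        yield delta
```
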