c258994a2e0a8cb3a867ea0cc4ef6dce5842add5
Add a separate llm_chat client for chat responses

Chat responses now use a smarter model (gpt-4o-mini) while analysis stays on the cheap local Qwen3-8B. Falls back to llm_heavy if LLM_CHAT_MODEL is not set.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
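The fallback described in the commit message can be sketched as a small model-selection helper. This is a minimal illustration, not the actual implementation: the function name `make_chat_model`, the `LLM_HEAVY_MODEL` variable name, and the default model string are assumptions; only `LLM_CHAT_MODEL`, `llm_heavy`, `gpt-4o-mini`, and `Qwen3-8B` come from the commit message.

```python
import os

def make_chat_model(env=os.environ):
    """Pick the model for chat responses (hypothetical helper).

    Uses LLM_CHAT_MODEL when it is set; otherwise falls back to the
    heavy model that analysis already uses, mirroring the commit's
    "falls back to llm_heavy" behavior.
    """
    # Variable name LLM_HEAVY_MODEL and the Qwen3-8B default are
    # illustrative assumptions, not taken from the repo.
    heavy = env.get("LLM_HEAVY_MODEL", "Qwen3-8B")
    return env.get("LLM_CHAT_MODEL", heavy)
```

With `LLM_CHAT_MODEL=gpt-4o-mini` set, chat uses the smarter model; with it unset, chat quietly shares the analysis model, so the feature degrades gracefully on deployments that only run the local model.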
Languages: Python 98.8%, Shell 0.8%, Dockerfile 0.4%