Add a separate llm_chat client for chat responses

Chat responses now use a smarter model (gpt-4o-mini) while analysis
stays on the cheap local Qwen3-8B. Falls back to llm_heavy if
LLM_CHAT_MODEL is not set.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
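A minimal sketch of the fallback behavior this commit describes; the factory function and its parameters are hypothetical, only the names llm_chat, llm_heavy, and LLM_CHAT_MODEL come from the commit itself:

```python
import os

def make_llm_chat(llm_heavy, client_factory):
    """Build the chat client, falling back to llm_heavy when
    LLM_CHAT_MODEL is unset or empty (hypothetical helper)."""
    model = os.environ.get("LLM_CHAT_MODEL")
    if not model:
        # No dedicated chat model configured: reuse the heavy client.
        return llm_heavy
    # Otherwise construct a client bound to the configured model,
    # e.g. "gpt-4o-mini".
    return client_factory(model)
```

Centralizing the fallback in one place keeps analysis pinned to the local Qwen3-8B client while letting deployments opt chat onto a stronger model via a single environment variable.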