Add LLM configuration and update URL routing
- Introduced LLM configuration in rag_config.yaml and a corresponding LLMConfig class in config.py.
- Updated the load_rag_config function to parse LLM settings from the configuration file.
- Added a new API route for RAG in urls.py to expose the chat model.
- Modified QdrantVectorStore to use the query_points method.
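
The LLMConfig class and the parsing step in load_rag_config are not shown in this commit, so the following is a minimal sketch of how the new `llm:` block could map onto them. The dataclass fields mirror the keys added to rag_config.yaml; the `parse_llm_config` helper and `resolve_api_key` method are hypothetical names, and the sketch assumes the YAML has already been loaded into a plain dict.

```python
import os
from dataclasses import dataclass


@dataclass
class LLMConfig:
    # Field names mirror the keys under `llm:` in rag_config.yaml.
    model: str
    base_url: str
    api_key_env: str

    def resolve_api_key(self) -> str:
        # The config stores the *name* of the environment variable,
        # never the key itself, so secrets stay out of the YAML file.
        key = os.environ.get(self.api_key_env)
        if key is None:
            raise RuntimeError(f"environment variable {self.api_key_env} is not set")
        return key


def parse_llm_config(raw: dict) -> LLMConfig:
    # Hypothetical helper standing in for the LLM-parsing step
    # added to load_rag_config in this commit.
    llm = raw["llm"]
    return LLMConfig(
        model=llm["model"],
        base_url=llm["base_url"],
        api_key_env=llm["api_key_env"],
    )


# Example with the values from the diff below:
cfg = parse_llm_config({
    "llm": {
        "model": "gpt-4o",
        "base_url": "https://api.avalai.ir/v1",
        "api_key_env": "AVALAI_API_KEY",
    }
})
```

Keeping only the variable name in the config means the same rag_config.yaml can be committed to version control, while each deployment supplies its own AVALAI_API_KEY.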
@@ -18,6 +18,12 @@ chunking:
   max_chunk_tokens: 500
   overlap_tokens: 50
 
+# Chat model (LLM) settings — Avalai
+llm:
+  model: "gpt-4o"
+  base_url: "https://api.avalai.ir/v1"
+  api_key_env: "AVALAI_API_KEY"
+
 tone_file: "config/tone.txt"
 knowledge_base_path: "config/knowledge_base"
 user_info_path: "config/user_info"