Optimize Your Responses with the Adaptive RAG Workflow

💡 Description

This n8n workflow implements an adaptive Retrieval-Augmented Generation (RAG) approach that automatically classifies each user query into one of four categories: Factual, Analytical, Opinion, or Contextual. It then applies a retrieval and generation strategy tailored to that category to produce accurate, relevant answers. Built on Google Gemini and a Qdrant vector store, the workflow handles complex queries against your knowledge base and adapts its answers to the intent behind each question, improving both decision-making and the user experience.
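
For illustration, the classifier is expected to return only the category name. A few hypothetical queries and the labels they should receive under the rules above:

  "What year was the Eiffel Tower completed?"           → Factual
  "Why do microservices complicate deployments?"        → Analytical
  "Is TypeScript better than JavaScript?"               → Opinion
  "Which of my subscriptions should I cancel first?"    → Contextual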

📈 Impact & ROI: Improves customer satisfaction by returning more relevant answers and reduces the time spent handling queries manually.

🚀 Key Features

  • ✅ Intelligent query classification for more precise answers
  • ✅ Retrieval strategies tailored to each query type (see the example after this list)
  • ✅ Seamless Google Gemini integration for advanced language processing
  • ✅ Faster responses thanks to fully automated processing
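
As an illustration of the tailored strategies, each branch reshapes the query before retrieval. For an analytical query such as "Why do microservices complicate deployments?", the Analytical Strategy agent is instructed to return exactly three sub-questions, one per line; a hypothetical output:

  How do microservices change build, release, and rollback procedures?
  What operational overhead (networking, monitoring, data consistency) do microservices introduce?
  In which situations does a monolith remain simpler to deploy?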

📊 Technical Architecture

  • 39 nodes
  • 29 connections
  • 2 integrated services

🔌 Integrated Services

Google Gemini, Qdrant

🔧 Workflow Composition

The table below lists every node in the workflow; the routing expression used by the Switch node is shown right after it.

Node | Type | Description
Query Classification | @n8n/n8n-nodes-langchain.agent | Data processing
Switch | switch | Data processing
Factual Strategy - Focus on Precision | @n8n/n8n-nodes-langchain.agent | Data processing
Analytical Strategy - Comprehensive Coverage | @n8n/n8n-nodes-langchain.agent | Data processing
Opinion Strategy - Diverse Perspectives | @n8n/n8n-nodes-langchain.agent | Data processing
Contextual Strategy - User Context Integration | @n8n/n8n-nodes-langchain.agent | Data processing
Chat | @n8n/n8n-nodes-langchain.chatTrigger | Data processing
Factual Prompt and Output | set | Data processing
Contextual Prompt and Output | set | Data processing
Opinion Prompt and Output | set | Data processing
Analytical Prompt and Output | set | Data processing
Gemini Classification | @n8n/n8n-nodes-langchain.lmChatGoogleGemini | Data processing
Gemini Factual | @n8n/n8n-nodes-langchain.lmChatGoogleGemini | Data processing
Gemini Analytical | @n8n/n8n-nodes-langchain.lmChatGoogleGemini | Data processing
Chat Buffer Memory Analytical | @n8n/n8n-nodes-langchain.memoryBufferWindow | Data processing
Chat Buffer Memory Factual | @n8n/n8n-nodes-langchain.memoryBufferWindow | Data processing
Gemini Opinion | @n8n/n8n-nodes-langchain.lmChatGoogleGemini | Data processing
Chat Buffer Memory Opinion | @n8n/n8n-nodes-langchain.memoryBufferWindow | Data processing
Gemini Contextual | @n8n/n8n-nodes-langchain.lmChatGoogleGemini | Data processing
Chat Buffer Memory Contextual | @n8n/n8n-nodes-langchain.memoryBufferWindow | Data processing
Embeddings | @n8n/n8n-nodes-langchain.embeddingsGoogleGemini | Data processing
Sticky Note | stickyNote | Data processing
Sticky Note1 | stickyNote | Data processing
Sticky Note2 | stickyNote | Data processing
Sticky Note3 | stickyNote | Data processing
Concatenate Context | summarize | Data processing
Retrieve Documents from Vector Store | @n8n/n8n-nodes-langchain.vectorStoreQdrant | Data processing
Set Prompt and Output | set | Data processing
Gemini Answer | @n8n/n8n-nodes-langchain.lmChatGoogleGemini | Data processing
Answer | @n8n/n8n-nodes-langchain.agent | Data processing
Chat Buffer Memory | @n8n/n8n-nodes-langchain.memoryBufferWindow | Data processing
Sticky Note4 | stickyNote | Data processing
Sticky Note5 | stickyNote | Data processing
Respond to Webhook | respondToWebhook | Receives data via webhook
Sticky Note6 | stickyNote | Data processing
When Executed by Another Workflow | executeWorkflowTrigger | Data processing
Combined Fields | set | Data processing
Sticky Note7 | stickyNote | Data processing
Sticky Note8 | stickyNote | Data processing
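
The Switch node routes each query by comparing the classifier's output against the four category names with strict string equality, one rule per branch; anything that does not match falls back to the first output (the Factual branch). This mirrors the Switch rules in the JSON further down:

  leftValue:  {{ $json.output.trim() }}
  rightValue: "Factual" / "Analytical" / "Opinion" / "Contextual"   (one rule per branch)
  fallback:   output 0 (Factual)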

📖 Implementation Guide

  1. Import the workflow: Download the JSON file and import it into your n8n instance
  2. Configure credentials: Set up access for each integrated service (Google Gemini and Qdrant)
  3. Customize: Adapt the parameters to your needs, in particular the vector_store_id used for document retrieval (see the example input below)
  4. Test: Run the workflow in test mode to confirm it behaves as expected
  5. Activate: Enable the workflow for automatic execution
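
When this workflow is started by another workflow (the When Executed by Another Workflow trigger), it expects the three fields declared as workflow inputs. A minimal example payload, using placeholder values, could look like this:

{
    "user_query": "What year was the Eiffel Tower completed?",
    "chat_memory_key": "session-1234",
    "vector_store_id": "my-qdrant-collection"
}

In Chat mode, user_query falls back to chatInput and chat_memory_key falls back to the chat sessionId, while vector_store_id must be set manually in the Combined Fields node (replace the <ID HERE> placeholder, as Sticky Note8 in the JSON below points out).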

🏷️ Tags

RAG, Google Gemini, n8n

JSON Structure

{
    "id": "cpuFyJYHKmjHTncz",
    "meta": {
        "instanceId": "2cb7a61f866faf57392b91b31f47e08a2b3640258f0abd08dd71f087f3243a5a",
        "templateCredsSetupCompleted": true
    },
    "name": "Adaptive RAG",
    "tags": [],
    "nodes": [
        {
            "id": "856bd809-8f41-41af-8f72-a3828229c2a5",
            "name": "Query Classification",
            "type": "@n8n\/n8n-nodes-langchain.agent",
            "notes": "Classify a query into one of four categories: Factual, Analytical, Opinion, or Contextual.\n        \nReturns:\nstr: Query category",
            "position": [
                380,
                -20
            ],
            "parameters": {
                "text": "=Classify this query: {{ $('Combined Fields').item.json.user_query }}",
                "options": {
                    "systemMessage": "You are an expert at classifying questions. \n\nClassify the given query into exactly one of these categories:\n- Factual: Queries seeking specific, verifiable information.\n- Analytical: Queries requiring comprehensive analysis or explanation.\n- Opinion: Queries about subjective matters or seeking diverse viewpoints.\n- Contextual: Queries that depend on user-specific context.\n\nReturn ONLY the category name, without any explanation or additional text."
                },
                "promptType": "define"
            },
            "typeVersion": 1.8
        },
        {
            "id": "cc2106fc-f1a8-45ef-b37b-ab981ac13466",
            "name": "Switch",
            "type": "n8n-nodes-base.switch",
            "position": [
                740,
                -40
            ],
            "parameters": {
                "rules": {
                    "values": [
                        {
                            "outputKey": "Factual",
                            "conditions": {
                                "options": {
                                    "version": 2,
                                    "leftValue": "",
                                    "caseSensitive": true,
                                    "typeValidation": "strict"
                                },
                                "combinator": "and",
                                "conditions": [
                                    {
                                        "id": "87f3b50c-9f32-4260-ac76-19c05b28d0b4",
                                        "operator": {
                                            "type": "string",
                                            "operation": "equals"
                                        },
                                        "leftValue": "={{ $json.output.trim() }}",
                                        "rightValue": "Factual"
                                    }
                                ]
                            },
                            "renameOutput": true
                        },
                        {
                            "outputKey": "Analytical",
                            "conditions": {
                                "options": {
                                    "version": 2,
                                    "leftValue": "",
                                    "caseSensitive": true,
                                    "typeValidation": "strict"
                                },
                                "combinator": "and",
                                "conditions": [
                                    {
                                        "id": "f8651b36-79fa-4be4-91fb-0e6d7deea18f",
                                        "operator": {
                                            "name": "filter.operator.equals",
                                            "type": "string",
                                            "operation": "equals"
                                        },
                                        "leftValue": "={{ $json.output.trim() }}",
                                        "rightValue": "Analytical"
                                    }
                                ]
                            },
                            "renameOutput": true
                        },
                        {
                            "outputKey": "Opinion",
                            "conditions": {
                                "options": {
                                    "version": 2,
                                    "leftValue": "",
                                    "caseSensitive": true,
                                    "typeValidation": "strict"
                                },
                                "combinator": "and",
                                "conditions": [
                                    {
                                        "id": "5dde06bc-5fe1-4dca-b6e2-6857c5e96d49",
                                        "operator": {
                                            "name": "filter.operator.equals",
                                            "type": "string",
                                            "operation": "equals"
                                        },
                                        "leftValue": "={{ $json.output.trim() }}",
                                        "rightValue": "Opinion"
                                    }
                                ]
                            },
                            "renameOutput": true
                        },
                        {
                            "outputKey": "Contextual",
                            "conditions": {
                                "options": {
                                    "version": 2,
                                    "leftValue": "",
                                    "caseSensitive": true,
                                    "typeValidation": "strict"
                                },
                                "combinator": "and",
                                "conditions": [
                                    {
                                        "id": "bf97926d-7a0b-4e2f-aac0-a820f73344d8",
                                        "operator": {
                                            "name": "filter.operator.equals",
                                            "type": "string",
                                            "operation": "equals"
                                        },
                                        "leftValue": "={{ $json.output.trim() }}",
                                        "rightValue": "Contextual"
                                    }
                                ]
                            },
                            "renameOutput": true
                        }
                    ]
                },
                "options": {
                    "fallbackOutput": 0
                }
            },
            "typeVersion": 3.2
        },
        {
            "id": "63889cad-1283-4dbf-ba16-2b6cf575f24a",
            "name": "Factual Strategy - Focus on Precision",
            "type": "@n8n\/n8n-nodes-langchain.agent",
            "notes": "Retrieval strategy for factual queries focusing on precision.",
            "position": [
                1140,
                -780
            ],
            "parameters": {
                "text": "=Enhance this factual query: {{ $('Combined Fields').item.json.user_query }}",
                "options": {
                    "systemMessage": "=You are an expert at enhancing search queries.\n\nYour task is to reformulate the given factual query to make it more precise and specific for information retrieval. Focus on key entities and their relationships.\n\nProvide ONLY the enhanced query without any explanation."
                },
                "promptType": "define"
            },
            "typeVersion": 1.7
        },
        {
            "id": "020d2201-9590-400d-b496-48c65801271c",
            "name": "Analytical Strategy - Comprehensive Coverage",
            "type": "@n8n\/n8n-nodes-langchain.agent",
            "notes": "Retrieval strategy for analytical queries focusing on comprehensive coverage.",
            "position": [
                1140,
                -240
            ],
            "parameters": {
                "text": "=Generate sub-questions for this analytical query: {{ $('Combined Fields').item.json.user_query }}",
                "options": {
                    "systemMessage": "=You are an expert at breaking down complex questions.\n\nGenerate sub-questions that explore different aspects of the main analytical query.\nThese sub-questions should cover the breadth of the topic and help retrieve comprehensive information.\n\nReturn a list of exactly 3 sub-questions, one per line."
                },
                "promptType": "define"
            },
            "typeVersion": 1.7
        },
        {
            "id": "c35d1b95-68c8-4237-932d-4744f620760d",
            "name": "Opinion Strategy - Diverse Perspectives",
            "type": "@n8n\/n8n-nodes-langchain.agent",
            "notes": "Retrieval strategy for opinion queries focusing on diverse perspectives.",
            "position": [
                1140,
                300
            ],
            "parameters": {
                "text": "=Identify different perspectives on: {{ $('Combined Fields').item.json.user_query }}",
                "options": {
                    "systemMessage": "=You are an expert at identifying different perspectives on a topic.\n\nFor the given query about opinions or viewpoints, identify different perspectives that people might have on this topic.\n\nReturn a list of exactly 3 different viewpoint angles, one per line."
                },
                "promptType": "define"
            },
            "typeVersion": 1.7
        },
        {
            "id": "363a3fc3-112f-40df-891e-0a5aa3669245",
            "name": "Contextual Strategy - User Context Integration",
            "type": "@n8n\/n8n-nodes-langchain.agent",
            "notes": "Retrieval strategy for contextual queries integrating user context.",
            "position": [
                1140,
                840
            ],
            "parameters": {
                "text": "=Infer the implied context in this query: {{ $('Combined Fields').item.json.user_query }}",
                "options": {
                    "systemMessage": "=You are an expert at understanding implied context in questions.\n\nFor the given query, infer what contextual information might be relevant or implied but not explicitly stated. Focus on what background would help answering this query.\n\nReturn a brief description of the implied context."
                },
                "promptType": "define"
            },
            "typeVersion": 1.7
        },
        {
            "id": "45887701-5ea5-48b4-9b2b-40a80238ab0c",
            "name": "Chat",
            "type": "@n8n\/n8n-nodes-langchain.chatTrigger",
            "position": [
                -280,
                120
            ],
            "webhookId": "56f626b5-339e-48af-857f-1d4198fc8a4d",
            "parameters": {
                "options": []
            },
            "typeVersion": 1.1
        },
        {
            "id": "7f7df364-4829-4e29-be3d-d13a63f65b8f",
            "name": "Factual Prompt and Output",
            "type": "n8n-nodes-base.set",
            "position": [
                1540,
                -780
            ],
            "parameters": {
                "options": [],
                "assignments": {
                    "assignments": [
                        {
                            "id": "a4a28ac2-4a56-46f6-8b86-f5d1a34b2ced",
                            "name": "output",
                            "type": "string",
                            "value": "={{ $json.output }}"
                        },
                        {
                            "id": "7aa6ce13-afbf-4871-b81c-6e9c722a53dc",
                            "name": "prompt",
                            "type": "string",
                            "value": "You are a helpful assistant providing factual information. Answer the question based on the provided context. Focus on accuracy and precision. If the context doesn't contain the information needed, acknowledge the limitations."
                        }
                    ]
                }
            },
            "typeVersion": 3.4
        },
        {
            "id": "590d8667-69eb-4db2-b5be-714c602b319a",
            "name": "Contextual Prompt and Output",
            "type": "n8n-nodes-base.set",
            "position": [
                1540,
                840
            ],
            "parameters": {
                "options": [],
                "assignments": {
                    "assignments": [
                        {
                            "id": "a4a28ac2-4a56-46f6-8b86-f5d1a34b2ced",
                            "name": "output",
                            "type": "string",
                            "value": "={{ $json.output }}"
                        },
                        {
                            "id": "7aa6ce13-afbf-4871-b81c-6e9c722a53dc",
                            "name": "prompt",
                            "type": "string",
                            "value": "You are a helpful assistant providing contextually relevant information. Answer the question considering both the query and its context. Make connections between the query context and the information in the provided documents. If the context doesn't fully address the specific situation, acknowledge the limitations."
                        }
                    ]
                }
            },
            "typeVersion": 3.4
        },
        {
            "id": "fa3228ee-62d8-4c02-9dca-8a1ebc6afc74",
            "name": "Opinion Prompt and Output",
            "type": "n8n-nodes-base.set",
            "position": [
                1540,
                300
            ],
            "parameters": {
                "options": [],
                "assignments": {
                    "assignments": [
                        {
                            "id": "a4a28ac2-4a56-46f6-8b86-f5d1a34b2ced",
                            "name": "output",
                            "type": "string",
                            "value": "={{ $json.output }}"
                        },
                        {
                            "id": "7aa6ce13-afbf-4871-b81c-6e9c722a53dc",
                            "name": "prompt",
                            "type": "string",
                            "value": "You are a helpful assistant discussing topics with multiple viewpoints. Based on the provided context, present different perspectives on the topic. Ensure fair representation of diverse opinions without showing bias. Acknowledge where the context presents limited viewpoints."
                        }
                    ]
                }
            },
            "typeVersion": 3.4
        },
        {
            "id": "c769a76a-fb26-46a1-a00d-825b689d5f7a",
            "name": "Analytical Prompt and Output",
            "type": "n8n-nodes-base.set",
            "position": [
                1540,
                -240
            ],
            "parameters": {
                "options": [],
                "assignments": {
                    "assignments": [
                        {
                            "id": "a4a28ac2-4a56-46f6-8b86-f5d1a34b2ced",
                            "name": "output",
                            "type": "string",
                            "value": "={{ $json.output }}"
                        },
                        {
                            "id": "7aa6ce13-afbf-4871-b81c-6e9c722a53dc",
                            "name": "prompt",
                            "type": "string",
                            "value": "You are a helpful assistant providing analytical insights. Based on the provided context, offer a comprehensive analysis of the topic. Cover different aspects and perspectives in your explanation. If the context has gaps, acknowledge them while providing the best analysis possible."
                        }
                    ]
                }
            },
            "typeVersion": 3.4
        },
        {
            "id": "fcd29f6b-17e8-442c-93f9-b93fbad7cd10",
            "name": "Gemini Classification",
            "type": "@n8n\/n8n-nodes-langchain.lmChatGoogleGemini",
            "position": [
                360,
                180
            ],
            "parameters": {
                "options": [],
                "modelName": "models\/gemini-2.0-flash-lite"
            },
            "credentials": {
                "googlePalmApi": {
                    "id": "2zwuT5znDglBrUCO",
                    "name": "Google Gemini(PaLM) Api account"
                }
            },
            "typeVersion": 1
        },
        {
            "id": "c0828ee3-f184-41f5-9a25-0f1059b03711",
            "name": "Gemini Factual",
            "type": "@n8n\/n8n-nodes-langchain.lmChatGoogleGemini",
            "position": [
                1120,
                -560
            ],
            "parameters": {
                "options": [],
                "modelName": "models\/gemini-2.0-flash"
            },
            "credentials": {
                "googlePalmApi": {
                    "id": "2zwuT5znDglBrUCO",
                    "name": "Google Gemini(PaLM) Api account"
                }
            },
            "typeVersion": 1
        },
        {
            "id": "98f9981d-ea8e-45cb-b91d-3c8d1fe33e25",
            "name": "Gemini Analytical",
            "type": "@n8n\/n8n-nodes-langchain.lmChatGoogleGemini",
            "position": [
                1120,
                -20
            ],
            "parameters": {
                "options": [],
                "modelName": "models\/gemini-2.0-flash"
            },
            "credentials": {
                "googlePalmApi": {
                    "id": "2zwuT5znDglBrUCO",
                    "name": "Google Gemini(PaLM) Api account"
                }
            },
            "typeVersion": 1
        },
        {
            "id": "c85f270d-3224-4e60-9acf-91f173dfe377",
            "name": "Chat Buffer Memory Analytical",
            "type": "@n8n\/n8n-nodes-langchain.memoryBufferWindow",
            "position": [
                1280,
                -20
            ],
            "parameters": {
                "sessionKey": "={{ $('Combined Fields').item.json.chat_memory_key }}",
                "sessionIdType": "customKey",
                "contextWindowLength": 10
            },
            "typeVersion": 1.3
        },
        {
            "id": "c39ba907-7388-4152-965a-e28e626bc9b2",
            "name": "Chat Buffer Memory Factual",
            "type": "@n8n\/n8n-nodes-langchain.memoryBufferWindow",
            "position": [
                1280,
                -560
            ],
            "parameters": {
                "sessionKey": "={{ $('Combined Fields').item.json.chat_memory_key }}",
                "sessionIdType": "customKey",
                "contextWindowLength": 10
            },
            "typeVersion": 1.3
        },
        {
            "id": "52dcd9f0-e6b3-4d33-bc6f-621ef880178e",
            "name": "Gemini Opinion",
            "type": "@n8n\/n8n-nodes-langchain.lmChatGoogleGemini",
            "position": [
                1120,
                520
            ],
            "parameters": {
                "options": [],
                "modelName": "models\/gemini-2.0-flash"
            },
            "credentials": {
                "googlePalmApi": {
                    "id": "2zwuT5znDglBrUCO",
                    "name": "Google Gemini(PaLM) Api account"
                }
            },
            "typeVersion": 1
        },
        {
            "id": "147a709a-4b46-4835-82cf-7d6b633acd4c",
            "name": "Chat Buffer Memory Opinion",
            "type": "@n8n\/n8n-nodes-langchain.memoryBufferWindow",
            "position": [
                1280,
                520
            ],
            "parameters": {
                "sessionKey": "={{ $('Combined Fields').item.json.chat_memory_key }}",
                "sessionIdType": "customKey",
                "contextWindowLength": 10
            },
            "typeVersion": 1.3
        },
        {
            "id": "3cb6bf32-5937-49b9-acf7-d7d01dc2ddd1",
            "name": "Gemini Contextual",
            "type": "@n8n\/n8n-nodes-langchain.lmChatGoogleGemini",
            "position": [
                1120,
                1060
            ],
            "parameters": {
                "options": [],
                "modelName": "models\/gemini-2.0-flash"
            },
            "credentials": {
                "googlePalmApi": {
                    "id": "2zwuT5znDglBrUCO",
                    "name": "Google Gemini(PaLM) Api account"
                }
            },
            "typeVersion": 1
        },
        {
            "id": "5916c4f1-4369-4d66-8553-2fff006b7e69",
            "name": "Chat Buffer Memory Contextual",
            "type": "@n8n\/n8n-nodes-langchain.memoryBufferWindow",
            "position": [
                1280,
                1060
            ],
            "parameters": {
                "sessionKey": "={{ $('Combined Fields').item.json.chat_memory_key }}",
                "sessionIdType": "customKey",
                "contextWindowLength": 10
            },
            "typeVersion": 1.3
        },
        {
            "id": "d33377c2-6b98-4e4d-968f-f3085354ae50",
            "name": "Embeddings",
            "type": "@n8n\/n8n-nodes-langchain.embeddingsGoogleGemini",
            "position": [
                2060,
                200
            ],
            "parameters": {
                "modelName": "models\/text-embedding-004"
            },
            "credentials": {
                "googlePalmApi": {
                    "id": "2zwuT5znDglBrUCO",
                    "name": "Google Gemini(PaLM) Api account"
                }
            },
            "typeVersion": 1
        },
        {
            "id": "32d9a0c0-0889-4cb2-a088-8ee9cfecacd3",
            "name": "Sticky Note",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                1040,
                -900
            ],
            "parameters": {
                "color": 7,
                "width": 700,
                "height": 520,
                "content": "## Factual Strategy\n**Retrieve precise facts and figures.**"
            },
            "typeVersion": 1
        },
        {
            "id": "064a4729-717c-40c8-824a-508406610a13",
            "name": "Sticky Note1",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                1040,
                -360
            ],
            "parameters": {
                "color": 7,
                "width": 700,
                "height": 520,
                "content": "## Analytical Strategy\n**Provide comprehensive coverage of a topic by exploring different aspects.**"
            },
            "typeVersion": 1
        },
        {
            "id": "9fd52a28-44bc-4dfd-bdb7-90987cc2f4fb",
            "name": "Sticky Note2",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                1040,
                180
            ],
            "parameters": {
                "color": 7,
                "width": 700,
                "height": 520,
                "content": "## Opinion Strategy\n**Gather diverse viewpoints on a subjective issue.**"
            },
            "typeVersion": 1
        },
        {
            "id": "3797b21f-cc2a-4210-aa63-6d181d413c5e",
            "name": "Sticky Note3",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                1040,
                720
            ],
            "parameters": {
                "color": 7,
                "width": 700,
                "height": 520,
                "content": "## Contextual Strategy\n**Incorporate user-specific context to fine-tune the retrieval.**"
            },
            "typeVersion": 1
        },
        {
            "id": "16fa1531-9fb9-4b12-961c-be12e20b2134",
            "name": "Concatenate Context",
            "type": "n8n-nodes-base.summarize",
            "position": [
                2440,
                -20
            ],
            "parameters": {
                "options": [],
                "fieldsToSummarize": {
                    "values": [
                        {
                            "field": "document.pageContent",
                            "separateBy": "other",
                            "aggregation": "concatenate",
                            "customSeparator": "={{ \"\\n\\n---\\n\\n\" }}"
                        }
                    ]
                }
            },
            "typeVersion": 1.1
        },
        {
            "id": "4d6147d1-7a3d-42ab-b23f-cdafe8ea30b0",
            "name": "Retrieve Documents from Vector Store",
            "type": "@n8n\/n8n-nodes-langchain.vectorStoreQdrant",
            "position": [
                2080,
                -20
            ],
            "parameters": {
                "mode": "load",
                "topK": 10,
                "prompt": "={{ $json.prompt }}\n\nUser query: \n{{ $json.output }}",
                "options": [],
                "qdrantCollection": {
                    "__rl": true,
                    "mode": "id",
                    "value": "={{ $('Combined Fields').item.json.vector_store_id }}"
                }
            },
            "credentials": {
                "qdrantApi": {
                    "id": "mb8rw8tmUeP6aPJm",
                    "name": "QdrantApi account"
                }
            },
            "typeVersion": 1.1
        },
        {
            "id": "7e68f9cb-0a0d-4215-8083-3b9ef92cd237",
            "name": "Set Prompt and Output",
            "type": "n8n-nodes-base.set",
            "position": [
                1880,
                -20
            ],
            "parameters": {
                "options": [],
                "assignments": {
                    "assignments": [
                        {
                            "id": "1d782243-0571-4845-b8fe-4c6c4b55379e",
                            "name": "output",
                            "type": "string",
                            "value": "={{ $json.output }}"
                        },
                        {
                            "id": "547091fb-367c-44d4-ac39-24d073da70e0",
                            "name": "prompt",
                            "type": "string",
                            "value": "={{ $json.prompt }}"
                        }
                    ]
                }
            },
            "typeVersion": 3.4
        },
        {
            "id": "0c623ca1-da85-48a3-9d8b-90d97283a015",
            "name": "Gemini Answer",
            "type": "@n8n\/n8n-nodes-langchain.lmChatGoogleGemini",
            "position": [
                2720,
                200
            ],
            "parameters": {
                "options": [],
                "modelName": "models\/gemini-2.0-flash"
            },
            "credentials": {
                "googlePalmApi": {
                    "id": "2zwuT5znDglBrUCO",
                    "name": "Google Gemini(PaLM) Api account"
                }
            },
            "typeVersion": 1
        },
        {
            "id": "fab91e48-1c62-46a8-b9fc-39704f225274",
            "name": "Answer",
            "type": "@n8n\/n8n-nodes-langchain.agent",
            "position": [
                2760,
                -20
            ],
            "parameters": {
                "text": "=User query: {{ $('Combined Fields').item.json.user_query }}",
                "options": {
                    "systemMessage": "={{ $('Set Prompt and Output').item.json.prompt }}\n\nUse the following context (delimited by <ctx><\/ctx>) and the chat history to answer the user query.\n<ctx>\n{{ $json.concatenated_document_pageContent }}\n<\/ctx>"
                },
                "promptType": "define"
            },
            "typeVersion": 1.8
        },
        {
            "id": "d69f8d62-3064-40a8-b490-22772fbc38cd",
            "name": "Chat Buffer Memory",
            "type": "@n8n\/n8n-nodes-langchain.memoryBufferWindow",
            "position": [
                2900,
                200
            ],
            "parameters": {
                "sessionKey": "={{ $('Combined Fields').item.json.chat_memory_key }}",
                "sessionIdType": "customKey",
                "contextWindowLength": 10
            },
            "typeVersion": 1.3
        },
        {
            "id": "a399f8e6-fafd-4f73-a2de-894f1e3c4bec",
            "name": "Sticky Note4",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                1800,
                -220
            ],
            "parameters": {
                "color": 7,
                "width": 820,
                "height": 580,
                "content": "## Perform adaptive retrieval\n**Find document considering both query and context.**"
            },
            "typeVersion": 1
        },
        {
            "id": "7f10fe70-1af8-47ad-a9b5-2850412c43f8",
            "name": "Sticky Note5",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                2640,
                -220
            ],
            "parameters": {
                "color": 7,
                "width": 740,
                "height": 580,
                "content": "## Reply to the user integrating retrieval context"
            },
            "typeVersion": 1
        },
        {
            "id": "5cd0dd02-65f4-4351-aeae-c70ecf5f1d66",
            "name": "Respond to Webhook",
            "type": "n8n-nodes-base.respondToWebhook",
            "position": [
                3120,
                -20
            ],
            "parameters": {
                "options": []
            },
            "typeVersion": 1.1
        },
        {
            "id": "4c56ef8f-8fce-4525-bb87-15df37e91cc4",
            "name": "Sticky Note6",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                280,
                -220
            ],
            "parameters": {
                "color": 7,
                "width": 700,
                "height": 580,
                "content": "## User query classification\n**Classify the query into one of four categories: Factual, Analytical, Opinion, or Contextual.**"
            },
            "typeVersion": 1
        },
        {
            "id": "3ef73405-89de-4bed-9673-90e2c1f2e74b",
            "name": "When Executed by Another Workflow",
            "type": "n8n-nodes-base.executeWorkflowTrigger",
            "position": [
                -280,
                -140
            ],
            "parameters": {
                "workflowInputs": {
                    "values": [
                        {
                            "name": "user_query"
                        },
                        {
                            "name": "chat_memory_key"
                        },
                        {
                            "name": "vector_store_id"
                        }
                    ]
                }
            },
            "typeVersion": 1.1
        },
        {
            "id": "0785714f-c45c-4eda-9937-c97e44c9a449",
            "name": "Combined Fields",
            "type": "n8n-nodes-base.set",
            "position": [
                40,
                -20
            ],
            "parameters": {
                "options": [],
                "assignments": {
                    "assignments": [
                        {
                            "id": "90ab73a2-fe01-451a-b9df-bffe950b1599",
                            "name": "user_query",
                            "type": "string",
                            "value": "={{ $json.user_query || $json.chatInput }}"
                        },
                        {
                            "id": "36686ff5-09fc-40a4-8335-a5dd1576e941",
                            "name": "chat_memory_key",
                            "type": "string",
                            "value": "={{ $json.chat_memory_key || $('Chat').item.json.sessionId }}"
                        },
                        {
                            "id": "4230c8f3-644c-4985-b710-a4099ccee77c",
                            "name": "vector_store_id",
                            "type": "string",
                            "value": "={{ $json.vector_store_id || \"<ID HERE>\" }}"
                        }
                    ]
                }
            },
            "typeVersion": 3.4
        },
        {
            "id": "57a93b72-4233-4ba2-b8c7-99d88f0ed572",
            "name": "Sticky Note7",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                -300,
                400
            ],
            "parameters": {
                "width": 1280,
                "height": 1300,
                "content": "# Adaptive RAG Workflow\n\nThis n8n workflow implements a version of the Adaptive Retrieval-Augmented Generation (RAG) approach. It classifies user queries and applies different retrieval and generation strategies based on the query type (Factual, Analytical, Opinion, or Contextual) to provide more relevant and tailored answers from a knowledge base stored in a Qdrant vector store.\n\n## How it Works\n\n1.  **Input Trigger:**\n    * The workflow can be initiated via the built-in Chat interface or triggered by another n8n workflow.\n    * It expects inputs: `user_query`, `chat_memory_key` (for conversation history), and `vector_store_id` (specifying the Qdrant collection).\n    * A `Set` node (`Combined Fields`) standardizes these inputs.\n\n2.  **Query Classification:**\n    * A Google Gemini agent (`Query Classification`) analyzes the `user_query`.\n    * It classifies the query into one of four categories:\n        * **Factual:** Seeking specific, verifiable information.\n        * **Analytical:** Requiring comprehensive analysis or explanation.\n        * **Opinion:** Asking about subjective matters or seeking diverse viewpoints.\n        * **Contextual:** Depending on user-specific or implied context.\n\n3.  **Adaptive Strategy Routing:**\n    * A `Switch` node routes the workflow based on the classification result from the previous step.\n\n4.  **Strategy Implementation (Query Adaptation):**\n    * Depending on the route, a specific Google Gemini agent adapts the query or approach:\n        * **Factual Strategy:** Rewrites the query for better precision, focusing on key entities (`Factual Strategy - Focus on Precision`).\n        * **Analytical Strategy:** Breaks down the main query into multiple sub-questions to ensure comprehensive coverage (`Analytical Strategy - Comprehensive Coverage`).\n        * **Opinion Strategy:** Identifies different potential perspectives or angles related to the query (`Opinion Strategy - Diverse Perspectives`).\n        * **Contextual Strategy:** Infers implied context needed to answer the query effectively (`Contextual Strategy - User Context Integration`).\n    * Each strategy path uses its own chat memory buffer for the adaptation step.\n\n5.  **Retrieval Prompt & Output Setup:**\n    * Based on the *original* query classification, a `Set` node (`Factual\/Analytical\/Opinion\/Contextual Prompt and Output`, combined via connections to `Set Prompt and Output`) prepares:\n        * The output from the strategy step (e.g., rewritten query, sub-questions, perspectives).\n        * A tailored system prompt for the final answer generation agent, instructing it how to behave based on the query type (e.g., focus on precision for Factual, present diverse views for Opinion).\n\n6.  **Document Retrieval (RAG):**\n    * The `Retrieve Documents from Vector Store` node uses the adapted query\/output from the strategy step to search the specified Qdrant collection (`vector_store_id`).\n    * It retrieves the top relevant document chunks using Google Gemini embeddings.\n\n7.  **Context Preparation:**\n    * The content from the retrieved document chunks is concatenated (`Concatenate Context`) to form a single context block for the final answer generation.\n\n8.  **Answer Generation:**\n    * The final `Answer` agent (powered by Google Gemini) generates the response.\n    * It uses:\n        * The tailored system prompt set in step 5.\n        * The concatenated context from retrieved documents (step 7).\n        * The original `user_query`.\n        * The shared chat history (`Chat Buffer Memory` using `chat_memory_key`).\n\n9.  **Response:**\n    * The generated answer is sent back to the user via the `Respond to Webhook` node."
            },
            "typeVersion": 1
        },
        {
            "id": "bec8070f-2ce9-4930-b71e-685a2b21d3f2",
            "name": "Sticky Note8",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                -60,
                -220
            ],
            "parameters": {
                "color": 7,
                "width": 320,
                "height": 580,
                "content": "## ⚠️  If using in Chat mode\n\nUpdate the `vector_store_id` variable to the corresponding Qdrant ID needed to perform the documents retrieval."
            },
            "typeVersion": 1
        }
    ],
    "active": false,
    "pinData": [],
    "settings": {
        "executionOrder": "v1"
    },
    "versionId": "7d56eea8-a262-4add-a4e8-45c2b0c7d1a9",
    "connections": {
        "Chat": {
            "main": [
                [
                    {
                        "node": "Combined Fields",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Answer": {
            "main": [
                [
                    {
                        "node": "Respond to Webhook",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Switch": {
            "main": [
                [
                    {
                        "node": "Factual Strategy - Focus on Precision",
                        "type": "main",
                        "index": 0
                    }
                ],
                [
                    {
                        "node": "Analytical Strategy - Comprehensive Coverage",
                        "type": "main",
                        "index": 0
                    }
                ],
                [
                    {
                        "node": "Opinion Strategy - Diverse Perspectives",
                        "type": "main",
                        "index": 0
                    }
                ],
                [
                    {
                        "node": "Contextual Strategy - User Context Integration",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Embeddings": {
            "ai_embedding": [
                [
                    {
                        "node": "Retrieve Documents from Vector Store",
                        "type": "ai_embedding",
                        "index": 0
                    }
                ]
            ]
        },
        "Gemini Answer": {
            "ai_languageModel": [
                [
                    {
                        "node": "Answer",
                        "type": "ai_languageModel",
                        "index": 0
                    }
                ]
            ]
        },
        "Gemini Factual": {
            "ai_languageModel": [
                [
                    {
                        "node": "Factual Strategy - Focus on Precision",
                        "type": "ai_languageModel",
                        "index": 0
                    }
                ]
            ]
        },
        "Gemini Opinion": {
            "ai_languageModel": [
                [
                    {
                        "node": "Opinion Strategy - Diverse Perspectives",
                        "type": "ai_languageModel",
                        "index": 0
                    }
                ]
            ]
        },
        "Combined Fields": {
            "main": [
                [
                    {
                        "node": "Query Classification",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Gemini Analytical": {
            "ai_languageModel": [
                [
                    {
                        "node": "Analytical Strategy - Comprehensive Coverage",
                        "type": "ai_languageModel",
                        "index": 0
                    }
                ]
            ]
        },
        "Gemini Contextual": {
            "ai_languageModel": [
                [
                    {
                        "node": "Contextual Strategy - User Context Integration",
                        "type": "ai_languageModel",
                        "index": 0
                    }
                ]
            ]
        },
        "Chat Buffer Memory": {
            "ai_memory": [
                [
                    {
                        "node": "Answer",
                        "type": "ai_memory",
                        "index": 0
                    }
                ]
            ]
        },
        "Concatenate Context": {
            "main": [
                [
                    {
                        "node": "Answer",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Query Classification": {
            "main": [
                [
                    {
                        "node": "Switch",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Gemini Classification": {
            "ai_languageModel": [
                [
                    {
                        "node": "Query Classification",
                        "type": "ai_languageModel",
                        "index": 0
                    }
                ]
            ]
        },
        "Set Prompt and Output": {
            "main": [
                [
                    {
                        "node": "Retrieve Documents from Vector Store",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Factual Prompt and Output": {
            "main": [
                [
                    {
                        "node": "Set Prompt and Output",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Opinion Prompt and Output": {
            "main": [
                [
                    {
                        "node": "Set Prompt and Output",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Chat Buffer Memory Factual": {
            "ai_memory": [
                [
                    {
                        "node": "Factual Strategy - Focus on Precision",
                        "type": "ai_memory",
                        "index": 0
                    }
                ]
            ]
        },
        "Chat Buffer Memory Opinion": {
            "ai_memory": [
                [
                    {
                        "node": "Opinion Strategy - Diverse Perspectives",
                        "type": "ai_memory",
                        "index": 0
                    }
                ]
            ]
        },
        "Analytical Prompt and Output": {
            "main": [
                [
                    {
                        "node": "Set Prompt and Output",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Contextual Prompt and Output": {
            "main": [
                [
                    {
                        "node": "Set Prompt and Output",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Chat Buffer Memory Analytical": {
            "ai_memory": [
                [
                    {
                        "node": "Analytical Strategy - Comprehensive Coverage",
                        "type": "ai_memory",
                        "index": 0
                    }
                ]
            ]
        },
        "Chat Buffer Memory Contextual": {
            "ai_memory": [
                [
                    {
                        "node": "Contextual Strategy - User Context Integration",
                        "type": "ai_memory",
                        "index": 0
                    }
                ]
            ]
        },
        "When Executed by Another Workflow": {
            "main": [
                [
                    {
                        "node": "Combined Fields",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Retrieve Documents from Vector Store": {
            "main": [
                [
                    {
                        "node": "Concatenate Context",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Factual Strategy - Focus on Precision": {
            "main": [
                [
                    {
                        "node": "Factual Prompt and Output",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Opinion Strategy - Diverse Perspectives": {
            "main": [
                [
                    {
                        "node": "Opinion Prompt and Output",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Analytical Strategy - Comprehensive Coverage": {
            "main": [
                [
                    {
                        "node": "Analytical Prompt and Output",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Contextual Strategy - User Context Integration": {
            "main": [
                [
                    {
                        "node": "Contextual Prompt and Output",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        }
    }
}
                                

Similar Workflows

Automate Your Email Summaries with A.I. and Messaging

This n8n workflow lets you automate your email management by using artificial intelligence to summarize...

Zoom Meeting Management and Communication Automation

This workflow is designed to automate the scheduling and management of Zoom meetings while ensuring...

Automate Your Humorous Image Tweets at 5 PM

This n8n workflow is designed for social media professionals looking to automate their humorous content...