
📋 Optimize Your Chat with the Ollama LLM Model

💡 Description

This n8n workflow integrates Ollama's Llama 3.2 language model to deliver an enriched, structured chat experience. By automating the reception and processing of messages, it responds efficiently to user requests. Thanks to a simple setup and an advanced model, it adapts easily to the needs of businesses looking to improve their customer communication. Enjoy fast, accurate responses while streamlining your internal processes.

📈 Impact & ROI: A significant improvement in customer-service efficiency, with a fast return on investment thanks to automated responses and reduced manual processing time.

🚀 Key Features

  • ✅ Fully automated message processing
  • ✅ Personalized responses powered by an advanced language model
  • ✅ Structures the data as JSON for easy integration (see the sketch after this list)
  • ✅ Error handling for increased reliability
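
The JSON structuring works as follows: the prompt in the Basic LLM Chain (see the full JSON export below) instructs Llama 3.2 to answer with exactly two fields, Prompt and Response. Here is a minimal Python sketch of what a downstream consumer receives and how it can be parsed; the sample values are invented for illustration, only the field names come from the workflow:

import json

# Hypothetical model output: the wording will vary, but the prompt in the
# Basic LLM Chain constrains the answer to these two fields.
raw_llm_output = '{"Prompt": "What is n8n?", "Response": "n8n is a workflow automation tool."}'

parsed = json.loads(raw_llm_output)   # roughly what the "JSON to Object" set node does
print("Your prompt was:", parsed["Prompt"])
print("My response is:", parsed["Response"])

The Structured Response node then interpolates these same two fields back into the reply shown to the chat user.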

📊 Technical Architecture

  • 14 nodes
  • 4 connections
  • 3 services

🔌 Integrated Services

n8n · LangChain · Ollama

🔧 Workflow Composition

Node | Type | Description
When chat message received | @n8n/n8n-nodes-langchain.chatTrigger | Entry point: receives incoming chat messages
Basic LLM Chain | @n8n/n8n-nodes-langchain.chainLlm | Processes the message through the language model and returns structured JSON
Ollama Model | @n8n/n8n-nodes-langchain.lmOllama | Provides the llama3.2:latest language model
Sticky Note | stickyNote | Documentation note
Sticky Note1 | stickyNote | Documentation note
Sticky Note2 | stickyNote | Documentation note
Sticky Note3 | stickyNote | Documentation note
Sticky Note4 | stickyNote | Documentation note
Sticky Note5 | stickyNote | Documentation note
Sticky Note6 | stickyNote | Documentation note
Structured Response | set | Formats the final reply sent back to the chat
Error Response | set | Returns a fallback message when the LLM chain fails
Sticky Note7 | stickyNote | Documentation note
JSON to Object | set | Converts the model's JSON text into an object
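
For reference, the round trip performed by the three main nodes (chat input → prompt → Llama 3.2 via Ollama → structured object) can be reproduced outside n8n against Ollama's local REST API. This is only a sketch, assuming a default Ollama install listening on http://localhost:11434 with llama3.2:latest already pulled; the prompt wording is adapted from the Basic LLM Chain node in the JSON further down.

import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"   # default local Ollama endpoint
chat_input = "What is n8n?"                          # stands in for {{ $json.chatInput }}

prompt = (
    "Provide the user's prompt and response as a JSON object with two fields:\n"
    "- Prompt\n- Response\n\n"
    "Avoid any preamble or further explanation.\n\n"
    f"This is the question: {chat_input}"
)

payload = json.dumps({
    "model": "llama3.2:latest",
    "prompt": prompt,
    "stream": False,   # return one JSON body instead of a token stream
}).encode("utf-8")

request = urllib.request.Request(
    OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(request) as response:
    body = json.loads(response.read())

# body["response"] holds the model's raw text; parsing it mirrors the
# "JSON to Object" node, and the prints mirror "Structured Response".
structured = json.loads(body["response"])
print("Your prompt was:", structured["Prompt"])
print("My response is:", structured["Response"])

A real script would also guard the final json.loads against malformed model output; inside n8n, failures in the LLM chain itself are routed to the Error Response branch instead.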

📖 Implementation Guide

  1. Import the workflow: Download the JSON file and import it into your n8n instance
  2. Configure credentials: Set up access for each service used (here, the Ollama API)
  3. Customize: Adjust the parameters to your specific needs
  4. Test: Run the workflow in test mode to verify that everything works (see the prerequisite check after this list)
  5. Activate: Enable the workflow for automatic execution
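
Before running the test in step 4, it can help to confirm that the Ollama server is reachable and that a Llama 3.2 model is actually installed. A minimal check, assuming a default local Ollama instance on port 11434:

import json
import urllib.request

# GET /api/tags lists the models available in the local Ollama install.
with urllib.request.urlopen("http://localhost:11434/api/tags") as response:
    local_models = [m["name"] for m in json.loads(response.read()).get("models", [])]

if any(name.startswith("llama3.2") for name in local_models):
    print("llama3.2 found - the Ollama Model node should work.")
else:
    print("llama3.2 missing - run `ollama pull llama3.2` and try again.")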

🏷️ Tags

chat · automation · Ollama

JSON Structure

{
    "id": "Telr6HU0ltH7s9f7",
    "meta": {
        "instanceId": "31e69f7f4a77bf465b805824e303232f0227212ae922d12133a0f96ffeab4fef"
    },
    "name": "🗨️Ollama Chat",
    "tags": [],
    "nodes": [
        {
            "id": "9560e89b-ea08-49dc-924e-ec8b83477340",
            "name": "When chat message received",
            "type": "@n8n\/n8n-nodes-langchain.chatTrigger",
            "position": [
                280,
                60
            ],
            "webhookId": "4d06a912-2920-489c-a33c-0e3ea0b66745",
            "parameters": {
                "options": []
            },
            "typeVersion": 1.1
        },
        {
            "id": "c7919677-233f-4c48-ba01-ae923aef511e",
            "name": "Basic LLM Chain",
            "type": "@n8n\/n8n-nodes-langchain.chainLlm",
            "onError": "continueErrorOutput",
            "position": [
                640,
                60
            ],
            "parameters": {
                "text": "=Provide the users prompt and response as a JSON object with two fields:\n- Prompt\n- Response\n\nAvoid any preample or further explanation.\n\nThis is the question: {{ $json.chatInput }}",
                "promptType": "define"
            },
            "typeVersion": 1.5
        },
        {
            "id": "b9676a8b-f790-4661-b8b9-3056c969bdf5",
            "name": "Ollama Model",
            "type": "@n8n\/n8n-nodes-langchain.lmOllama",
            "position": [
                740,
                340
            ],
            "parameters": {
                "model": "llama3.2:latest",
                "options": []
            },
            "credentials": {
                "ollamaApi": {
                    "id": "IsSBWGtcJbjRiKqD",
                    "name": "Ollama account"
                }
            },
            "typeVersion": 1
        },
        {
            "id": "61dfcda5-083c-43ff-8451-b2417f1e4be4",
            "name": "Sticky Note",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                -380,
                -380
            ],
            "parameters": {
                "color": 4,
                "width": 520,
                "height": 860,
                "content": "# 🦙 Ollama Chat Workflow\n\nA simple N8N workflow that integrates Ollama LLM for chat message processing and returns a structured JSON object.\n\n## Overview\nThis workflow creates a chat interface that processes messages using the Llama 3.2 model through Ollama. When a chat message is received, it gets processed through a basic LLM chain and returns a response.\n\n## Components\n- **Trigger Node**\n- **Processing Node**\n- **Model Node**\n- **JSON to Object Node**\n- **Structured Response Node**\n- **Error Response Node**\n\n## Workflow Structure\n1. The chat trigger node receives incoming messages\n2. Messages are passed to the Basic LLM Chain\n3. The Ollama Model processes the input using Llama 3.2\n4. Responses are returned through the chain\n\n## Prerequisites\n- N8N installation\n- Ollama setup with Llama 3.2 model\n- Valid Ollama API credentials\n\n## Configuration\n1. Set up the Ollama API credentials in N8N\n2. Ensure the Llama 3.2 model is available in your Ollama installation\n\n"
            },
            "typeVersion": 1
        },
        {
            "id": "64f60ee1-7870-461e-8fac-994c9c08b3f9",
            "name": "Sticky Note1",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                340,
                280
            ],
            "parameters": {
                "width": 560,
                "height": 200,
                "content": "## Model Node\n- Name: Ollama Model\n- Type: LangChain Ollama Integration\n- Model: llama3.2:latest\n- Purpose: Provides the language model capabilities"
            },
            "typeVersion": 1
        },
        {
            "id": "bb46210d-450c-405b-a451-42458b3af4ae",
            "name": "Sticky Note2",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                200,
                -160
            ],
            "parameters": {
                "color": 6,
                "width": 280,
                "height": 400,
                "content": "## Trigger Node\n- Name: When chat message received\n- Type: Chat Trigger\n- Purpose: Initiates the workflow when a new chat message arrives"
            },
            "typeVersion": 1
        },
        {
            "id": "7f21b9e6-6831-4117-a2e2-9c9fb6edc492",
            "name": "Sticky Note3",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                520,
                -380
            ],
            "parameters": {
                "color": 3,
                "width": 500,
                "height": 620,
                "content": "## Processing Node\n- Name: Basic LLM Chain\n- Type: LangChain LLM Chain\n- Purpose: Handles the processing of messages through the language model and returns a structured JSON object.\n\n"
            },
            "typeVersion": 1
        },
        {
            "id": "871bac4e-002f-4a1d-b3f9-0b7d309db709",
            "name": "Sticky Note4",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                560,
                -200
            ],
            "parameters": {
                "color": 7,
                "width": 420,
                "height": 200,
                "content": "### Prompt (Change this for your use case)\nProvide the users prompt and response as a JSON object with two fields:\n- Prompt\n- Response\n\n\nAvoid any preample or further explanation.\nThis is the question: {{ $json.chatInput }}"
            },
            "typeVersion": 1
        },
        {
            "id": "c9e1b2af-059b-4330-a194-45ae0161aa1c",
            "name": "Sticky Note5",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                1060,
                -280
            ],
            "parameters": {
                "color": 5,
                "width": 420,
                "height": 520,
                "content": "## JSON to Object Node\n- Type: Set Node\n- Purpose: A node designed to transform and structure response data in a specific format before sending it through the workflow. It operates in manual mapping mode to allow precise control over the response format.\n\n**Key Features**\n- Manual field mapping capabilities\n- Object transformation and restructuring\n- Support for JSON data formatting\n- Field-to-field value mapping\n- Includes option to add additional input fields\n"
            },
            "typeVersion": 1
        },
        {
            "id": "3fb912b8-86ac-42f7-a19c-45e59898a62e",
            "name": "Sticky Note6",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                1520,
                -180
            ],
            "parameters": {
                "color": 6,
                "width": 460,
                "height": 420,
                "content": "## Structured Response Node\n- Type: Set Node\n- Purpose: Controls how the workflow responds to users chat prompt.\n\n**Response Mode**\n- Manual Mapping: Allows custom formatting of response data\n- Fields to Set: Specify which data fields to include in response\n\n"
            },
            "typeVersion": 1
        },
        {
            "id": "fdfd1a5c-e1a6-4390-9807-ce665b96b9ae",
            "name": "Structured Response",
            "type": "n8n-nodes-base.set",
            "position": [
                1700,
                60
            ],
            "parameters": {
                "options": [],
                "assignments": {
                    "assignments": [
                        {
                            "id": "13c4058d-2d50-46b7-a5a6-c788828a1764",
                            "name": "text",
                            "type": "string",
                            "value": "=Your prompt was: {{ $json.response.Prompt }}\n\nMy response is: {{ $json.response.Response }}\n\nThis is the JSON object:\n\n{{ $('Basic LLM Chain').item.json.text }}"
                        }
                    ]
                }
            },
            "typeVersion": 3.4
        },
        {
            "id": "76baa6fc-72dd-41f9-aef9-4fd718b526df",
            "name": "Error Response",
            "type": "n8n-nodes-base.set",
            "position": [
                1460,
                660
            ],
            "parameters": {
                "options": [],
                "assignments": {
                    "assignments": [
                        {
                            "id": "13c4058d-2d50-46b7-a5a6-c788828a1764",
                            "name": "text",
                            "type": "string",
                            "value": "=There was an error."
                        }
                    ]
                }
            },
            "typeVersion": 3.4
        },
        {
            "id": "bde3b9df-af55-451b-b287-1b5038f9936c",
            "name": "Sticky Note7",
            "type": "n8n-nodes-base.stickyNote",
            "position": [
                1240,
                280
            ],
            "parameters": {
                "color": 2,
                "width": 540,
                "height": 560,
                "content": "## Error Response Node\n- Type: Set Node\n- Purpose: Handles error cases when the Basic LLM Chain fails to process the chat message properly. It provides a fallback response mechanism to ensure the workflow remains robust.\n\n**Key Features**\n- Provides default error messaging\n- Maintains consistent response structure\n- Connects to the error output branch of the LLM Chain\n- Ensures graceful failure handling\n\nThe Error Response node activates when the main processing chain encounters issues, ensuring users always receive feedback even when errors occur in the language model processing.\n"
            },
            "typeVersion": 1
        },
        {
            "id": "b9b2ab8d-9bea-457a-b7bf-51c8ef0de69f",
            "name": "JSON to Object",
            "type": "n8n-nodes-base.set",
            "position": [
                1220,
                60
            ],
            "parameters": {
                "options": [],
                "assignments": {
                    "assignments": [
                        {
                            "id": "12af1a54-62a2-44c3-9001-95bb0d7c769d",
                            "name": "response",
                            "type": "object",
                            "value": "={{ $json.text }}"
                        }
                    ]
                }
            },
            "typeVersion": 3.4
        }
    ],
    "active": false,
    "pinData": [],
    "settings": {
        "executionOrder": "v1"
    },
    "versionId": "5175454a-91b7-4c57-890d-629bd4e8d2fd",
    "connections": {
        "Ollama Model": {
            "ai_languageModel": [
                [
                    {
                        "node": "Basic LLM Chain",
                        "type": "ai_languageModel",
                        "index": 0
                    }
                ]
            ]
        },
        "JSON to Object": {
            "main": [
                [
                    {
                        "node": "Structured Response",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "Basic LLM Chain": {
            "main": [
                [
                    {
                        "node": "JSON to Object",
                        "type": "main",
                        "index": 0
                    }
                ],
                [
                    {
                        "node": "Error Response",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        },
        "When chat message received": {
            "main": [
                [
                    {
                        "node": "Basic LLM Chain",
                        "type": "main",
                        "index": 0
                    }
                ]
            ]
        }
    }
}
                                
