Free automation template

馃攼馃馃 Prywatny i lokalny Ollama Samodzielnie hostowany asystent AI

35702 · 27 days ago · 14 blocks
Turning a local N8N instance into a chat interface

Turn your local N8N instance into a powerful chat interface using any local, private Ollama model, with no cloud dependencies. This template creates a structured chat environment that processes messages locally through a language-model chain and returns formatted responses.

How it works

  • Chat messages trigger the workflow
  • Messages are processed by the Llama 3.2 model via Ollama (or another compatible model)
  • Responses are formatted as structured JSON
  • Error handling keeps the workflow running reliably
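Outside of n8n, the same chain (prompt → local model → structured JSON) can be sketched directly against Ollama's local REST API. This is an illustrative sketch, not part of the template itself; it assumes a default Ollama install listening on `localhost:11434`:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint


def build_prompt(question: str) -> str:
    """Mirror the template's LLM-chain prompt: ask for a two-field JSON object."""
    return (
        "Provide the user's prompt and response as a JSON object with two fields:\n"
        "- Prompt\n"
        "- Response\n\n"
        "Avoid any preamble or further explanation.\n\n"
        f"This is the question: {question}"
    )


def chat(question: str, model: str = "llama3.2:latest") -> dict:
    """Send the prompt to the local Ollama instance and parse the JSON reply."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(question),
        "stream": False,
        "format": "json",  # ask Ollama to constrain the output to valid JSON
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # "response" holds the model's text; parse it into the two-field object
    return json.loads(body["response"])
```

Everything stays on your machine: the only network call is to the local Ollama daemon.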

Setup

  • Install N8N and Ollama
  • Pull the Llama 3.2 model (or another model of your choice)
  • Configure the Ollama API credentials
  • Import and activate the workflow
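On a typical local setup, the steps above reduce to a few commands. Exact install methods vary by platform (see the n8n and Ollama docs), so treat this as an illustration rather than a definitive procedure:

```shell
# Install n8n globally (requires Node.js) and start it locally
npm install -g n8n
n8n start

# With Ollama installed, pull the Llama 3.2 model used by the template
ollama pull llama3.2

# Quick sanity check that the model answers locally
ollama run llama3.2 "Say hello in one word"
```

Once both services run, add your Ollama credentials in n8n (base URL `http://localhost:11434` by default) and import the workflow JSON below.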

Use cases

This automation template opens up a wide range of applications, especially where data privacy and local processing matter. A few potential uses:

  • Private virtual assistants for companies
  • Local customer-support systems
  • Educational tools for language learning
  • Automation of internal documentation
  • Knowledge-management systems for organizations
  • Private therapeutic chatbots
  • Decision-support tools

Benefits

The template provides a foundation for building AI-powered chat applications while keeping full control over your data and infrastructure.

Copy the template code
{"id":"Telr6HU0ltH7s9f7","meta":{"instanceId":"31e69f7f4a77bf465b805824e303232f0227212ae922d12133a0f96ffeab4fef"},"name":"Ollama Chat","tags":[],"nodes":[{"id":"9560e89b-ea08-49dc-924e-ec8b83477340","name":"When chat message received","type":"@n8n/n8n-nodes-langchain.chatTrigger","position":[280,60],"webhookId":"4d06a912-2920-489c-a33c-0e3ea0b66745","parameters":{"options":{}},"typeVersion":1.1},{"id":"c7919677-233f-4c48-ba01-ae923aef511e","name":"Basic LLM Chain","type":"@n8n/n8n-nodes-langchain.chainLlm","onError":"continueErrorOutput","position":[640,60],"parameters":{"text":"=Provide the user's prompt and response as a JSON object with two fields:\n- Prompt\n- Response\n\nAvoid any preamble or further explanation.\n\nThis is the question: {{ $json.chatInput }}","promptType":"define"},"typeVersion":1.5},{"id":"b9676a8b-f790-4661-b8b9-3056c969bdf5","name":"Ollama Model","type":"@n8n/n8n-nodes-langchain.lmOllama","position":[740,340],"parameters":{"model":"llama3.2:latest","options":{}},"credentials":{"ollamaApi":{"id":"IsSBWGtcJbjRiKqD","name":"Ollama account"}},"typeVersion":1},{"id":"61dfcda5-083c-43ff-8451-b2417f1e4be4","name":"Sticky Note","type":"n8n-nodes-base.stickyNote","position":[-380,-380],"parameters":{"color":4,"width":520,"height":860,"content":"# Ollama Chat Workflow\n\nA simple N8N workflow that integrates Ollama LLM for chat message processing and returns a structured JSON object.\n\n## Overview\nThis workflow creates a chat interface that processes messages using the Llama 3.2 model through Ollama. When a chat message is received, it gets processed through a basic LLM chain and returns a response.\n\n## Components\n- **Trigger Node**\n- **Processing Node**\n- **Model Node**\n- **JSON to Object Node**\n- **Structured Response Node**\n- **Error Response Node**\n\n## Workflow Structure\n1. The chat trigger node receives incoming messages\n2. Messages are passed to the Basic LLM Chain\n3. The Ollama Model processes the input using Llama 3.2\n4. Responses are returned through the chain\n\n## Prerequisites\n- N8N installation\n- Ollama setup with Llama 3.2 model\n- Valid Ollama API credentials\n\n## Configuration\n1. Set up the Ollama API credentials in N8N\n2. Ensure the Llama 3.2 model is available in your Ollama installation\n\n"},"typeVersion":1},{"id":"64f60ee1-7870-461e-8fac-994c9c08b3f9","name":"Sticky Note1","type":"n8n-nodes-base.stickyNote","position":[340,280],"parameters":{"width":560,"height":200,"content":"## Model Node\n- Name: Ollama Model\n- Type: LangChain Ollama Integration\n- Model: llama3.2:latest\n- Purpose: Provides the language model capabilities"},"typeVersion":1},{"id":"bb46210d-450c-405b-a451-42458b3af4ae","name":"Sticky Note2","type":"n8n-nodes-base.stickyNote","position":[200,-160],"parameters":{"color":6,"width":280,"height":400,"content":"## Trigger Node\n- Name: When chat message received\n- Type: Chat Trigger\n- Purpose: Initiates the workflow when a new chat message arrives"},"typeVersion":1},{"id":"7f21b9e6-6831-4117-a2e2-9c9fb6edc492","name":"Sticky Note3","type":"n8n-nodes-base.stickyNote","position":[520,-380],"parameters":{"color":3,"width":500,"height":620,"content":"## Processing Node\n- Name: Basic LLM Chain\n- Type: LangChain LLM Chain\n- Purpose: Handles the processing of messages through the language model and returns a structured JSON object.\n\n"},"typeVersion":1},{"id":"871bac4e-002f-4a1d-b3f9-0b7d309db709","name":"Sticky Note4","type":"n8n-nodes-base.stickyNote","position":[560,-200],"parameters":{"color":7,"width":420,"height":200,"content":"### Prompt (Change this for your use case)\nProvide the user's prompt and response as a JSON object with two fields:\n- Prompt\n- Response\n\n\nAvoid any preamble or further explanation.\nThis is the question: {{ $json.chatInput }}"},"typeVersion":1},{"id":"c9e1b2af-059b-4330-a194-45ae0161aa1c","name":"Sticky Note5","type":"n8n-nodes-base.stickyNote","position":[1060,-280],"parameters":{"color":5,"width":420,"height":520,"content":"## JSON to Object Node\n- Type: Set Node\n- Purpose: A node designed to transform and structure response data in a specific format before sending it through the workflow. It operates in manual mapping mode to allow precise control over the response format.\n\n**Key Features**\n- Manual field mapping capabilities\n- Object transformation and restructuring\n- Support for JSON data formatting\n- Field-to-field value mapping\n- Includes option to add additional input fields\n"},"typeVersion":1},{"id":"3fb912b8-86ac-42f7-a19c-45e59898a62e","name":"Sticky Note6","type":"n8n-nodes-base.stickyNote","position":[1520,-180],"parameters":{"color":6,"width":460,"height":420,"content":"## Structured Response Node\n- Type: Set Node\n- Purpose: Controls how the workflow responds to the user's chat prompt.\n\n**Response Mode**\n- Manual Mapping: Allows custom formatting of response data\n- Fields to Set: Specify which data fields to include in response\n\n"},"typeVersion":1},{"id":"fdfd1a5c-e1a6-4390-9807-ce665b96b9ae","name":"Structured Response","type":"n8n-nodes-base.set","position":[1700,60],"parameters":{"options":{},"assignments":{"assignments":[{"id":"13c4058d-2d50-46b7-a5a6-c788828a1764","name":"text","type":"string","value":"=Your prompt was: {{ $json.response.Prompt }}\n\nMy response is: {{ $json.response.Response }}\n\nThis is the JSON object:\n\n{{ $('Basic LLM Chain').item.json.text }}"}]}},"typeVersion":3.4},{"id":"76baa6fc-72dd-41f9-aef9-4fd718b526df","name":"Error Response","type":"n8n-nodes-base.set","position":[1460,660],"parameters":{"options":{},"assignments":{"assignments":[{"id":"13c4058d-2d50-46b7-a5a6-c788828a1764","name":"text","type":"string","value":"=There was an error."}]}},"typeVersion":3.4},{"id":"bde3b9df-af55-451b-b287-1b5038f9936c","name":"Sticky Note7","type":"n8n-nodes-base.stickyNote","position":[1240,280],"parameters":{"color":2,"width":540,"height":560,"content":"## Error Response Node\n- Type: Set Node\n- Purpose: Handles error cases when the Basic LLM Chain fails to process the chat message properly. It provides a fallback response mechanism to ensure the workflow remains robust.\n\n**Key Features**\n- Provides default error messaging\n- Maintains consistent response structure\n- Connects to the error output branch of the LLM Chain\n- Ensures graceful failure handling\n\nThe Error Response node activates when the main processing chain encounters issues, ensuring users always receive feedback even when errors occur in the language model processing.\n"},"typeVersion":1},{"id":"b9b2ab8d-9bea-457a-b7bf-51c8ef0de69f","name":"JSON to Object","type":"n8n-nodes-base.set","position":[1220,60],"parameters":{"options":{},"assignments":{"assignments":[{"id":"12af1a54-62a2-44c3-9001-95bb0d7c769d","name":"response","type":"object","value":"={{ $json.text }}"}]}},"typeVersion":3.4}],"active":false,"pinData":{},"settings":{"executionOrder":"v1"},"versionId":"5175454a-91b7-4c57-890d-629bd4e8d2fd","connections":{"Ollama Model":{"ai_languageModel":[[{"node":"Basic LLM Chain","type":"ai_languageModel","index":0}]]},"JSON to Object":{"main":[[{"node":"Structured Response","type":"main","index":0}]]},"Basic LLM Chain":{"main":[[{"node":"JSON to Object","type":"main","index":0}],[{"node":"Error Response","type":"main","index":0}]]},"When chat message received":{"main":[[{"node":"Basic LLM Chain","type":"main","index":0}]]}}}
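The workflow's "JSON to Object" and "Error Response" branches amount to a simple pattern: try to parse the model's text as JSON, and fall back to a fixed message when parsing fails. A minimal standalone equivalent (the function name is illustrative, not part of the template):

```python
import json

# Same fallback text the template's Error Response node sets
ERROR_TEXT = "There was an error."


def format_response(model_text: str) -> str:
    """Parse the model's output into an object and format the final chat reply,
    falling back to a fixed error message if the output is not the expected JSON."""
    try:
        obj = json.loads(model_text)
        return (
            f"Your prompt was: {obj['Prompt']}\n\n"
            f"My response is: {obj['Response']}\n\n"
            f"This is the JSON object:\n\n{model_text}"
        )
    except (json.JSONDecodeError, KeyError, TypeError):
        # Mirrors the workflow's error branch: users always get some feedback
        return ERROR_TEXT
```

This graceful fallback matters with local models, which occasionally emit malformed JSON despite the prompt's instructions.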
  • LangChain
Planeta AI 2025 