by AI/ML API | D1m7asis
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Generate Veo3 Videos via AI/ML API, Save to Google Drive and Upload to YouTube

Transform your content creation process by automating video generation with AI, publishing to YouTube, and logging results in Google Sheets. This workflow is ideal for content creators, marketers, and social media managers looking to streamline video production and distribution.

How it works
- Trigger: Start the workflow manually or on a scheduled interval (e.g., every hour).
- Generate Video: Send a request to the AI/ML API to create video content based on predefined prompts and settings.
- Monitor Status: Poll the AI/ML API until the video generation is completed, with retry logic for reliability.
- Download & Upload: Retrieve the generated video file and upload it to your connected YouTube channel.
- Title Generation: Generate a YouTube title using AI to optimize for engagement.
- Log Results: Update a Google Sheet with video metadata and the YouTube URL for tracking and analytics.

Set up steps
- Connect Credentials: Add OAuth2 credentials for AI/ML API, YouTube, and Google Sheets in n8n Credentials.
- Configure Nodes: Rename nodes for clarity (e.g., Generate Video, Upload to YouTube) and set up the HTTP Request node to use your AI/ML API credential.
- Sheet Preparation: Create a Google Sheet with columns for Date, Prompt, Video ID, and YouTube URL.
- Schedule: If using scheduling, configure the Schedule Trigger interval (e.g., every 60 minutes).
- Test & Deploy: Run a manual trigger, verify video generation, upload, and sheet entry, then activate the workflow for automation.
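The Generate Video and Monitor Status steps reduce to one HTTP request plus a polling loop. Below is a minimal curl sketch of that pattern; the endpoint path, model name (`google/veo3`), and response fields (`id`, `status`) are assumptions, so check the AI/ML API documentation for the actual Veo3 routes (requires `jq`):

```bash
API_KEY="your-aimlapi-key"  # placeholder

# 1. Request a video generation (endpoint and model name are assumptions)
GEN_ID=$(curl -s -X POST "https://api.aimlapi.com/v2/generate/video/google/generation" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "google/veo3", "prompt": "A drone shot over a misty forest at sunrise"}' \
  | jq -r '.id')

# 2. Poll until generation completes -- the workflow wraps this in retry logic
while true; do
  STATUS=$(curl -s "https://api.aimlapi.com/v2/generate/video/google/generation?generation_id=$GEN_ID" \
    -H "Authorization: Bearer $API_KEY" | jq -r '.status')
  echo "status: $STATUS"
  [ "$STATUS" = "completed" ] && break
  sleep 30
done
```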
by Simon
Address Validation Workflow

About
This workflow automates the process of validating and correcting client shipping addresses in Billbee, ensuring accurate delivery information. It's ideal for e-commerce businesses looking to save time and reduce errors in their order fulfillment process. The workflow uses Billbee, an order management platform for small to medium-sized online retailers, and the Endereco API for address validation.

Who Is This For?
- **E-Commerce Businesses**: Streamline order fulfillment by automatically correcting common shipping address errors.
- **Warehouse Teams**: Reduce manual work and ensure packages are shipped to the correct address.
- **Small to Medium-Sized Retailers**: Businesses using Billbee to manage orders and requiring efficient, automated solutions for address validation.

How it Works
1. Trigger: Workflow starts via a Billbee Webhook when an order is imported (a hedged example call follows at the end of this section).
2. Fetch Data: Retrieve the client's shipping address using the Order ID.
3. Validate Address: Send the address to the Endereco API for validation and correction (e.g., house number errors).
4. Conditional Actions:
   - Valid Address: Update the address in Billbee.
   - Invalid Address: Tag the order with "Validation Error."
5. Track Status: Add tags in Billbee for processed orders.

Setup Steps
1. API Keys: Obtain Billbee Developer/User API Keys and the Endereco API Key.
2. Billbee Rule: Create an automation rule:
   - Trigger: Order imported.
   - Action: Call External URL with OrderId to trigger the n8n workflow.
   - Optional: Use a secondary trigger (e.g., order state changes to "gepackt", i.e., "packed") for manual corrections.

Customization Options
- **Filter Delivery Addresses**: Customize filters to exclude specific delivery types, such as pickup shops ("Postfiliale," "Paketshop," or "Packstation"). Filters can be adjusted within Billbee or in the workflow.
- **Error Handling**: Configure additional actions for orders that fail validation, such as notifying your team or flagging orders for manual review.
- **Order Tags**: Define custom tags in Billbee to better track order statuses (e.g., "Address Corrected," "Validation Error").
- **Trigger Types**: Use additional triggers such as changes to order states (e.g., "gepackt" or "In Fulfillment") for manual corrections or validations.
- **Address Fields**: Modify the workflow to focus on specific address components, such as postal codes, city names, or country codes.
- **Validation Rules**: Adjust Endereco API settings or add custom logic to refine validation criteria based on your business needs.

API Documentation
- **Endereco**: Endereco API Docs
- **Billbee**: Billbee API Docs
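For reference, Billbee's Call External URL action amounts to a plain POST against the workflow's webhook. A minimal sketch, assuming a hypothetical webhook path and payload key; match both to your Webhook node and Billbee rule:

```bash
# Hypothetical webhook path and payload key -- adjust to your own setup.
curl -X POST "https://your-n8n-host/webhook/billbee-address-validation" \
  -H "Content-Type: application/json" \
  -d '{"OrderId": "123456"}'
```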
by Mark Shcherbakov
Video Guide
I prepared a detailed guide explaining how to set up and implement this scenario, enabling you to chat with your documents stored in Supabase using n8n.
Youtube Link

Who is this for?
This workflow is ideal for researchers, analysts, business owners, or anyone managing a large collection of documents. It's particularly beneficial for those who need quick contextual information retrieval from text-heavy files stored in Supabase, without needing additional services like Google Drive.

What problem does this workflow solve?
Manually retrieving and analyzing specific information from large document repositories is time-consuming and inefficient. This workflow automates the process by vectorizing documents and enabling AI-powered interactions, making it easy to query and retrieve context-based information from uploaded files.

What this workflow does
The workflow integrates Supabase with an AI-powered chatbot to process, store, and query text and PDF files. The steps include:
- Fetching and comparing files to avoid duplicate processing.
- Handling file downloads and extracting content based on the file type.
- Converting documents into vectorized data for contextual information retrieval.
- Storing and querying vectorized data from a Supabase vector store.

- File Extraction and Processing: Automates handling of multiple file formats (e.g., PDFs, text files) and extracts document content.
- Vectorized Embeddings Creation: Generates embeddings for processed data to enable AI-driven interactions.
- Dynamic Data Querying: Allows users to query their document repository conversationally using a chatbot.

Setup
N8N Workflow
1. Fetch File List from Supabase: Use Supabase to retrieve the stored file list from a specified bucket (see the sketch after this section). Add logic to manage empty folder placeholders returned by Supabase, avoiding incorrect processing.
2. Compare and Filter Files: Aggregate the files retrieved from storage and compare them to the existing list in the Supabase files table. Exclude duplicates and skip placeholder files to ensure only unprocessed files are handled.
3. Handle File Downloads: Download new files using detailed storage configurations for public/private access. Adjust the storage settings and GET requests to match your Supabase setup.
4. File Type Processing: Use a Switch node to target specific file types (e.g., PDFs or text files). Employ relevant tools to process the content: for PDFs, extract embedded content; for text files, directly process the text data.
5. Content Chunking: Break large text data into smaller chunks using the Text Splitter node. Define chunk size (default: 500 tokens) and overlap to retain necessary context across chunks.
6. Vector Embedding Creation: Generate vectorized embeddings for the processed content using OpenAI's embedding tools. Ensure metadata, such as file ID, is included for easy data retrieval.
7. Store Vectorized Data: Save the vectorized information into a dedicated Supabase vector store. Use the default schema and table provided by Supabase for seamless setup.
8. AI Chatbot Integration: Add a chatbot node to handle user input and retrieve relevant document chunks. Use metadata like file ID for targeted queries, especially when multiple documents are involved.

Testing
- Upload sample files to your Supabase bucket.
- Verify that files are processed and stored successfully in the vector store.
- Ask simple conversational questions about your documents using the chatbot (e.g., "What does Chapter 1 say about the Roman Empire?").
- Test for accuracy and contextual relevance of retrieved results.
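The file-list step (step 1 above) maps to Supabase's Storage list endpoint. A minimal sketch, assuming a bucket named `documents`; the project URL and service key are placeholders:

```bash
curl -s -X POST "https://YOUR_PROJECT.supabase.co/storage/v1/object/list/documents" \
  -H "apikey: $SUPABASE_SERVICE_KEY" \
  -H "Authorization: Bearer $SUPABASE_SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prefix": "", "limit": 100}'
# Empty folders come back as ".emptyFolderPlaceholder" objects --
# this is what the workflow's placeholder filter skips.
```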
by Mark Shcherbakov
Video Guide
I prepared a detailed guide to help you set up your workflow effectively, enabling you to extract insights from YouTube for content generation using an AI agent.
Youtube Link

Who is this for?
This workflow is ideal for content creators, marketers, and analysts looking to enhance their YouTube strategies through data-driven insights. It's particularly beneficial for individuals wanting to understand audience preferences and improve their video content.

What problem does this workflow solve?
Navigating the content generation and optimization process can be complex, especially without significant audience insight. This workflow automates insights extraction from YouTube videos and comments, empowering users to create more engaging and relevant content effectively.

What this workflow does
The workflow integrates various APIs to gather insights from YouTube videos, enabling automated commentary analysis, video transcription, and thumbnail evaluation. The main functionalities include:
- Extracting user preferences from comments.
- Transcribing video content for enhanced understanding.
- Analyzing thumbnails via AI for maximum viewer engagement insights.

- AI Insights Extraction: Automatically pulls comments and metrics from selected YouTube creators to evaluate trends and gaps.
- Dynamic Video Planning: Uses transcriptions to help creators outline video scripts and topics based on audience interest.
- Thumbnail Assessment: Provides analysis on thumbnail designs to improve click-through rates and viewer attraction.

Setup
N8N Workflow
1. API Setup: Create a Google Cloud project and enable the YouTube Data API. Generate an API key to be included in your workflow requests.
2. YouTube Creator and Video Selection: Start by defining a request to identify top creators based on their video views. Capture the YouTube video IDs for further analysis of comments and other video metrics.
3. Comment Analysis: Gather comments associated with the selected videos and analyze them for user insights (see the sketch after this section).
4. Video Transcription: Utilize the insights from transcriptions to formulate content plans.
5. Thumbnail Analysis: Evaluate your video thumbnails by submitting the URL through the OpenAI API to gain insights into their effectiveness.
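The comment-gathering step (step 3 above) boils down to a single YouTube Data API v3 call; `VIDEO_ID` and the API key are placeholders:

```bash
curl -s "https://www.googleapis.com/youtube/v3/commentThreads?part=snippet&videoId=VIDEO_ID&maxResults=100&key=$YOUTUBE_API_KEY"
# Each returned item carries snippet.topLevelComment.snippet.textDisplay,
# which is the comment text the workflow hands to the AI agent.
```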
by Hojjat Jashnniloofar
Overview
This n8n template helps you automatically search LinkedIn jobs. It uses AI (Gemini or OpenAI) to match your resume with each job description, writes a sample cover letter for each job, and updates the job Google Sheet. You can receive daily matched LinkedIn job alerts via Telegram.

Prerequisites
- AI API key from one model, e.g.:
  - Google Gemini
  - OpenAI
- Telegram Bot Token - create via @BotFather
- Google Sheets - OAuth2 credentials
- Google Drive - OAuth2 credentials

Setup
1. Upload your resume
Upload your CV in PDF format to Google Drive and configure the Google Drive node to read your resume from the list of Google Drive files. You need to configure Google Drive OAuth2 and grant access to your drive first. You can find useful information about how to configure a Google OAuth2 API key in the n8n documentation.

2. Create a Google Sheet
You need to create a Google Sheet document consisting of two sheets: one sheet to define job filter criteria and a second sheet to store job search results. You can download this Google Sheet Template and copy it to your personal space. Then you can add your job filters to the Google Sheet. You can search jobs by keywords, location, remote type, job type, and easy apply. You need to configure Google Sheets OAuth2 and grant access to your drive first.

3. Configure Telegram Bot
You need to create a new Telegram bot via @BotFather, insert the API key into the Telegram node, and set TELEGRAM_CHAT_ID to your Telegram ID (an example alert call is sketched below).
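The daily alert in step 3 is a standard Telegram Bot API call; the bot token and chat ID are placeholders for your own values:

```bash
curl -s -X POST "https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN/sendMessage" \
  -H "Content-Type: application/json" \
  -d "{\"chat_id\": \"$TELEGRAM_CHAT_ID\", \"text\": \"3 new LinkedIn jobs matched your resume today\"}"
```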
by Oneclick AI Squad
This automated n8n workflow sets up a complete MERN Stack development environment on a Linux server by installing core technologies, development tools, package managers, global npm packages, deployment tools, build tools, and security configurations. It creates a dedicated developer user and configures essential settings for MERN projects.

What is MERN Stack Setup?
MERN Stack setup involves installing and configuring Node.js, MongoDB, Express.js, and React, along with additional tools and packages, to create a fully functional development environment for building MERN-based applications on a Linux system.

Good to Know
- The workflow triggers manually and takes 10-15 minutes to complete
- A dedicated developer user with proper permissions is created
- Firewall configuration secures development ports
- An environment variables template is provided
- All tools are installed and ready for immediate use

How It Works
- **Set Parameters** - Configures server host, user, password, setup type, Node.js version, MongoDB version, username, and user password
- **System Preparation** - Prepares the system for installation
- **Install Node.js** - Installs Node.js (v20 by default) with npm
- **Install MongoDB** - Installs MongoDB (v7.0 by default) with Compass & Shell (see the install sketch after this section)
- **Install Git & GitHub CLI** - Installs Git and GitHub CLI
- **Install Development Tools** - Installs VS Code, Docker, Docker Compose, Postman, Nginx, Redis, and PostgreSQL
- **Create Dev User** - Creates a development user account
- **Install Additional Tools** - Installs package managers (npm, Yarn, pnpm), global npm packages, deployment tools, build tools, and security tools
- **Final Configuration** - Configures firewall, SSH keys, and the environment variables template
- **Setup Complete** - Marks the completion of the setup process

How to Use
1. Import the workflow into n8n
2. Configure parameters in the Set Parameters node (server_host, server_user, server_password, setup_type, node_version, mongodb_version, username, user_password)
3. Run the workflow
4. SSH into the server as the new developer user
5. Start building MERN applications

Requirements
- Linux server access with SSH
- Administrative privileges (root access)

Customizing This Workflow
- Adjust the setup_type parameter to customize the installation scope
- Modify node_version or mongodb_version to use different versions
- Change the username and user_password for the developer account
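For orientation, here is a sketch of the kind of commands the Install Node.js and Install MongoDB steps run over SSH. It assumes Ubuntu 22.04 (jammy) and follows the workflow's version defaults; the workflow's exact commands may differ:

```bash
# Node.js 20 via the NodeSource repository
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs

# MongoDB 7.0 from the official MongoDB repository
curl -fsSL https://www.mongodb.org/static/pgp/server-7.0.asc | \
  sudo gpg --dearmor -o /usr/share/keyrings/mongodb-server-7.0.gpg
echo "deb [ signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | \
  sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
sudo apt-get update && sudo apt-get install -y mongodb-org
```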
by David Ashby
🛠️ SIGNL4 Tool MCP Server

Complete MCP server exposing all SIGNL4 Tool operations to AI agents. Zero configuration needed - all 2 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every SIGNL4 Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n SIGNL4 Tool node with full error handling

📋 Available Operations (2 total)
Every possible SIGNL4 Tool operation is included:

🔧 Alert (2 operations)
• Send an alert
• Resolve an alert

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native SIGNL4 Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to the configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every SIGNL4 Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
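To see what the Send an alert operation does under the hood, here is a sketch using SIGNL4's inbound webhook; the team secret is a placeholder, and the payload keys follow SIGNL4's webhook conventions, so verify them against the SIGNL4 documentation:

```bash
curl -X POST "https://connect.signl4.com/webhook/YOUR_TEAM_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"Title": "Disk usage critical", "Message": "Host db-01 is at 95% disk usage"}'
```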
by Daniel Nolde
What it is:
- An automation to plan → draft → finalize and publish your textual blog post ideas to your WordPress blog
- Works in stages and hands control back to you in between those stages
- You can use a Google Spreadsheet for planning topics and configuring LLM models and prompts

What it does:
- Plans → drafts → finalizes blog post topics you specify in a Google Spreadsheet, using an LLM with prompts that are also configured in that spreadsheet (even which model to use)
- Saves the results in the corresponding columns of the "Schedule" sheet in the spreadsheet
- Hands control back to the user for inspecting or changing the results and for setting the next "Action" for the workflow
- Finally publishes the blog post to your WordPress instance (a sketch of the publish call follows at the end of this section)

Limitations
- Probably slightly over-engineered ;-)
- No media generation yet
- Some LLM models don't work because of their output format

How it works:
- The workflow is triggered manually or scheduled every hour
- It ingests a Google Spreadsheet to get:
  - Config for prompts/context etc.
  - Blog topics and their status and next action
- Depending on each blog topic's "Status" and "Action", it then either uses an LLM for the next action ("plan" → "draft" → "final" actions) or publishes the written content to your WordPress instance ("publish" actions)

Set up steps:
- Import the workflow
- Make your own copy of the Google Spreadsheet
- Update the credentials using your individual credentials for:
  - Google Spreadsheets
  - OpenRouter
- Edit the "Settings" node and enter your individual values for:
  - Your spreadsheet copy URL
  - Your WordPress blog URL
  - Your WordPress blog username
  - Your WordPress blog app password (a 4x4 alphanumeric sequence) that you probably have to create first, for which your WordPress user has to have two-factor authentication enabled
- In your own copy of the spreadsheet:
  - Individualize the "Config" sheet's "Value" column for the prompts/context/etc.
  - Populate the "Schedule" sheet with at least one line in which you specify:
    - a "Topic"
    - a "Scheduled" date (YYYY-MM-DD HH:mm:ss)
    - a "Status" of "idea"
    - an "Action" of "plan" (to kick off that action)
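The final publish action is a standard WordPress REST API call authenticated with the application password; the blog URL and credentials below are placeholders:

```bash
curl -X POST "https://your-blog.example/wp-json/wp/v2/posts" \
  -u "your-username:xxxx xxxx xxxx xxxx" \
  -H "Content-Type: application/json" \
  -d '{"title": "My planned post", "content": "<p>Final draft body</p>", "status": "publish"}'
```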
by Pablo
What this template does
The Ultimate Scraper for n8n uses Selenium and AI to retrieve any information displayed on a webpage. You can also use session cookies to log in to the targeted webpage for more advanced scraping needs.

⚠️ Important: This project requires specific setup instructions. Please follow the guidelines provided in the GitHub repository: n8n Ultimate Scraper Setup: https://github.com/Touxan/n8n-ultimate-scraper/tree/main. The workflow version on n8n and the GitHub project may differ; however, the most up-to-date version will always be the one available on the GitHub repository: https://github.com/Touxan/n8n-ultimate-scraper/tree/main.

How to use
Deploy the project with all the requirements and request your webhook.

Example of request:

```bash
curl -X POST http://localhost:5678/webhook-test/yourwebhookid \
  -H "Content-Type: application/json" \
  -d '{
    "subject": "Hugging Face",
    "Url": "github.com",
    "Target data": [
      {
        "DataName": "Followers",
        "description": "The number of followers of the GitHub page"
      },
      {
        "DataName": "Total Stars",
        "description": "The total number of stars on the different repos"
      }
    ],
    "cookie": []
  }'
```

Or to just scrape a URL:

```bash
curl -X POST http://localhost:5678/webhook-test/67d77918-2d5b-48c1-ae73-2004b32125f0 \
  -H "Content-Type: application/json" \
  -d '{
    "Target Url": "https://github.com",
    "Target data": [
      {
        "DataName": "Followers",
        "description": "The number of followers of the GitHub page"
      },
      {
        "DataName": "Total Stars",
        "description": "The total number of stars on the different repos"
      }
    ],
    "cookies": []
  }'
```
by Łukasz
Who is it for
This workflow is for anyone who is using n8n. It's especially helpful if you are a DevOps engineer and your n8n instance is self-hosted. If you care a lot about security and the number of failed executions, and at the same time you are using InfluxDB to monitor the status of your systems, this will fit perfectly into your stack.

How it works
This automation is fairly simple. It uses native n8n nodes to gather data from the instance itself. Then it parses this data to be compatible with InfluxDB input. Finally, it sends this data to InfluxDB for further processing (a sketch of the write call follows below).

Remember to set up
Setup is really simple: you just need to provide three variables. First is your InfluxDB URL, second is your InfluxDB organization, and third is your InfluxDB bucket name. Of course, to set up the n8n nodes and gather data from them, you will need your instance API key. And that's all.

How it looks in InfluxDB? See below.

Schedule Audits
Audits don't need to be run often, but I would recommend running them on a regular basis. This way you can see real data series in InfluxDB. I think that once a day should be enough, but it depends on your n8n usage, of course.

Glad I could help! Visit my profile for other automations for businesses. And if you are looking for dedicated software development, do not hesitate to reach out! You can also see automations in my Sailing Byte's n8n GitHub repository.
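The write step targets InfluxDB's v2 HTTP API with line-protocol data. A minimal sketch; the measurement, tag, and field names here are illustrative rather than the workflow's exact schema:

```bash
curl -X POST "https://your-influxdb:8086/api/v2/write?org=your-org&bucket=n8n-audit&precision=s" \
  -H "Authorization: Token $INFLUX_TOKEN" \
  -H "Content-Type: text/plain; charset=utf-8" \
  --data-raw "n8n_audit,instance=prod failed_executions=3i,security_issues=1i"
```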
by Ozgur Karateke
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

1 — What Does It Do / Which Problem Does It Solve?
This workflow turns Google Docs-based contract & form templates into ready-to-sign PDFs in minutes—all from a single chat flow.
- **Automates repetitive document creation.** Instead of copying a rental, sales, or NDA template and filling it by hand every time, the bot asks for the required values and fills them in.
- **Eliminates human error.** It lists every mandatory field so nothing is missed, and removes unnecessary clauses via conditional blocks.
- **Speeds up approvals.** The final draft arrives as a direct PDF link—one click to send for signing.
- **One template → unlimited variations.** Every new template you drop in Drive is auto-listed with *zero workflow edits*—it scales effortlessly.
- **100% no-code.** Runs on n8n + Google Apps Script—no extra backend, self-hosted or cloud.

2 — How It Works (Detailed Flow)

📝 Template Discovery
📂 The TemplateList node scans the Drive folder you specify via the ?mode=meta endpoint and returns an id / title / desc list. The bot shows this list in chat.

🎯 Selection & Metadata Fetch
The user types a template name. 🔍 GetMetaData opens the chosen Doc, extracts META_JSON, placeholders, and conditional blocks, then lists mandatory & optional fields.

🗣 Data-Collection Loop
The bot asks for every placeholder value. For each conditional block it asks 🟢 Yes / 🔴 No. Answers are accumulated in a data JSON object.

✅ Final Confirmation
The bot summarizes the inputs → when the user clicks Confirm, the DocProcess sub-workflow starts.

⚙️ DocProcess Sub-Workflow

| 🔧 Step | Node | Task |
| --- | --- | --- |
| 1 | User Choice Match Check | Verifies name–ID match; throws if wrong |
| 2 | GetMetaData (renew) | Gets the latest placeholder list |
| 3 | Validate JSON Format | Checks for missing / unknown fields |
| 4 | CopyTemplate | Copies the Doc via Drive API |
| 5 | FillDocument | Apps Script fills placeholders & removes blocks |
| 6 | Generate PDF Link | Builds an export?format=pdf URL |

📎 Delivery
The master agent sends 🔗 Download PDF & ✏️ Open Google Doc links.

🚫 Error Paths
- status:"ERROR", missing:[…] → bot lists missing fields and re-asks.
- unknown:[…] → template list is outdated; rerun TemplateList.
- Any Apps Script error → the returned message is shown verbatim in chat.

3 — 🚀 Setup Steps (Full Checklist)
> Goal: Get a flawless PDF on the first run.
> Mentally tick the ☑️ in front of every line as you go.

☁️ A. Google Drive Preparation

| Step | Do This | Watch Out For |
| --- | --- | --- |
| 1 | Create a Templates/ folder → put every template Doc inside | Exactly one folder; no sub-folders |
| 2 | Placeholders in every Doc are {{UPPER_CASE}} | No Turkish chars or spaces |
| 3 | Wrap optional clauses with [[BLOCK_NAME:START]]…[[BLOCK_NAME:END]] | The START tag must have a blank line above |
| 4 | Add a META_JSON block at the very end | Script deletes it automatically after fill |
| 5 | Right-click Doc > Details ▸ Description = 1-line human description | Shown by the bot in the list |
| 6 | Create a second Generated/ folder (for copies) | Keeps Drive tidy |

> 🔑 Folder ID (long alphanumerical) = <TEMPLATE_PARENT_ID>
> We'll paste this into the TemplateList node next.

Simple sample template → Template Link

🛠 B. Import the Workflow into n8n
Settings ▸ Import Workflow ▸ DocAgent.json
If nodes look Broken afterwards → no community-node problem; you only need to select credentials.

📑 C. Customize the TemplateList Node
1. Open the Template List node ⚙️ → replace '%3CYOUR_PARENT_ID%3E' in parents with the real folder ID in the URL.
2. Right-click node > Execute Node. Copy the entire JSON response.
3. In the editor paste it into:
   - DocAgent → System Prompt (top)
   - User Choice Match Check → System Prompt (top)
4. Save.

> ⚠️ Why manual? Caching the list saves LLM tokens. Whenever you add a template, rerun the node and update the prompts.

🔗 D. Deploy the Apps Script

| Step | Screen | Note |
| --- | --- | --- |
| 1 | Open Gist files GetMetaData.gs + FillDocument.gs → File ▸ Make a copy | Both files may live in one project |
| 2 | Project Settings > enable Google Docs API ✔️ & Google Drive API ✔️ | Otherwise you'll see 403 errors |
| 3 | Deploy ▸ New deployment ▸ Web app • Execute as: Me • Who has access: Anyone | |
| 4 | On the consent screen allow scopes …/auth/documents and …/auth/drive | Click Advanced › Go if Google warns |
| 5 | Copy the Web App URL (e.g. https://script.google.com/macros/s/ABC123/exec) | If this URL changes, update n8n |

Apps Script source code → Notion Link

🔧 E. Wire the Script URL in n8n

| Node | Field | Action |
| --- | --- | --- |
| GetMetaData | URL | <WEB_APP_URL>?mode=meta&id={{ $json["id"] }} |
| FillDocument | URL | <WEB_APP_URL> |

> 💡 Prefer using an .env file? Add GAS_WEBAPP_URL=… and reference it as {{ $env.GAS_WEBAPP_URL }}.

🔐 F. Add Credentials
- **Google Drive OAuth2** → Drive API (v3) Full Access
- **Google Docs OAuth2** → same account
- **LLM key** (OpenAI / Gemini)
- (Optional) Postgres Chat Memory credential for the corresponding node

🧪 G. First Run (Smoke Test)
1. Switch the workflow Active.
2. In the chat panel type /start.
3. Bot lists templates → pick one.
4. Fill mandatory fields, optionally toggle blocks → Confirm.
5. 🔗 Download PDF link appears → ☑️ setup complete.
You can also hit the Apps Script web app directly; see the curl sketch at the end of this guide.

❌ H. Common Errors & Fixes

| 🆘 Error | Likely Cause | Remedy |
| --- | --- | --- |
| 403: Apps Script permission denied | Web app access set to User | Redeploy as Anyone, re-authorize scopes |
| placeholder validation failed | Missing required field | Provide the listed values → rerun DocProcess |
| unknown placeholders: … | Template vs. agent mismatch | Check placeholder spelling (UPPER_CASE ASCII) |
| Template ID not found | Prompt list is old | Rerun TemplateList → update both prompts |
| Cannot find META_JSON | No meta block / wrong tag | Add [[META_JSON_START]] … [[META_JSON_END]], retry |

✅ Final Checklist
- [ ] Drive folder structure & template rules ready
- [ ] Workflow imported, folder ID set in node
- [ ] TemplateList output pasted into both prompts
- [ ] Apps Script deployed, URL set in nodes
- [ ] OAuth credentials & LLM key configured
- [ ] /start test passes, PDF link received

🙋‍♂️ Need Help with Customizations?
Reach out for consulting & support on LinkedIn: Özgür Karateke
Full Documentation → Notion
Simple sample template → Template Link
Apps Script source code → Notion Link
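As noted in the smoke test, you can exercise the deployed script outside n8n using the same URL pattern the GetMetaData node uses; the deployment ID and Doc ID below are placeholders:

```bash
# -L follows the redirect Apps Script web apps issue before returning JSON.
curl -L "https://script.google.com/macros/s/ABC123/exec?mode=meta&id=YOUR_DOC_ID"
```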
by Ferenc Erb
Overview
Transform your Bitrix24 Open Line channels with an intelligent chatbot that leverages Retrieval-Augmented Generation (RAG) technology to provide accurate, document-based responses to customer inquiries in real-time.

Use Case
This workflow is designed for organizations that want to enhance their customer support capabilities in Bitrix24 by providing automated, knowledge-based responses to customer inquiries. It's particularly useful for:
- Customer service teams handling repetitive questions
- Support departments with extensive documentation
- Sales teams needing quick access to product information
- Organizations looking to provide 24/7 customer support

What This Workflow Does

Smart Document Processing
- Automatically processes uploaded PDF documents
- Splits documents into manageable chunks
- Generates vector embeddings for semantic understanding
- Indexes content for efficient retrieval

AI-Powered Responses
- Utilizes Google Gemini AI to generate natural language responses
- Constructs answers based on relevant document content
- Maintains conversation context for coherent interactions
- Provides fallback responses when information is not available

Vector Database Integration
- Stores document embeddings in the Qdrant vector database
- Enables semantic search beyond simple keyword matching
- Retrieves the most relevant information for each query
- Maintains a persistent knowledge base that grows over time

Webhook Handler
- Processes incoming messages from Bitrix24 Open Line channels
- Handles authentication and security validation
- Routes different types of events to appropriate handlers
- Manages session and conversation state

Event Routing
Intelligently routes different event types (a test payload is sketched at the end of this section):
- ONIMBOTMESSAGEADD: Processes new user messages
- ONIMBOTJOINCHAT: Handles the bot joining a conversation
- ONAPPINSTALL: Manages application installation
- ONIMBOTDELETE: Handles bot deletion

Document Management
- Organizes processed documents in designated folders
- Tracks document processing status
- Moves indexed documents to appropriate locations
- Maintains document metadata for reference

Interactive Menu
- Provides menu-based options for common user requests
- Customizable menu items and responses
- Easy navigation for users seeking specific information
- Fallback to operator option when needed

Technical Architecture

Components
- Webhook Handler: Receives and validates incoming requests from Bitrix24
- Credential Manager: Securely manages authentication tokens and API keys
- Event Router: Directs events to appropriate processing functions
- Document Processor: Handles document loading, chunking, and embedding
- Vector Store: Qdrant database for storing and retrieving document embeddings
- Retrieval System: Searches for relevant document chunks based on user queries
- LLM Integration: Google Gemini model for generating natural language responses
- Response Manager: Formats and sends responses back to Bitrix24

Integration Points
- **Bitrix24 API**: For bot registration, message handling, and user interaction
- **Ollama API**: For generating document embeddings
- **Qdrant API**: For vector storage and retrieval
- **Google Gemini API**: For AI-powered response generation

Setup Instructions

Prerequisites
- Active Bitrix24 account with Open Line channels enabled
- Access to an n8n workflow system
- Ollama API credentials
- Qdrant vector database access
- Google Gemini API key

Configuration Steps
1. Initial Setup
   - Import the workflow into your n8n instance
   - Configure credentials for all services
   - Set up webhook endpoints
2. Bitrix24 Configuration
   - Create a new Bitrix24 application
   - Configure webhook URLs
   - Set appropriate permissions
   - Install the application to your Bitrix24 account
3. Document Storage
   - Create a designated folder in Bitrix24 for knowledge base documents
   - Configure folder paths in the workflow settings
   - Upload initial documents to be processed
4. Bot Configuration
   - Customize bot name, avatar, and description
   - Configure welcome messages and menu options
   - Set up fallback responses
5. Testing
   - Verify successful installation
   - Test the document processing pipeline
   - Send test queries to evaluate response quality
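For local testing, you can simulate the kind of POST Bitrix24 sends for a new message event. The field layout below follows Bitrix24's form-encoded imbot event format, but treat it as an assumption and verify against the Bitrix24 REST documentation; the webhook path is a placeholder:

```bash
curl -X POST "https://your-n8n-host/webhook/bitrix24-bot" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data "event=ONIMBOTMESSAGEADD" \
  --data "data[PARAMS][DIALOG_ID]=chat123" \
  --data "data[PARAMS][MESSAGE]=What is your refund policy?"
```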