by Davide
This workflow integrates Flowise multi-agent chatflows into a custom-branded n8n chatbot, enabling real-time interaction between users and AI agents powered by large language models (LLMs).

**Key Advantages**
- ✅ **Easy integration with Flowise:** Uses a low-code HTTP node to send user questions to Flowise's API (`/api/v1/prediction/FLOW_ID`) and receive intelligent responses. Supports multi-agent chatflows, allowing for complex, dynamic interactions.
- 🎨 **Customizable chatbot UI:** Includes pre-built JavaScript for embedding the n8n chatbot into any website. Provides customization options such as welcome messages, branding, placeholder text, chat modes (e.g., popup or embedded), and language support.
- 🔐 **Secure & configurable:** Authorization via Bearer token headers for Flowise API access. Clearly marked notes in the workflow for setting environment variables such as FLOWISE_URL and FLOW_ID.

**How It Works**
1. **Chat Trigger:** The workflow starts with the "When chat message received" node, which acts as a webhook to receive incoming chat messages from users.
2. **HTTP Request to Flowise:** The received message is forwarded to the Flowise node, which sends a POST request to the Flowise API endpoint (`https://FLOWISE_URL/api/v1/prediction/FLOW_ID`). The request includes the user's input as a JSON payload (`{"question": "{{ $json.chatInput }}"}`) and uses HTTP header authentication (e.g., `Authorization: Bearer FLOWISE_API_KEY`).
3. **Response Handling:** The response from Flowise is passed to the "Edit Fields" node, which maps the output (`$json.text`) for further processing or display.

**Set Up Steps**
1. **Configure the Flowise integration:** Replace FLOWISE_URL and FLOW_ID in the HTTP Request node with your Flowise instance URL and flow ID. Ensure the Authorization header is set correctly in the credentials (e.g., `Bearer FLOWISE_API_KEY`).
2. **Embed the n8n chatbot:** Use the JavaScript snippet provided in the sticky notes to embed the n8n chatbot on your website (a minimal sketch appears at the end of this section). Replace YOUR_PRODUCTION_WEBHOOK_URL with the webhook URL generated by the "When chat message received" node. Customize the chatbot's appearance and behavior (e.g., welcome messages, language, UI elements) using the createChat configuration options.
3. **Optional branding:** Adjust the sticky note examples to include branding details such as custom messages, colors, or metadata for the chatbot.
4. **Activate the workflow:** Toggle the workflow to "Active" in n8n and test the chat functionality end to end.

**Ideal Use Cases**
- Embedding branded AI assistants into websites.
- Connecting Flowise-powered agents with customer support chatbots.
- Creating dynamic, smart conversational flows with LLMs via n8n automation.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
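A minimal embed sketch based on n8n's public @n8n/chat package, which the sticky notes in this template adapt. The webhook URL and all branding strings below are placeholders, not values from this workflow:

```javascript
// Minimal embed sketch using n8n's public @n8n/chat package, loaded from a CDN
// inside a <script type="module"> tag; also include the package stylesheet:
//   <link href="https://cdn.jsdelivr.net/npm/@n8n/chat/dist/style.css" rel="stylesheet" />
// YOUR_PRODUCTION_WEBHOOK_URL and the branding strings are placeholders.
import { createChat } from 'https://cdn.jsdelivr.net/npm/@n8n/chat/dist/chat.bundle.es.js';

createChat({
  webhookUrl: 'YOUR_PRODUCTION_WEBHOOK_URL',
  mode: 'window',                    // 'window' = popup bubble, 'fullscreen' = embedded
  defaultLanguage: 'en',
  initialMessages: ['Hi there! 👋', 'How can I help you today?'],
  i18n: {
    en: {
      title: 'My Branded Assistant',
      subtitle: 'Powered by Flowise + n8n',
      inputPlaceholder: 'Type your question…',
    },
  },
});
```

The i18n block is also where you swap in other languages; add a second locale key and set defaultLanguage accordingly.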
by Le Thua Phu
**Overview**
This n8n workflow automates crawling a website's sitemap to extract URLs, which is particularly useful for SEO analysis, website auditing, or content monitoring. The workflow fetches the sitemap from a specified URL, parses the XML, and extracts the individual URLs, which can then be converted into a downloadable file or sent to tools like Google Sheets.

**How It Works**
The workflow runs sequentially through a series of nodes that fetch, parse, and process the sitemap data:
1. **Trigger:** Initiates when the user clicks "Test workflow" (Manual Trigger node).
2. **Set URL:** Defines the base domain (e.g., https://phu.io.vn/) for the sitemap (Set URL node).
3. **Crawl Sitemap:** Fetches the main sitemap file (sitemap.xml) from the specified domain via an HTTP request (Crawl sitemap node).
4. **Parse XML:** Converts the sitemap XML into JSON for easier processing (XML node).
5. **Split Sitemap:** Extracts the individual sitemap entries (the `<sitemap>` tags) from the parsed data (Split Out node).
6. **Crawl Sub-Sitemap:** Fetches each sub-sitemap URL listed in the main sitemap (Crawl sitemap 2 node).
7. **Parse Sub-Sitemap XML:** Converts the sub-sitemap XML into JSON (XML 2 node).
8. **Split URLs:** Extracts the individual URLs (the `<url>` tags) from each sub-sitemap (Split Out 2 node); a code-level sketch of this extraction appears after the FAQ.
9. **Convert to File:** Saves the extracted URLs into a file for download or further use (Convert to File node).

The workflow supports both single sitemap files and sitemap indexes that reference multiple sub-sitemaps, ensuring comprehensive URL extraction.

**How to Use**
1. **Set up n8n:** Ensure you have an active n8n instance (Cloud, npm, or self-hosted). Refer to the n8n documentation for setup instructions.
2. **Import the workflow:** Copy the JSON from the provided `Extract Website URLs from Sitemap.XML for SEO Analysis.json` file and import it into your n8n instance via the workflow editor.
3. **Configure the domain:** In the Set URL node, update the Domain parameter with the target website's base URL (e.g., https://example.com/). Alternatively, if you know the full sitemap URL, paste it directly into the Crawl sitemap node (e.g., https://example.com/sitemap.xml).
4. **Test the workflow:** Click "Test workflow" to execute the Manual Trigger node. Verify that the workflow fetches the sitemap and processes the URLs correctly.
5. **Download or integrate:** The Convert to File node generates a file containing the extracted URLs. Optionally, replace this node with a Google Sheets node to append URLs to a spreadsheet; refer to the Google Sheets node documentation for setup.
6. **Save and activate:** Save the workflow and, for production use, activate it with a trigger such as a schedule or webhook (see Trigger Nodes).

**Requirements**
- **n8n instance:** An active n8n instance (version 1.0 or later recommended) on n8n Cloud, npm, or self-hosted (Docker). See "Choose your n8n" for details.
- **Technical knowledge:** Basic understanding of n8n's editor UI and node configuration. Familiarity with XML sitemaps is helpful but not mandatory.
- **Permissions:** For self-hosted setups, ensure the n8n process has network access to fetch the sitemap URL. For Docker deployments, verify permissions as outlined in the n8n v1.0 migration guide.
- **Optional:** If integrating with Google Sheets, valid Google Sheets credentials are required (see Credentials).
- **Timeout configuration:** The HTTP Request nodes (Crawl sitemap and Crawl sitemap 2) have a 10-second timeout. Adjust the timeout parameter in the node settings when dealing with slow-responding servers.

**FAQ**

Q: What happens if the sitemap is large or contains many sub-sitemaps?
A: The workflow handles sitemap indexes by splitting and processing each sub-sitemap individually. For very large sitemaps, ensure your n8n instance has sufficient resources (memory and CPU) to avoid performance issues. See Scaling n8n for optimization tips.

Q: Can I use this workflow with a specific sitemap URL instead of a domain?
A: Yes. In the Crawl sitemap node, replace the url parameter (`{{ $json.Domain }}sitemap.xml`) with the direct sitemap URL (e.g., https://example.com/sitemap.xml). Update the node's notes for clarity.

Q: Why am I getting a timeout error?
A: The HTTP Request nodes have a default timeout of 10 seconds. If the target server is slow, increase the timeout value in the options parameter of the Crawl sitemap or Crawl sitemap 2 nodes.

Q: How can I save the URLs to Google Sheets instead of a file?
A: Replace the Convert to File node with a Google Sheets node. Configure it with your Google Sheets credentials and map the loc field from the Split Out 2 node to the desired spreadsheet column. Refer to the Google Sheets node documentation.

Q: Is this workflow compatible with older n8n versions?
A: The workflow uses nodes compatible with n8n version 1.0 and later. For older versions, check for deprecated features (e.g., MySQL support) in the n8n v1.0 migration guide.

Q: Can I automate this workflow to run periodically?
A: Yes. Replace the Manual Trigger node with a Schedule Trigger node to run the workflow at set intervals. See Trigger Nodes for configuration details.

For further assistance, consult the n8n Community Forum or submit an issue on the n8n GitHub repository. Need help customizing? Contact me for consulting and support or add me on Facebook or email.
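For illustration, here is a hedged sketch of what the XML parse and Split Out steps accomplish, written as an n8n Code node ("Run Once for All Items"). It is not part of the template; the regex is a simplification, and the workflow itself uses the XML node for proper parsing:

```javascript
// Hypothetical Code-node equivalent of steps 3-8: fetch a sitemap, pull every
// <loc> value, and emit one n8n item per URL. Assumes the Code node's built-in
// HTTP helper is available on your instance.
const domain = $input.first().json.Domain;          // e.g. 'https://example.com/'
const xml = await this.helpers.httpRequest({ url: `${domain}sitemap.xml` });

// Every <loc> entry holds either a sub-sitemap URL or a page URL.
return [...String(xml).matchAll(/<loc>\s*([\s\S]*?)\s*<\/loc>/g)]
  .map((m) => ({ json: { loc: m[1] } }));
```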
by Anthony
**Use Case**
It is very convenient to add expenses via a simple chat message. This workflow does exactly that using AI-powered n8n magic!

Send a message to the chat, something like:
`car wash; 59.3 usd; 25 jan 2024`

And get a response:
`Your expense saved, here is the output of save sub-workflow: {"cost":59.3,"descr":"car wash","date":"2024-01-25","msg":"car wash; 59.3 usd; 25 jan 2024"}`

The LLM smartly parses your message into structured JSON and saves the expense as a new row in a Google Sheet! (A deterministic sketch of this parsing appears below.)

**Installation**
1. **Set up Google Sheets:** Clone this sheet: https://docs.google.com/spreadsheets/d/1D0r3tun7LF7Ypb21CmbTKEtn76WE-kaHvBCM5NdgiPU/edit?gid=0#gid=0 (File -> Make a copy). Select the copied sheet in the "Save expense into Google Sheets" node.
2. **Fix the sub-workflow dropdown:** Open the "Parse msg and save to Sheets" node (an n8n sub-workflow executor tool) and make sure the SAME workflow is chosen in the dropdown. This lets n8n locate and call the "Workflow Input Trigger" properly when needed.
3. **Activate the workflow** so the chat works properly. Send a message to the chat, something like "car wash; 59.3 usd; 25 jan 2024". You should get the response shown above, and a new row should be inserted in Google Sheets!
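The workflow delegates the parsing to the LLM, which handles far looser input than any fixed rule could. As an illustration only, a deterministic fallback for the well-formed `descr; cost; date` format might look like this; the field names (cost, descr, date, msg) match the sample output above:

```javascript
// Illustrative sketch, not part of the template: parse a well-formed
// "description; amount; date" message into the same JSON shape the LLM emits.
function parseExpense(msg) {
  const [descr, costPart, datePart] = msg.split(';').map((s) => s.trim());
  const d = new Date(datePart); // '25 jan 2024' parses in Node.js
  const date = [
    d.getFullYear(),
    String(d.getMonth() + 1).padStart(2, '0'),
    String(d.getDate()).padStart(2, '0'),
  ].join('-');
  return { cost: parseFloat(costPart), descr, date, msg }; // '59.3 usd' -> 59.3
}

console.log(parseExpense('car wash; 59.3 usd; 25 jan 2024'));
// { cost: 59.3, descr: 'car wash', date: '2024-01-25', msg: 'car wash; 59.3 usd; 25 jan 2024' }
```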
by ConvertAPI
**Who is this for?**
For developers and organizations that need to convert DOCX files to PDF.

**What problem is this workflow solving?**
The file format conversion problem.

**What this workflow does**
- Downloads the DOCX file from the web.
- Converts the DOCX file to PDF.
- Stores the PDF file in the local file system.

**How to customize this workflow to your needs**
- Open the HTTP Request node.
- Adjust the URL parameter (all endpoints can be found here; the request shape is sketched below).
- Add your secret to the Query Auth account parameter. Please create a ConvertAPI account to get an authentication secret.
- Adjust `url_to_file` in the Config node to a URL pointing to your file.
- Optionally, additional Body Parameters can be added for the converter.
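A hedged sketch of the HTTP call the workflow makes, assuming ConvertAPI's docx-to-pdf endpoint and query-string secret; check the endpoint list linked above for the exact converter and parameters you need:

```javascript
// Sketch only: POST a remote DOCX to ConvertAPI and read back the PDF.
// CONVERTAPI_SECRET and the document URL are placeholders.
const CONVERTAPI_SECRET = process.env.CONVERTAPI_SECRET;
const urlToFile = 'https://example.com/sample.docx';

const res = await fetch(
  `https://v2.convertapi.com/convert/docx/to/pdf?Secret=${CONVERTAPI_SECRET}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      Parameters: [{ Name: 'File', FileValue: { Url: urlToFile } }],
    }),
  },
);
const result = await res.json();
// result.Files[0].FileData holds the base64-encoded PDF (or a download URL,
// depending on ConvertAPI's StoreFile parameter).
```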
by Jaruphat J.
This workflow integrates LINE BOT, an AI Agent (GPT), Google Sheets, and Google Drive to let users search for file URLs using natural language. The AI Agent extracts the filename from the message, looks the file up in Google Sheets, and returns the corresponding Google Drive URL via the LINE BOT.

- Supports natural language queries (e.g., "Find file 1.pdf for me")
- AI-powered filename extraction
- Google Sheets lookup for file URLs
- Auto-response via LINE BOT

**How to Use This Template**
1. **Download & import:** Copy and save the template code as a .json file. In the n8n editor, click Import and upload the file.
2. **Update required fields:** Replace YOUR_GOOGLE_SHEET_ID with your actual Google Sheet ID. Replace YOUR_LINE_ACCESS_TOKEN with your LINE BOT channel access token.
3. **Activate & test:** Click "Execute Workflow" to test manually. Set the webhook URL in the LINE Developers Console. (The reply request the workflow sends is sketched below.)

**Features of This Template**
- Supports natural language queries (e.g., "Find file 1.pdf for me")
- AI-powered filename extraction using OpenAI (GPT-4/3.5)
- Real-time file lookup in Google Sheets
- Automatic LINE BOT response
- Fully automated workflow
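A hedged sketch of the reply step, using LINE's public Messaging API reply endpoint. YOUR_LINE_ACCESS_TOKEN is the same placeholder as above, and the fileUrl value stands in for the URL the workflow looks up in Google Sheets:

```javascript
// Sketch only: reply to the incoming LINE message with the matched file URL.
const replyToken = $json.events[0].replyToken;  // from the LINE webhook payload
const fileUrl = 'https://drive.google.com/...'; // placeholder: looked up in Sheets

await fetch('https://api.line.me/v2/bot/message/reply', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer YOUR_LINE_ACCESS_TOKEN',
  },
  body: JSON.stringify({
    replyToken,
    messages: [{ type: 'text', text: `Here is your file: ${fileUrl}` }],
  }),
});
```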
by scrapeless official
> ⚠️ Disclaimer: This workflow uses Scrapeless and Claude AI via community nodes, which require a self-hosted n8n instance to work properly.

🔁 **How It Works**
This intelligent B2B lead generation workflow combines search automation, website crawling, AI analysis, and multi-channel output:
1. It starts by using Scrapeless's Deep SERP API to find company websites from targeted Google Search queries.
2. Each result is then individually crawled using Scrapeless's Crawler module, retrieving key business information from pages like /about, /contact, /services.
3. The raw web content is processed via a Code node to clean, extract, and prepare structured data.
4. The cleaned data is passed to Claude Sonnet (Anthropic), which analyzes and qualifies the lead based on content richness, contact data, and relevance.
5. A filter step ensures only high-quality leads (e.g., lead score >= 6) are kept.
6. Qualified leads are sent via a Discord webhook for real-time notification (this can be replaced with Slack, email, or CRM tools).

> 📌 The result is a fully automated system that finds, qualifies, and organizes B2B leads with high efficiency and minimal manual input.

✅ **Pre-Conditions**
Before using this workflow, make sure you have:
- An n8n self-hosted instance
- A Scrapeless account and API key (get it here)
- An Anthropic Claude API key
- A configured Discord webhook URL (or an alternative notification service)

⚙️ **Workflow Overview**
Manual Trigger → Scrapeless Google Search → Item Lists → Scrapeless Crawler → Code (Data Cleaning) → Claude Sonnet → Code (Response Parser) → Filter → Discord Notification

🔨 **Step-by-Step Breakdown**
1. **Manual Trigger** – for testing purposes (can be replaced with Cron or Webhook)
2. **Scrapeless Google Search** – queries target B2B topics via Scrapeless's Deep SERP API
3. **Item Lists** – splits search results into individual items
4. **Scrapeless Crawler** – visits each company domain and scrapes structured content
5. **Code node (Data Cleaner)** – extracts and formats content for LLM input
6. **Claude Sonnet (via HTTP Request)** – evaluates lead quality, relevance, and contact info
7. **Code node (Parser)** – parses Claude's JSON response (see the sketch after this section)
8. **IF Filter** – filters leads by score threshold
9. **Discord Webhook** – sends a formatted message with company info

🧩 **Customization Guidance**
You can easily adjust the workflow to match your needs:
- **Lead criteria:** Modify the Claude prompt and scoring logic in the Code node
- **Output channels:** Replace the Discord webhook with Slack, Email, Airtable, or any CRM node
- **Search topics:** Change your query in the Scrapeless SERP node to find leads in different niches or countries
- **Scoring threshold:** Adjust the filter logic (lead_score >= 6) to match your quality tolerance

🧪 **How to Use**
1. Insert your Scrapeless and Claude API credentials in the designated nodes
2. Replace or configure the Discord webhook (or alternative outputs)
3. Run the workflow manually (or schedule it)
4. View qualified leads directly in your chosen notification channel

📦 **Output Example**
Each qualified lead includes:
- 🏢 Company name
- 🌐 Website
- ✉️ Email(s)
- 📞 Phone(s)
- 📍 Location
- 📈 Lead score
- 📝 Summary of relevant content

👥 **Ideal Users**
This workflow is perfect for:
- **AI SaaS companies** targeting mid-market & enterprise leads
- **Marketing agencies** looking for B2B-qualified leads
- **Automation consultants** building scraping solutions
- **No-code developers** working with n8n, Make, Pipedream
- **Sales teams** needing enriched prospecting data
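A hedged sketch of the "Code (Response Parser)" step combined with the score filter, written as an n8n Code node. The field names (lead_score, company_name, and so on) are assumptions matching the kind of prompt this workflow would use, not the template's exact schema:

```javascript
// Sketch only: extract the JSON object Claude returns (sometimes wrapped in
// surrounding prose or a code fence) and keep only leads scoring >= 6.
const raw = $input.first().json.content?.[0]?.text ?? ''; // Anthropic Messages API text

const match = raw.match(/\{[\s\S]*\}/); // tolerate prose around the JSON
if (!match) {
  return []; // drop unparseable responses
}

const lead = JSON.parse(match[0]);
if ((lead.lead_score ?? 0) < 6) {
  return []; // below the quality threshold
}

return [{ json: lead }]; // e.g. { company_name, website, emails, lead_score, summary }
```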
by Viktor Klepikovskyi
**Advanced Retry and Delay Logic**

This template provides a robust solution for handling API rate limits and temporary service outages in n8n workflows. It overcomes the limitations of the default node retry settings, which cap retries at 5 and delays at 5 seconds. By using a custom loop built from Set, If, and Wait nodes, this workflow gives you complete control over the number of retries and the delay between them.

**Instructions**
1. Replace the placeholder HTTP Request node with your target node (the one that might fail).
2. In the initial Set Fields node, modify the max_tries value to set the total number of attempts for your workflow.
3. Adjust the delay_seconds value to define the initial delay between retries.
4. Optionally, configure the Edit Fields node to implement exponential backoff by adjusting the delay_seconds expression (e.g., `{{$json.delay_seconds * 2}}`); see the sketch below.

For a more detailed breakdown and tutorial of this template, you can find additional information here.
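An illustrative sketch of the loop's state update, using the field names from the instructions (max_tries, delay_seconds) plus a hypothetical tries counter. Each failure increments tries and doubles the wait, which is exactly what the `{{$json.delay_seconds * 2}}` expression produces:

```javascript
// Sketch of the Set -> If -> Wait loop state. Returns null when the loop
// should give up (the If node's exit branch).
function nextRetryState(state) {
  if (state.tries + 1 >= state.max_tries) return null; // out of attempts
  return {
    ...state,
    tries: state.tries + 1,
    delay_seconds: state.delay_seconds * 2, // exponential backoff
  };
}

// With max_tries = 5 and an initial delay of 10 s, failed attempts 1-4
// wait 10, 20, 40 and 80 seconds before the next try; attempt 5 is the last.
let state = { tries: 1, max_tries: 5, delay_seconds: 10 };
while (state) {
  console.log(`attempt ${state.tries}: wait ${state.delay_seconds}s if it fails`);
  state = nextRetryState(state);
}
```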
by Peter
Store a key with a value in a local JSON file. Multiple keys can be saved in a single file. Related workflow: GetKey

1. Create a subfolder in your n8n home directory: /home/node/.n8n/local-files. In Docker, look at the data path and create a subfolder local-files there. Set the correct ownership with `chown 1000:1000 local-files`.
2. Put the workflow code in a new workflow named WriteKey.
3. Create another workflow with a Function Item node:

```javascript
return {
  file: '/4711.json', // 4711 should be your workflow id
  key: 'MyKey',
  value: 'MyValue'
}
```

4. Pipe the Function Item node into an Execute Workflow node that calls the WriteKey workflow. (A sketch of what WriteKey itself does appears below.)

It would be nice if we could someday get a shiny built-in n8n node that does the job. :)
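A hedged sketch of what the WriteKey side does: merge the incoming key/value into the JSON file, creating it if missing. This assumes a self-hosted n8n where the Function/Code node may use Node's fs module (e.g., with NODE_FUNCTION_ALLOW_BUILTIN=fs set):

```javascript
// Sketch of a WriteKey Function Item node: upsert item.key into the file.
const fs = require('fs');

const base = '/home/node/.n8n/local-files';
const path = base + item.file; // e.g. /home/node/.n8n/local-files/4711.json

let store = {};
if (fs.existsSync(path)) {
  store = JSON.parse(fs.readFileSync(path, 'utf8')); // keep existing keys
}
store[item.key] = item.value; // add or overwrite this key

fs.writeFileSync(path, JSON.stringify(store, null, 2));
return item; // pass the input through unchanged
```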
by JPres
**n8n Template: Store Chat Data in Supabase PostgreSQL for WhatsApp/Slack Integration**

This n8n template captures chat data (like user ID, name, or address) and saves it to a Supabase PostgreSQL database. It's built for testing now but designed to work with WhatsApp, Slack, or similar platforms later, where chat inputs aren't predefined.

A guide with images can be found at: https://github.com/JimPresting/Supabase-n8n-Self-Hosted-Integration/

**Step 1: Configure Firewall Rules in Your VPC Network**
To let your n8n instance talk to Supabase, add a firewall rule in your VPC network settings (e.g., Google Cloud, AWS, etc.):
1. Go to VPC Network settings.
2. Add a new firewall rule:
   - Name: allow-postgres-outbound
   - Direction: Egress (outbound traffic)
   - Destination filter: IPv4 ranges
   - Destination IPv4 ranges: 0.0.0.0/0 (allows all; restrict to Supabase IPs for security)
   - Source filter: pick IPv4 ranges and add the n8n VM's IP range, or pick None if any VM can connect
   - Protocols and ports: TCP, port 5432 (the default PostgreSQL port)
3. Save the rule.

**Step 2: Get the Supabase Connection String**
1. Log into your Supabase dashboard.
2. Go to your project and click the Connect button in the header.
3. Copy the PostgreSQL connection string:
   `postgresql://postgres.fheraruzdahjd:[YOUR-PASSWORD]@aws-0-eu-central-1.pooler.supabase.com:6543/postgres`
   Replace [YOUR-PASSWORD] with your Supabase account password (no brackets) and replace the identifier before it with your project's actual unique identifier.
4. Note the port (6543 or 5432) and use whatever is in the string.

**Step 3: Set Up the n8n Workflow**
This workflow takes chat data, maps it to variables, and stores it in Supabase. It's built to handle messy chat inputs from platforms like WhatsApp or Slack in production.

Workflow steps:
1. **Trigger node:** "When clicking 'Test workflow'" (manual trigger). For now it's manual; in production this will be a WhatsApp or Slack message trigger, which won't have a fixed input format.
2. **Set node:** "Set sample input variables (manual)". Sets variables like id, name, and address to mimic chat data. Why? Chat platforms send unstructured data (e.g., a message containing a user's name or address); we map it to variables so we can store it properly. The id will be something unique like a phone number, account ID, or account number.
3. **Sample agent node:** Uses a model (e.g., Gemini Flash 2.0, though the choice doesn't matter). A placeholder to process data (e.g., clean or validate it) before saving; you can skip or customize it.
4. **Supabase PostgreSQL node:** "Supabase PostgreSQL Database". Connects to Supabase using the connection string from Step 2 and saves the variables (id, name, address) to a table. Why store extra fields? The id (phone number, account ID, etc.) is the key; extra fields like name or address keep all user info in one place for later use (e.g., analytics or replies).
5. **Output node:** "Update additional values e.g., name, address". Confirms the data is saved. In production, this could send a reply to the chat platform.

**Why This Design?**
- **Handles unstructured chat data:** WhatsApp or Slack messages don't have a fixed format. The Set node lets us map any incoming data (e.g., id, name) to our database fields.
- **Scales for production:** Using id as a key (phone number, account ID, etc.) with extra fields like name makes this workflow flexible for many use cases, like user profiles or support logs.
- **Future-ready:** Built to swap the manual trigger for a real chat platform trigger without breaking.
**Step 4: Configure the Supabase PostgreSQL Node**
In the n8n workflow, set up the Supabase PostgreSQL node:
- Host: aws-0-eu-central-1.pooler.supabase.com (from the connection string)
- Port: 6543 (or whatever is in the connection string)
- Database: postgres
- User: postgres.fhspudlibstmpgwqmumo (from the connection string)
- Password: your Supabase password
- SSL: enable (Supabase usually requires it)

Set the node to Insert or Update:
- Map id to a unique column in your Supabase table (e.g., phone number, account ID).
- Map fields like name and address to their columns (a sketch of this mapping appears below).

Test the workflow to confirm data saves correctly.

**Security Tips**
- **Limit firewall rules:** Don't use 0.0.0.0/0. Find Supabase's IP ranges in their docs and use those.
- **Hide passwords:** Store your Supabase password in n8n's environment variables.
- **Use SSL:** Enable SSL in the n8n node for secure data transfer.
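A hedged sketch of the mapping the Set node performs, written as an n8n Code node for the production case where the trigger payload is unstructured. The payload paths shown are hypothetical; WhatsApp and Slack each shape their webhook bodies differently:

```javascript
// Sketch only: normalize a raw chat webhook body into the columns used above.
const payload = $input.first().json;

return [{
  json: {
    // unique key: phone number, account ID, or similar
    id: payload.from ?? payload.user?.id ?? 'unknown',
    name: payload.profile?.name ?? payload.user?.name ?? null,
    address: payload.address ?? null, // often filled later in the conversation
  },
}];
```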
by Jonathan
This workflow takes an alert from Syncro, determines whether it is an agent_offline_trigger type, determines whether it is a new alert or a close for an existing alert, and then submits it to OpsGenie. New alerts create a new alert in OpsGenie, and resolved alerts close the matching alert in OpsGenie. No Google Sheets lookup is required, because OpsGenie lets you submit a unique ID (known as an alias) along with the alert, which can be referenced later when closing it (as sketched below). The trigger type can be changed to suit your needs.

You will need to create an API integration in OpsGenie. In Syncro, in addition to setting up the appropriate notification-to-webhook, you will also need a script that closes the agent_offline_trigger alert and an automated remediation to trigger that script when the asset goes offline (the script is queued and runs when the asset comes back online).

> This workflow is part of an MSP collection. The original can be found here: https://github.com/bionemesis/n8nsyncro
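A hedged sketch of the alias mechanism, using OpsGenie's public Alert API. The API key, the alias format, and the Syncro alert ID are illustrative placeholders; the workflow's HTTP Request nodes make equivalent calls:

```javascript
// Sketch only: create an alert under an alias, then close it by that alias.
const syncroAlertId = '12345'; // placeholder: comes from the Syncro webhook payload
const headers = {
  'Content-Type': 'application/json',
  Authorization: `GenieKey ${process.env.OPSGENIE_API_KEY}`,
};

// New alert: submit with an alias derived from the Syncro alert ID.
await fetch('https://api.opsgenie.com/v2/alerts', {
  method: 'POST',
  headers,
  body: JSON.stringify({
    message: 'Agent offline: EXAMPLE-ASSET',
    alias: `syncro-${syncroAlertId}`, // unique key for later lookup
  }),
});

// Resolved alert: close by the same alias, no spreadsheet lookup needed.
await fetch(
  `https://api.opsgenie.com/v2/alerts/syncro-${syncroAlertId}/close?identifierType=alias`,
  { method: 'POST', headers, body: JSON.stringify({}) },
);
```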
by Yaron Been
**Ndreca Hunyuan3d 2 Test AI Generator**

**Overview**
This n8n workflow integrates with the Replicate API to use the ndreca/hunyuan3d-2-test model, a powerful AI model that generates high-quality 3D shapes from input images.

**Features**
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**
Required:
- **image** (string): Input image for generating the 3D shape

Optional:
- **seed** (integer, default: 1234): Random seed for generation
- **steps** (integer, default: 50): Number of inference steps
- **num_chunks** (integer, default: 200000): Number of chunks for mesh generation
- **max_facenum** (integer, default: 40000): Maximum number of faces for mesh generation
- **guidance_scale** (number, default: 5.5): Guidance scale for generation
- **octree_resolution** (string, default: 512): Octree resolution for mesh generation
- **remove_background** (boolean, default: true): Whether to remove the background from the input image

**How to Use**
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate the 3D output (the create-and-poll pattern it automates is sketched below)
4. Access the generated output from the final node

**API Reference**
- Model: ndreca/hunyuan3d-2-test
- API endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of 3D generation parameters
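A hedged sketch of the create-and-poll pattern this workflow automates, using Replicate's public predictions API. The model version hash is a placeholder; fetch the current one from the model's page on Replicate:

```javascript
// Sketch only: create a prediction, then poll until it resolves.
const headers = {
  'Content-Type': 'application/json',
  Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
};

let prediction = await (await fetch('https://api.replicate.com/v1/predictions', {
  method: 'POST',
  headers,
  body: JSON.stringify({
    version: 'MODEL_VERSION_HASH', // placeholder for ndreca/hunyuan3d-2-test
    input: {
      image: 'https://example.com/object.png', // required input image
      steps: 50,
      guidance_scale: 5.5,
      remove_background: true,
    },
  }),
})).json();

// Poll the prediction until the 3D asset is ready.
while (!['succeeded', 'failed', 'canceled'].includes(prediction.status)) {
  await new Promise((r) => setTimeout(r, 2000));
  prediction = await (await fetch(prediction.urls.get, { headers })).json();
}
console.log(prediction.output); // generated mesh asset(s)
```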
by David Olusola
🚀 **Automated Lead Scraper Workflow (Apify + n8n + Google Sheets)**

🧠 **What It Does**
This n8n workflow automates the process of scraping leads with Apify, cleaning the extracted data, and exporting it to Google Sheets, ready for use in outreach, prospecting, or CRM pipelines.

🔄 **Workflow Steps**
1. ✅ **Start** – manually triggers the workflow.
2. 🧩 **Set Variables** – stores the required Apify credentials:
   - APIFY_TOKEN: your Apify token
   - APIFY_TASK_ID: the Apify task to run
3. 🕸️ **Run Apify Scraper** – launches the scraper and fetches the dataset.
4. 🧹 **Clean Data** – processes the scraped results (see the sketch below) to:
   - ✂️ strip non-numeric characters from phone numbers
   - ✉️ format emails (lowercase + trimmed)
5. 📊 **Export to Google Sheets** – appends the clean data to your spreadsheet:
   - 🏢 company name → from title
   - 📞 phone → cleaned number
   - 📍 address → from scraped info

🛠️ **Requirements**
- 🕷️ Apify account: a valid APIFY_TOKEN and an existing Apify task (APIFY_TASK_ID)
- 📗 Google Sheets access: OAuth2 credentials set up in n8n (e.g., "Google Sheets account 2")

🚦 **How to Use**
1. ⚙️ Open the Variables node and plug in your Apify credentials.
2. 📄 Confirm the Google Sheets node points to your desired spreadsheet.
3. ▶️ Run the workflow manually from the Start node.

📥 **Output**
A ready-to-use sheet of cleaned lead data containing company names, phone numbers, and addresses.

💼 **Perfect For**
- Sales teams doing outbound prospecting
- Marketers building lead lists
- Agencies running data aggregation tasks
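A hedged sketch of the Clean Data step as an n8n Code node. The input field names (title, phone, email, address) are assumptions based on the column mapping described above, not the exact keys your Apify task emits:

```javascript
// Sketch only: normalize each scraped lead before the Sheets append.
return $input.all().map(({ json }) => ({
  json: {
    'company name': json.title ?? '',
    phone: (json.phone ?? '').replace(/\D/g, ''),   // strip non-numeric characters
    email: (json.email ?? '').trim().toLowerCase(), // normalize email
    address: json.address ?? '',
  },
}));
```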