by Rahul Joshi
**Description**
Automatically analyze incoming lead replies from Google Sheets using Azure OpenAI GPT-4, classify their intent (Demo Request, Pricing, Objection, etc.), and create actionable follow-up tasks in ClickUp — all without manual intervention. Streamline your sales response workflow and never miss a lead again. 🤖📩📈

**What This Template Does**
- Triggers every 15 minutes to check for new lead replies in Google Sheets. ⏰
- Prepares lead data for AI analysis by standardizing input fields. 🧩
- Uses Azure OpenAI GPT-4 to classify lead intent (Demo Request, Pricing Inquiry, Objection, etc.). 🧠
- Routes leads based on intent to the corresponding follow-up handler. 🔀
- Creates new ClickUp tasks with calculated due dates, descriptions, and pipeline stages. 🗂️
- Adds structured checklists to each task for consistent sales follow-ups. ✅
- Loops through multiple tasks while respecting ClickUp API rate limits. 🔁

**Key Benefits**
- ✅ Saves hours of manual lead qualification and task creation.
- ✅ Ensures no lead reply is ignored or delayed.
- ✅ Standardizes intent-based follow-ups for sales teams.
- ✅ Enhances productivity with AI-driven decision logic.
- ✅ Maintains clear visibility across CRM and task systems.

**Features**
- 15-minute recurring trigger to monitor new replies.
- AI-powered intent classification using Azure OpenAI GPT-4.
- Multi-category routing logic for personalized next steps.
- Seamless ClickUp integration for automated task generation.
- Smart checklist creation for follow-up management.
- Batch loop processing to avoid rate-limit errors.

**Requirements**
- n8n instance (cloud or self-hosted).
- Google Sheets OAuth2 credentials with read access.
- Azure OpenAI GPT-4 API credentials.
- ClickUp API token with workspace permissions.

**Target Audience**
- Sales and marketing teams managing inbound leads. 💼
- Agencies automating client qualification workflows. 🏢
- Startups improving lead follow-up efficiency. 🚀
- Teams integrating AI-driven insights into CRM processes.
**Step-by-Step Setup Instructions**
1. Connect Google Sheets with your lead replies document. 📊
2. Add Azure OpenAI GPT-4 API credentials for intent analysis. 🧠
3. Configure ClickUp workspace details — team, space, folder, and list IDs. ⚙️
4. Set your preferred trigger interval (default: every 15 minutes). ⏰
5. Run a test with sample data to confirm intent mapping and task creation. ✅
6. Activate the workflow to automatically classify leads and create ClickUp tasks. 🚀
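The intent-routing step described above can be sketched as a small function, in the style of an n8n Code node. The intent labels follow the description, but the handler names and due-date offsets are illustrative assumptions, not the template's actual configuration.

```javascript
// Hypothetical routing table: intent label -> follow-up handler + due-date offset.
// Handlers and offsets are placeholders; adjust to your ClickUp setup.
const INTENT_ROUTES = {
  "Demo Request":    { handler: "book-demo",        dueInDays: 1 },
  "Pricing Inquiry": { handler: "send-pricing",     dueInDays: 1 },
  "Objection":       { handler: "handle-objection", dueInDays: 2 },
};

function routeLead(intent, now = new Date()) {
  // Unrecognized intents fall back to manual review rather than being dropped.
  const route = INTENT_ROUTES[intent] ?? { handler: "manual-review", dueInDays: 3 };
  const due = new Date(now.getTime() + route.dueInDays * 24 * 60 * 60 * 1000);
  return { handler: route.handler, dueDate: due.toISOString().slice(0, 10) };
}
```

The calculated `dueDate` can then feed the ClickUp task-creation node directly.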
by Geoffroy
This n8n template demonstrates how to automatically generate and publish SEO/AEO-optimized Shopify blog articles from a list of keywords, using AI for content creation, image generation, and metadata optimization.

**Who's it for**
Shopify marketers, content teams, and solo founders who want consistent, hands-off blog production with built-in SEO/AEO hygiene and internal linking.

**What it does**
The workflow picks a keyword from your Google Sheet based on priority, search volume, and difficulty. It then checks your Shopify blog for existing slugs to avoid duplicates, drafts a 900+ word article optimized for SEO/AEO, generates a hero image, creates the article in Shopify, sets SEO metafields (title/description), and logs the result to your Sheet for tracking and future internal links.

**How it works**
- **Google Sheets → Candidate selection:** Reads the **Keywords**, **Links**, and **Published** tabs and ranks keywords by priority → volume → difficulty. (The workflow explains exactly how to set up the Google Sheets.)
- **De-dupe slugs:** Paginates your blog via Shopify GraphQL to collect existing handles and make sure a different one is used.
- **OpenAI content + image:** Builds a structured prompt (SEO/AEO and internal linking), calls Chat Completions for the article and Image Generation for a hero image.
- **Shopify publish:** Creates the article via REST and updates the title_tag / description_tag metafields via GraphQL.
- **Log + link graph:** Appends to the **Published** tab to track posted articles and to the **Links** tab for ongoing internal-link suggestions.

**How to set up**
1. Open **Set – Config** and fill in: shopDomain, siteBaseUrl, blogId, blogHandle, sheetId, author. Optional: autoPublish, maxPerRun, tz.
2. Create the Google Sheet with **Keywords**, **Links**, and **Published** tabs using the provided column structure. I have personally used Semrush to generate the list of keywords.
3. Add credentials: Shopify Admin token (Header/Bearer), OpenAI API key, and Google Service Account.
**Requirements**
- Shopify store with Blog API access
- OpenAI API key
- Google Service Account with access to the Google Sheets API (which can be activated here)

**How to customize**
- Change the cron expression in the Schedule Trigger for different days/times.
- Adjust maxPerRun, autoPublish, language, or any other variables in the "Set - Config" node.
- Adjust the prompt in the "Code - Build Prompt" node.
- Extend the Sheets schema with extra scoring signals if needed.
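The candidate-selection and de-dupe steps can be sketched together as one function, in the style of an n8n Code node. The ranking order (priority → volume → difficulty) comes from the description; the column names and the slug rule are illustrative assumptions.

```javascript
// Pick the best unpublished keyword, skipping any whose slug already exists on the blog.
// Assumed row shape: { keyword, priority, volume, difficulty } (column names are placeholders).
function pickKeyword(rows, existingSlugs) {
  const slugify = (s) =>
    s.toLowerCase().trim().replace(/[^a-z0-9]+/g, "-").replace(/^-+|-+$/g, "");
  return rows
    .filter((r) => !existingSlugs.includes(slugify(r.keyword)))
    .sort((a, b) =>
      b.priority - a.priority ||   // higher priority first
      b.volume - a.volume ||       // then higher search volume
      a.difficulty - b.difficulty  // then lower difficulty
    )[0] ?? null;
}
```

In the real workflow, `existingSlugs` would come from paginating Shopify GraphQL for article handles, as the template describes.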
by Hussam Muhammad Kazim
**How it works**
This Telegram automation handles both voice and text messages sent to your Telegram bot. If the input is a voice message, it returns the response as voice; if the input is text, it returns the response as text.

**Use Cases**
- Customer Support
- Personal Chatbot

**Prerequisites**
- OpenAI API Key
- Gemini API Key
- Telegram bot created with BotFather
- Telegram bot's API key

**Target Audience**
AI automation learners who want to learn how to build and set up a basic Telegram bot using n8n.

**How to set up**
1. Create a Telegram bot using BotFather; BotFather will give you an API key.
2. Copy the API key and set it up in a Telegram node inside n8n.
3. Get a free Gemini API key from https://aistudio.google.com/
4. Set up the Gemini API key in the Transcribe recording node.
5. Get an OpenAI API key from https://platform.openai.com/docs/overview and make sure to top up your credits.
6. Copy the API key from the OpenAI platform and set it up in any OpenAI Chat Model node; n8n will then configure it automatically for all other nodes.
7. That's it! Activate the workflow and test it by sending a simple message to your Telegram bot.
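The voice-in/voice-out mirroring rule can be expressed as a tiny branch, sketched here as it might appear in an n8n Code node. The field names follow the Telegram Bot API's `Message` object (voice messages carry a `voice` object, text messages a `text` string).

```javascript
// Decide how to reply based on what kind of message the bot received.
function responseMode(message) {
  if (message.voice) return "voice"; // reply with generated speech
  if (message.text) return "text";   // reply with plain text
  return "unsupported";              // e.g. stickers, photos — not handled here
}
```

The workflow's routing node would then send "voice" inputs through transcription and text-to-speech, and "text" inputs straight to the chat model.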
by Robert Breen
This workflow fetches deals and their notes from Pipedrive, converts stage IDs into readable names, aggregates the information, and uses OpenAI to generate a daily summary of your funnel.

**⚙️ Setup Instructions**

1️⃣ **Set Up OpenAI Connection**
- Go to the OpenAI Platform
- Navigate to OpenAI Billing
- Add funds to your billing account
- Copy your API key into the OpenAI credentials in n8n

2️⃣ **Connect Pipedrive**
- In Pipedrive → Personal preferences → API → copy your API token. URL shortcut: https://{your-company}.pipedrive.com/settings/personal/api
- In n8n → Credentials → New → Pipedrive API. Company domain: {your-company} (the subdomain in your Pipedrive URL). API Token: paste the token from step 1 → Save
- In the Pipedrive nodes, select your Pipedrive credential and (optionally) set filters (e.g., owner, label, created time).

**🧠 How It Works**
- **Trigger**: Workflow runs on manual execution (can be scheduled).
- **Get many deals**: Pulls all deals from your Pipedrive account.
- **Code node**: Maps stage_id numbers to friendly stage names (Prospecting, Qualified, Proposal Sent, etc.).
- **Get many notes**: Fetches notes attached to each deal.
- **Combine Notes**: Groups notes by deal, concatenates content, and keeps deal titles.
- **Set Field Names**: Normalizes the fields for summarization.
- **Aggregate for Agent**: Collects data into one object.
- **Turn Objects to Text**: Prepares text data for AI.
- **OpenAI Chat Model + Summarize Agent**: Generates a daily natural-language summary of deals and their current stage.

**💬 Example Prompts**
- "Summarize today's deal activity."
- "Which deals are still in negotiation?"
- "What updates were added to closed-won deals this week?"

**📬 Contact**
Need help extending this (e.g., sending summaries by Slack/Email, or auto-creating tasks in Pipedrive)?
📧 rbreen@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
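The Code node's stage mapping can be sketched like this. Pipedrive stage IDs are specific to each account's pipeline, so the ID-to-name table below is a placeholder you would replace with your own stages.

```javascript
// Placeholder mapping — stage IDs in Pipedrive are account-specific.
const STAGE_NAMES = {
  1: "Prospecting",
  2: "Qualified",
  3: "Proposal Sent",
  4: "Negotiation",
  5: "Closed Won",
};

// Annotate each deal with a human-readable stage name for the AI summary.
function labelStages(deals) {
  return deals.map((d) => ({
    ...d,
    stage_name: STAGE_NAMES[d.stage_id] ?? `Unknown stage (${d.stage_id})`,
  }));
}
```

Keeping the unknown-stage fallback visible in the output makes it easy to spot IDs missing from the table.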
by Cheng Siong Chin
**How It Works**
Automates monthly payroll processing and tax compliance by calculating employee payroll, applying accurate withholdings, generating comprehensive tax summaries, and producing compliance-ready documentation. The system fetches revenue and payroll data, performs detailed payroll calculations, applies AI-driven tax withholding rules, aggregates tax summary information, and verifies compliance using GPT-4 tax analysis. It generates structured HTML documents, converts them to PDF format, stores records in Google Sheets for audit trails, archives files to Google Drive, and sends summaries to tax agents. Designed for HR departments and payroll processing teams seeking automated, accurate, and fully compliant payroll management.

**Setup Steps**
1. Connect the payroll data source and configure revenue fetch parameters.
2. Set up the OpenAI GPT-4 API for tax withholding logic and compliance analysis.
3. Configure Google Sheets for audit storage and Google Drive for long-term archiving.
4. Define tax withholding rules, compliance thresholds, and the tax agent recipient.

**Prerequisites**
Payroll data source; OpenAI API key; Google Sheets and Drive accounts

**Use Cases**
HR departments automating monthly payroll processing and tax compliance

**Customization**
Adjust withholding rules by jurisdiction

**Benefits**
Eliminates manual payroll calculations
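A withholding calculation of the kind this workflow automates can be sketched as a progressive-bracket function. The brackets and rates below are made-up placeholders for illustration only, not real tax rules for any jurisdiction; the actual rules would come from the AI-driven withholding logic.

```javascript
// Illustrative progressive brackets — NOT real tax rules for any jurisdiction.
const BRACKETS = [
  { upTo: 1000,     rate: 0.10 },
  { upTo: 4000,     rate: 0.20 },
  { upTo: Infinity, rate: 0.30 },
];

// Tax each slice of gross pay at its bracket's rate and sum the slices.
function withholding(gross) {
  let tax = 0, prev = 0;
  for (const { upTo, rate } of BRACKETS) {
    const taxable = Math.min(gross, upTo) - prev;
    if (taxable <= 0) break;
    tax += taxable * rate;
    prev = upTo;
  }
  return Math.round(tax * 100) / 100; // round to cents
}
```

For example, a gross of 2000 is taxed as 1000 at 10% plus 1000 at 20%.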
by Shinji Watanabe
**Who's it for**
Learners, teachers, and content creators who track German vocabulary in Google Sheets and want automatic enrichment with synonyms, example sentences, and basic lexical info — without copy-and-paste.

**How it works / What it does**
When a new row is added to your sheet (column vocabulary), the workflow looks up the word in OpenThesaurus and checks whether any entries are found. If so, an LLM generates a strict JSON object containing: natural_sentence (a clear German example), part_of_speech, translation_ja (a concise Japanese gloss), and level (a CEFR estimate). The JSON is parsed and written back to the same row, keeping your spreadsheet the single source of truth. If no entry is found, the workflow writes a helpful "not found" note.

**How to set up**
1. Connect Google Sheets and select your spreadsheet/tab. Confirm a vocabulary column exists.
2. Configure OpenThesaurus (no API key required).
3. Add your LLM credentials and keep the prompt's "JSON only" constraint.
4. Rename nodes clearly and add a yellow sticky note with this description.

**Requirements**
- Access to Google Sheets
- LLM credentials (e.g., OpenAI)
- A tab containing a vocabulary column

**How to customize the workflow**
- Adjust the If condition (e.g., require terms.length > 1 or fall back to the headword).
- Tweak the LLM prompt for tone, length, or level policy.
- Map extra fields in the Set node; add columns for difficulty tags or usage notes.
- Follow security best practices (no hardcoded secrets in HTTP nodes).
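Because the prompt enforces "JSON only", it helps to validate the LLM output before writing it back to the sheet. A minimal sketch of that validation, assuming the four fields named in the description:

```javascript
// Validate that the LLM returned a strict JSON object with the expected fields.
function parseEnrichment(raw) {
  const required = ["natural_sentence", "part_of_speech", "translation_ja", "level"];
  let obj;
  try {
    obj = JSON.parse(raw);
  } catch {
    return { ok: false, error: "LLM output was not valid JSON" };
  }
  if (typeof obj !== "object" || obj === null) {
    return { ok: false, error: "LLM output was not a JSON object" };
  }
  const missing = required.filter((k) => !(k in obj));
  if (missing.length) return { ok: false, error: `Missing fields: ${missing.join(", ")}` };
  return { ok: true, data: obj };
}
```

On `ok: false`, the workflow could retry the LLM call or write the error into the row instead of silently failing.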
by Nitin Garg
**How it works**
1. Schedule Trigger runs every 6 hours (customizable)
2. Apify Scraper fetches Upwork jobs matching your criteria
3. Deduplication filters out jobs you've already seen
4. AI Scoring (GPT-4) evaluates fit, client quality, and budget (0–100 score)
5. Filter keeps only jobs scoring 60+
6. Proposal Generator creates personalized proposals
7. Google Sheets logs all results
8. Telegram sends a summary notification

**Setup steps** (Time: ~15 minutes)
1. Create a Google Sheet with a "Job ID" column
2. Get an Apify account + Upwork scraper actor
3. Get an OpenAI API key
4. Set environment variables: GOOGLE_SHEETS_DOC_ID, APIFY_ACTOR_ID, TELEGRAM_CHAT_ID
5. Create credentials: Google Sheets, Apify (Header Auth), OpenAI, Telegram
6. Connect the credentials to the workflow nodes

**Who is this for?**
- Freelancers actively applying to Upwork jobs
- Agencies monitoring multiple job categories
- Consultants prioritizing high-quality leads

**Estimated costs**
- **Per run:** $0.50–3.00 (Apify + OpenAI)
- **Monthly (4x/day):** $50–200
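Steps 3 and 5 (deduplication against the "Job ID" column, then the 60+ score filter) can be sketched as one pass, in the style of an n8n Code node. The field names `jobId` and `score` are illustrative assumptions.

```javascript
// Keep only jobs that are both unseen (not in the Google Sheet's "Job ID"
// column) and scored at or above the threshold by the AI scoring step.
function newQualifiedJobs(jobs, seenIds, minScore = 60) {
  return jobs.filter((j) => !seenIds.has(j.jobId) && j.score >= minScore);
}
```

`seenIds` would be a `Set` built from the Job ID column read earlier in the run; surviving jobs go on to proposal generation and logging.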
by Konstantin
**How it works**
This workflow creates an intelligent Telegram bot with a knowledge base powered by the Qdrant vector database. The bot automatically processes documents uploaded to Google Drive, stores them as embeddings, and uses this knowledge to answer questions in Telegram. It consists of two independent flows: document processing (Google Drive → Qdrant) and chat interaction (Telegram → AI Agent → Telegram).

**Step-by-step**

Document Processing Flow:
1. **New File Trigger:** The workflow starts when the **New File Trigger** node detects a new file created in the specified Google Drive folder (polling every 15 minutes).
2. **Download File:** The **Download File** (Google Drive) node downloads the detected file from Google Drive.
3. **Text Splitting:** The **Split Text into Chunks** node splits the document text into chunks of 3000 characters with a 300-character overlap for optimal embedding.
4. **Load Document Data:** The **Load Document Data** node processes the binary file data and prepares it for vectorization.
5. **OpenAI Embeddings:** The **OpenAI Embeddings** node generates vector embeddings for each text chunk.
6. **Insert into Qdrant:** The **Insert into Qdrant** node stores the embeddings in the Qdrant vector database collection.
7. **Move to Processed Folder:** After successful processing, the **Move to Processed Folder** (Google Drive) node moves the file to a "Qdrant Ready" folder to keep files organized.

Telegram Chat Flow:
1. **Telegram Message Trigger:** The **Telegram Message Trigger** node receives new messages from the Telegram bot.
2. **Filter Authorized User:** The **Filter Authorized User** node checks if the message is from an authorized chat ID (26899549) to restrict bot access.
3. **AI Agent Processing:** The **AI Agent** receives the user's message text and processes it using the fine-tuned GPT-4.1 model with access to the Qdrant knowledge base tool.
4. **Qdrant Knowledge Base:** The **Qdrant Knowledge Base** node retrieves relevant information from the vector database to provide context for the AI agent's responses.
5. **Conversation Memory:** The **Conversation Memory** node maintains conversation history per chat ID, allowing the bot to remember context.
6. **Send Response to Telegram:** The **Send Response to Telegram** node sends the AI-generated response back to the user in Telegram.

**Set up steps**
Estimated setup time: 15 minutes

1. **Google Drive Setup:** Add your Google Drive OAuth2 credentials to the New File Trigger, Download File, and Move to Processed Folder nodes. Create two folders in your Google Drive: one for incoming files and one for processed files. Copy the folder IDs from the URLs and update them in the New File Trigger (folderToWatch) and Move to Processed Folder (folderId) nodes.
2. **Qdrant Setup:** Add your Qdrant API credentials to the Insert into Qdrant and Qdrant Knowledge Base nodes. Create a collection in your Qdrant instance (e.g., "Test-youtube-adept-ecom"). Update the collection name in both Qdrant nodes.
3. **OpenAI Setup:** Add your OpenAI API credentials to the OpenAI Chat Model and OpenAI Embeddings nodes. (Optional) Replace the fine-tuned model ID in OpenAI Chat Model with your own model, or use a standard model like gpt-4-turbo.
4. **Telegram Setup:** Create a Telegram bot via @BotFather and obtain the bot token. Add your Telegram bot credentials to the Telegram Message Trigger and Send Response to Telegram nodes. Update the authorized chat ID in the Filter Authorized User node (replace 26899549 with your own Telegram user ID).
5. **Customize System Prompt (Optional):** Modify the system message in the AI Agent node to customize your bot's personality and behavior. The current prompt is configured for an n8n automation expert creating social media content.
6. **Activate the Workflow:** Toggle "Active" in the top-right to enable both the Google Drive trigger and the Telegram trigger. Upload a document to your Google Drive folder to test the document processing flow, then send a message to your Telegram bot to test the chat interaction flow.
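The 3000-character chunks with 300-character overlap from the Text Splitting step can be sketched as a simple sliding window (a minimal version of what the Split Text into Chunks node does internally):

```javascript
// Slide a window of `size` characters over the text, stepping by size - overlap
// so consecutive chunks share `overlap` characters of context.
function chunkText(text, size = 3000, overlap = 300) {
  const chunks = [];
  const step = size - overlap; // 2700 with the defaults above
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

The overlap keeps sentences that straddle a chunk boundary retrievable from both neighboring embeddings.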
by SendPulse
**How it works**
This n8n template automates lead processing from your website. It receives customer data via a Webhook, stores the customer's contact (email or phone number) in the respective SendPulse address books, and uses the SendPulse MCP Server to send personalized welcome messages (email or SMS) generated with AI. The template also includes built-in SendPulse token management logic with caching in a Data Table, which reduces the number of unnecessary API requests.

SendPulse's MCP server is a tool that helps you manage your account through a chat with an AI assistant. It uses SendPulse API methods to get information and perform actions, such as requesting statistics, running message campaigns, or updating user data. The MCP server acts as middleware between your AI assistant and your SendPulse account. It processes requests through the SendPulse API and sends results back to the chat, so you can manage everything without leaving the conversation.

Once connected, the MCP server operates as follows:
1. You ask your AI assistant something in chat.
2. It forwards your request to the MCP server.
3. The MCP server calls the API to get data or perform an action.
4. The AI assistant sends the result back to your chat.

**Set up**
Requirements:
- An active SendPulse account.
- Client ID and Client Secret from your SendPulse account.
- An API key from your OpenAI account to power the AI agent.

Set up steps:
1. Get your OpenAI API key: https://platform.openai.com/api-keys
2. Add your OpenAI API key to the OpenAI Chat Model node in the n8n workflow.
3. Get your Client ID and Client Secret from your SendPulse account: https://login.sendpulse.com/settings/#api
4. Add your Client ID and Client Secret to the Workflow Configuration node.
5. Add your Client ID and Client Secret to the SendPulse MCP Client node as the headers X-SP-ID and X-SP-SECRET in Multiple Headers Auth.
6. In the Workflow Configuration node, change the names of the mailing lists and the senderName, senderEmail, smsSender, routeCountryCode, and routeType fields as needed.
Create a tokens table with the columns: hash (string), accessToken (string), tokenExpiry (string) in the Data tables section of your n8n platform account.
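The token-caching logic against that Data Table can be sketched as a lookup that only returns a cached token while it is still valid. The 60-second safety margin is an illustrative assumption, not part of the template.

```javascript
// Return the cached access token from a Data Table row, or null if the row is
// missing or the token is (nearly) expired — in which case a fresh token should
// be requested from SendPulse and written back to the table.
function getCachedToken(row, now = Date.now()) {
  if (!row || !row.accessToken) return null;
  const expiry = Date.parse(row.tokenExpiry); // tokenExpiry column is a string
  // Treat tokens as expired 60s early as a safety margin (assumption).
  if (Number.isNaN(expiry) || expiry - 60000 <= now) return null;
  return row.accessToken;
}
```

This is what lets the workflow skip the OAuth request on most runs and only hit the SendPulse token endpoint when the cache misses.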
by Davide
This workflow integrates a Retrieval-Augmented Generation (RAG) system with a post-sales AI agent for WooCommerce. It combines vector-based search (Qdrant + OpenAI embeddings) with LLMs (Google Gemini and GPT-4o-mini) to provide accurate and contextual responses. Both systems are connected to VAPI webhooks, making the workflow usable in a voice AI assistant via Twilio phone numbers.

The workflow receives JSON payloads from VAPI via webhooks, processes the request through the appropriate chain (Agent or RAG), and sends a structured response back to VAPI to be read out to the user.

**Advantages**
- ✅ **Unified AI Support System:** Combines knowledge retrieval (RAG) with transactional support (WooCommerce).
- ✅ **Data Privacy & Security:** Enforces strict email/order verification before sharing information.
- ✅ **Multi-Model Power:** Leverages both Google Gemini and OpenAI GPT-4o-mini for optimal responses.
- ✅ **Scalable Knowledge Base:** The Qdrant vector database ensures fast and accurate context retrieval.
- ✅ **Customer Satisfaction:** Provides real-time answers about orders, tracking, and store policies.
- ✅ **Flexible Integration:** Easily connects with VAPI for voice assistants and phone-based customer support.
- ✅ **Reusable Components:** The RAG part can be extended for FAQs, while the post-sales agent can scale with more WooCommerce tools.

**How it Works**
The workflow has two main components:

1. **RAG System (Knowledge Retrieval & Q&A)**
- Uses OpenAI embeddings to store documents in Qdrant.
- Retrieves relevant context with a Vector Store Retriever.
- Sends the information to a Question & Answer Chain powered by Google Gemini.
- Returns precise, context-based answers to user queries via webhook.

2. **Post-Sales Customer Support Agent**
Acts as a WooCommerce virtual assistant to:
- Retrieve customer orders (get_order, get_orders).
- Get user profiles (get_user).
- Provide shipment tracking (get_tracking) using the YITH WooCommerce Order Tracking plugin.
- Enforce strict verification rules: the customer's email must match the order before details are disclosed.
- Communicate professionally, providing clear and secure customer support.
- Integrate with GPT-4o-mini for a natural conversation flow.

**Set Up Steps**
To implement this workflow, follow these three main steps:

1. **Infrastructure & Credentials Setup in n8n**
Ensure all required nodes have their credentials configured:
- OpenAI API Key: for the GPT-4o-mini and Embeddings OpenAI nodes.
- Google Gemini API Key: for the Google Gemini Chat Model node.
- Qdrant Connection Details: for the Qdrant Vector Store1 node (points to a Hetzner server).
- WooCommerce API Keys: for the get_order, get_orders, and get_user nodes (for magnanigioielli.com).
- WordPress HTTP Auth Credentials: for the Get tracking node in the sub-workflow.
- **Pre-populate the Vector Database:** The RAG system requires a pre-filled Qdrant collection with your store's knowledge base (e.g., policy documents, product info). "Sticky Note2" provides a link to a guide on building this RAG system.

2. **Workflow Activation in n8n**
- Save this JSON workflow in your n8n instance.
- **Activate the workflow.** This is crucial, as n8n only listens for webhook triggers when the workflow is active.
- Note the unique public webhook URLs generated for the Webhook (post-sales agent) and rag (RAG system) nodes. You will need these URLs for the next step.

3. **VAPI Configuration**
- **Create two API tools in VAPI:**
  - Tool 1 (Post-Sales): Create an "API Request" tool. Connect it to the n8n Webhook URL. Configure the request body to send the parameters email and n_order based on the conversation with the user.
  - Tool 2 (RAG): Create another "API Request" tool. Connect it to the n8n rag webhook URL. Configure the request body to send a search parameter containing the user's query.
- **Build the Assistant:** Create a new assistant in VAPI. Write a system prompt that instructs the AI on when to use each of the two tools you created. In the "Tools" tab, add both tools.
- **Go Live:** Add a phone number (e.g., from Twilio) to your VAPI assistant and set it to "Inbound" to receive customer calls.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
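The strict email/order verification rule can be sketched as a single guard that runs before any order details are returned to VAPI. The `billingEmail` field name is an assumption; the actual field depends on the WooCommerce node's output.

```javascript
// Only disclose order details when the caller's email matches the order's
// billing email (case-insensitive, whitespace-tolerant). Field name is a
// placeholder for whatever the WooCommerce get_order node returns.
function canDiscloseOrder(order, callerEmail) {
  if (!order || !order.billingEmail || !callerEmail) return false;
  return (
    order.billingEmail.trim().toLowerCase() ===
    callerEmail.trim().toLowerCase()
  );
}
```

If the check fails, the agent should ask the caller to re-confirm the email and order number rather than reveal anything.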
by Shun Nakayama
**Instagram Hashtag Generator Workflow**
This workflow automatically generates optimal hashtags for your Instagram posts by analyzing captions and fetching real-time engagement data.

**Key Features**
- **100% Official API & Free:** Uses ONLY the official Instagram Graph API. No expensive third-party tools or risky scraping methods are required.
- **Safe & Reliable:** Relying on the official API ensures compliance and long-term stability.
- **Smart Caching:** Includes a Google Sheets caching mechanism to maximize the value of the official API's rate limits (30 searches per 7 days).

**Workflow Overview**
1. **Caption Input:** Set your caption manually or via a workflow trigger.
2. **AI Suggestions:** GPT-4o-mini analyzes the caption and suggests 10 relevant hashtags, balancing popular ("big word") and niche keywords.
3. **Official API Search (Instagram Graph API):** Fetches hashtag IDs using the ig_hashtag_search endpoint, then retrieves engagement metrics (average likes, average comments) using the ID.
4. **Selection & Sorting:** Sorts candidates by engagement metrics and selects the top 5 most effective hashtags that balance relevance and engagement.
5. **Output:** Returns the final list of hashtags as text.

**Setup Steps**
1. **Import to n8n:** Copy the content of workflow_hashtag_generator.json and paste it into your n8n canvas, or import the file directly.
2. **Credentials:**
   - OpenAI account: connect your OpenAI credentials.
   - Facebook Graph account: connect your Facebook Graph API credentials.
3. **Configuration:**
   - Instagram Business ID: update the YOUR_INSTAGRAM_BUSINESS_ACCOUNT_ID placeholder in the Get Hashtag Info and Get Hashtag Metrics nodes with your actual Business Account ID.
   - Google Spreadsheet ID: update the YOUR_SPREADSHEET_ID placeholder in the Fetch Cached Hashtags and Save to Cache nodes.
4. **Adjustments:** You can adjust the sorting or filtering logic in the Aggregate & Rank Candidates node's JavaScript code (e.g., exclude tags with fewer than 1000 posts) if needed.
**Important Notes on API Limits**
The official Instagram Hashtag Search API (ig_hashtag_search) allows 30 unique hashtag queries per rolling 7-day period.
- **Why this is fine:** This workflow caches results in Google Sheets. Once a tag is fetched, it doesn't need to be queried again for a while, allowing you to build up a large database of tags over time without hitting the limit.
- **Recommendation:** Use mock data during initial testing to save your API quota.
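The Selection & Sorting step (and the suggested "fewer than 1000 posts" filter) can be sketched as follows, in the spirit of the Aggregate & Rank Candidates node's JavaScript. The field names `mediaCount`, `avgLikes`, and `avgComments` are illustrative assumptions about the aggregated candidate shape.

```javascript
// Rank candidate hashtags by combined engagement and keep the top n.
// minPosts mirrors the suggested filter for excluding tiny tags.
function topHashtags(candidates, n = 5, minPosts = 1000) {
  return candidates
    .filter((c) => c.mediaCount >= minPosts)
    .sort(
      (a, b) =>
        b.avgLikes + b.avgComments - (a.avgLikes + a.avgComments) // descending
    )
    .slice(0, n)
    .map((c) => "#" + c.name);
}
```

Weighting likes and comments equally is a simplification; comments could be weighted higher if they matter more to your engagement goals.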
by Anna Bui
**🎥 AI Content Generator: Transcript to Video & Image**
Transform meeting transcripts into engaging multi-format content with AI-powered automation. Perfect for educators, consultants, and content creators who record sessions and want to repurpose them into social media posts, videos, and images without manual work.

**How it works**
1. The chat interface triggers the AI orchestrator when you request content creation.
2. Fetches your most recent meeting transcript from Fathom.
3. AI analyzes the transcript and extracts key insights and breakthrough moments.
4. Generates written post content and creates a Google Doc automatically.
5. Creates detailed video generation prompts and sends them to a video API (Luma/Runway).
6. Generates image prompts and creates social media graphics via DALL-E.
7. Returns all assets: written content, video URL, and image file, ready to use.

**How to use**
1. Connect your Fathom account to retrieve meeting transcripts.
2. Set up the three required subworkflows: Text to Video, Text to Image, and Transcript to Content.
3. Configure your OpenAI credentials for AI processing.
4. Simply chat: "Create content from my latest session - video and image".
5. Review and customize the generated content as needed.

**Requirements**
- Fathom account with recorded meetings or sessions
- OpenAI API account (GPT-4 recommended for best results)
- Google Docs access for content storage
- Video generation API (Luma AI or Runway ML) for video creation
- Three subworkflows must be created separately (see setup notes)

**Good to know**
- Video generation typically costs $0.50–$2.00 per video depending on your provider.
- The workflow automatically processes the most recent 7 days of Fathom transcripts.
- AI agents use roughly 5,000–10,000 tokens per complete content generation.
- Subworkflows need to be set up once before using this main workflow.
- Videos take 2–5 minutes to generate after the prompt is created.

Need help? Join the Discord or ask in the Forum! Happy creating! 🚀
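The "most recent 7 days of Fathom transcripts" selection can be sketched as a small filter-and-sort, in the style of an n8n Code node. The `recordedAt` field name is an illustrative assumption about the Fathom API's response shape.

```javascript
// Keep transcripts recorded within the last `days` days, newest first, so the
// first element is the "most recent session" the chat request refers to.
function recentTranscripts(transcripts, now = Date.now(), days = 7) {
  const cutoff = now - days * 24 * 60 * 60 * 1000;
  return transcripts
    .filter((t) => Date.parse(t.recordedAt) >= cutoff)
    .sort((a, b) => Date.parse(b.recordedAt) - Date.parse(a.recordedAt));
}
```

The orchestrator would then hand `recentTranscripts(...)[0]` to the Transcript to Content subworkflow.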