by Ranjan Dailata
**Disclaimer:** This workflow is only available on self-hosted n8n because it uses the Decodo Web Scraping community node.

This workflow automates intelligent keyword and topic extraction from Google Search results, combining Decodo's advanced scraping engine with the semantic analysis capabilities of OpenAI GPT-4.1-mini. The result is a fully automated keyword enrichment pipeline that gathers, analyzes, and stores SEO-relevant insights.

**Who this is for**
This workflow is ideal for:
- **SEO professionals** who want to extract high-value keywords from competitors.
- **Digital marketers** aiming to automate topic discovery and keyword clustering.
- **Content strategists** building data-driven content calendars.
- **AI automation engineers** designing scalable web intelligence and enrichment pipelines.
- **Growth teams** performing market and search intent research with minimal effort.

**What problem this workflow solves**
Manual keyword research is time-consuming and often incomplete. Traditional keyword tools only provide surface-level data and fail to uncover contextual topics or semantic relationships hidden in search results. This workflow solves that by:
- Automatically scraping live Google Search results for any keyword.
- Extracting meaningful topics, related terms, and entities using AI.
- Enriching your keyword list with semantic intelligence to improve SEO and content planning.
- Storing structured results directly in n8n Data Tables for trend tracking or export.

**What this workflow does**
Here's a breakdown of the flow:
1. Set the Input Fields – Define your search query and target geo (e.g., “Pizza” in “India”).
2. Decodo Google Search – Fetches organic search results using Decodo's web scraping API.
3. Return Organic Results – Extracts the list of organic results and passes them downstream.
4. Loop Over Each Result – Iterates through every search result description.
5. Extract Keywords and Topics – Uses OpenAI GPT-4.1-mini to identify relevant keywords, entities, and thematic topics from each snippet.
6. Data Enrichment Logic – Checks whether each result already exists in the n8n Data Table (based on URL); a sketch of this check appears at the end of this section.
7. Insert or Skip – If a record doesn't exist, inserts the extracted data into the table.
8. Store Results – Saves both the enriched search data and Decodo's original response to disk.

End result: a structured and deduplicated dataset containing URLs, keywords, and key topics — ready for SEO tracking or further analytics.

**Setup**
Prerequisite: If you are new to Decodo, please sign up at visit.decodo.com, and make sure to install the n8n community node for Decodo.
1. Import and configure the workflow – Open n8n and import the JSON template, then add your credentials: the Decodo API key under the Decodo credentials account and the OpenAI API key under the OpenAI account.
2. Define input parameters – Modify the Set node to define search_query (your keyword or topic, e.g., “AI tools for marketing”) and geo (the target region, e.g., “United States”).
3. Configure output – The workflow writes two outputs: enriched keyword data stored in the n8n Data Table (DecodoGoogleSearchResults), and the raw Decodo response saved locally in JSON format.
4. Execute – Click Execute Workflow, or schedule it for recurring keyword enrichment (e.g., weekly trend tracking).

**How to customize this workflow**
- **Change AI model** — Replace gpt-4.1-mini with gemini-1.5-pro or claude-3-opus to test different reasoning strengths.
- **Expand the schema** — Add extra fields like keyword difficulty, page type, or author info.
- **Add sentiment analysis** — Chain a second AI node to assess tone (positive, neutral, or promotional).
- **Export to Sheets or a DB** — Replace the Data Table node with Google Sheets, Notion, Airtable, or MySQL connectors.
- **Multi-language research** — Pass a locale parameter in the Decodo node to gather insights in specific languages.
- **Automate alerts** — Add a Slack or Email node to notify your team when high-value topics appear.

**Summary**
Search & Enrich is a low-code, AI-powered keyword intelligence engine that automates research and enrichment for SEO, content, and digital marketing. By combining Decodo's real-time SERP scraping with OpenAI's contextual understanding, the workflow transforms raw search results into structured, actionable keyword insights. It eliminates repetitive research work, enhances content strategy, and keeps your keyword database continuously enriched — all within n8n.
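The deduplication logic in step 6 comes down to comparing each result's URL against the rows already stored in the Data Table. Below is a minimal sketch of how that merge could look in an n8n Code node; the node names ("Get Existing Rows", "Extract Keywords and Topics") and the keywords/topics field layout are assumptions for illustration, not the template's exact implementation (the template performs the check with a Data Table lookup node).

```javascript
// Hypothetical n8n Code node ("Run Once for All Items"):
// keep only results whose URL is not yet in the Data Table.
const existingUrls = new Set(
  $('Get Existing Rows').all().map(item => item.json.url)
);

const newRows = [];
for (const item of $('Extract Keywords and Topics').all()) {
  const { url, keywords = [], topics = [] } = item.json;
  if (!url || existingUrls.has(url)) continue; // skip duplicates and empty URLs

  newRows.push({
    json: {
      url,
      keywords: keywords.join(', '),   // flatten arrays for tabular storage
      topics: topics.join(', '),
      analyzedAt: new Date().toISOString(),
    },
  });
}

return newRows; // the downstream node inserts these rows into the Data Table
```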
by Neloy Barman
**Self-Hosted**

This workflow provides a complete end-to-end system for automatically managing your inbox by reading incoming questions, matching them to approved guidelines, and sending consistent, 24/7 replies. By combining local AI processing with an automated retrieval-augmented generation (RAG) pipeline, it ensures fast resolution times without compromising data privacy or incurring ongoing AI API costs.

**Who is this for?**
This is designed for University Admissions, Student Support Teams, Customer Service Staff, or professionals in any industry who are overwhelmed by their inboxes and spend countless hours answering repetitive questions. It is particularly useful for any organization looking to automate routine FAQs across various fields, maintaining personalized, human-like, threaded email conversations while keeping data completely in-house.

**🛠️ Tech Stack**
- **n8n**: Workflow orchestration of both the ingestion pipeline and the response automation.
- **Docker & Docker Compose**: Containerizing and orchestrating the n8n and Qdrant services locally.
- **Google Drive**: Hosts the approved FAQ knowledge base and triggers updates from it.
- **Gmail**: Real-time incoming email triggers and threaded outbound replies.
- **Qdrant**: Self-hosted vector database storage and similarity matching.
- **LM Studio**: Hosts the local AI models via an OpenAI-compatible API for two primary tasks:
  - Embedding generation: uses the mxbai-embed-large-v1 model to convert FAQ data and incoming questions into high-dimensional vectors for semantic matching.
  - Response generation: uses the llama-3.2-3b-instruct model to process the retrieved context and craft a polite, personalized HTML email reply.

**✨ How it works**
1. Knowledge Base Ingestion: The workflow automatically detects updates to a specific FAQ JSON file in Google Drive, converts the Q&A pairs into vector embeddings using the local mxbai model, and stores them in Qdrant.
2. Email Trigger: The resolution pipeline kicks off instantly when a new incoming email arrives via the Gmail trigger.
3. Semantic Search: The incoming question is converted to an embedding using the mxbai-embed-large-v1 model and checked against the Qdrant database to retrieve the top 3 most relevant FAQ answers, enforcing a minimum 0.7 similarity threshold for quality control (a minimal retrieval sketch appears at the end of this section).
4. LLM Response Generation: The OpenAI node (pointing to LM Studio) processes the retrieved context and the student's email using the llama-3.2-3b-instruct model to craft a polite, personalized HTML email response.
5. Threaded Reply: The Gmail node sends the generated response directly back into the original email thread, exactly like a human would.

**📋 Requirements**
- **Docker** and **Docker Compose** installed to run n8n and Qdrant locally.
- **LM Studio** running a local server on port 1234.
- **mxbai-embed-large-v1** (GGUF) and **llama-3.2-3b-instruct** (GGUF) models loaded in LM Studio.
- **Google Cloud Console** account with the Gmail and Google Drive APIs enabled.
- An FAQ JSON file properly formatted and hosted in Google Drive.

**🚀 How to set up**
1. Prepare your local AI: Open LM Studio and download both the embedding and LLM models. Start the Local Server on port 1234. Note your machine's local IP address (e.g., 192.168.1.50).
2. Spin up services: Clone the repository and configure the .env file with your QDRANT_COLLECTION name. Run docker compose up -d to start the n8n and Qdrant containers.
3. Import the workflow: Open n8n at http://localhost:5678 and import the provided JSON workflow file.
4. Link services: Update the Google Drive nodes with the File ID of your FAQ JSON document. Update the embedding and AI nodes with your local IP address in the Base URL.
5. Test and activate: Execute the ingestion pipeline manually to populate Qdrant. Toggle the workflow to Active. Send a test email to your connected Gmail address to verify the automated reply.

**🔑 Credential Setup**
To run this workflow, you must configure the following credentials in n8n:
- **Google (Gmail & Drive)**: Create new Gmail OAuth2 API and Google Drive OAuth2 API credentials. Enter your Client ID and Client Secret obtained from the Google Cloud Console (the same credentials can be used for both).
- **Qdrant API**: Create a new Qdrant API credential. REST URL: set this to http://host.docker.internal:6333. Leave the API key blank for the self-hosted Docker setup.
- **OpenAI API (Local)**: Create a new OpenAI API credential for connecting to LM Studio. API Key: enter any placeholder text (e.g., lm-studio). Base URL: set this to your machine's local IP address (e.g., http://<LM_STUDIO_IP>:1234/v1) so that n8n can reach the local AI server from within the Docker network.

**⚙️ How to customize**
- **Refine response tone**: Update the System Message in the AI node to change the personality, signature, or formatting rules of the generated email reply.
- **Switch to cloud AI**: If you prefer not to host models locally, swap out the local LM Studio connection for external APIs like **OpenAI (GPT-4o)**, **Anthropic (Claude)**, or **Cohere** for both embeddings and text generation.
- **Change embedding models**: While the workflow uses a local model by default, you can easily swap the embedding nodes to alternative models like **OpenAI (text-embedding-3-small)** or **Google Gemini (text-embedding-004)** if desired.
- **Adjust similarity threshold**: Modify the semantic search threshold (default 0.7) in the Qdrant node to be stricter or more lenient depending on your knowledge base accuracy.
- **Alternative triggers & channels**: Replace the Gmail nodes with **Outlook / Microsoft 365**, **Zendesk**, **Intercom**, or **Slack** to resolve queries across different communication platforms.
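For reference, here is a minimal standalone sketch of the retrieval step described above: embedding the incoming question via LM Studio's OpenAI-compatible endpoint and querying Qdrant with the 0.7 score threshold. The endpoint paths follow the standard LM Studio and Qdrant REST APIs, but the IP address, collection name, and payload fields are assumptions to adapt to your setup; the workflow itself performs these calls through n8n nodes.

```javascript
// Minimal sketch (Node.js 18+, global fetch): embed a question with LM Studio,
// then retrieve the top 3 FAQ matches from Qdrant with a 0.7 similarity threshold.
const LM_STUDIO_URL = 'http://192.168.1.50:1234/v1'; // assumption: your LM Studio host
const QDRANT_URL = 'http://localhost:6333';          // assumption: local Qdrant
const COLLECTION = 'faq_collection';                  // assumption: your QDRANT_COLLECTION

async function retrieveFaqContext(question) {
  // 1. Embedding via LM Studio's OpenAI-compatible API.
  const embRes = await fetch(`${LM_STUDIO_URL}/embeddings`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'mxbai-embed-large-v1', input: question }),
  });
  const vector = (await embRes.json()).data[0].embedding;

  // 2. Similarity search in Qdrant (top 3, minimum score 0.7).
  const searchRes = await fetch(`${QDRANT_URL}/collections/${COLLECTION}/points/search`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ vector, limit: 3, score_threshold: 0.7, with_payload: true }),
  });
  const { result } = await searchRes.json();

  // Payload fields (question/answer) depend on how the ingestion pipeline stored them.
  return result.map(hit => ({ score: hit.score, ...hit.payload }));
}

retrieveFaqContext('What are the admission deadlines?').then(console.log);
```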
by Anas Chahid Ksabi
**How it works**
- Triggered via a Jira webhook when a HIGH priority issue is created in your project.
- Simultaneously sends alerts to Slack and Google Chat, plus an escalation email to the configured manager(s).

**Set up steps**
- Configure the ON JIRA ISSUE CREATED node with your Jira project key.
- Fill in the CONFIGURATION node: JIRA_DOMAIN, MANAGER_EMAILS.
- Connect credentials: Jira API, Gmail (OAuth2), Slack (OAuth2), Google Chat (Service Account).
- Watch the tutorial to configure the Google Chat node properly.
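As a reference for the trigger condition, a filter like the one below (for example in an n8n IF or Code node) can gate the rest of the workflow on the issue's priority. The payload path follows Jira's standard webhook format, but verify it against your own webhook body; treat this as an illustrative sketch rather than the template's exact node.

```javascript
// Hypothetical n8n Code node ("Run Once for All Items"): keep only HIGH priority issues.
const escalations = [];

for (const item of $input.all()) {
  const issue = item.json.issue ?? {};
  const priority = issue.fields?.priority?.name ?? '';

  if (priority.toUpperCase() !== 'HIGH') continue; // no escalation needed

  escalations.push({
    json: {
      key: issue.key,
      summary: issue.fields?.summary,
      priority,
      reporter: issue.fields?.reporter?.displayName,
    },
  });
}

return escalations;
```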
by Rahul Joshi
**Description**
Automate your AI-powered outreach and follow-up pipeline end-to-end with GPT-4o, Gmail, and Google Sheets. 🤖📬 This workflow personalizes emails for each lead, manages follow-ups automatically, tracks client replies, and updates CRM records in real time — all from a single Google Sheet. Ideal for sales and growth teams looking to convert leads faster without manual effort. ⚙️🚀

**What This Template Does**
1️⃣ Starts manually when you click “Execute workflow.” 🕹️
2️⃣ Fetches all leads from the Google Sheet (sample_leads_50). 📊
3️⃣ Validates email format and filters only active (unbooked) leads. 🔍
4️⃣ Uses Azure OpenAI GPT-4o to generate short, personalized outreach emails in HTML. ✉️
5️⃣ Cleans and parses the AI output (subject + HTML body); a parsing sketch appears at the end of this section. 🧠
6️⃣ Sends the first outreach email via Gmail and stores its thread ID. 📤
7️⃣ Waits 24 hours, then checks for a client reply in the Gmail thread. ⏱️
8️⃣ If a positive reply is found → marks the lead as BOOKED and updates it in Sheets. ✅
9️⃣ If no reply → triggers a polite follow-up email, waits another 24 hours, and checks the thread a second time. 🔁
🔟 If a second reply is found → marks BOOKED and logs the client message.
1️⃣1️⃣ If still no response → updates the status to Declined in Google Sheets. ❌
1️⃣2️⃣ Logs invalid or incomplete leads to a separate sheet for data cleanup. 🧾

**Key Benefits**
✅ Eliminates manual outreach and follow-up effort.
✅ Produces personalized, context-aware AI emails for every lead.
✅ Auto-tracks replies and updates CRM status with zero input.
✅ Prevents duplicate or repeated contact with booked clients.
✅ Keeps the lead database synchronized and audit-ready.

**Features**
- Google Sheets integration for dynamic lead retrieval and updates.
- Regex-based email validation for clean data pipelines.
- Azure OpenAI GPT-4o for contextual email writing.
- Two-stage Gmail automation (initial + follow-up).
- JavaScript parsing for AI output and Gmail thread analysis.
- Automated 24-hour wait and recheck logic.
- Conditional branches for Booked / Declined / Invalid outcomes.
- End-to-end CRM synchronization without manual review.

**Requirements**
- Google Sheets OAuth2 credentials with read/write access.
- Azure OpenAI API key for GPT-4o model access.
- Gmail OAuth2 credentials with send, read, and modify permissions.

**Environment Variables**
- GOOGLE_SHEET_LEADS_ID
- GOOGLE_SHEET_OUTREACH_TAB_ID
- AZURE_OPENAI_API_KEY
- GMAIL_OAUTH_CLIENT_ID
- GMAIL_OAUTH_SECRET

**Target Audience**
💼 Sales and Business Development teams automating outreach.
📈 Marketing and Growth teams running re-engagement campaigns.
🤖 Automation and RevOps teams integrating AI lead workflows.
💬 Freelancers and agencies managing large prospect lists.
📊 Operations teams maintaining CRM cleanliness and tracking.

**Step-by-Step Setup Instructions**
1️⃣ Connect your Google Sheets, Azure OpenAI, and Gmail credentials.
2️⃣ Set your Google Sheet ID and tab name (outreach automation).
3️⃣ Update the GPT-4o system prompt to match your tone and signature.
4️⃣ Verify column headers (Company Name, Email, Booking Status, etc.).
5️⃣ Test the email validation branch with sample data.
6️⃣ Run once manually to confirm Gmail thread creation and reply detection.
7️⃣ Confirm successful CRM updates in Google Sheets.
8️⃣ Activate for continuous lead outreach and follow-up automation. ✅
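Two of the steps above (email validation and parsing the AI output into a subject plus HTML body) are typically handled in n8n Code nodes. The sketch below illustrates one way they could look; the field names (Email, aiOutput, subject, body) and the assumption that the model returns JSON are illustrative, not the template's exact implementation.

```javascript
// Hypothetical n8n Code node ("Run Once for All Items"):
// validate lead emails and parse the GPT-4o output.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

return $input.all().flatMap(item => {
  const lead = item.json;

  // 1. Basic email format validation (invalid leads go to the cleanup sheet).
  if (!EMAIL_RE.test(lead.Email ?? '')) {
    return [{ json: { ...lead, valid: false, reason: 'invalid_email' } }];
  }

  // 2. Parse the AI output, assuming the model was prompted to return JSON
  //    like {"subject": "...", "body": "<html>...</html>"}.
  let subject = '';
  let body = '';
  try {
    const parsed = JSON.parse(lead.aiOutput ?? '{}');
    subject = parsed.subject ?? '';
    body = parsed.body ?? '';
  } catch (err) {
    return [{ json: { ...lead, valid: false, reason: 'unparseable_ai_output' } }];
  }

  return [{ json: { ...lead, valid: true, subject, body } }];
});
```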
by franck fambou
⚠️ IMPORTANT: This template requires a self-hosted n8n instance because it uses community nodes (MCP tools). It will not work on n8n Cloud. Make sure you have access to a self-hosted n8n instance before using this template.

**Overview**
This workflow lets a Google Gemini-powered AI Agent orchestrate multi-source web intelligence using MCP (Model Context Protocol) tools such as Firecrawl, Brave Search, and Apify. Users interact with the agent in natural language; the agent then leverages the external data collection tools, processes the results, and automatically organizes them into structured spreadsheets. With built-in memory, flexible tool execution, and conversational capabilities, this workflow acts as a multi-agent research assistant, capable of retrieving, synthesizing, and delivering actionable insights in real time.

**How the system works: AI Agent + MCP pipeline**
1. User interaction – A chat message is received and forwarded to the AI Agent.
2. AI orchestration – The agent, powered by Google Gemini, decides which MCP tools to invoke based on the query:
   - Firecrawl-MCP: recursive web crawling and content extraction.
   - Brave-MCP: real-time web search with structured results.
   - Apify-MCP: automation of web scraping tasks with scalable execution.
3. Memory management – A memory module stores context across conversations, ensuring multi-turn reasoning and task continuity.
4. Spreadsheet automation – Results are structured in a new, automatically created Google Spreadsheet, enriched with formatting and additional metadata.
5. Data processing – The workflow generates the spreadsheet content, updates the sheet, and improves results via HTTP requests and field edits.
6. Delivery of results – Users receive a structured and contextualized dataset ready for review, analysis, or integration into other systems.
**Configuration instructions**
Estimated setup time: 45 minutes

Prerequisites:
- Self-hosted n8n instance (v0.200.0 or higher recommended)
- Google Gemini API key
- MCP-compatible nodes (Firecrawl, Brave, Apify) configured
- Google Sheets credentials for spreadsheet automation

**Detailed configuration steps**

Step 1: Configuring the AI Agent
- **AI Agent node**: Select Google Gemini as the LLM model. Configure your Google Gemini API key in the n8n credentials. Set the system prompt to guide the agent's behavior. Connect the Simple Memory node to enable context tracking.

Step 2: Integrating MCP tools
- **Firecrawl-MCP configuration**: Install the @n8n/n8n-nodes-firecrawl-mcp package, configure your Firecrawl API key, and set crawling parameters (depth, CSS selectors).
- **Brave-MCP configuration**: Install the @n8n/n8n-nodes-brave-mcp package, add your Brave Search API key, and configure search filters (region, language, SafeSearch).
- **Apify-MCP configuration**: Install the @n8n/n8n-nodes-apify-mcp package, configure your Apify credentials, and select the appropriate actors for your use cases.

Step 3: Spreadsheet automation
- **“Create Spreadsheet” node**: Configure Google Sheets authentication (OAuth2 or Service Account), set the file name with dynamic timestamps, and specify the destination folder in Google Drive.
- **“Generate Spreadsheet Content” node**: Transform the agent's outputs into tabular format, define the columns (URL, Title, Description, Source, Timestamp), and configure data formatting (dates, links, metadata). A row-mapping sketch appears at the end of this section.
- **“Update Spreadsheet” node**: Insert the data into the created sheet, apply automatic formatting (headers, colors, column widths), and add summary formulas if necessary.

Step 4: Post-processing and delivery
- **“Data Enrichment Request” node** (formerly “HTTP Request1”): Configure optional API calls to enrich the data, add additional metadata (geolocation, sentiment, categorization), and manage errors and timeouts.
- **“Edit Fields” node**: Refine the final dataset (metadata, tags, filters), clean and normalize the data, and prepare the final response for the user.

**Structure of generated Google Sheets**

Default columns:

| Column | Description | Type |
|---------|-------------|------|
| URL | Data source URL | Hyperlink |
| Title | Page/resource title | Text |
| Description | Description or content excerpt | Long text |
| Source | MCP tool used (Brave/Firecrawl/Apify) | Text |
| Timestamp | Date/time of collection | Date/Time |
| Metadata | Additional data (JSON) | Text |

Automatic formatting:
- **Headings**: Bold font, colored background
- **URLs**: Formatted as clickable links
- **Dates**: Standardized ISO 8601 format
- **Columns**: Width automatically adjusted to content

**Use cases**
- Business and enterprise: competitive analysis combining search, crawling, and structured scraping; market trend research with multi-source aggregation; automated reporting pipelines for business intelligence.
- Research and academia: literature discovery across multiple sources; data collection for research projects; automated bibliographic extraction from online sources.
- Engineering and development: discovery of APIs and documentation; aggregation of product information from multiple platforms; scalable structured scraping for datasets.
- Personal productivity: automated creation of newsletters or knowledge hubs; a personal research assistant compiling spreadsheets from various online data.

**Key features**
- Multi-source intelligence: Firecrawl for deep crawling, Brave for real-time search, Apify for structured web scraping.
- AI-driven orchestration: Google Gemini for reasoning and tool selection, memory for multi-turn interactions, context-based adaptive workflows.
- Structured data output: automatic spreadsheet creation, data enrichment and formatting, ready-to-use datasets for reporting.
- Performance and scalability: handles multiple simultaneous tool calls, scalable web data extraction, real-time aggregation from multiple MCPs.
- Security and privacy: secure authentication based on API keys, data managed in Google Sheets / n8n, configurable retention and deletion policies.

**Technical architecture**
Workflow: User query → AI agent (Gemini) → MCP tools (Firecrawl / Brave / Apify) → Aggregated results → Spreadsheet creation → Data processing → Results delivery

Supported data types:
- **Text and metadata** from crawled web pages
- **Search results** from Brave queries
- **Structured data** from Apify scrapers
- **Tabular reports** via Google Sheets

Integration options:
- Chat interfaces: web widget for conversational queries, Slack/Teams chatbot integration, REST API access points.
- Data sources: websites (via Firecrawl/Apify), search engines (via Brave), APIs (via HTTP Request enrichment).

**Performance specifications**
- Query response: < 5 seconds (search tasks)
- Crawl capacity: thousands of pages per run
- Spreadsheet automation: real-time creation and updates
- Accuracy: > 90% when using combined sources

**Advanced configuration options**
- Customization: set custom prompts for the AI Agent, adjust the spreadsheet schema for reporting needs, configure retries for failed tool runs.
- Analytics and monitoring: track tool usage and costs, monitor crawl and search success rates, log queries and outputs for auditing.

**Troubleshooting and support**
- **Timeouts:** Manually re-run failed MCP executions.
- **Data gaps:** Validate Firecrawl/Apify selectors.
- **Spreadsheet errors:** Check Google Sheets API quotas.
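To illustrate the “Generate Spreadsheet Content” step, here is a minimal Code-node style sketch that flattens tool results into rows matching the default columns above. The shape of the incoming items (url, title, description, source, extra) is an assumption; adapt the field mapping to whatever your MCP tools actually return.

```javascript
// Hypothetical n8n Code node ("Run Once for All Items"): map aggregated MCP results
// to spreadsheet rows with the columns URL, Title, Description, Source, Timestamp, Metadata.
const rows = $input.all().map(item => {
  const r = item.json;
  return {
    json: {
      URL: r.url ?? '',
      Title: r.title ?? '',
      Description: (r.description ?? '').slice(0, 500),  // keep cells readable
      Source: r.source ?? 'unknown',                      // Brave / Firecrawl / Apify
      Timestamp: new Date().toISOString(),                // ISO 8601, as described above
      Metadata: JSON.stringify(r.extra ?? {}),            // extra data stored as JSON text
    },
  };
});

return rows; // the "Update Spreadsheet" node appends these rows to the created sheet
```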
by WeblineIndia
**Facebook Mention Sentiment Tracker with Gemini, Supabase, Telegram & Slack**

This workflow automatically tracks Facebook Page mentions, analyzes sentiment using AI (Gemini), stores the data in Supabase, and sends alerts via Telegram and Slack. Positive mentions are shared on Telegram, while critical or negative mentions trigger Slack alerts for immediate attention. It also handles storage failures by notifying via Telegram.

**Quick Start (Implement in Minutes)**
1. Log in to your n8n account.
2. Connect your Facebook Page to the trigger node.
3. Configure Gemini API credentials for sentiment analysis.
4. Set up Supabase and create a mentions table.
5. Add Telegram bot credentials for notifications.
6. Connect Slack and choose a channel for alerts.
7. Test with sample data and activate the workflow.

**What It Does**
This workflow listens for new Facebook Page mentions in real time. When a mention is received, it sends the message to an AI model (Gemini) to analyze sentiment (positive, neutral or negative) and extract the main topic. The response is cleaned and structured into a consistent JSON format for further processing.

After processing, the workflow stores the mention data in Supabase for tracking and analytics. It then evaluates the sentiment and content of the message to determine the next action. Positive mentions are shared with your team via Telegram to highlight customer satisfaction. For risk management, the workflow detects critical keywords (like refund, bug, slow, cancel) and also checks whether the sentiment is negative. If either condition is met, a Slack alert is triggered so your team can take immediate action. Additionally, if storing data in Supabase fails, a Telegram alert is sent to flag the issue.

**Who It's For**
- Social media managers monitoring brand mentions
- Customer support teams handling complaints
- Product teams tracking user feedback
- Startups and agencies managing multiple clients
- Businesses wanting real-time alerting on customer sentiment

**Requirements to Use This Workflow**
- n8n account (self-hosted or cloud)
- Facebook Developer App with Page access
- Google Gemini API credentials
- Supabase account and project
- Telegram Bot Token & Chat ID(s)
- Slack workspace with API access
- Basic understanding of n8n nodes and credentials

**How It Works**
1. The workflow starts when a new Facebook mention is detected.
2. The message is sent to Gemini AI for sentiment and topic analysis.
3. The AI response is cleaned and normalized into structured JSON (a cleaning sketch appears at the end of this section).
4. The processed data is stored in Supabase.
5. If sentiment is positive → a Telegram notification is sent.
6. Keywords are checked to detect critical intent.
7. If sentiment is negative OR keywords match → a Slack alert is triggered.
8. If Supabase storage fails → a Telegram error alert is sent.

**Setup Instructions**
- Facebook Trigger: Configure the Facebook App ID, enable the webhook for Page mentions, and ensure the required permissions are granted.
- Gemini Node: Add Google Gemini API credentials. Use any model (default: gemini-2.5-flash). Ensure the output is structured JSON.
- Code Node (Clean AI Response): Keep the existing logic to extract and normalize the AI response.
- Supabase Node: Create a mentions table with the fields message, sentiment, topic, user_name, created_time, confidence, then connect using your Supabase credentials.
- Telegram Nodes: Add the bot token. Use different chat IDs for positive mentions and error alerts (recommended).
- Slack Node: Connect your Slack workspace and select a channel for critical alerts.
- IF Nodes: Configure the positive sentiment check and the critical/negative condition routing.
- Test Workflow: Use mock or pinned data, verify the Telegram and Slack alerts, then activate the workflow.

**How To Customize Nodes**
- **Gemini Node**: Change the model (e.g., advanced models for better accuracy) or modify the prompt to extract more fields (intent, urgency, etc.).
- **Keyword Detection**: Update the keywords array: ["refund", "cancel", "bug", "slow"]. Add business-specific terms.
- **Slack Messages**: Customize the alert format; add priority levels or tagging.
- **Telegram Messages**: Adjust formatting (Markdown, emojis, structure).
- **Supabase**: Add more columns if needed (e.g., region, category).

**Add-ons (Extend This Workflow)**
- Add a sentiment trend dashboard (via Supabase + BI tools)
- Auto-create support tickets (Zendesk, Freshdesk)
- Email alerts for high-priority issues
- Auto-reply system for Facebook comments
- Weekly analytics report via email or Slack
- AI-based response suggestions for the support team

**Use Case Examples**
- Detect and respond instantly to refund complaints
- Monitor brand reputation across social media
- Highlight positive feedback for marketing teams
- Escalate urgent issues to internal teams via Slack
- Build a centralized database of customer feedback

There can be many more use cases depending on your business needs.

**Troubleshooting Guide**

| Issue | Possible Cause | Solution |
| ------------------------- | ---------------------------------- | -------------------------------- |
| No Facebook data | Webhook not configured | Check Facebook App & permissions |
| AI not returning data | Incorrect prompt or API issue | Verify Gemini configuration |
| Supabase insert fails | Wrong credentials or schema mismatch | Check table structure & API key |
| Telegram not sending | Wrong bot token or chat ID | Verify credentials |
| Slack alert not triggered | Condition mismatch | Check IF node logic |
| Keywords not detected | Case mismatch or logic error | Ensure .toLowerCase() is used |

**Need Help?**
If you need help setting up this workflow or want to extend it with advanced features like automation, dashboards or integrations, our n8n workflow development team at WeblineIndia can assist you. We can help you:
- Customize this workflow for your business
- Integrate additional tools (CRM, Helpdesk, Analytics)
- Build scalable automation systems using n8n

Reach out to WeblineIndia to get started with tailored automation solutions.
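The “Clean AI Response” and keyword-detection steps above typically live in an n8n Code node. The sketch below shows one plausible version, assuming the Gemini node returns a JSON string with sentiment, topic, and confidence fields; the exact field names in the template may differ, so treat this as an illustration of the logic only.

```javascript
// Hypothetical n8n Code node ("Run Once for All Items"):
// normalize the Gemini output and flag critical mentions.
const CRITICAL_KEYWORDS = ['refund', 'cancel', 'bug', 'slow'];

return $input.all().map(item => {
  const raw = item.json.text ?? '{}'; // assumption: Gemini reply arrives as a JSON string

  // Extract the JSON object between the first '{' and the last '}' in case the
  // model wrapped it in code fences or extra prose, then parse it.
  let parsed = {};
  try {
    const cleaned = raw.replace(/^[^{]*/, '').replace(/[^}]*$/, '');
    parsed = JSON.parse(cleaned);
  } catch (err) {
    parsed = { sentiment: 'neutral', topic: 'unknown', confidence: 0 };
  }

  const message = (item.json.message ?? '').toLowerCase();
  const keywordHit = CRITICAL_KEYWORDS.some(k => message.includes(k));

  return {
    json: {
      ...item.json,
      sentiment: (parsed.sentiment ?? 'neutral').toLowerCase(),
      topic: parsed.topic ?? 'unknown',
      confidence: parsed.confidence ?? 0,
      critical: keywordHit || parsed.sentiment === 'negative', // routes to the Slack branch
    },
  };
});
```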
by Yusuke Yamamoto
This n8n template demonstrates a “Human-in-the-Loop” workflow where AI automatically drafts replies to inbound emails, which are then reviewed and approved by a human before being sent. This powerful pattern combines the efficiency of AI with the quality assurance of human oversight.

Use cases are many: streamline sales inquiry responses, manage first-level customer support, handle initial recruitment communications, or any business process that requires personalized yet consistent email replies.

**Good to know**
- At the time of writing, the cost per execution depends on your OpenAI API usage. This workflow uses a cost-effective model like gpt-4o-mini. See OpenAI Pricing for updated info.
- The AI’s knowledge base and persona are fully customizable within the Basic LLM Chain node’s prompt.

**How it works**
1. The Gmail Trigger node starts the workflow whenever a new email arrives in the specified inbox.
2. The Classify Potential Leads node uses AI to determine whether the incoming email is a potential lead. If not, the workflow stops.
3. The Basic LLM Chain, powered by an OpenAI Chat Model, generates a draft reply based on a detailed system prompt and your internal knowledge base.
4. A Structured Output Parser forces the AI’s output into a reliable JSON format ({"subject": "...", "body": "..."}), preventing errors in subsequent steps.
5. The Send for Review Gmail node sends the AI-generated draft to a human reviewer and pauses the workflow, waiting for a reply.
6. The IF node checks the reviewer’s reply for approval keywords (e.g., “approve”, “承認”); a keyword-check sketch appears at the end of this section.
7. If approved, the ✅ Send to Customer Gmail node sends the final email to the original customer.
8. If not approved, the reviewer’s feedback is treated as a revision request, and the workflow loops back to the Basic LLM Chain to generate a new draft incorporating the feedback.

**How to use**
- **Gmail Trigger** node: Configure with your own Gmail account credentials.
- **Send for Review** node: Replace the placeholder email reviewer@example.com with the actual reviewer's email address.
- **IF** node: You can customize the approval keywords to match your team’s vocabulary.
- **OpenAI nodes**: Ensure your OpenAI credentials are set up. You can select a different model if needed, but the prompt is optimized for models like GPT-4o mini.

**Requirements**
- An OpenAI account for the LLM.
- A Gmail account for receiving customer emails and for the review process.

**Customising this workflow**
By modifying the prompt and knowledge base in the Basic LLM Chain, you can adapt this agent for various departments, such as technical support, HR, or public relations. The approval channel is not limited to Gmail: you can easily replace the review nodes with Slack or Microsoft Teams nodes to fit your internal communication tools.
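The approval check in the IF node boils down to scanning the reviewer's reply for an approval keyword. A Code-node equivalent could look like the sketch below; the keyword list and the reply field name are assumptions to adapt to your own setup.

```javascript
// Hypothetical n8n Code node ("Run Once for Each Item"):
// decide whether the reviewer approved the draft.
const APPROVAL_KEYWORDS = ['approve', 'approved', '承認']; // customize to your team's vocabulary

const replyText = ($json.textPlain ?? $json.snippet ?? '').toLowerCase();
const approved = APPROVAL_KEYWORDS.some(k => replyText.includes(k.toLowerCase()));

return {
  json: {
    approved,
    // If not approved, the reviewer's reply doubles as revision feedback for the next draft.
    feedback: approved ? '' : replyText,
  },
};
```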
by Rahul Joshi
**Description**
Automate Jira backlog management with intelligent cleanup, prioritization, and AI-powered reporting. This workflow scans daily to identify stale issues, missing priorities, and overdue tasks — it auto-updates Jira with corrective labels, logs everything into Google Sheets for tracking, and notifies teams via Slack. Every Friday, it sends an AI-generated backlog summary email to project leads for visibility and planning. 🚀📅

**What This Template Does**
Step 1: Triggers automatically every weekday at 7:00 AM to fetch backlog issues from Jira. ⏰
Step 2: Filters issues missing estimates, assignees, or priority values for cleanup. 🧹
Step 3: Applies corrective labels (e.g., “Needs Estimation,” “Unassigned,” “Overdue”); a flagging sketch appears at the end of this section. 🏷️
Step 4: Logs all flagged issues into Google Sheets with timestamps for audit tracking. 📊
Step 5: Sends real-time Slack alerts summarizing key backlog insights. 💬
Step 6: Every Friday, uses GPT-4 to generate a summarized backlog health report. 🤖
Step 7: Delivers weekly summary emails to leads and project managers via Gmail. 📧

**Key Benefits**
✅ Eliminates manual backlog reviews and prioritization.
✅ Ensures consistent Jira hygiene and task visibility.
✅ Provides centralized backlog tracking via Google Sheets.
✅ Sends real-time alerts for overdue and unassigned tasks.
✅ Offers AI-driven insights for better sprint planning.

**Features**
- Automated daily trigger (Mon–Fri, 7 AM)
- Jira issue fetching and filtering by priority and assignment
- Smart labeling for hygiene tracking
- Slack alerts for backlog anomalies
- Weekly GPT-4 generated summary reporting
- Google Sheets integration for historical logging
- Gmail integration for summary email delivery

**Requirements**
- Jira API credentials with read/write issue permissions
- Google Sheets OAuth2 credentials for data logging
- Slack Bot token with chat:write permissions
- Gmail OAuth2 credentials for email delivery
- OpenAI or Azure OpenAI API key for GPT-4 summarization

**Target Audience**
- Agile and Scrum teams maintaining large backlogs 🧩
- Product managers ensuring backlog quality and consistency 📋
- Engineering leads seeking proactive backlog hygiene 🛠️
- Organizations needing visibility across project tasks 🏢
- Remote teams using Slack for daily syncs 🌐

**Step-by-Step Setup Instructions**
1. Connect Jira credentials and specify your project key(s). 🔑
2. Link your Google Sheet and replace YOUR_SHEET_ID for backlog tracking. 📊
3. Configure Slack and replace YOUR_CHANNEL_ID for alert delivery. 💬
4. Add Gmail credentials and define recipient emails for weekly reports. 📧
5. Add your GPT-4 API key (OpenAI or Azure) for AI summarization. 🤖
6. Adjust the cron expression (0 7 * * 1-5) to match your local timezone. ⏰
7. Run manually once to validate all connections, then enable automation. ✅
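Steps 2 and 3 above amount to checking a few Jira fields and attaching corrective labels. A minimal Code-node style sketch is shown below; the field names follow Jira's standard REST response, but the label names and the definition of "stale" are assumptions you can adjust, and they may differ from the template's exact logic.

```javascript
// Hypothetical n8n Code node ("Run Once for All Items"):
// flag backlog issues and build corrective labels.
const STALE_DAYS = 30; // assumption: untouched for 30+ days counts as stale

return $input.all().map(item => {
  const issue = item.json;
  const f = issue.fields ?? {};
  const labels = [];

  if (!f.timeoriginalestimate) labels.push('Needs Estimation');
  if (!f.assignee) labels.push('Unassigned');
  if (!f.priority) labels.push('Needs Priority');
  if (f.duedate && new Date(f.duedate) < new Date()) labels.push('Overdue');

  const daysSinceUpdate = (Date.now() - new Date(f.updated)) / 86400000;
  if (daysSinceUpdate > STALE_DAYS) labels.push('Stale');

  return {
    json: {
      key: issue.key,
      summary: f.summary,
      flaggedLabels: labels,          // applied in Jira and logged to Google Sheets
      needsAttention: labels.length > 0,
      checkedAt: new Date().toISOString(),
    },
  };
});
```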
by WeblineIndia
**Git Tag → Release Notes → Jira → Slack (Dev + QA)**

This workflow automatically detects a new GitLab tag, validates the version, fetches commit changes, generates release notes, creates a Jira task, and sends notifications to separate Slack channels for the Development and QA teams.

The workflow starts whenever a new tag such as v1.0.0 is pushed in GitLab. It checks whether the tag format is correct, collects recent commits, prepares release notes, creates a Jira issue for QA testing, and sends Slack notifications to the Dev and QA teams. You receive:
- **Automatic release process after a GitLab tag push**
- **Auto-generated release notes from commit history**
- **Jira task for QA testing**
- **Separate Slack alerts for Dev and QA teams**

Ideal for teams that want a clean, automated software release process without manual follow-up.

**Quick Start – Implementation Steps**
1. Import the workflow JSON into your n8n account.
2. Add your GitLab webhook URL in the repository settings.
3. Add your GitLab API token in the HTTP Request node.
4. Connect your Jira Cloud account.
5. Connect your Slack account.
6. Select the Dev and QA Slack channels.
7. Activate the workflow.

**What It Does**
This workflow automates release management:
1. Detects a new GitLab tag push.
2. Validates the semantic version format.
3. Fetches commit changes from GitLab.
4. Generates release notes automatically (a validation and release-notes sketch appears at the end of this section).
5. Creates a Jira issue for testing.
6. Sends a release update to the Dev Slack channel.
7. Sends a testing request to the QA Slack channel.
8. Returns a success or error response.

This helps teams release faster with proper communication.

**Who It's For**
This workflow is ideal for:
- Development teams
- QA teams
- DevOps engineers
- Release managers
- Product teams
- Companies using GitLab + Jira + Slack

**Requirements to Use This Workflow**
- n8n instance (cloud or self-hosted)
- GitLab repository access
- GitLab API token
- Jira Cloud account
- Slack workspace
- Basic understanding of releases and version tags

**How It Works**
1. Tag Push – The workflow starts when a GitLab tag is created.
2. Version Check – Validates tags like v1.0.0.
3. Fetch Commits – Gets commit changes from GitLab.
4. Generate Notes – Creates release notes.
5. Create Jira Task – Creates the testing ticket.
6. Send Dev Alert – Notifies developers.
7. Send QA Alert – Notifies the QA team.
8. Complete Workflow – Sends a success response.

**Setup Steps**
1. Import the provided n8n JSON file.
2. Open the Webhook node and copy the URL.
3. Add the webhook in the GitLab repository settings.
4. Open the HTTP node and add the GitLab API URL + token.
5. Connect Jira Cloud credentials.
6. Select the Jira project and issue type.
7. Connect Slack credentials.
8. Select separate Dev and QA channels.
9. Activate the workflow.

**How To Customize Nodes**
- Customize version rules – Modify the IF node to allow custom tag formats or to accept beta and release-candidate versions.
- Customize the Jira task – You can change the summary title, priority, assignee, labels, and due date.
- Customize Slack alerts – You may add emojis, mentions (@channel), the Jira link, deployment notes, or the release owner.
- Customize release notes – You can include commit author names, merge request links, the feature list, bug fixes, and build details.

**Add-Ons (Optional Enhancements)**
You can extend this workflow to:
- Auto-create a GitLab Release page
- Trigger deployment automatically
- Add email notifications
- Generate PDF release notes
- Add an approval step before release
- Send a summary to management
- Track release history in a database

**Use Case Examples**
1. Development Release – Notify developers instantly after a version tag push.
2. QA Testing Flow – Automatically create a Jira ticket for testing.
3. Sprint Release – Use during sprint-end releases.
4. Production Release – Send official release communication.
5. Multi-Team Coordination – Keep Dev, QA, and Managers updated.

**Troubleshooting Guide**

| Issue | Possible Cause | Solution |
|---|---|---|
| Workflow not starting | Webhook not added | Recheck GitLab webhook |
| Invalid tag error | Wrong version format | Use v1.0.0 |
| No commits found | Token/API issue | Check GitLab token |
| Jira issue not created | Wrong credentials | Reconnect Jira |
| Slack message failed | Invalid Slack auth | Reconnect Slack |
| Wrong channel used | Wrong channel ID | Select correct channel |

**Need Help?**
If you need help customizing or extending this workflow with features such as auto deployments, approvals, dashboards, or enterprise release automation, our n8n workflow developers at WeblineIndia can help with advanced automation solutions.
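The version check and release-note generation described above can be illustrated with the sketch below: a semantic-version regex applied to the pushed tag, plus a simple Markdown list built from the fetched commits. The payload fields (ref, and commits with title/author_name) loosely follow GitLab's tag-push webhook and commits API, but treat the exact shapes and the merged item layout as assumptions rather than the template's implementation.

```javascript
// Hypothetical n8n Code node ("Run Once for Each Item"):
// validate the tag and build release notes from commits.
const SEMVER_RE = /^v\d+\.\d+\.\d+$/; // e.g., v1.0.0; extend for beta/RC tags if needed

const tag = ($json.ref ?? '').replace('refs/tags/', '');
if (!SEMVER_RE.test(tag)) {
  return { json: { valid: false, tag, error: `Invalid tag format: ${tag}. Expected e.g. v1.0.0` } };
}

// Assumption: a previous HTTP Request node fetched recent commits from the GitLab API
// and merged them onto this item.
const commits = $json.commits ?? [];
const releaseNotes = [
  `Release ${tag}`,
  '',
  '**Changes:**',
  ...commits.map(c => `- ${c.title ?? c.message} (${c.author_name ?? 'unknown'})`),
].join('\n');

return { json: { valid: true, tag, releaseNotes, commitCount: commits.length } };
```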
by PhilanthropEAK Automation
**Who's it for**
Customer support teams, SaaS companies, and service businesses that need to quickly identify and respond to urgent customer issues. Perfect for organizations handling high ticket volumes where manual prioritization creates delays and missed critical issues.

**How it works**
This workflow automatically analyzes incoming Zendesk tickets using OpenAI's GPT-4 to determine urgency levels and routes high-priority issues to your team via Slack notifications.

The system monitors new Zendesk tickets via webhook, extracts key information (subject, description, customer details), and sends this data to OpenAI for intelligent analysis. The AI considers factors like emotional language, business impact keywords, technical severity indicators, and customer context to assign an urgency score from 1-5.

Based on the AI analysis, the workflow automatically updates the ticket priority in Zendesk, adds detailed reasoning as a private note, and sends formatted Slack notifications for high-priority issues (score 4+). The Slack alert includes ticket details, urgency reasoning, key indicators found, and direct links to the ticket for immediate action.

**How to set up**

Prerequisites:
- Zendesk account with API access
- OpenAI API key (GPT-4 access recommended)
- Slack workspace with webhook permissions
- n8n instance (cloud or self-hosted)

Setup steps:
1. Configure credentials in n8n:
   - Add an OpenAI API credential with your API key.
   - Add a Zendesk API credential (email + API token).
   - Add a Slack API credential (bot token with chat:write permissions).
2. Update the Configuration Variables node:
   - Set your Zendesk subdomain (e.g., "yourcompany" for yourcompany.zendesk.com).
   - Configure the Slack channel for urgent alerts (e.g., "#support-urgent").
   - Adjust the urgency threshold (1-5, default is 4).
   - Set the default assignee email for fallback scenarios.
3. Set up the Zendesk webhook:
   - Copy the webhook URL from the trigger node.
   - In Zendesk Admin, go to Settings > Extensions > Add target.
   - Create an HTTP target with the copied URL and POST method.
   - Create a trigger for "Ticket is created" that sends to this target.
4. Test the workflow:
   - Create a test ticket with urgent language ("system is down", "critical issue").
   - Verify the AI analysis runs and the priority is updated.
   - Check that Slack notifications appear for high-priority tickets.
   - Confirm ticket updates include the AI reasoning in private notes.

**Requirements**
- **Zendesk** account with API access and admin permissions for webhook setup
- **OpenAI API key** with GPT-4 access (estimated cost: $0.01-0.05 per ticket analysis)
- **Slack workspace** with bot creation permissions and access to notification channels
- **n8n instance** (cloud subscription or self-hosted installation)

**How to customize the workflow**

Adjust AI analysis parameters:
- Modify the system prompt in the OpenAI node to focus on industry-specific urgency indicators.
- Add custom keywords or phrases relevant to your business in the prompt.
- Adjust the temperature setting (0.1-0.5) for more consistent vs. more creative analysis.

Configure priority mapping (see the sketch at the end of this section):
- Edit the Code node to change how urgency scores map to Zendesk priorities.
- Add custom business logic based on customer tiers or product types.
- Implement time-based urgency (e.g., higher priority during business hours).

Enhance Slack notifications:
- Customize the Slack message blocks with additional fields (product, customer tier, SLA deadline).
- Add action buttons for common responses ("Acknowledge", "Escalate", "Assign to me").
- Route different urgency levels to different Slack channels.

Extend integrations:
- Add email notifications using the Email node for critical issues.
- Integrate with PagerDuty or Opsgenie for after-hours escalation.
- Connect to your CRM to enrich customer context before AI analysis.
- Add Teams or Discord notifications as alternatives to Slack.

Advanced customizations:
- Implement machine learning feedback loops by tracking resolution times vs. AI scores.
- Add sentiment analysis as a separate factor in the priority calculation.
- Create daily/weekly summary reports of AI analysis accuracy.
- Build approval workflows for certain priority changes before auto-updating.
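The priority mapping mentioned under "Configure priority mapping" can be pictured as the small Code-node sketch below. The score-to-priority mapping and the default threshold of 4 mirror the description above, while the input field names (urgencyScore, ticketId, reasoning) are assumptions for illustration.

```javascript
// Hypothetical n8n Code node ("Run Once for Each Item"):
// map the AI urgency score (1-5) to a Zendesk priority and decide whether to alert Slack.
const URGENCY_THRESHOLD = 4; // from the Configuration Variables node (default 4)

const score = Number($json.urgencyScore ?? 1);
const priorityMap = { 1: 'low', 2: 'low', 3: 'normal', 4: 'high', 5: 'urgent' };

return {
  json: {
    ticketId: $json.ticketId,
    urgencyScore: score,
    zendeskPriority: priorityMap[score] ?? 'normal',
    notifySlack: score >= URGENCY_THRESHOLD,   // only score 4+ goes to the urgent channel
    reasoning: $json.reasoning ?? '',          // added to the ticket as a private note
  },
};
```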
by Miftah Rahmat
**Automate Water Bill Calculations with Telegram, Gemini AI, and Google Sheets**

This workflow automates the calculation of monthly water bills. Residents send a photo of their water meter along with their name via Telegram. The workflow uses Gemini AI to extract the meter reading, calculates the usage difference compared to the previous month, and updates a Google Sheet with the billing details. Finally, the workflow sends a summary back via Telegram. Don’t hesitate to reach out if you have any questions or run into issues! 🙌

**Requirements**
- A Telegram bot token (created via BotFather).
- A Google account with access to Google Sheets.
- A Gemini API key (from Google AI Studio).
- A pre-created Google Sheet with the required columns.

**Google Sheet Setup**
Create a new Google Sheet with the following columns: Nama (name), Volume Sebelumnya (previous volume), Volume Saat Ini (current volume), Harga/m³ (price per m³), Jumlah Bayar (usage cost), Beban (fixed fee), Total Bayar (total bill), Tanggal Input (input date).

**Workflow Setup Instructions**
1. Connect Google Sheets – Add your Google Sheets credentials in n8n and link the workflow to your sheet with the structure above.
2. Set up the Telegram bot – Create a Telegram bot via BotFather and copy your bot token into the Telegram Trigger node.
3. Configure Gemini AI – Obtain a Gemini API key from Google AI Studio and add it to your n8n credentials. The workflow will parse the meter reading from the uploaded image.

**Example Calculation**
- Previous volume: 535 m³
- Current volume: 545 m³
- Usage: 10 m³
- Price per m³: Rp3.000
- Fixed cost: Rp3.000
- Total bill: Rp33.000

**How It Works**
1. A user sends a photo of the water meter with their name as the caption.
2. The Telegram Trigger receives the message.
3. Gemini AI reads the meter number from the photo.
4. The workflow fetches the previous volume from Google Sheets.
5. Usage and the total bill are calculated (a calculation sketch appears at the end of this section).
6. The data is stored back into Google Sheets.
7. The bot replies in Telegram with the detailed bill info.

**Customization**
- Change Harga/m³ in the sheet to match your community’s water price.
- Update Beban if your community uses a different fixed fee.
- Edit the Telegram reply message node to adjust the wording.

With this workflow, you can streamline water billing for residents, ensure accuracy, and save time on manual calculations.
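The billing arithmetic matches the example above: 10 m³ × Rp3.000 per m³ plus the Rp3.000 fixed fee gives Rp33.000. A Code-node style sketch of that calculation is shown below; the camelCase field names mirroring the sheet columns are assumptions, and in the actual workflow the previous volume comes from Google Sheets while the current reading comes from Gemini.

```javascript
// Hypothetical n8n Code node ("Run Once for Each Item"):
// compute the monthly water bill from the meter readings.
const previousVolume = Number($json.volumeSebelumnya); // e.g., 535 (m³), from the sheet
const currentVolume = Number($json.volumeSaatIni);      // e.g., 545 (m³), read by Gemini
const pricePerM3 = Number($json.hargaPerM3 ?? 3000);    // Harga/m³, default Rp3.000
const fixedFee = Number($json.beban ?? 3000);           // Beban (fixed cost), default Rp3.000

const usage = currentVolume - previousVolume;            // 10 m³ in the example
const usageCost = usage * pricePerM3;                    // Jumlah Bayar = Rp30.000
const total = usageCost + fixedFee;                      // Total Bayar = Rp33.000

return {
  json: {
    usage,
    usageCost,
    total,
    tanggalInput: new Date().toISOString().slice(0, 10), // Tanggal Input (date)
  },
};
```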
by Țugui Dragoș
**How It Works**
1. Story Generation – Your idea is transformed into a narrative split into scenes using the DeepSeek LLM.
2. Visuals – Each scene is illustrated with AI images via Replicate, then animated into cinematic video clips with RunwayML.
3. Voice & Music – Narration is created using ElevenLabs (text-to-speech), while Replicate audio models generate background music.
4. Final Assembly – All assets are merged into a professional video using Creatomate.
5. Delivery – Everything is orchestrated by n8n, triggered from Slack with /render, and the final video link is delivered back instantly.

**Workflow in Action**
1. Trigger from Slack – Type your idea with /render in Slack; the workflow starts automatically.
2. Final Video Output – Receive a polished cinematic video link in Slack.
3. Creatomate Template – ⚠️ Important: You must create your own template in Creatomate. This is a one-time setup: the template defines where the voiceover, music, and video clips will be placed. The more detailed and refined your template is, the better the final cinematic result.

**Required APIs**
To run this workflow, you need accounts and API keys for the following services:
- DeepSeek – Story generation (LLM)
- Replicate – Images & AI music generation
- RunwayML – Image-to-video animations
- ElevenLabs – Text-to-speech voiceovers
- Creatomate – Video rendering and templates
- Dropbox – File storage and asset syncing
- Slack – Workflow trigger and video delivery

**Setup Steps**
1. Import the JSON workflow into your n8n instance.
2. Add your API credentials for each service above.
3. Create a Creatomate template (only once) – define layers for visuals, voice, and music.
4. Trigger the workflow from Slack with /render Your Story Idea.
5. Receive your final cinematic video link directly in Slack.

**Use Cases**
- Automated YouTube Shorts / TikToks for faceless content creators.
- Scalable ad creatives and marketing videos for agencies.
- **Educational explainers** and onboarding videos generated from text.
- **Rapid prototyping** of cinematic ideas for developers & storytellers.

With this workflow, you’re not just using AI tools – you’re running a full AI-powered studio in n8n.