by Bhuvanesh R
Instant, automated scheduling. This AI Scheduling Agent manages real-time appointments, availability checks, and rescheduling across Google Calendar and Sheets, eliminating human hold times.

🎯 Problem Statement
Traditional call center or online booking systems often lack the flexibility to handle complex, multi-step customer requests like rescheduling, checking dynamic availability across multiple time slots, or handling context-aware conversational booking. This leads to friction, missed bookings, and high administrative overhead for service companies like HVAC providers.

✨ Solution
This workflow deploys a sophisticated AI Scheduling Agent that acts as a virtual receptionist. It uses the language model's (LLM) "tool-use" capability to intelligently execute complex, sequential business logic (e.g., check availability before booking, find existing events before rescheduling) and manages the entire lifecycle of a service appointment, from initial inquiry to final confirmation.

⚙️ How It Works (Multi-Step Execution)
1. Trigger: A customer request (e.g., from an external voice or text platform) hits the Webhook Trigger with intent details (e.g., tool_request: 'reschedule_appointment').
2. Agent Logic: The Receptionist Agent uses a strict system prompt and its internal tools to formulate an execution plan. It maintains conversational state via the simple-memory node.
3. Tool Execution (Example: Reschedule): The Agent executes a predefined sequence of private tools:
   - find_old_event: Locates the existing booking ID using the customer's email.
   - check_calendar: Verifies the proposed new time is available (2-hour window).
   - reschedule_appointment: Updates the calendar event.
   - log_lead: Updates the central Google Sheet.
4. Synchronous Response: The Agent sends a confirmation or follow-up question via the respond_to_webhook node.
5. Asynchronous Confirmation: The log_lead action triggers a secondary workflow that composes a professional email via a second LLM (Anthropic) and sends it to the customer via Gmail, followed by an internal alert via Google Chat.

🛠️ Setup Steps
1. Credentials:
   - AI/LLM: Configure credentials for the language model used (OpenAI or Gemini) for the core Agent.
   - Google Services: Set up OAuth2 credentials for Google Calendar (for booking/checking), Google Sheets (for logging), and Gmail (for customer confirmation).
2. Google Calendar: Specify the technician's calendar ID (bhuvaneshx13@gmail.com in the template) in all Calendar nodes.
3. Google Sheets: Create a new Google Sheet to serve as the Lead Log and update the Document ID and Sheet Name in the log_lead and log_lead_trigger nodes.
4. Tool Configuration: Review and customize the Agent's system prompt in the Receptionist node to align time zone rules (currently Asia/Kolkata - IST) and business hours (9:00 AM to 6:00 PM) with your operations.

✅ Benefits
- Increased Efficiency: Fully automates complex scheduling and rescheduling, freeing up human staff.
- Contextual Service: AI handles multi-turn conversations and adheres to strict business rules (e.g., 2-hour slots, maximum tool usage).
- Data Integrity: Ensures all bookings are immediately logged to Google Sheets, maintaining a centralized record (CRM).
- Professional Flow: Provides immediate confirmation to the customer via email and instant notification to the internal team via chat.
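To illustrate the Trigger step described above, here is a minimal sketch of the kind of request an external voice or text platform might POST to the Webhook Trigger. Only tool_request mirrors the template description; the webhook URL and the remaining field names are assumptions, not taken from the template.

```javascript
// Hypothetical reschedule request sent to the n8n Webhook Trigger.
// Only tool_request comes from the template description; the webhook URL and
// remaining field names are illustrative placeholders.
const payload = {
  tool_request: "reschedule_appointment",
  customer_email: "jane.doe@example.com",        // used by find_old_event to locate the booking
  requested_start: "2025-06-12T10:00:00+05:30",  // proposed new 2-hour slot within IST business hours
  notes: "Prefer a morning visit",
};

const response = await fetch("https://your-n8n-instance/webhook/scheduling-agent", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
});

console.log(await response.json()); // synchronous confirmation or follow-up question from the agent
```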
🚀 Other Use Cases
The underlying multi-step, tool-execution pattern is highly versatile and can be adapted for any service industry requiring complex, rules-based scheduling:
- **Real Estate:** Scheduling property viewings (Check agent availability → Book viewing → Send directions).
- **HVAC Services:** Managing maintenance and repair visits (Diagnose issue type → Match with qualified technician → Check part availability → Schedule visit → Send service confirmation).
- **Medical/Dental:** Booking patient appointments (Check insurance eligibility → Check doctor availability → Book → Send pre-visit forms).
- **Legal Services:** Intake for consultations (Collect client issue → Check specialist availability → Book → Send retainer agreement).
- **Automotive Repair:** Scheduling service bays (Check bay and mechanic availability → Book → Update internal service board).
by OwenLee
📚 In the social and behavioral sciences (e.g., psychology, sociology, economics, management), researchers and students often need to normalize academic paper metadata and extract variables before any literature review or meta-analysis.

🧩 This workflow automates the busywork. Using an LLM, it processes CSV/XLSX/XLS files (exported from WoS, Scopus, EndNote, Zotero, or your own spreadsheets) into normalized metadata and extracted variables, and writes a neat table to Google Sheets.

🔗 Example Google Sheet: click me

👥 Who is this for?
- 🎓 Undergraduate and graduate students or researchers in soft-science fields (psychology, sociology, economics, business)
- ⏱️ People who don’t have time to read full papers and need quick overviews
- 📊 Anyone who wants to automate academic paper metadata normalization and variable extraction to speed up a literature review

⚙️ How it works
1. 📤 Upload an academic paper file (CSV/XLSX/XLS) in chat.
2. 📑 The workflow creates a Google Sheets spreadsheet with two tabs: Checkpoint and FinalResult.
3. 🔎 A structured-output LLM normalizes core metadata (title, abstract, authors, publication date, source) from the uploaded file and writes it to Checkpoint; 📧 a Gmail notification is sent when finished.
4. 🧪 A second structured-output LLM uses the metadata above to extract variables (Independent Variable, Dependent Variable) and writes them to FinalResult; 📧 you’ll get a second Gmail notification when done.

🛠️ How to set up

🔑 Credentials
- **Google Sheets OAuth2** (read/write)
- **Gmail OAuth2** (send notifications)
- **Google Gemini (or any LLM you prefer)**

🚀 Quick start
1. Connect Google Sheets, Gmail, and Gemini (or your LLM) credentials.
2. Open File Upload Trigger → upload your CSV/XLSX/XLS file and type a name in chat (used as the Google Sheets spreadsheet title).
3. Watch your inbox for status emails and open the Google Sheets spreadsheet to review Checkpoint and FinalResult.

🎛 Customization
- 🗂️ Journal lists: Edit the Journal Rank Classifier code node to add/remove titles. The default list is for business/management journals—swap it for a list from your own field.
- 🔔 Notifications: Replace Gmail with Slack, Teams, or any channel you prefer.
- 🧠 LLM outputs: Need different metadata or extracted data? Edit the LLM’s system prompt and Structured Output Parser.

📝 Note
- 📝 Make sure your file includes abstracts. If the academic paper data you upload doesn’t contain an abstract, the extracted results will be far less useful.
- 🧩 CSV yields no items? Encoding mismatches can break the workflow. If this happens, convert the CSV to .xls or .xlsx and try again.

📩 Help
Contact: owenlzyxg@gmail.com
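As a rough illustration of the two structured-output steps described above, here is a minimal sketch of what the parser schemas could look like. The fields mirror the metadata and variables listed in the description; everything else (naming, required fields) is an assumption, and the template's actual Structured Output Parser definitions may differ.

```javascript
// Illustrative JSON-schema-style definitions for the two structured-output steps.
// Field names follow the description (Checkpoint metadata, FinalResult variables);
// the exact schemas used in the template may differ.
const checkpointSchema = {
  type: "object",
  properties: {
    title: { type: "string" },
    abstract: { type: "string" },
    authors: { type: "array", items: { type: "string" } },
    publicationDate: { type: "string" },
    source: { type: "string" }, // journal or database of origin
  },
  required: ["title", "abstract", "authors"],
};

const finalResultSchema = {
  type: "object",
  properties: {
    independentVariable: { type: "string" },
    dependentVariable: { type: "string" },
  },
  required: ["independentVariable", "dependentVariable"],
};

export { checkpointSchema, finalResultSchema };
```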
by Tsubasa Shukuwa
How it works
This workflow automatically detects new image files uploaded to a Google Drive folder, extracts Japanese text using OCR, summarizes it with AI, and records the result in Google Sheets. Finally, it sends a completion email notification with the file name and summary.

Workflow steps:
1. Google Drive New File Trigger – Watches a specific Google Drive folder for new image uploads.
2. Download Image File – Downloads the newly uploaded image for processing.
3. Extract Text with OCR.space – Sends the image to the OCR.space API to extract text (Japanese supported).
4. Format OCR Result & Check for Empty – Cleans and validates the extracted text.
5. Generate Summary with OpenRouter AI – Uses an AI model to generate a short summary of the text.
6. OpenRouter Chat Model – Connects the AI Agent to the OpenRouter language model.
7. Append row in sheet – Adds the file name, AI summary, and processing date to Google Sheets.
8. Send Completion Notification via Gmail – Sends an email with the summarized content and Google Sheets link.
9. Process Completed – Marks the workflow’s successful end.

Setup steps
1. Connect your Google Drive, Google Sheets, and Gmail accounts through credentials.
2. Set your OCR.space API key in the HTTP Request node.
3. Add your OpenRouter API key credential for the AI node.
4. Replace the Google Sheet ID and folder ID with your own.
5. Customize the Gmail recipient and email message as needed.
6. Adjust the polling frequency (e.g., every 1 minute) depending on your workflow needs.

Ideal for
- Digitizing and summarizing handwritten or printed book pages.
- Automatically extracting and archiving text from scanned reports or notes.
- Businesses or educators automating document reading and summarization tasks.

⚙️ Note: Each node includes a clear English Sticky Note above it for easier understanding and documentation.
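For reference, a minimal sketch of the kind of OCR.space call the HTTP Request node performs. The parameter names (apikey, language, base64Image) follow OCR.space's public API as commonly documented; verify them against the current OCR.space docs and your plan before relying on them.

```javascript
// Minimal sketch of the OCR.space request; parameter names are taken from OCR.space's
// public documentation and should be verified against the current docs.
async function extractJapaneseText(imageBase64) {
  const form = new FormData();
  form.append("apikey", process.env.OCR_SPACE_API_KEY ?? "");
  form.append("language", "jpn"); // Japanese OCR
  form.append("base64Image", `data:image/png;base64,${imageBase64}`);

  const res = await fetch("https://api.ocr.space/parse/image", { method: "POST", body: form });
  const data = await res.json();

  // Mirror the "Format OCR Result & Check for Empty" step: trim and reject empty results.
  const text = data?.ParsedResults?.[0]?.ParsedText?.trim() ?? "";
  if (!text) throw new Error("OCR returned no text for this image");
  return text;
}
```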
by explorium
Outbound Agent - AI-Powered Lead Generation with Natural Language Prospecting

This n8n workflow transforms natural language queries into targeted B2B prospecting campaigns by combining Explorium's data intelligence with AI-powered research and personalized email generation. Simply describe your ideal customer profile in plain English, and the workflow automatically finds prospects, enriches their data, researches them, and creates personalized email drafts.

DEMO
Template Demo

Credentials Required
To use this workflow, set up the following credentials in your n8n environment:

Anthropic API
- **Type:** API Key
- **Used for:** AI Agent query interpretation, email research, and email writing
- Get your API key at Anthropic Console

Explorium API
- **Type:** Generic Header Auth
- **Header:** Authorization
- **Value:** Bearer YOUR_API_KEY
- **Used for:** Prospect matching, contact enrichment, professional profiles, and MCP research
- Get your API key at Explorium Dashboard

Explorium MCP
- **Type:** HTTP Header Auth
- **Used for:** Real-time company and prospect intelligence research
- Connect to: https://mcp.explorium.ai/mcp

Gmail
- **Type:** OAuth2
- **Used for:** Creating email drafts
- Alternative options: Outlook, Mailchimp, SendGrid, Lemlist

Go to Settings → Credentials, create these credentials, and assign them in the respective nodes before running the workflow.

Workflow Overview

Node 1: When chat message received
This node creates an interactive chat interface where users can describe their prospecting criteria in natural language.
- **Type:** Chat Trigger
- **Purpose:** Accept natural language queries like "Get 5 marketing leaders at fintech startups who joined in the past year and have valid contact information"
- **Example Prompts:**
  - "Find SaaS executives in New York with 50-200 employees"
  - "Get marketing directors at healthcare companies"
  - "Show me VPs at fintech startups with recent funding"

Node 2: Chat or Refinement
This code node manages the conversation flow, handling both initial user queries and validation error feedback.
- **Function:** Routes either the original chat input or validation error messages to the AI Agent
- **Dynamic Input:** Combines chatInput and errorInput fields
- **Purpose:** Creates a feedback loop for validation error correction

Node 3: AI Agent
The core intelligence node that interprets natural language and generates structured API calls.

Functionality:
- Interprets user intent from natural language queries
- Maps concepts to Explorium API filters (job levels, departments, company size, revenue, location, etc.)
- Generates valid JSON requests with precise filter criteria
- Handles off-topic queries with helpful guidance
- Connected to MCP Client for real-time filter specifications

AI Components:
- **Anthropic Chat Model:** Claude Sonnet 4 for query interpretation
- **Simple Memory:** Maintains conversation context (100 message window)
- **Output Parser:** Structured JSON output with schema validation
- **MCP Client:** Connected to https://mcp.explorium.ai/mcp for Explorium specifications

System Instructions:
- Expert in converting natural language to Explorium API filters
- Can revise previous responses based on validation errors
- Strict adherence to allowed filter values and formats
- Default settings: mode: "full", size: 10000, page_size: 100, has_email: true

Node 4: API Call Validation
This code node validates the AI-generated API request against Explorium's filter specifications.
Validation Checks:
- Filter key validity (only allowed filters from approved list)
- Value format correctness (enums, ranges, country codes)
- No duplicate values in arrays
- Proper range structure for experience fields (total_experience_months, current_role_months)
- Required field presence

Allowed Filters:
- country_code, region_country_code, company_country_code, company_region_country_code
- company_size, company_revenue, company_age, number_of_locations
- google_category, naics_category, linkedin_category, company_name
- city_region_country, website_keywords
- has_email, has_phone_number
- job_level, job_department, job_title
- business_id, total_experience_months, current_role_months

Output:
- isValid: Boolean validation status
- validationErrors: Array of specific error messages

Node 5: Is API Call Valid?
Conditional routing node that determines the next step based on validation results.
- **If Valid:** Proceed to Explorium API: Fetch Prospects
- **If Invalid:** Route to Validation Prompter for correction

Node 6: Validation Prompter
Generates detailed error feedback for the AI Agent when validation fails. This creates a self-correcting loop where the AI learns from validation errors and regenerates compliant requests by routing back to Node 2 (Chat or Refinement).

Node 7: Explorium API: Fetch Prospects
Makes the validated API call to Explorium's prospect database.
- **Method:** POST
- **Endpoint:** /v1/prospects/fetch
- **Authentication:** Header Auth (Bearer token)
- **Input:** JSON with filters, mode, size, page_size, page
- **Returns:** Array of matched prospects with prospect IDs based on filter criteria

Node 8: Pull Prospect IDs
Extracts prospect IDs from the fetch response for bulk enrichment.
- **Input:** Full fetch response with prospect data
- **Output:** Array of prospect_id values formatted for enrichment API

Node 9: Explorium API: Contact Enrichment
Single enrichment node that enhances prospect data with both contact and profile information.
- **Method:** POST
- **Endpoint:** /v1/prospects/enrich
- **Enrichment Types:** contacts, profiles
- **Authentication:** Header Auth (Bearer token)
- **Input:** Array of prospect IDs from Node 8

Returns:
- **Contacts:** Professional emails (current, verified), phone numbers (mobile, work), email validation status, all available email addresses
- **Profiles:** Full professional history, current role details, company information, skills and expertise, education background, experience timeline, job titles and seniority levels

Node 10: Clean Output Data
Transforms and structures the enriched data for downstream processing.

Node 11: Loop Over Items
Iterates through each prospect to generate individualized research and emails.
- **Batch Size:** 1 (processes prospects one at a time)
- **Purpose:** Enable personalized research and email generation for each prospect
- **Loop Control:** Processes until all prospects are complete

Node 12: Research Email
AI-powered research agent that investigates each prospect using Explorium MCP.

Input Data:
- Prospect name, job title, company name, company website
- LinkedIn URL, job department, skills

Research Focus:
- Company automation tool usage (n8n, Zapier, Make, HubSpot, Salesforce)
- Data enrichment practices
- Tech stack and infrastructure (Snowflake, Segment, etc.)
- Recent company activity and initiatives
- Pain points related to B2B data (outdated CRM data, manual enrichment, static workflows)
- Public content (speaking engagements, blog posts, thought leadership)

AI Components:
- **Anthropic Chat Model1:** Claude Sonnet 4 for research
- **Simple Memory1:** Maintains research context
- **Explorium MCP1:** Connected to https://mcp.explorium.ai/mcp for real-time intelligence

Output:
- Structured JSON with research findings including automation tools, pain points, personalization notes

Node 13: Email Writer
Generates personalized cold email drafts based on research findings.

Input Data:
- Contact info from Loop Over Items
- Current experience and skills
- Research findings from Research Email agent
- Company data (name, website)

AI Components:
- **Anthropic Chat Model3:** Claude Sonnet 4 for email writing
- **Structured Output Parser:** Enforces JSON schema with email, subject, message fields

Output Schema:
- email: Selected prospect email address (professional preferred)
- subject: Compelling, personalized subject line
- message: HTML formatted email body

Node 14: Create a draft (Gmail)
Creates email drafts in Gmail for review before sending.
- **Resource:** Draft
- **Subject:** From Email Writer output
- **Message:** HTML formatted email body
- **Send To:** Selected prospect email address
- **Authentication:** Gmail OAuth2
- **After Creation:** Loops back to Node 11 (Loop Over Items) to process the next prospect

Alternative Output Options:
- **Outlook:** Create drafts in Microsoft Outlook
- **Mailchimp:** Add to email campaign
- **SendGrid:** Queue for sending
- **Lemlist:** Add to cold email sequence

Workflow Flow Summary
1. Input: User describes target prospects in natural language via chat interface
2. Interpret: AI Agent converts query to structured Explorium API filters using MCP
3. Validate: API call validation ensures filter compliance
4. Refine: If invalid, error feedback loop helps AI correct the request
5. Fetch: Retrieve matching prospect IDs from Explorium database
6. Enrich: Parallel bulk enrichment of contact details and professional profiles
7. Clean: Transform and structure enriched data
8. Loop: Process each prospect individually
9. Research: AI agent uses Explorium MCP to gather company and prospect intelligence
10. Write: Generate personalized email based on research
11. Draft: Create reviewable email drafts in preferred platform

This workflow eliminates manual prospecting work by combining natural language processing, intelligent data enrichment, automated research, and personalized email generation—taking you from "I need marketing leaders at fintech companies" to personalized, research-backed email drafts in minutes.
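As an illustration of the structured request the AI Agent produces (Node 3), the validator checks (Node 4), and the fetch call sends (Node 7), here is a hedged sketch. It only combines the defaults and allowed filter keys listed above; the base URL, the nesting of the filters object, and the exact enum values are assumptions, with Explorium's own documentation being authoritative.

```javascript
// Illustrative request body for the "Explorium API: Fetch Prospects" call (Node 7).
// Filter value formats (enums, size buckets, country codes) and the base URL are
// assumptions; the validator (Node 4) and Explorium's docs define the real formats.
const fetchProspectsBody = {
  mode: "full",
  size: 10000,
  page_size: 100,
  page: 1,
  filters: {
    job_department: ["Marketing"],
    job_level: ["Director", "VP"],
    company_country_code: ["US"],
    has_email: true,
  },
};

const res = await fetch("https://api.explorium.ai/v1/prospects/fetch", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.EXPLORIUM_API_KEY}`, // Header Auth as described above
    "Content-Type": "application/json",
  },
  body: JSON.stringify(fetchProspectsBody),
});
console.log(await res.json()); // array of matched prospects with prospect IDs
```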
Customization Options

Flexible Triggers
The chat interface can be replaced with:
- Scheduled runs for recurring prospecting
- Webhook triggers from CRM updates
- Manual execution for ad-hoc campaigns

Scalable Enrichment
Adjust enrichment depth by:
- Adding more Explorium API endpoints (technographics, funding, news)
- Configuring prospect batch sizes
- Customizing data cleaning logic

Output Destinations
Route emails to your preferred platform:
- **Email Platforms:** Gmail, Outlook, SendGrid, Mailchimp
- **Sales Tools:** Lemlist, Outreach, SalesLoft
- **CRM Integration:** Salesforce, HubSpot (create leads with research)
- **Collaboration:** Slack notifications, Google Docs reports

AI Model Flexibility
Swap AI providers based on your needs:
- Default: Anthropic Claude (Sonnet 4)
- Alternatives: OpenAI GPT-4, Google Gemini

Setup Notes
- Domain Filtering: The workflow prioritizes professional emails—customize email selection logic in the Clean Output Data node
- MCP Configuration: Explorium MCP requires Header Auth setup—ensure credentials are properly configured
- Rate Limits: Adjust Loop Over Items batch size if hitting API rate limits
- Memory Context: Simple Memory maintains conversation history—increase window length for longer sessions
- Validation: The AI self-corrects through validation loops—monitor early runs to ensure filter accuracy

This workflow represents a complete AI-powered sales development representative (SDR) that handles prospecting, research, and personalized outreach with minimal human intervention.
by Khairul Muhtadin
Promo Seeker automatically finds, verifies, and delivers active promo codes to users via Telegram or email using SerpAPI + Gemini (OpenRouter). Saves hours of manual searching and deduplicates results into an n8n Data Table for fast reuse.

Why Use This Workflow?
- Time Savings: Reduces manual promo hunting from 2 hours to 5 minutes per query.
- Cost Reduction: Cuts reliance on paid scraping tools or manual services — potential savings of $50–$200/month.
- Error Prevention: Cross-references sources and enforces a 30‑day recency filter to reduce expired-code hits by ~60% vs single-source checks.
- Scalability: Handles hundreds of queries per day with Data Table upserts and optional scheduling for continuous discovery.

Ideal For
- **Marketing / Growth Managers:** Quickly discover competitor or partner discounts to promote in campaigns.
- **Customer Support / Operations:** Respond to user requests with verified promo codes via Telegram or email.
- **Affiliate / Content Teams:** Aggregate and maintain a clean promo feed for newsletters or site widgets.

How It Works
1. Trigger: Incoming request via Webhook, Telegram message, Google Form submission, or scheduled run.
2. Data Collection: The LangChain agent uses SerpAPI search results and Gemini (OpenRouter) to locate recent promo codes.
3. Processing: Filter for recency (last 30 days), extract code, value, terms, and expiry.
4. Intelligence Layer: Gemini 2.5 Pro (OpenRouter) + LangChain agent structure and verify results, outputting a standardized JSON.
5. Output & Delivery: If a code exists in the Data Table, notify the requester via Telegram and Gmail; otherwise, return results and upsert them into the Data Table.
6. Storage & Logging: Results are stored/upserted in an n8n Data Table to prevent duplicates and enable fast lookups.

Setup Guide

Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Execute the workflow — import JSON to your n8n instance |
| SerpAPI account | Essential | Web search results for promo code discovery |
| OpenRouter (Gemini) account | Essential | Language model (Gemini 2.5 Pro) for extraction and verification |
| Telegram Bot (BotFather) | Essential | Receive queries and send promo notifications |
| Gmail OAuth2 | Essential | Send rich email notifications |
| n8n Data Tables | Essential | Store and deduplicate promo code records |

Get your n8n instance here: n8n instance — import the JSON and begin configuration. (Repeat link for convenience: n8n instance)

Installation Steps
1. Import the JSON workflow into your n8n instance.
2. Configure credentials (use n8n's Credentials UI — do NOT paste keys into nodes):
   - SerpAPI: paste your SerpAPI API Key from your SerpAPI dashboard.
   - OpenRouter API: paste your OpenRouter API Key (for Gemini 2.5 Pro).
   - Telegram API: create a bot via @BotFather, then add the Bot Token to the Telegram credentials.
   - Gmail OAuth2: use n8n's Gmail OAuth2 credential flow and authorize the account.
3. Update environment-specific values:
   - Webhook path: /v1/promo-seeker (configured in the Webhook node), or replace with your preferred path.
   - Data Table ID: point to your own Data Table.
   - Form/webhook recipient fields (email/chatId mappings).
4. Customize settings:
   - Adjust the LangChain agent prompt (Promo Seeker Agent) for different recency windows or regional focus.
   - Change the max results/limit on the Data Table node (default limit = 3).
5. Test execution: trigger via Telegram or a POST to the webhook with a sample payload:
   { "platform": "example.com", "email": "you@domain.com" }
   Confirm notifications arrive and Data Table rows are upserted.

Technical Details

Core Nodes

| Node | Purpose | Key Configuration |
|------|---------|-------------------|
| SerpAPI | Fetch web search results | Provide credential; adjust search params in agent prompt |
| Gemini 2.5 Pro (OpenRouter) | Extract & verify promo details | Use OpenRouter credential; model google/gemini-2.5-pro |
| Promo Seeker Agent (LangChain) | Orchestrates search + parsing | System prompt enforces 30‑day recency & result format |
| Structured Output Parser | Validates agent output | JSON schema example for platform/code/value/terms/validUntil |
| Data Table (Get row(s)) | Lookup existing promos | Filters by platform; limit = 3 |
| If (Code Exist?) | Branching logic | Checks existence of platform field |
| Data Table (Upsert row(s)) | Insert or update promo | Mapping from agent output to Data Table columns |
| Telegram Trigger / Telegram | Receive queries & notify users | Webhook-based trigger; parse_mode = HTML for messages |
| Gmail | Send rich HTML emails | Uses Gmail OAuth2 credential |
| Webhook / Form Trigger | Alternate inputs | Webhook path /v1/promo-seeker and Form trigger for manual submissions |

Workflow Logic
1. On trigger, the Platform Set node normalizes the incoming query and receiver.
2. Get row(s) checks the Data Table for existing promos.
3. If found: notify via Telegram and send Gmail (email template included).
4. If not found: the Promo Seeker Agent runs SerpAPI searches, parses structured output, then Upsert row(s) saves results and notifications are sent.
5. The Structured Output Parser enforces correct JSON to avoid bad upserts.

Customization Options

Basic Adjustments
- Recency window: change the agent prompt to 7/14/30 days.
- Result limit: increase the Data Table Get limit or change the Upsert batch size.

Advanced Enhancements
- Add Slack or Microsoft Teams notifications (moderate complexity).
- Add a caching layer (Redis) to reduce repeated SerpAPI calls (advanced).
- Parallelize searches across multiple search engines (higher API usage & complexity).

Performance & Optimization

| Metric | Expected Performance | Optimization Tips |
|--------|----------------------|-------------------|
| Execution time | 8–30s per new search (depends on SerpAPI + LM response) | Reduce SerpAPI page depth; cache recent results |
| API calls | 3–10 SerpAPI calls per complex query | Batch queries; use higher-quality search params |
| Error handling | Agent retries on malformed output | Use retry nodes and set an onError strategy for downstream nodes |

Troubleshooting

| Problem | Cause | Solution |
|---------|-------|----------|
| No results returned | Query too vague or rate-limited API | Improve query specificity; check SerpAPI quota |
| Gmail send fails | OAuth scope not granted or token expired | Reconnect the Gmail OAuth2 credential in n8n |
| Telegram webhook not firing | Incorrect bot token or webhook setup | Recreate the Telegram credential and check bot permissions |
| Duplicate rows | Upsert mapping mismatch | Ensure the promoCode mapping in Upsert matches the structured output |
| Agent returns malformed JSON | LM prompt too permissive | Tighten the agent system prompt and validate with the Structured Output Parser |

Created by: khaisa Studio
Category: Marketing Automation
Tags: promo-codes, coupons, serpapi, telegram, gmail, openrouter, data-tables

Need custom workflows or help adapting this template? Contact us
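As a closing illustration for this template, here is a hedged sketch of the standardized promo JSON the Structured Output Parser is described as enforcing (platform/code/value/terms/validUntil). The exact schema, naming, and nesting in the template may differ.

```javascript
// Hedged sketch of the standardized promo-code JSON described above.
// The template's actual Structured Output Parser schema may differ.
const examplePromoResult = {
  platform: "example.com",
  code: "SAVE20",
  value: "20% off orders over $50",
  terms: "New customers only; one use per account",
  validUntil: "2025-07-31",
};

// Minimal shape check before upserting into the Data Table, mirroring the
// "enforces correct JSON to avoid bad upserts" behaviour described above.
function isValidPromo(p) {
  return ["platform", "code", "value", "terms", "validUntil"].every(
    (key) => typeof p[key] === "string" && p[key].length > 0
  );
}

console.log(isValidPromo(examplePromoResult)); // true
```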
by Julian Kaiser
n8n Forum Job Aggregator - AI-Powered Email Digest

Overview
Automate your n8n community job board monitoring with this intelligent workflow that scrapes, analyzes, and delivers opportunities straight to your inbox. Perfect for freelancers, agencies, and developers looking to stay on top of n8n automation projects without manual checking.

How It Works
1. Scrapes the n8n community job board to find new postings from the last 7 days
2. Extracts key metadata including job titles, descriptions, posting dates, and client details
3. Analyzes each listing using OpenRouter AI to generate concise summaries of project requirements and client needs
4. Delivers a professionally formatted email digest with all opportunities organized and ready for review

Prerequisites
- **OpenRouter API Key**: Sign up at OpenRouter.ai to access AI summarization capabilities
- **SMTP Email Account**: Gmail, Outlook, or any SMTP-compatible email service

Setup Steps
Time estimate: 5-10 minutes
1. Configure OpenRouter Credentials
   - Add your OpenRouter API key in the n8n credentials manager
   - Recommended model: GPT-3.5-turbo or Claude for cost-effective summaries
2. Set Up SMTP Email
   - Configure the sender email address
   - Add recipient email(s) for digest delivery
   - Test the connection to ensure delivery
3. Customize Date Range (Optional)
   - Default: Last 7 days of job postings
   - Adjust the date filter node to match your preferred frequency
4. Test & Refine
   - Run a test execution
   - Review email formatting and AI summary quality
   - Customize the HTML template styling to match your preferences

Customization Options
- **Scheduling**: Set up cron triggers (daily, weekly, or custom intervals)
- **Filtering**: Add keyword filters for specific technologies or project types
- **AI Prompts**: Modify the summarization prompt to extract different insights
- **Email Design**: Customize HTML/CSS styling in the email template node

Example Use Cases
- **Freelance Developers**: Never miss relevant n8n automation opportunities
- **Agencies**: Monitor market demand and competitor activity
- **Job Seekers**: Track n8n-related positions and consulting gigs
- **Market Research**: Analyze trends in automation project requests

Example Output
Each email digest includes:
- Job title and posting date
- AI-generated summary (e.g., "Client needs workflow automation for Shopify order processing with Slack notifications")
- Direct link to original posting
- Organized by recency
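To make the "last 7 days" date filter mentioned in the setup steps concrete, here is a minimal sketch. The created_at field name is an assumption; the workflow's actual date filter node may be configured differently.

```javascript
// Minimal sketch of a "last 7 days" filter, similar in spirit to the workflow's
// date filter node. `created_at` is an assumed field name for the posting date.
const DAYS_BACK = 7;

function filterRecent(postings, daysBack = DAYS_BACK) {
  const cutoff = Date.now() - daysBack * 24 * 60 * 60 * 1000;
  return postings.filter((p) => {
    const postedAt = new Date(p.created_at).getTime();
    return !Number.isNaN(postedAt) && postedAt >= cutoff;
  });
}

// Example usage with two hypothetical postings:
console.log(
  filterRecent([
    { title: "Automate Shopify orders", created_at: new Date().toISOString() },
    { title: "Old posting", created_at: "2020-01-01T00:00:00Z" },
  ]).map((p) => p.title)
); // ["Automate Shopify orders"]
```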
by Jameson Kanakulya
Automated Email Order Tracking System with AI Classification and Notion Sync

Overview

⚠️ Self-Hosted Solution Required
This workflow requires a self-hosted n8n instance with active integrations for Gmail, Google Gemini AI, OpenAI, and Notion. API credentials and database IDs must be configured before use.

Description
This intelligent automation system monitors your Gmail inbox for order-related emails, extracts key order information using AI, and automatically syncs the data to a Notion database for centralized order tracking. Perfect for individuals managing multiple e-commerce accounts or small businesses tracking customer orders across various platforms (Amazon, Noon, Namshi, etc.).

What This Workflow Does
- Email Monitoring: Continuously monitors the Gmail inbox for new incoming emails
- Smart Classification: Uses AI to identify order-related emails (confirmations, shipping notifications, delivery updates)
- Intelligent Extraction: Parses email content to extract order details (order number, items, prices, status, delivery info)
- Database Synchronization: Automatically creates or updates Notion database records with order information
- Status Tracking: Monitors order progression through stages (Ordered → Shipped → Out for Delivery → Delivered)

Key Features
- **Multi-vendor support**: Works with any e-commerce platform (Amazon, Noon, Carrefour, Namshi, etc.)
- **Duplicate prevention**: Searches existing records before creating new entries
- **Smart updates**: Only modifies records when the order status actually changes
- **Status validation**: Detects backward status changes (potential returns/reshipments)
- **Graceful error handling**: Handles missing data and optional fields intelligently
- **Timestamped history**: Maintains an audit trail of all status changes

Technologies Used
- **Gmail Trigger**: Email monitoring
- **JavaScript Code**: Email content classification with pattern matching
- **Google Gemini AI / OpenAI**: Natural language processing for order extraction
- **Structured Output Parser**: JSON formatting and validation
- **Notion API**: Database search, create, and update operations

Prerequisites
Before setting up this workflow, ensure you have:
- Self-hosted n8n instance (version 1.0.0 or higher)
- Gmail account with IMAP access enabled
- Google Gemini API key OR OpenAI API key
- Notion workspace with:
  - Integration access configured
  - Database created with the required schema (see below)
  - Integration token/API key

Notion Database Schema
Create a Notion database with the following properties:

Required Properties

| Property Name | Type | Description |
|--------------|------|-------------|
| Name of the Item | Title | Product/item name |
| Order Number | Text | Unique order identifier |
| Quantity | Number | Number of items |
| Expected Date | Date or Text | Expected delivery date |
| Order Status | Select | Options: Ordered, Shipped, Out for Delivery, Delivered |

Optional Properties (Recommended)

| Property Name | Type | Description |
|--------------|------|-------------|
| Vendor | Select | E-commerce platform (Amazon, Noon, etc.) |
| Customer Name | Rich Text | Order recipient name |
| Price | Number or Rich Text | Item price |
| Order Total | Number | Total order amount |
| Currency | Select | Currency code (AED, USD, SAR, etc.) |
| Delivery Location | Rich Text | Delivery city/address |
| Notes | Rich Text | Status change history |
| Created Date | Created Time | Auto-populated by Notion |
| Last Updated | Last Edited Time | Auto-populated by Notion |

Setup Instructions

Step 1: Import the Workflow
1. Copy the workflow JSON from this template
2. In your n8n instance, go to Workflows → Add Workflow → Import from File/URL
3. Paste the JSON and click Import

Step 2: Configure Gmail Trigger
1. Click on the Gmail Trigger node
2. Click Create New Credential
3. Follow the OAuth authentication flow to connect your Gmail account
4. Configure trigger settings:
   - Trigger On: Message Received
   - Filters: (Optional) Add label filters to monitor specific folders

Step 3: Configure AI Model (Choose One)

Option A: Google Gemini AI
1. Click on the Google Gemini AI Model node
2. Click Create New Credential
3. Enter your Gemini API key (obtain from Google AI Studio)
4. Select model: gemini-1.5-pro or gemini-1.5-flash

Option B: OpenAI
1. Click on the OpenAI Chat Model node
2. Click Create New Credential
3. Enter your OpenAI API key (obtain from OpenAI Platform)
4. Select model: gpt-4o or gpt-4-turbo

Step 4: Update Email Classification Node
1. Click on the Check Email Type node (JavaScript code)
2. Review the classification patterns (pre-configured for common e-commerce emails)
3. (Optional) Add custom keywords specific to your vendors

Step 5: Configure Notion Integration

5.1: Create Notion Integration
1. Go to Notion Integrations
2. Click New Integration
3. Name it (e.g., "n8n Order Tracker")
4. Select your workspace
5. Copy the Internal Integration Token

5.2: Share Database with Integration
1. Open your Notion order database
2. Click Share → Invite
3. Search for your integration name and select it
4. Grant Edit permissions

5.3: Get Database ID
1. Open your Notion database in the browser
2. Copy the database ID (the DATABASE_ID portion) from the URL: https://notion.so/workspace/DATABASE_ID?v=...

5.4: Configure Notion Nodes
1. Click on the Search a database in Notion node
2. Click Create New Credential
3. Paste your Integration Token
4. In the node parameters:
   - Database ID: Paste your database ID
   - Filter: Set to search by the Order Number property
5. Repeat the credential setup for the Create a database page in Notion and Update a database page in Notion nodes

Step 6: Update Agent Prompts
1. Click on the Email Classification and Extraction Agent node
2. Review the system prompt (pre-configured for common order emails)
3. Update the {{$now}} variable if using a different timezone
4. (Optional) Customize extraction rules for specific vendors
5. Click on the Order Database Sync Agent node
6. Replace {{notion_database_id}} with your actual database ID in the prompt
7. Review the status handling logic

Step 7: Test the Workflow
1. Click Execute Workflow to activate it
2. Send yourself a test order confirmation email
3. Monitor the execution:
   - Check if the email was classified correctly
   - Verify the extraction output in the AI agent node
   - Confirm the Notion database was updated
4. Review your Notion database for the new/updated record

Step 8: Activate for Production
1. Click the Active toggle in the top-right corner
2. The workflow will now run automatically for new emails
3. Monitor executions in the Executions tab

Workflow Node Descriptions

Email Trigger
Monitors the Gmail inbox for new incoming emails and triggers the workflow when a message is received.

Check Email Type
JavaScript code node that analyzes email content using pattern matching to identify order-related emails based on keywords, order numbers, and shipping terminology.
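A hedged sketch of the kind of pattern matching the Check Email Type node performs. The keyword list, regex, and field names here are illustrative; the template's actual code node differs.

```javascript
// Illustrative order-email check: keywords plus an order-number pattern.
// Not the template's exact code; tune both lists for your vendors.
const ORDER_KEYWORDS = [
  "order confirmation", "your order", "has shipped", "out for delivery",
  "delivered", "tracking number", "shipment",
];
const ORDER_NUMBER_PATTERN = /\b(order|ref)[\s#:-]*([A-Z0-9-]{6,})\b/i;

function isOrderEmail(email) {
  const text = `${email.subject} ${email.body}`.toLowerCase();
  const hasKeyword = ORDER_KEYWORDS.some((k) => text.includes(k));
  const hasOrderNumber = ORDER_NUMBER_PATTERN.test(`${email.subject} ${email.body}`);
  return hasKeyword || hasOrderNumber;
}

// Example:
console.log(
  isOrderEmail({
    subject: "Your order #AB12345678 has shipped",
    body: "Tracking number: 1Z999...",
  })
); // true
```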
Email Router (IF Node)
Routes emails based on classification results:
- **TRUE branch**: Order-related emails proceed to extraction
- **FALSE branch**: Non-order emails are filtered out (no action)

Email Classification and Extraction Agent
AI-powered parser using Google Gemini or OpenAI to extract structured order information:
- Order number, items, prices, quantities
- Order status (Ordered/Shipped/Out for Delivery/Delivered)
- Customer name, delivery location, expected dates
- Vendor identification

Structured Output Parser
Validates and formats AI extraction output into clean JSON for downstream processing.

Search a database in Notion
Queries the Notion database by order number to check if a record already exists, preventing duplicates.

Order Database Sync Agent
Intelligent database manager that decides whether to create new records or update existing ones based on search results and status comparison.

Create a database page in Notion
Adds new order records to Notion when no existing record is found.

Update a database page in Notion
Modifies existing records when the order status changes, appending timestamped notes for audit history.

No Action Taken
Terminates the workflow branch for non-order emails with no further processing.

Customization Options

Add More Vendors
Edit the Check Email Type node to add vendor-specific keywords:

const customVendors = [
  'your-vendor-name',
  'vendor-domain.com'
];

Modify Status Values
Update the Email Classification and Extraction Agent prompt to add custom status values or change the status progression logic.

Add Email Notifications
Insert a Send Email node after database sync to receive notifications for status changes.

Filter by Labels
Configure the Gmail Trigger to monitor only specific labels (e.g., "Orders", "Shopping").

Multi-Database Support
Duplicate the Notion sync section to route different vendors to separate databases.

Troubleshooting

Email not being classified as an order
- Check the Check Email Type node output
- Add vendor-specific keywords to the classification patterns
- Review the email content for order indicators

AI extraction returning empty data
- Verify AI model credentials are valid
- Check if email content is being passed correctly
- Review the extraction prompt for compatibility with the email format

Notion database not updating
- Confirm the integration has edit permissions on the database
- Verify the database ID is correct in all Notion nodes
- Check that property names in the workflow match your Notion schema exactly

Duplicate records being created
- Ensure the Search a database in Notion node is filtering by Order Number
- Verify the search results are being evaluated correctly in the sync agent

Status not updating
- Check if the Order Database Sync Agent is comparing the current vs new status
- Review the status comparison logic in the agent prompt

Performance Considerations
- **Email Volume**: This workflow processes each email individually. For high-volume inboxes, consider adding filters or label-based routing.
- **AI Costs**: Each email classification uses AI tokens. Monitor your API usage and costs.
- **Rate Limits**: The Notion API has rate limits (3 requests/second). The workflow handles this gracefully with built-in error handling.
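To make the status-comparison behaviour described for the Order Database Sync Agent concrete, here is a hedged sketch. The agent itself performs this via its prompt rather than via code like this, so treat it purely as an illustration of the progression and the backward-change flag.

```javascript
// Illustrative status-progression check mirroring the sync behaviour described above.
const STATUS_ORDER = ["Ordered", "Shipped", "Out for Delivery", "Delivered"];

function compareStatus(currentStatus, newStatus) {
  const current = STATUS_ORDER.indexOf(currentStatus);
  const next = STATUS_ORDER.indexOf(newStatus);
  if (next === current) return "no-change"; // skip update, avoid noisy edits
  if (next > current) return "advance";     // normal progression: update the record
  return "backward";                        // possible return/reshipment: flag in Notes
}

console.log(compareStatus("Shipped", "Out for Delivery")); // "advance"
console.log(compareStatus("Delivered", "Shipped"));        // "backward"
```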
Privacy & Security
- All email content is processed through AI APIs (Google/OpenAI) - review their privacy policies
- Notion data is stored in your workspace with your configured permissions
- No data is stored or logged outside your n8n instance, AI provider, and Notion workspace
- Consider using self-hosted AI models for sensitive order information

Support & Contributions
Found a bug or have a suggestion? Please open an issue or contribute improvements to this template!

License
This template is provided as-is under the MIT License. Feel free to modify and distribute as needed.

Credits
Created for the n8n community to streamline e-commerce order tracking across multiple platforms.
by WeblineIndia
Customer Feedback Loop Analyzer (n8n Automated Workflow)

This workflow automates the process of collecting customer feedback from forms and emails, analyzes it using AI, classifies it by category and sentiment, logs it into Google Sheets, and routes it to the right communication channels like Slack or email. It closes the feedback loop efficiently by ensuring every review is categorized, tracked, and acted upon.

Who’s it for
- Product managers wanting structured customer insights
- Customer support teams needing fast issue routing
- Engineering teams who want to be alerted to bugs quickly
- Growth & UX teams tracking feature requests and usability feedback
- Any business managing customer feedback at scale

How it works
1. Form submission trigger captures reviews submitted via customer review forms.
2. Gmail trigger listens for new feedback emails.
3. Extract details (Code node) parses sender details and extracts the actual review text.
4. AI node (LLM) summarizes the feedback, determines sentiment, and classifies it (Bug, Feature Request, UX Issue, Other).
5. Google Gemini (optional) provides advanced classification/summarization.
6. Google Sheets node logs all structured feedback for historical tracking.
7. Switch node routes feedback into separate flows by category.
8. Slack node instantly notifies the team of critical feedback (e.g., Bugs).
9. Email node sends reports to relevant stakeholders (e.g., Feature Requests to product managers).

How to set up
1. Import the workflow JSON into your n8n instance.
2. Connect credentials for:
   - Gmail (for receiving/sending feedback)
   - Google Sheets (for logging reviews)
   - Slack (for real-time team alerts)
3. Configure your Google Sheet (columns for Date, Reviewer, Sentiment, Category, Feedback).
4. Adjust the AI node prompt to reflect your team’s preferred categories.
5. Set Slack channels and email recipients for notifications.
6. Activate workflow.

Requirements
- n8n (cloud or self-hosted)
- Gmail API access (OAuth2 connected in n8n)
- Google Sheets API access
- Slack webhook or OAuth connection
- (Optional) Google Gemini or another LLM integration

How to customize
- Modify the AI prompt to classify into different categories (e.g., “Support Issue”, “Billing Problem”).
- Extend the Google Sheet schema to include product version, tags, or priority scores.
- Add a translation step if feedback is multilingual.
- Replace Slack notifications with Teams/Discord if needed.
- Connect to Jira or Trello to auto-create tasks for certain categories.

Add-ons
- **Sentiment-based alerts**: Trigger Slack notifications only if sentiment is negative.
- **Monthly report generator**: Compile all feedback into a PDF and email it automatically.
- **CRM integration**: Sync categorized feedback into HubSpot or Salesforce.
- **Auto-response emails**: Acknowledge receipt of customer feedback via Gmail.

Use Case Examples
- SaaS product team routes all Bug feedback directly to engineering Slack channel.
- UX team receives only “UX Issue” categorized feedback for design improvements.
- Marketing team logs Feature Requests into Google Sheets for roadmap prioritization.
- Customer support automatically responds with a thank-you email for all submissions.
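A hedged sketch of the structured result the AI classification step could return and the category-based routing the Switch node performs. The field names echo the Google Sheet columns and categories described above; the template's actual prompt, output format, and routing rules may differ.

```javascript
// Illustrative classification output (sentiment, category, summary) and routing decision.
// Not the template's exact schema or Switch configuration.
const exampleClassification = {
  reviewer: "jane.doe@example.com",
  summary: "Checkout page crashes when applying a discount code on mobile.",
  sentiment: "Negative",   // Positive | Neutral | Negative
  category: "Bug",         // Bug | Feature Request | UX Issue | Other
};

// Bugs go to Slack immediately, Feature Requests are emailed to product managers,
// everything else is only logged to the Google Sheet.
function routeFeedback(item) {
  if (item.category === "Bug") return "slack";
  if (item.category === "Feature Request") return "email";
  return "sheet-only";
}

console.log(routeFeedback(exampleClassification)); // "slack"
```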
Common Troubleshooting

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| Workflow doesn’t trigger | Gmail/Form node not authenticated | Reconnect Gmail / check webhook form integration |
| No data extracted | Code node parsing wrong field | Update regex/parsing logic to match email format |
| AI classification fails | Invalid LLM credentials or quota exceeded | Reconnect LLM node / check usage limits |
| Feedback not logged | Wrong Google Sheet ID or missing sharing | Verify Sheet ID and grant access to connected account |
| Slack messages not sent | Invalid webhook or channel not found | Reconfigure Slack node with valid channel/webhook |
| Email reports fail | Gmail OAuth token expired | Refresh Gmail credentials in n8n |

Need Help?
Our n8n automation experts at WeblineIndia can help you:
- Fine-tune the AI prompts for better categorization accuracy
- Build custom dashboards from your Google Sheet data
- Add multilingual feedback handling
- Connect to your ticketing system (Jira, Trello, Asana) for seamless issue tracking
by Connor Provines
AI-Powered Product-Qualified Lead (PQL) Scoring & Sales Routing

One-Line Description
Automatically score product usage signals from Amplitude cohorts and route hot leads to sales with enriched context.

Detailed Description

What it does:
This workflow transforms behavioral data into sales-ready leads by instantly detecting when users hit your PQL threshold, enriching their profile with company intelligence, and using AI to score their conversion potential. Hot leads are routed directly to sales with personalized conversation starters, while warm and cold leads enter appropriate nurture sequences.

Who it's for:
- **Product-led growth (PLG) teams** bridging the gap between product adoption and sales conversion
- **Sales development teams** needing real-time alerts on high-intent users with actionable context
- **Revenue operations professionals** optimizing lead handoff processes between product and sales

Key Features:
- **Real-time PQL detection** - Triggers instantly when users enter Amplitude behavior cohorts, eliminating manual lead review
- **Multi-source enrichment** - Combines product usage data with company intelligence from People Data Labs and AI-powered research
- **AI-driven scoring** - Evaluates usage intensity, ICP fit, intent signals, and timing to produce 0-10 lead scores with breakdown reasoning
- **Smart routing logic** - Automatically categorizes leads as hot (8-10), warm (5-7), or cold (0-4) for appropriate follow-up workflows
- **Sales enablement context** - Provides conversation starters, key insights, red flags, and handoff recommendations tailored to each lead
- **Customizable criteria** - References an external Google Doc for PQL rules, allowing non-technical teams to update scoring logic

How it works:
1. Trigger: Amplitude fires a webhook when a user enters the predefined PQL cohort based on product usage patterns
2. Enrichment: Pulls company data from People Data Labs and conducts AI research on company stage, tech sophistication, and budget indicators
3. AI Scoring: Agent evaluates the combined usage + enrichment data against ICP criteria stored in Google Docs, producing structured scoring output
4. Routing: High-scoring leads (hot) generate formatted Slack alerts for immediate sales outreach; warm/cold leads could trigger email sequences (not shown in this template)

Setup Requirements

Prerequisites:
- **Amplitude account** with cohort webhook capability (Growth plan or higher)
- **People Data Labs API key** for company/person enrichment (paid credits required)
- **Perplexity API** for AI-powered company research
- **Anthropic Claude API** for PQL scoring logic
- **Google Gemini API** for Slack message formatting
- **Slack workspace** with an OAuth app configured for posting messages
- **Google Docs** containing your PQL criteria and ICP definition (publicly readable or authenticated access)

Estimated Setup Time: 45-60 minutes including API credential configuration, Amplitude cohort definition, and PQL criteria document creation

Installation Notes
- **Amplitude cohort setup**: Define your PQL cohort using behavioral criteria (e.g., "Users who viewed 5+ pages AND invited team members in last 7 days"). Configure the webhook to fire on cohort entry.
- **PQL criteria document**: Create a Google Doc outlining your scoring components (usage intensity factors, ICP requirements, intent signals). Update the Google Docs Tool node with your document URL.
- **Free email filtering**: The workflow includes logic to flag free email domains (Gmail, Yahoo, etc.),
  which you may want to route differently.
- **Testing tip**: Use Amplitude's "Test Webhook" feature to send sample payloads before going live.

Customization Options
- **Replace People Data Labs** with Clearbit, Apollo, or other enrichment providers by swapping the HTTP Request node
- **Add CRM integration** to automatically create opportunities or update lead scores in Salesforce/HubSpot
- **Extend routing paths** by adding branches for warm/cold leads (e.g., trigger email sequences via Customer.io, Braze)
- **Adjust scoring weights** by modifying the AI agent prompt or criteria document without touching workflow logic
- **Multi-channel alerts** by duplicating output nodes to send to email, SMS, or CRM tasks in addition to Slack

Category
Sales

Tags
amplitude pql product-qualified-leads sales-automation lead-scoring enrichment people-data-labs slack-notifications ai-scoring revenue-operations

Use Case Examples
- **SaaS PLG companies**: Automatically escalate free trial users who hit usage milestones (API calls, integrations connected, team invites sent) to sales for upgrade conversations
- **Developer tools**: Identify enterprise-ready accounts based on team size growth, deployment patterns, and GitHub integration usage, routing to the enterprise sales team
- **B2B marketplaces**: Surface buyers showing high-intent behavior (multiple searches, saved items, pricing page views) to account executives with company context for proactive outreach
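A minimal sketch of the free-email-domain flag mentioned in the installation notes. The domain list and field handling here are illustrative, not the template's exact logic.

```javascript
// Illustrative free-email-domain check; extend the set to suit your routing rules.
const FREE_EMAIL_DOMAINS = new Set([
  "gmail.com", "yahoo.com", "outlook.com", "hotmail.com", "icloud.com", "aol.com",
]);

function isFreeEmailDomain(email) {
  const domain = email.split("@").pop()?.toLowerCase() ?? "";
  return FREE_EMAIL_DOMAINS.has(domain);
}

// Leads on free domains might be routed to a nurture path instead of enrichment,
// since company enrichment from a personal address usually yields little signal.
console.log(isFreeEmailDomain("jane@gmail.com"));     // true
console.log(isFreeEmailDomain("jane@acme-corp.com")); // false
```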
by Dev Dutta
Geopolitics Breaking News Alert System

Workflow Name: Geopolitics Breaking News Alert System
Author: Devjothi Dutta
Category: Productivity, News & Media, AI/Machine Learning
Complexity: Medium
Setup Time: 45-60 minutes

📖 Description
An intelligent geopolitical monitoring system that filters 200+ daily news articles down to only the critical breaking news that matters to you. This workflow uses smart keyword filtering and AI-powered scoring to eliminate noise, reduce AI costs, and deliver only high-priority geopolitical alerts to Telegram.

The Problem: Traditional news monitoring is overwhelming - hundreds of articles per hour, 95% irrelevant to your region of interest, no urgency prioritization, and critical breaking news gets buried in noise.

The Solution: This workflow combines dual-layer filtering (primary + secondary keywords) with AI scoring to distinguish actual breaking news from general news coverage. By filtering first and scoring second, you reduce AI API costs by 80-90% while ensuring you never miss critical geopolitical developments. Switch between monitoring India, China, the Middle East, Russia-Ukraine, or any region by simply changing a configuration file.

Perfect for government analysts, corporate security teams, investment research firms, news organizations, or anyone who needs to stay informed about geopolitical developments without information overload.

👥 Who's it for

For Government & Defense Analysts:
- Monitor specific regions for military actions, diplomatic developments, and security threats
- Filter by mission-critical keywords to eliminate irrelevant news
- AI scoring identifies genuine breaking news vs routine coverage
- Reduce analyst workload by 90% through intelligent automation

For Corporate Security & Risk Teams:
- Track geopolitical risks affecting global supply chains and operations
- Custom keyword filters for industry-specific concerns (e.g., "semiconductor", "tariff", "sanctions")
- Real-time alerts for events impacting business continuity
- Cost-efficient monitoring with minimal AI API usage

For Investment Research Firms:
- Monitor emerging market geopolitical risks affecting portfolio companies
- AI scoring differentiates market-moving events from background noise
- Configurable alert thresholds based on investment strategy (conservative vs aggressive)
- Track multiple regions simultaneously with different configs

For News Organizations & Journalists:
- Monitor breaking geopolitical developments for editorial coverage
- Filter by urgency to prioritize assignment desk resources
- Aggregate multiple international news sources in one place
- Extend alerts to newsroom Slack channels or email

✨ Key Features
- 🎯 Smart Dual-Layer Filtering - Primary keywords ensure regional relevance, secondary keywords filter by event type (military, diplomatic, economic)
- 🤖 AI-Powered Urgency Scoring - GPT-4o-mini scores articles 1-10 based on geopolitical urgency, distinguishing breaking news from routine coverage
- 💰 Cost-Efficient Design - Filter first, score second approach reduces AI API calls by 80-90% (only ~5 articles analyzed out of 200)
- 🌍 Multi-Region Support - Monitor India, China, the Middle East, Russia-Ukraine, or any region by switching config files
- 📰 Multi-Source RSS Aggregation - Combines 6 international news sources (NYT, BBC, Al Jazeera, SCMP, regional feeds)
- 🔄 Duplicate Detection - Persistent storage prevents re-analyzing the same articles across multiple executions
- 📊 Consolidated Alerts - Single Telegram message with all breaking news, grouped by urgency score
- ⏰ Flexible Scheduling - Configure the trigger
  interval per your needs (15min for active conflicts, 3hr for routine monitoring)
- 💾 Config-Driven Architecture - All filters, keywords, and scoring rules live in a Google Drive JSON file
- 🔒 Production Ready - Tested end-to-end with real-world India and China configurations
- 📈 Scalable Design - Run multiple regional configs in parallel, extend to Slack/WhatsApp/Email delivery

🛠️ Requirements

Required Services:
1. n8n (version 1.0+) - Workflow automation platform
   - Free tier: n8n cloud or self-hosted Docker
   - Required feature: Data Tables (for duplicate tracking)
2. OpenAI API (GPT-4o-mini) - AI scoring engine
   - Cost: ~$0.10/day for 30min intervals
   - Free tier: $5 credit for new accounts
3. Telegram Bot - Alert delivery
   - Free: Create via @BotFather on Telegram
   - Get chat ID via @userinfobot
4. Google Drive - Config file storage
   - Free: Any Google account
   - Used for publicly shared JSON config files

Required Credentials:
- **OpenAI API Key** - Get from platform.openai.com (GPT-4o-mini access)
- **Telegram Bot Token** - Create a bot via @BotFather, get the token
- **n8n Data Table** - Built-in n8n feature (no external credential)

Optional:
- Slack Webhook URL (for extending alerts to Slack)
- SMTP credentials (for email alerts)
- Twilio account (for WhatsApp/SMS alerts)

📦 What's Included
This workflow package includes:
- Complete n8n workflow JSON (ready to import)
- Complete setup guide - detailed configuration with Data Table setup and troubleshooting
- Technical architecture documentation
- Use cases and customization guide
- 4 pre-built regional configs (India, China, Middle East, Russia-Ukraine)

🚀 Quick Start
Full setup takes 45-60 minutes. For detailed step-by-step instructions, see SETUP_GUIDE.md

Overview
1. Create the n8n Data Table (analyzed_articles with 2 columns)
2. Upload the config to Google Drive (choose region, share publicly, get the file ID)
3. Import the workflow (22 nodes ready to configure)
4. Configure nodes:
   - Update the Google Drive config URL with your file ID
   - Update the 6 RSS Feed URLs for your region
   - Link the 3 Data Table nodes to the analyzed_articles table
5. Add credentials (OpenAI API, Telegram Bot)
6. Set the schedule (15min-daily based on monitoring needs)
7. Test the workflow (verify filtering, scoring, and alerts work)
8. Activate (workflow runs automatically on schedule)

Quick Start Result:
- ✅ 200+ articles processed → 5-7 filtered → 3-5 scored → 1-3 alerts sent
- ✅ Telegram receives a consolidated breaking news message
- ✅ Workflow runs every 30min (or your chosen interval)
- ✅ Total monthly cost: $3-5 (OpenAI API only)

Need help? See the detailed SETUP_GUIDE.md for complete instructions with screenshots and troubleshooting.

📊 Workflow Stats
- **Nodes:** 22
- **Complexity:** Medium
- **Execution Time:** ~30-60 seconds per run
- **Monthly Cost:** $3-5 (OpenAI API usage only)
- **Maintenance:** Minimal (update RSS feeds if sources change)
- **Scalability:** Handles 200+ articles per execution, easily scales to 10+ RSS feeds

🎨 Customization Options
- **Add more regions:** Create new config JSON files for North Korea, Taiwan, Africa, Latin America, etc.
- **Multi-channel alerts:** Extend to Slack, WhatsApp, Email, Discord, Microsoft Teams, SMS
- **Severity-based routing:** Send critical alerts (score 9-10) via SMS, others to Telegram
- **Custom scoring models:** Switch between GPT-4o-mini, GPT-4o, or Claude based on config
- **Exclude keywords:** Add an "exclude_keywords" array to filter out sports, entertainment, weather
- **Alert digest mode:** Aggregate alerts into daily/weekly summary emails instead of real-time
- **Dashboard integration:** Connect to Grafana or Metabase for visual trend analysis
- **Webhook triggers:** Use workflow output to trigger other n8n workflows or external systems
- **Custom RSS feeds:** Add industry-specific or regional news sources
- **Adjust alert threshold:** Change from score >= 6 to higher/lower based on notification preferences

🔧 How it Works
1. Schedule Trigger (Configurable):
   - Workflow runs at your configured interval (15min, 30min, 1hr, 3hr, daily, etc.)
   - Trigger frequency depends on the use case: active conflicts need more frequent monitoring
2. Config Loading:
   - HTTP Request node fetches the JSON config from Google Drive
   - Config contains: keywords, scoring rules, AI role, alert threshold, Telegram chat ID
3. RSS Aggregation:
   - 6 RSS Feed nodes fetch articles from international news sources
   - Merge node combines all feeds (~200 articles per execution)
   - RSS Cleanup node strips HTML and normalizes to 5 fields (60-75% size reduction)
4. Smart Filtering (Cost Optimization Layer 1):
   - Dynamic Filter checks PRIMARY keywords (geographic/entity: "india", "modi", "delhi")
   - Also checks SECONDARY keywords (event type: "military", "conflict", "trade deal")
   - Both conditions required: an article must mention at least one primary AND one secondary keyword
   - Result: 200 articles reduced to ~5-7 relevant articles (95% reduction)
   - Why this matters: eliminates noise BEFORE expensive AI scoring
5. Duplicate Detection (Cost Optimization Layer 2):
   - Queries the Data Table for previously analyzed article links
   - Filters out articles already scored in the last 7 days
   - Result: 5-7 filtered articles reduced to 3-5 new articles
   - Why this matters: prevents redundant AI API calls (saves 80% on repeat articles)
6. Dynamic AI Prompt Generation:
   - Code node builds the system prompt from config.ai_role and config.scoring_criteria
   - Instructs the AI: "You are a geopolitical analyst for [REGION]. Score articles 1-10..."
   - Includes the scoring rubric: 9-10 = Military Action, 7-8 = Trade/Economic, etc.
AI Urgency Scoring (Breaking News Detection):
- The Breaking News Analyzer (GPT-4o-mini) evaluates geopolitical urgency
- Scores 1-10: distinguishes genuine breaking news from routine coverage
- Returns: score, category, reasoning, should_alert (true/false based on threshold)
- Cost: $0.002 per article (only 3-5 articles are scored per execution)

Alert Decision:
- An IF node checks: should_alert === true (score >= config.alert_threshold)
- Only high-priority alerts proceed to Telegram
- Articles below the threshold are logged but not sent

Alert Aggregation:
- Consolidates multiple breaking news alerts into a single Telegram message
- Groups by urgency score with color-coded emojis (🔴 9-10, 🟠 7-8, 🟡 6-7)
- Includes score, category, title, and link for each alert

Telegram Delivery:
- Sends the consolidated alert to the configured Telegram chat
- Uses HTML formatting for bold text and clickable links
- The chat ID is loaded dynamically from the config (different regions → different chats)
- A hedged sketch of the aggregation and HTML formatting appears at the end of this listing

💡 Pro Tips
- **Start with a Higher Threshold:** Begin with alert_threshold = 7 to avoid alert fatigue, then lower it to 6 after tuning keywords
- **Regional RSS Matters:** Use region-specific news sources for better coverage (e.g., Times of India for India, not just BBC/NYT)
- **Test Keywords First:** Run the workflow manually with "Test Workflow" to verify keyword filtering before activating the schedule
- **Monitor AI Costs:** Check the OpenAI usage dashboard after the first week to confirm the ~$0.10/day cost estimate
- **Tune Secondary Keywords:** Add domain-specific terms to the secondary keywords (e.g., "semiconductor" for tech supply chain monitoring)
- **Use Separate Configs for Critical Regions:** Clone the workflow for high-priority regions instead of switching configs manually
- **Schedule Based on Time Zones:** Align execution intervals with business hours in the monitored region (e.g., 9AM-6PM IST for India)
- **Clear Duplicates for Testing:** Manually clear the analyzed_articles Data Table when testing new configs for fresh results
- **Back Up Working Configs:** Export and version control config files before making major keyword changes
- **Consider Alert Fatigue:** Score 9-10 events are rare (0-1 per day), score 6-8 events are common (2-5 per day) - set the threshold accordingly

🔗 Related Workflows
- **Multi-Region Geopolitics Dashboard** - Combine multiple regional configs into a single monitoring dashboard
- **Geopolitical Risk Scoring for Portfolios** - Integrate with stock portfolio data to assess investment risk
- **Automated Geopolitical Intelligence Reports** - Generate daily/weekly PDF reports from breaking news data
- **Conflict Escalation Tracker** - Track score trends over time to detect escalating tensions
- **Supply Chain Risk Alerting** - Focus on trade/sanctions news affecting global supply chains

📧 Support & Feedback

For questions, issues, or feature requests:
- **GitHub:** n8n-geopolitics-breaking-news-alert repository
- **n8n Community Forum:** Tag @devdutta
- **Email:** devjothi@gmail.com

📄 License

MIT License - Free to use, modify, and distribute

⭐ If you find this workflow useful, please share your feedback and star the workflow!
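Referring back to the Alert Aggregation and Telegram Delivery steps above, here is a hedged JavaScript sketch of how a consolidated, HTML-formatted Telegram message could be assembled from the analyzer output; the workflow's actual aggregation code may differ.

```javascript
// Group scored alerts by urgency band and build one HTML-formatted Telegram message.
// Field names (score, category, title, link) follow the analyzer output described above.
function buildTelegramMessage(alerts) {
  const emojiFor = (score) => (score >= 9 ? "🔴" : score >= 7 ? "🟠" : "🟡");
  const lines = alerts
    .sort((a, b) => b.score - a.score) // highest urgency first
    .map(a => `${emojiFor(a.score)} <b>[${a.score}/10] ${a.category}</b>\n<a href="${a.link}">${a.title}</a>`);
  return ["🚨 <b>Breaking News Alerts</b>", ...lines].join("\n\n");
}

// Example:
// buildTelegramMessage([{ score: 9, category: "Military Action",
//   title: "Border clash reported", link: "https://example.com/article" }]);
```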
by Jitesh Dugar
Automated AI-Powered Testimonial Processing & Social Media Workflow

Overview:
This comprehensive workflow automates the entire testimonial collection and publishing process, from submission to social-media-ready content. It uses AI to enhance testimonials, generates beautiful branded cards, and implements an approval step before posting.

Key Features:
✅ Webhook-based submission - Accept testimonials via API
🤖 AI Enhancement - GPT-4 polishes grammar while maintaining authenticity
🎨 Automated Design - Generates professional 800x600px testimonial cards
☁️ Cloud Storage - Uploads to Google Drive with organized naming
📊 Database Logging - Tracks all testimonials in Google Sheets
🔔 Team Notifications - Slack alerts for new and approved testimonials
✅ Approval Workflow - Manual review before social media posting
🔄 Scheduled Checker - Auto-detects approved testimonials every 5 minutes

Workflow Steps:

Main Flow (Testimonial Processing):
1. Receives a testimonial via webhook (POST request)
2. Validates and cleans the data (name, testimonial, photo, email)
3. Enhances the testimonial using GPT-4 Turbo
4. Generates an HTML template with the customer details
5. Converts the HTML to a PNG image (800x600px)
6. Uploads the image to Google Drive
7. Logs all data to Google Sheets with "Pending Approval" status
8. Sends a Slack notification to the review team

Approval Flow (Scheduled Check):
1. Runs every 5 minutes automatically
2. Checks Google Sheets for approved testimonials
3. Filters for testimonials not yet posted
4. Sends a ready-to-post Slack notification with formatted text
5. Marks the testimonial as processed in the database

Use Cases:
- SaaS companies collecting customer feedback
- Marketing agencies managing client testimonials
- E-commerce businesses showcasing reviews
- Course creators featuring student success stories
- Any business automating social proof collection

What Makes This Workflow Special:
- **Zero manual design work** - Beautiful cards generated automatically
- **AI-powered quality** - Professional grammar enhancement
- **Audit trail** - Complete tracking in Google Sheets
- **Approval control** - Review before publishing
- **Duplicate prevention** - Smart matching by Drive ID
- **Flexible input** - Accepts multiple field name variations (a field-normalization sketch appears at the end of this guide)

🔧 Required Integrations:
- OpenAI API (GPT-4 Turbo) - AI testimonial enhancement
- HTML/CSS to Image API - Screenshot generation
- Google Drive OAuth2 - Image storage
- Google Sheets OAuth2 - Database management
- Slack API - Team notifications

📋 Prerequisites:
- n8n instance (self-hosted or cloud)
- OpenAI API key (https://platform.openai.com)
- HTML/CSS to Image account (https://htmlcsstoimg.com) - Free tier available
- Google Cloud project with the Drive & Sheets APIs enabled
- Slack workspace with app permissions

🚀 Setup Instructions:

1. Import Workflow
- Download the JSON file
- Import it into your n8n instance
- Replace the placeholder credentials (see below)

2. Configure Credentials
Add these credentials in n8n:
- **OpenAI API** - Your API key
- **htmlcsstoimgApi** - User ID and API key
- **Google Drive OAuth2** - Configure an OAuth app
- **Google Sheets OAuth2** - Same Google Cloud project
- **Slack API** - Create a Slack app with the chat:write scope

3. Update Configuration
Replace in the JSON:
- **Google Drive Folder ID** - Your testimonial storage folder
- **Google Sheets ID** - Your database spreadsheet
- **Slack Channel ID** - Your notification channel
4. Test the Workflow
Send a POST request to your webhook URL:
{ "name": "Sarah Johnson", "designation": "Marketing Director", "photo_url": "https://i.pravatar.cc/400?img=5", "testimonial_text": "Working with this team was amazing!", "email": "sample@gmail.com" }

📊 Google Sheets Setup:
Create a Google Sheet with these columns: Timestamp, Name, Designation, Original Testimonial, Testimonial (Enhanced), Image Link, Drive ID, Status, Email, Original Length, Enhanced, Source, Posted to Social, Posted At

🎨 Customization Options:
- Modify the AI prompt for different enhancement styles
- Change the HTML template colors/design
- Add more validation rules
- Integrate with the Twitter/LinkedIn APIs for auto-posting
- Add email notifications instead of Slack
- Include a rating/score system
- Add custom approval fields

🆘 Troubleshooting:

Webhook not receiving data:
- Check that the webhook URL is correct
- Verify the HTTP method is POST
- Ensure the Content-Type is application/json

AI enhancement failing:
- Verify the OpenAI API key is valid
- Check API usage limits
- Ensure sufficient credits

Image not generating:
- Confirm the htmlcsstoimg credentials are correct
- Check that the HTML template has no errors
- Verify you haven't exceeded the free tier limit

Google Drive upload failing:
- Re-authenticate the OAuth2 connection
- Check that the folder ID is correct
- Verify folder permissions

🏆 Perfect For:
- Marketing teams
- Customer success teams
- Product managers
- Social media managers
- Growth hackers
- Agency owners

⚖️ License:
Free to use and modify for personal and commercial projects.
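As a supplement to the "Validates and cleans data" step and the "Flexible input" feature above, here is a hedged JavaScript sketch of a normalization function for the incoming webhook payload. The accepted field-name aliases are assumptions for illustration; adjust them to whatever your form or API client actually sends.

```javascript
// Normalize a testimonial webhook payload that may use different field names,
// and reject submissions that lack the required fields.
function normalizeTestimonial(body) {
  // Return the first non-empty string found under any of the candidate keys.
  const pick = (...keys) => {
    for (const k of keys) {
      if (typeof body[k] === "string" && body[k].trim() !== "") return body[k].trim();
    }
    return "";
  };
  const testimonial = {
    name: pick("name", "customer_name", "full_name"),
    designation: pick("designation", "title", "role"),
    photo_url: pick("photo_url", "photo", "avatar"),
    testimonial_text: pick("testimonial_text", "testimonial", "message"),
    email: pick("email", "customer_email")
  };
  if (!testimonial.name || !testimonial.testimonial_text) {
    throw new Error("Missing required fields: name and testimonial_text");
  }
  return testimonial;
}
```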
by Jitesh Dugar
Automated Client Onboarding Workflow

This n8n workflow automates the end-to-end client onboarding process: capturing client details, validating emails, assigning tiers, generating welcome packs, creating tasks, notifying teams, archiving records, and sending weekly reports.

Who's It For
- **B2B SaaS companies** onboarding new customers
- **Agencies** handling structured client setups
- **Sales & customer success teams** needing automation
- **Consulting firms** aiming for error-free onboarding

⚙️ How It Works
1. Capture client details through a Webhook (connected to forms).
2. Validate the client's email using Verifi Email.
3. Log onboarding data into Google Sheets.
4. Assign a tier (Basic/Pro/Enterprise) via a Function node (a hedged sketch of this logic appears below).
5. Create a Trello task card with onboarding steps.
6. Generate a personalized Welcome Pack PDF with client details.
7. Send a Slack notification to the internal team with the client details.
8. Download and attach the PDF, then send a personalized welcome email to the client.
9. Archive the structured onboarding data in Airtable.
10. Weekly scheduled report:
   - Collects Airtable onboarding data
   - Processes weekly stats (plans, tiers, counts)
   - Sends an onboarding summary via email to the manager

🛠️ How to Set Up
1. Webhook Setup
2. Install & configure credentials:
   - Verifi Email key
   - Google Sheets OAuth2
   - Airtable OAuth2
   - Gmail OAuth2
   - Slack OAuth2
   - Trello API
3. Optional: Customize the Welcome PDF template (HTML/CSS).
4. Edit the tier assignment logic inside the Assign Tier Logic node.
5. Modify the Slack & email templates to match your branding.
6. Adjust the schedule for weekly reports (default: Monday 9 AM IST).
7. Test with a sample payload: { "name": "Jane Doe", "email": "jane@acme.com", "company": "Acme Corp", "plan": "Pro" }

📋 Requirements
- Self-hosted or Cloud n8n
- Credentials: Verifi Email, Google Sheets, Airtable, Gmail, Slack, Trello
- Optional: API for company enrichment

⚠️ Note: The HTML/CSS to PDF node (used for report generation) has a limit of 10 free requests. For production usage, you'll need an API plan.
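Here is a minimal JavaScript sketch of what the Assign Tier Logic node could look like, assuming the tier is derived from the submitted `plan` field (as in the sample payload above). The template's actual rules may also weigh budget or company size, so treat this as illustrative.

```javascript
// Map the submitted plan to a tier and a routing priority.
// Assumption: tier follows the "plan" field; extend the rules for budget or company size as needed.
function assignTier(client) {
  const plan = (client.plan || "").toLowerCase();
  const tier =
    plan === "enterprise" ? "Enterprise" :
    plan === "pro"        ? "Pro" :
                            "Basic";
  // Priority is an illustrative extra field used for Slack/Trello routing.
  const priority = tier === "Enterprise" ? "High" : tier === "Pro" ? "Medium" : "Normal";
  return { ...client, tier, priority };
}

// Example: assignTier({ name: "Jane Doe", plan: "Pro" })
//   => { name: "Jane Doe", plan: "Pro", tier: "Pro", priority: "Medium" }
```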
⭐ Core Features
- **Email Validation:** Blocks fake/spam signups
- **Tier Assignment:** Auto-classifies clients into Basic/Pro/Enterprise
- **Task Management:** Trello cards for the onboarding checklist
- **Welcome PDF Pack:** Branded, client-personalized PDF attachment
- **Slack Notifications:** Real-time internal updates
- **Airtable Archiving:** Permanent record-keeping
- **Weekly Reports:** Automated onboarding summaries for managers

📈 Use Cases & Applications
- **B2B SaaS:** Scale client onboarding without hiring more staff
- **Agencies:** Deliver smooth onboarding experiences
- **Sales Teams:** Reduce delays in CRM entry
- **Customer Success:** Focus on relationship-building instead of admin

✅ Key Benefits
- Saves 5–6 hours of manual onboarding per client
- Ensures error-free onboarding with email validation
- Provides a professional, branded experience
- Improves collaboration with the Slack + Trello integration
- Scales seamlessly as client volume grows

🔧 Customization Options
- Modify the tier logic (e.g., budget, plan, company size)
- Customize the Slack channel or Trello list for task routing
- Update the PDF branding (logo, theme, styling)
- Add extra onboarding steps (e.g., Calendly call scheduling)
- Extend the weekly reports (e.g., include ROI or CSM notes)

⚠️ Important Disclaimers
- For educational & automation purposes
- Ensure compliance with GDPR/CCPA before storing client data
- Always test the workflow with dummy data before production

Workflow Components
- **Webhook Trigger** → Captures client form submissions
- **Verifi Email** → Validates the client email
- **Google Sheets** → Logs onboarding entries
- **Code Node** → Assigns tier & priority
- **Trello** → Creates a task card for the CSM
- **HTML/CSS to PDF** → Generates the Welcome Pack PDF
- **Slack** → Notifies the team about the new client
- **Gmail** → Sends the welcome email with the PDF
- **Airtable** → Archives the full onboarding record
- **Schedule Trigger** + **Report** → Weekly summary to management (a hedged sketch of the stats aggregation follows below)
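As referenced in the Workflow Components list, here is a hedged JavaScript sketch of the weekly report aggregation, assuming the Airtable records expose `plan` and `tier` fields; map the field names to your actual base before using it.

```javascript
// Aggregate the week's onboarding records into counts by plan and tier.
// Field names ("plan", "tier") are assumptions for illustration.
function summarizeWeek(records) {
  const countBy = (key) =>
    records.reduce((acc, r) => {
      const value = r[key] || "Unknown";
      acc[value] = (acc[value] || 0) + 1;
      return acc;
    }, {});
  return {
    totalOnboarded: records.length,
    byPlan: countBy("plan"),   // e.g. { Basic: 3, Pro: 5, Enterprise: 1 }
    byTier: countBy("tier")
  };
}

// Example: summarizeWeek([{ plan: "Pro", tier: "Pro" }, { plan: "Basic", tier: "Basic" }])
//   => { totalOnboarded: 2, byPlan: { Pro: 1, Basic: 1 }, byTier: { Pro: 1, Basic: 1 } }
```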