by KPendic
This n8n flow demonstrates a basic DevOps task: DNS record management. An AI agent with a lightweight prompt acts as a getter and setter for DNS records. In this case we manage a remote DNS zone via API calls that are handled on the Cloudflare platform side. The flow can be used standalone, or chained into your pipeline to build more powerful infrastructure flows for your needs.

How it works
We created a basic agent and gave it a prompt describing one tool: cf_tool, a sub-routine (pointing back to this flow itself, or to a separate dedicated one). The prompt defines the arguments that must be passed when calling the agent for each action. The tool itself contains a basic if/switch that, based on the requested action, calls the matching Cloudflare API endpoint and passes the arguments down. A sketch of this switch is shown at the end of this listing.

Requirements
For storing and processing data in this flow you will need:
CloudFlare.com API key/token - for retrieving your data (https://dash.cloudflare.com/?to=/:account/api-tokens)
OpenAI credentials (or any other LLM provider) - for agent chat
(Optional) Postgres table for saving chat history

Official Cloudflare API documentation
For full details and specifications please use the API documentation at: https://developers.cloudflare.com/api/

LinkedIn post
Let me know if you found this flow useful on my LinkedIn post > here.

tags: #cloudflare, #dns, #domain
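The action switch can be pictured as a small dispatcher. Below is a minimal TypeScript sketch, assuming a zone ID and API token from your Cloudflare credentials; the helper name and argument shape are hypothetical, but the endpoints are the documented Cloudflare v4 DNS record routes.

```typescript
// Minimal sketch of the tool's action switch (hypothetical helper, not the exact node setup).
// zoneId and token are assumed to come from your Cloudflare credentials.
const CF_API = "https://api.cloudflare.com/client/v4";

type DnsAction = "list" | "create";

async function cfTool(action: DnsAction, zoneId: string, token: string,
                      record?: { type: string; name: string; content: string; ttl?: number }) {
  const headers = { Authorization: `Bearer ${token}`, "Content-Type": "application/json" };

  switch (action) {
    case "list": // getter: read all DNS records for the zone
      return fetch(`${CF_API}/zones/${zoneId}/dns_records`, { headers }).then(r => r.json());
    case "create": // setter: add a new record
      return fetch(`${CF_API}/zones/${zoneId}/dns_records`, {
        method: "POST",
        headers,
        body: JSON.stringify({ ttl: 3600, ...record }),
      }).then(r => r.json());
  }
}
```

The agent only has to supply the action plus the record fields; everything Cloudflare-specific stays inside the tool.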
by Evervise
🤖 AI Business Automation Opportunity Finder

Turn automation audits into high-ticket sales with this ROI-focused n8n workflow powered by 4 specialized AI agents that identify and quantify automation opportunities in any business.

What It Does
This workflow analyzes any business and delivers a comprehensive automation blueprint with concrete ROI calculations in under 60 seconds. Perfect for agencies, consultants, and automation experts looking to generate qualified leads and close high-value deals. Unlike generic automation advice, this delivers personalized, quantified opportunities ranked by return on investment - making it incredibly easy for prospects to say yes.

🤖 Four Specialized AI Agents
**Business Analyst** - Deep analysis of business model, workflows, pain points, tech stack, and scalability challenges
**Process Mapper** - Maps all repetitive processes, calculates time waste, identifies bottlenecks across the entire operation
**Automation Architect** - Designs 15+ specific automation solutions with tools, complexity ratings, and implementation steps
**ROI Calculator** - Calculates detailed ROI for each automation, ranks the top 10, creates a 90-day implementation roadmap

✨ Key Features
**Concrete Dollar Savings**: Every automation shows exact time saved, labor cost saved, and payback period
**Top 10 Ranked by ROI**: Opportunities prioritized by impact vs. effort with detailed financial analysis
**90-Day Implementation Roadmap**: Month-by-month plan showing progressive savings milestones
**Comprehensive Process Mapping**: Identifies inefficiencies they didn't even mention
**Tool-Specific Recommendations**: Exact tools and platforms needed (n8n, Zapier, Make, etc.)
**Beautiful HTML Reports**: Professional, conversion-focused email with 3-tier pricing built in
**Multiple CTAs**: Strategically placed conversion points throughout the report

📊 What Gets Analyzed

Business Analysis
Business model and revenue streams
Operational workflows and processes
Current tech stack assessment
Team capacity and resource allocation
Growth stage and scalability blockers
Industry-specific automation patterns

Process Mapping
Comprehensive workflow documentation
Time waste analysis (hours per month)
Bottleneck identification
Process dependencies and integration opportunities
Quick win vs. strategic project categorization

Automation Architecture
For each of 15+ automation opportunities:
Clear description of what it automates
Specific tools required
Step-by-step implementation flow
Complexity rating (Easy/Medium/Hard)
Prerequisites and requirements
Additional benefits beyond time savings
Real-world use case examples

ROI Calculations
For each automation (a minimal calculation sketch follows at the end of this listing):
Time saved per week/month/year
Labor cost savings (calculated from team size/industry)
One-time implementation cost
Ongoing monthly costs
Payback period in months
12-month net savings
ROI percentage
Priority score (0-10)

💼 Perfect For
**Automation Agencies**: High-value lead magnet that pre-sells your services
**Business Consultants**: Demonstrate ROI before engagement
**No-Code Developers**: Show concrete value of your expertise
**Digital Transformation Consultants**: Quantify the opportunity
**SaaS Companies**: Lead gen for automation/workflow tools
**Freelancers**: Land bigger clients with data-driven proposals

🚀 Why This Converts Better Than Other Lead Magnets

Traditional Lead Magnets:
Generic advice ("You should automate")
Subjective benefits ("Save time")
No clear next steps
Conversion rate: 5-10%

This Workflow:
**Specific to their business** (personalized analysis)
**Quantified in dollars** ($50K+ annual savings)
**Prioritized action plan** (top 10 ranked by ROI)
**Clear implementation path** (90-day roadmap)
**Conversion rate: 20-30%** to strategy call
**40-50% of calls close** to paid engagement

📈 Expected Business Results

Per 100 Form Submissions:
**25-30 strategy calls booked** (25-30% conversion)
**10-15 deals closed** (40-50% call-to-close rate)
**$12K-18K in initial revenue** (mix of Tier 1 & 2)
**2-4 retainer clients** ($30K-60K annual value)
**Total potential: $42K-78K** from 100 leads

Why It Works:
**Self-qualifying**: Detailed form filters serious prospects
**Pre-sold**: They see the value before the call
**ROI-focused**: Speaks CFO language (dollars, not features)
**Urgency**: Shows money being wasted daily
**Social proof**: Built-in testimonials and case studies

📋 What You Need

Required
n8n instance (self-hosted or cloud)
Anthropic API key (Claude Sonnet 4.5)
Gmail account or SMTP provider

Optional Enhancements
CRM integration (HubSpot, Salesforce, Pipedrive)
Slack notifications for high-value leads
Calendly for automatic call booking
Zapier/Make for additional workflows
Analytics tracking (Mixpanel, Segment)

⚙️ Technical Details
**AI Model**: Claude Sonnet 4.5 (4 sequential agents)
**Average Runtime**: 50-70 seconds
**Cost Per Analysis**: ~$0.20-0.30
**Form Fields**: 9 (business description, industry, team size, tasks, tools, bottleneck, revenue, email, name)
**Output**: Comprehensive HTML email with all analyses, pricing, and CTAs

🎨 Customization Options
The workflow is fully customizable and includes detailed documentation:
Adjust ROI calculation parameters (labor rates by industry)
Modify agent prompts for specific niches
Customize pricing tiers and packages
Add/remove form fields
White-label the entire report
Integrate with your CRM/marketing stack
Segment responses by company size or revenue
Add video walkthroughs or personalized messages
Create industry-specific versions

📊 Form Fields Explained
The 9-field form is strategically designed to gather intelligence:
Business Description (textarea): Core operations and offerings
Industry/Niche (text): Context for automation patterns
Team Size (dropdown): Affects ROI calculations and tool recommendations
Repetitive Tasks (textarea): Gold mine for automation opportunities
Current Tools (textarea): Integration points and tech stack assessment
Biggest Bottleneck (textarea): Primary pain point for targeting
Monthly Revenue (optional dropdown): For accurate ROI estimates and lead scoring
Email (required): For report delivery
Name (required): For personalization

🔧 Setup Difficulty
Basic - Requires basic n8n knowledge and API configuration

Setup Steps
Import the workflow JSON to n8n
Add Anthropic API credentials
Configure Gmail/SMTP credentials
Customize branding and pricing in the email template
Test with sample business scenarios
Deploy the form on your website
Set up follow-up sequences (recommended)

📚 Included Documentation
**Comprehensive sticky notes** for every component
**Setup instructions** with prerequisites
**Customization guide** for different industries
**Pricing strategy** breakdown and alternatives
**Conversion optimization** tips
**Follow-up sequence** recommendations
**Sales script** suggestions for strategy calls
**Marketing promotion** ideas

🌟 Advanced Use Cases
1. Lead Magnet: Embed on your website to capture qualified automation leads continuously
2. Discovery Tool: Use during sales calls to demonstrate immediate value and build credibility
3. Content Marketing: Offer in LinkedIn posts, email campaigns, and YouTube videos for viral growth
4. Partner Program: White-label for partners/affiliates to generate leads in their networks
5. Upsell Sequence: For existing clients, identify additional automation opportunities
6. Industry Templates: Create versions for specific industries (real estate, e-commerce, agencies)
7. Competitive Intelligence: Analyze competitor operations and position your services

⚡ Why This Workflow Stands Out

Compared to Generic Automation Audits:
✅ Quantified in dollars vs. vague "save time" claims
✅ Personalized to their business vs. generic templates
✅ Prioritized by ROI vs. random feature lists
✅ Actionable roadmap vs. overwhelming possibilities
✅ Tool-specific vs. theoretical concepts

Compared to Manual Analysis:
✅ 60 seconds vs. 2-3 hours of consultant time
✅ $0.25 cost vs. $300-500 in labor
✅ Consistent quality vs. variable analyst experience
✅ Scalable vs. bottlenecked by human capacity
✅ 24/7 available vs. business hours only

🤝 Support & Community
📖 Website: https://evervise.ai/
✨ Support: mark.marin@evervise.com
N8N Link

🎁 Bonus Resources Included
**Follow-up email sequence** (3 emails over 10 days)
**Sales call script** for strategy calls
**Objection handling** guide
**Pricing calculator** spreadsheet
**Marketing assets** (social media templates)
**Case study template** for testimonials

Tags
automation lead-generation roi-calculator business-analysis process-mapping ai-agents anthropic claude workflow-automation business-consulting no-code n8n-workflows high-ticket-sales conversion-optimization saas-tools

Ready to turn automation audits into recurring revenue? Import this workflow and start attracting qualified leads who can see the exact dollar value you provide before they even talk to you. Average user results: $42K-78K revenue from first 100 form submissions.
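For the ROI Calculations listed above, the arithmetic the ROI Calculator agent performs can be sketched deterministically. This is an illustrative example only: the field names, the $40/hour labor rate, and the priority formula are assumptions, not the workflow's actual prompt logic.

```typescript
// Illustrative ROI math for one automation opportunity.
interface Opportunity {
  hoursSavedPerWeek: number;
  implementationCost: number; // one-time, USD
  monthlyCost: number;        // ongoing tooling/maintenance, USD
}

function scoreOpportunity(o: Opportunity, hourlyRate = 40) {
  const monthlySavings = o.hoursSavedPerWeek * 4.33 * hourlyRate;        // avg weeks per month
  const paybackMonths = o.implementationCost / Math.max(monthlySavings - o.monthlyCost, 1);
  const netSavings12mo = monthlySavings * 12 - (o.implementationCost + o.monthlyCost * 12);
  const roiPercent = (netSavings12mo / (o.implementationCost + o.monthlyCost * 12)) * 100;
  // Priority favors fast payback and high net savings, clamped to 0-10 (assumed heuristic).
  const priority = Math.max(0, Math.min(10, 10 - paybackMonths + netSavings12mo / 10000));
  return { monthlySavings, paybackMonths, netSavings12mo, roiPercent, priority };
}
```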
by Julian Kaiser
What problem does this solve?
Earlier this year, as I got more involved with n8n, I committed to helping users on our community forums and the n8n subreddit. The volume of questions was growing, and I found it was a real challenge to keep up and make sure no one was left without an answer. I needed a way to quickly see what people were struggling with, without spending hours just searching for new posts.

So, I built this workflow. It acts as my personal AI research assistant. Twice a day, it automatically scans Reddit and the n8n forums for me. It finds relevant questions, summarizes the key points using AI, and sends me a digest with direct links to each post. This allows me to jump straight into the conversations that matter and provide support effectively.

While I built this for n8n support, you can adapt it to monitor any community, track product feedback, or stay on top of any topic you care about. It transforms noisy forums into an actionable intelligence report delivered right to your inbox.

How it works
Here's the technical breakdown of my two-part system (a sketch of the digest-formatting step follows after this listing):

AI Reddit Digest (Daily at 9 AM / 5 PM):
Fetches the latest 50 posts from a specified subreddit.
Uses an AI Text Classifier to categorize each post (e.g., QUESTION, JOB_POST).
Isolates the posts classified as questions and uses an AI model to generate a concise summary for each.
Formats the original post link and its new summary into an email-friendly format and sends the digest.

AI n8n Forum Digest (Daily at 9 AM / 5 PM):
Scrapes the n8n community forum to get a list of the latest post links.
Processes each link individually, fetching the full post content.
Filters these posts to keep only those containing a specific keyword (e.g., "2025").
Summarizes the filtered posts using an AI model.
Combines the original post link with its AI summary and sends it in a separate email report.

Set up steps
This workflow is quite powerful and requires a few configurations. Setup should take about 15 minutes.
Add Credentials: First, add your credentials for your AI provider (like OpenRouter) and your email service (like Gmail or SMTP) in the Credentials section of your n8n instance.
Configure Reddit Digest: In the Get latest 50 reddit posts node, enter the name of the Subreddit you want to follow. Fine-tune the AI's behavior by editing the prompt in the Summarize Reddit Questions node. (Optional) Add more examples to the Text Classifier node to improve its accuracy.
Configure n8n Forum Digest: In the Filter 2025 posts node, change the keyword to track topics you're interested in. Edit the prompt in the Summarize n8n Forum Posts node to guide the AI's summary style.
Activate Workflow: Once configured, just set the workflow to Active. It will run automatically on schedule. You can also trigger it manually with the When clicking 'Test workflow' node.
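The "email-friendly format" step of the Reddit digest essentially pairs each question post with its summary and link. A minimal sketch of that formatting logic, with an assumed post shape (title, url, summary) standing in for the real upstream node output:

```typescript
// Sketch of the digest-formatting step (what the email-prep Code node does).
// The post shape is an assumption about the classifier/summarizer output.
interface QuestionPost { title: string; url: string; summary: string; }

function buildDigest(posts: QuestionPost[]): { subject: string; body: string } {
  const sections = posts.map(
    (p, i) => `${i + 1}. ${p.title}\n${p.url}\nSummary: ${p.summary}`
  );
  return {
    subject: `Community digest – ${posts.length} open questions`,
    body: sections.join("\n\n"),
  };
}

// Example usage:
console.log(buildDigest([
  { title: "How do I loop over items?", url: "https://reddit.com/r/n8n/example", summary: "Asks how to batch-process items." },
]).body);
```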
by Incrementors
Overview:
This n8n workflow automates the complete blog publishing process from topic research to WordPress publication. It researches topics, writes SEO-optimized content, generates images, publishes posts, and notifies teams—all automatically from Google Sheets input.

How It Works:

Step 1: Client Management & Scheduling
**Client Data Retrieval:** Scans the master Google Sheet for clients with "Active" project status and "Automation" blog publishing setting
**Publishing Schedule Validation:** Checks if the current day matches the client's weekly frequency (Mon, Tue, Wed, Thu, Fri, Sat, Sun) or if set to "Daily"
**Content Source Access:** Connects to the client-specific Google Sheet using the stored document ID and sheet name

Step 2: Content Planning & Selection
**Topic Filtering:** Retrieves rows where "Status for Approval" = "Approved" and "Live Link" = "Pending"
**Content Validation:** Ensures the Focus Keyword field is populated before proceeding
**Single Topic Processing:** Selects the first available topic to maintain quality and prevent API rate limits

Step 3: AI-Powered Research & Writing
**Comprehensive Research:** Google Gemini analyzes search intent, competitor content, audience needs, trending subtopics, and LSI keywords
**Content Generation:** Creates 800-1000 word articles with natural keyword integration, internal linking, and a conversational tone optimized for Indian investors
**Quality Assessment:** Evaluates content for human-like writing, conversational tone, readability, and engagement factors
**Content Optimization:** Automatically fixes grammar, punctuation, sentence flow, and readability issues while maintaining HTML structure

Step 4: Visual Content Creation
**Image Prompt Generation:** OpenAI creates detailed prompts based on blog title and content for professional visuals
**Image Generation:** Ideogram AI produces 1248x832 resolution images with realistic styling and professional appearance
**Binary Processing:** Downloads and converts generated images to binary format for WordPress upload

Step 5: WordPress Publication (see the API sketch at the end of this listing)
**Media Upload:** Uploads the generated image to the WordPress media library with proper filename and headers
**Content Publishing:** Creates a new WordPress post with title, optimized content, and embedded image
**Featured Image Assignment:** Sets the uploaded image as the post's featured thumbnail for proper display
**Category Assignment:** Automatically assigns posts to a predefined category

Step 6: Tracking & Communication
**Status Updates:** Updates the Google Sheet with the live blog URL in the "Live Link" column using S.No. as identifier
**Team Notification:** Sends a Discord message to the designated channel with the published blog link and a review request
**Process Completion:** Triggers the next iteration or workflow conclusion based on remaining topics

Setup Steps:
Estimated Setup Time: 45-60 minutes

Required API Credentials:
1. Google Sheets API: Service account with sheets access; OAuth2 credentials for client-specific sheets; proper sharing permissions for all target sheets
2. Google Gemini API: Active API key with sufficient quota; access to the Gemini Pro model for content generation; rate limiting considerations for bulk processing
3. OpenAI API: GPT-4 access for creative prompt generation; sufficient token allocation for daily operations; fallback handling for API unavailability
4. Ideogram AI API: Premium account for quality image generation; API key with generation permissions; understanding of rate limits and pricing
5. WordPress REST API: Application passwords for each client site; basic authentication setup with proper encoding; REST API enabled in WordPress settings; user permissions for post creation and media upload
6. Discord Bot API: Bot token with message sending permissions; channel ID for notifications; guild access and proper bot roles

Master Sheet Configuration:
Document Structure: Create a primary tracking sheet with these columns
**Client Name:** Business identifier
**Project Status:** Active/Inactive/Paused
**Blog Publishing:** Automation/Manual/Disabled
**Website URL:** Full WordPress site URL with trailing slash
**Blog Posting Auth Code:** Base64-encoded username:password
**On Page Sheet:** Google Sheets document ID for content planning
**WeeklyFrequency:** Daily/Mon/Tue/Wed/Thu/Fri/Sat/Sun
**Discord Channel:** Channel ID for notifications

Content Planning Sheet Structure:
Required Columns (exact naming required):
**S.No.:** Unique identifier for tracking
**Focus Keyword:** Primary SEO keyword
**Content Topic:** Article title/subject
**Target Page:** Internal linking target
**Words:** Target word count
**Brief URL:** Content brief reference
**Content URL:** Draft content location
**Status for Approval:** Pending/Approved/Rejected
**Live Link:** Published URL (auto-populated)

WordPress Configuration:
**REST API Activation:** Ensure wp-json endpoint accessibility
**User Permissions:** Create a dedicated user with Editor or Administrator role
**Application Passwords:** Generate secure passwords for API authentication
**Category Setup:** Create or identify the category ID for automated posts
**Media Settings:** Configure upload permissions and file size limits
**Security:** Whitelist IP addresses if using security plugins

Discord Integration Setup:
**Bot Creation:** Create an application and bot in the Discord Developer Portal
**Permissions:** Grant Send Messages, Embed Links, and Read Message History
**Channel Configuration:** Set up a dedicated channel for blog notifications
**User Mentions:** Configure the user ID for targeted notifications
**Message Templates:** Customize notification format and content

Workflow Features & Capabilities:

Content Quality Standards:
**SEO Optimization:** Natural keyword integration with LSI keywords and related terms
**Readability:** Conversational tone with short sentences and clear explanations
**Structure:** Proper HTML formatting with headings, lists, and internal links
**Length:** Consistent 800-1000 word count for optimal engagement
**Audience Targeting:** Content tailored for an Indian investor audience with relevant examples

Image Generation Specifications:
**Resolution:** 1248x832 pixels optimized for blog headers
**Style:** Realistic professional imagery with human subjects
**Design:** Clean layout with heading text placement (bottom or left side)
**Quality:** High-resolution output suitable for web publishing
**Branding:** Light beige to gradient backgrounds with golden overlay effects

Error Handling & Reliability:
**Graceful Failures:** Workflow continues even if individual steps encounter errors
**API Rate Limits:** Built-in delays and retry mechanisms for external services
**Data Validation:** Checks for required fields before processing
**Backup Processes:** Alternative paths for critical failure points
**Logging:** Comprehensive tracking of successes and failures

Security & Access Control:
**Credential Encryption:** All API keys stored securely in the n8n vault
**Limited Permissions:** Service accounts with minimum required access
**Authentication:** Basic auth for WordPress with encoded credentials
**Data Privacy:** No sensitive information exposed in logs or outputs
**Access Logging:** Track all sheet modifications and blog publications

Troubleshooting:
Common Issues:
**API Rate Limits:** Check your API quotas and usage limits
**WordPress Authentication:** Verify your basic auth credentials are correct
**Sheet Access:** Ensure the Google Sheets API has proper permissions
**Image Generation Fails:** Check the Ideogram API key and quotas

Need Help?
For technical support or questions:
Email: info@incrementors.com
Contact Form: https://www.incrementors.com/contact-us/
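Step 5 boils down to two WordPress REST calls: upload the image to the media library, then create the post with that image as the featured thumbnail. A hedged sketch of those calls is below; siteUrl, authCode (the Base64 username:application-password string from the master sheet), and categoryId stand in for values the workflow reads from Google Sheets.

```typescript
// Sketch of the WordPress publication step. Endpoints are the standard
// /wp-json/wp/v2 media and posts routes; parameter values are assumptions.
async function publishPost(siteUrl: string, authCode: string, title: string,
                           html: string, image: Buffer, categoryId: number) {
  const auth = { Authorization: `Basic ${authCode}` };

  // 1. Upload the generated image to the media library
  const media = await fetch(`${siteUrl}wp-json/wp/v2/media`, {
    method: "POST",
    headers: { ...auth, "Content-Disposition": 'attachment; filename="featured.png"',
               "Content-Type": "image/png" },
    body: image,
  }).then(r => r.json());

  // 2. Create the post and attach the uploaded image as featured media
  return fetch(`${siteUrl}wp-json/wp/v2/posts`, {
    method: "POST",
    headers: { ...auth, "Content-Type": "application/json" },
    body: JSON.stringify({ title, content: html, status: "publish",
                           featured_media: media.id, categories: [categoryId] }),
  }).then(r => r.json());
}
```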
by Dean Pike
CV → Match → Screen → Decide, all automated

This workflow automatically processes candidate CVs from email, intelligently matches them to job descriptions, performs AI-powered screening analysis, and sends actionable summaries to your team in Slack.

Good to know
Handles both PDF and Word document CVs automatically
Two-stage JD matching: prioritizes the role mentioned in the candidate's email, falls back to CV analysis if needed
Uses the Google Gemini API for AI screening (generous free tier and rate limits, typically enough to avoid paying for API requests, but check latest pricing at Google AI Pricing)
All CVs stored in Google Drive with standardized naming (candidate name + date/time)
Complete audit trail logged in Google Sheets

Who's it for
Hiring teams and recruiters who want to automate first-round CV screening while maintaining quality. Perfect for companies receiving high volumes of applications across multiple roles, especially in tech, sales, or automation-focused positions.

How it works
Gmail monitors the inbox for CVs with a specific label and downloads attachments
Detects file type (PDF or Word) and converts/standardizes the format for text extraction
AI agent matches the candidate to the best-fit job description by analyzing email context first (if the candidate mentioned a role), or CV content as fallback (selects up to 3 potential JD matches)
If multiple JDs matched, a second AI agent selects the single best fit
AI recruiter agent analyzes the CV against the selected JD and generates a structured screening report (strengths, weaknesses, risk/reward factors, overall fit score 0-10 with justification)
Extracts candidate details (name, email) from the CV text
Logs the complete analysis to a Google Sheets tracker
Sends a formatted summary to Slack with Proceed/Reject action buttons for instant team decisions (see the Block Kit sketch after this listing)

Requirements
Gmail account with API access
Google Drive account (OAuth2)
Google Sheets account (OAuth2)
Slack workspace with bot permissions
Google Gemini API key (Get free key here)
Google Drive folders: one for CVs, one for Job Descriptions (as PDFs or Google Docs)

How to set up
Add credentials: Gmail OAuth2, Google Drive OAuth2, Google Sheets OAuth2, Slack OAuth2, Google Gemini API
Create a Gmail label (e.g., "CV-Screening") for incoming candidate emails
In the "Receive CV via Email" node: select your Gmail label for filtering
Create two Google Drive folders: "Candidate CVs" and "Job Descriptions"
In the "Upload CV - PDF" and "Stream Doc/Docx File" nodes: update the folder ID to your "Candidate CVs" folder
In the "Access JD Files" node: update the folder ID to your "Job Descriptions" folder
Create a Google Sheet named "AI Candidate Screening" with columns matching the sample AI Candidate Screening sheet
In the "Append row in sheet" node: select your Google Sheet
In the "Send Candidate Screening Confirmation" node: select your Slack channel
Activate the workflow

Customizing this workflow
Change JD matching logic: Edit the "JD Matching Agent" node prompt to adjust how CVs are matched to roles (e.g., weight technical skills vs. experience).
Change "Company Description" in AI prompts: Insert your "Company Description" in the System Message sections of the "JD Matching Agent" and "Detailed JD Matching Agent" nodes
Modify screening criteria: Edit the "Recruiter Scoring Agent" node system message to focus on specific qualities (culture fit, leadership, technical depth, etc.)
Add more storage locations: Add nodes to save CVs to other systems (Notion, Airtable, ATS platforms)
Customize Slack message: Edit the "Send Candidate Screening Confirmation" node to change formatting, add more context, or include additional candidate data
Auto-proceed logic: Add an IF node after screening to auto-proceed candidates with a fit score above a threshold (e.g., 8+/10)
Add email responses: Connect nodes to automatically email candidates (confirmation, rejection, interview invite)
Add human-in-the-loop: Sub-workflow triggered by Slack response or email response, to update the Sheet with approve/reject status
**Add candidate email responses + interview scheduling**: For approved candidates, trigger an email to the candidate with a Cal.com or Calendly link so they can book their interview

Quick Troubleshooting
No CVs being processed: Check the Gmail label is correctly set in the "Receive CV via Email" node and emails are being labeled
Word documents failing: Verify the "Stream Doc/Docx File" node has the correct parent folder ID and Google Drive credentials are authorized
JD matching returns no results: Check the "Access JD Files" node folder ID points to your Job Descriptions folder, and JD files are named clearly (e.g., "Marketing Director JD.pdf")
JD matching is not relevant for my company: Update the "Company Description" in the System Messages of the "JD Matching Agent" and "Detailed JD Matching Agent" nodes
"Can't find matching JD": Ensure the candidate's email mentions the role name OR their CV clearly indicates relevant experience for available JDs
Google Sheets errors: Verify the sheet name is "AI Candidate Screening" and column headers exactly match workflow expectations (Submission ID, Date, CV, First Name, etc.)
Slack message not appearing: Re-authorize Slack credentials and confirm the channel ID in the "Send Candidate Screening Confirmation" node
Missing candidate name/email: CV text must be readable - check PDF extraction quality or try converting complex CVs to a simpler format
401/403 API errors: Re-authorize all OAuth2 credentials (Gmail, Google Drive, Google Sheets, Slack)
AI analysis quality issues: Edit system prompts in the "JD Matching Agent" and "Recruiter Scoring Agent" nodes to refine screening criteria

Sample Outputs
Google Sheets - AI Candidate Screening - sample
Slack confirmation message

Acknowledgments
This workflow was inspired by Nate Herk's YouTube demonstration on building a resume analysis system. This implementation builds upon that foundation by adding dynamic job description matching (initial + detailed JD matching agents), Slack Block Kit integration with interactive buttons, updated Google Drive API methods for document handling, and enhanced candidate data capture in Google Sheets.
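The Slack summary with Proceed/Reject buttons uses Slack's Block Kit. Here is a minimal sketch of such a payload, plus the auto-proceed threshold check mentioned under customization; the field names (candidate, fitScore, summary) and action IDs are assumptions, not the template's exact message structure.

```typescript
// Sketch of a Block Kit payload for the screening summary with decision buttons.
function screeningMessage(candidate: string, fitScore: number, summary: string) {
  return {
    blocks: [
      { type: "section",
        text: { type: "mrkdwn", text: `*${candidate}* – fit score *${fitScore}/10*\n${summary}` } },
      { type: "actions",
        elements: [
          { type: "button", text: { type: "plain_text", text: "Proceed" },
            style: "primary", action_id: "proceed", value: candidate },
          { type: "button", text: { type: "plain_text", text: "Reject" },
            style: "danger", action_id: "reject", value: candidate },
        ] },
    ],
  };
}

// Optional auto-proceed: skip manual review when the score clears a threshold (e.g. 8+).
const autoProceed = (fitScore: number, threshold = 8) => fitScore >= threshold;
```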
by Harry Siggins
This n8n template transforms your daily meeting preparation by automatically researching attendees and generating comprehensive briefing documents. Every weekday morning, it analyzes your calendar events, researches each external attendee using multiple data sources, and delivers professionally formatted meeting briefs directly to your Slack channel.

Who's it for
Business professionals, sales teams, account managers, and executives who regularly attend meetings with external contacts and want to arrive fully prepared with relevant context, conversation starters, and strategic insights about their attendees.

How it works
The workflow triggers automatically Monday through Friday at 6 AM, fetching your day's calendar events and filtering for meetings with external attendees (a sketch of this filter follows after this listing). For each meeting, an AI agent researches attendees using your CRM (Attio), email history (Gmail), past calendar interactions, and external company research via Perplexity when needed. The system then generates structured meeting briefs containing attendee background, relationship context, key talking points, and strategic objectives, delivering everything as a formatted Slack message to start your day.

Requirements
Google Calendar with OAuth2 credentials
Gmail with OAuth2 credentials
Slack workspace with bot token and channel access
Attio CRM with API bearer token
OpenRouter API key for AI model access (or other API credentials to connect your AI agents to a model)
Perplexity API key for company research

How to set up
Configure credentials for all required services in your n8n instance
Update personal identifiers in the workflow:
Replace "YOUR_EMAIL@example.com" with your actual calendar email in both Google Calendar nodes
Replace "YOUR_SLACK_CHANNEL_ID" with your target channel ID in both Slack nodes
Adjust AI models in the OpenRouter nodes based on your preferences and model availability
Test the workflow manually with a day that contains external meetings
Verify Slack formatting appears correctly in your channel

How to customize the workflow
Change meeting research depth: Modify the AI agent prompt to focus on specific research areas like company financial data, recent news, or technical background.
Adjust notification timing: Update the cron expression in the Schedule Trigger to run at different times or days.
Expand CRM integration: Add additional Attio API calls to capture more contact details or create follow-up tasks.
Enhance Slack formatting: Customize the Block Kit message structure in the JavaScript code node to include additional meeting metadata or visual elements.
Add more research sources: Connect additional tools like LinkedIn, company databases, or news APIs to the AI agent for richer attendee insights.

The template uses multiple AI models through OpenRouter for different processing stages, allowing you to optimize costs and performance by selecting appropriate models for research tasks versus text formatting operations.
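The "external attendees" filter mentioned under How it works can be as simple as checking attendee email domains against your own. A minimal sketch, assuming a simplified event shape and a single internal domain:

```typescript
// Sketch of the external-attendee filter applied to the day's calendar events.
// The event shape and internalDomain are assumptions for illustration.
interface CalendarEvent { summary: string; attendees?: { email: string }[]; }

function externalMeetings(events: CalendarEvent[], internalDomain: string): CalendarEvent[] {
  return events.filter(ev =>
    (ev.attendees ?? []).some(a => !a.email.toLowerCase().endsWith(`@${internalDomain}`))
  );
}

// Example: keep only meetings that include someone outside example.com
const briefTargets = externalMeetings(
  [{ summary: "Quarterly review", attendees: [{ email: "cfo@client.io" }] }],
  "example.com"
);
```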
by Tom
This n8n workflow template simplifies the process of digesting cybersecurity reports by summarizing, deduplicating, organizing, and identifying viral topics of interest in daily emails. It generates two types of emails:
A daily digest with summaries of deduplicated cybersecurity reports organized into various topics.
A daily viral topic report with summaries of recurring topics that have been identified over the last seven days.
This workflow helps threat intelligence analysts digest the high number of cybersecurity reports they must analyse daily by decreasing the noise and tracking topics of importance with additional care, while remaining customizable with regard to sources and output format.

How it works
The workflow follows the threat intelligence lifecycle as labelled by the coloured notes.
Every morning, collect news articles from a set of RSS feeds.
Merge the feeds' output and prepare it for LLM consumption (a sketch of this step follows after this listing).
Task an LLM with writing an intelligence briefing that summarizes, deduplicates, and organizes the topics.
Generate and send an email with the daily digest.
Collect the daily digests of the last seven days and prepare them for LLM consumption.
Task an LLM with writing a report that covers 'viral' topics that have appeared prominently in the news.
Store this report and send it out over email.

How to use & customization
The workflow will trigger daily at 7am. The workflow can be reused for other types of news as well. The RSS feeds can be swapped out and the AI prompts can easily be altered. The parameters used for the viral topic identification process can easily be changed (number of previous days considered, requirements for a topic to be 'viral').

Requirements
The workflow leverages Gemini (free tier) for email content generation and Baserow for storing generated reports. The viral topic identification relies on the Gemini Pro model because of the higher data quantity and more complex task. An SMTP email account must be provided to send the emails. This can be through Gmail.
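The "prepare them for LLM consumption" step typically means flattening the merged RSS items into one compact text block the prompt can reference. A minimal sketch under that assumption; the field names mirror typical RSS Feed Read output (title, link, contentSnippet) but may differ from the actual nodes:

```typescript
// Sketch of the merge/prepare step: turn RSS items from multiple feeds into a
// numbered plain-text block for the LLM prompt.
interface FeedItem { title: string; link: string; contentSnippet?: string; }

function prepareForLlm(items: FeedItem[], maxChars = 400): string {
  return items
    .map((it, i) =>
      `${i + 1}. ${it.title}\n${it.link}\n${(it.contentSnippet ?? "").slice(0, maxChars)}`)
    .join("\n\n");
}
```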
by Akshay
Overview
This project is an AI-powered hotel receptionist built using n8n, designed to handle guest queries automatically through WhatsApp. It integrates Google Gemini, Redis, MySQL, and Google Sheets via LangChain to create an intelligent conversational system that understands and answers booking-related questions in real time. A standout feature of this workflow is its AI model-switching system — it dynamically assigns users to different Gemini models, balancing traffic, improving performance, and reducing API costs.

How It Works

WhatsApp Trigger
The workflow starts when a hotel guest sends a message through WhatsApp. The system captures the message text, contact details, and session information for further processing.

Redis-Based Model Management
The workflow checks Redis for a saved record of the user's previously assigned AI model. If no record exists, a Model Decider node assigns a new model (e.g., Gemini 1 or Gemini 2). Redis then stores this model assignment for an hour, ensuring consistent routing and controlled traffic distribution (see the sketch after this listing).

Model Selector
The Model Selector routes each user's request to the correct Gemini instance, enabling parallel execution across multiple AI models for faster response times and cost optimization.

AI Agent Logic
The LangChain AI Agent serves as the system's reasoning core. It:
Interprets guest questions such as: "Who checked in today?" "Show me tomorrow's bookings." "What's the price for a deluxe suite for two nights?"
Generates safe, read-only SQL SELECT queries.
Fetches the requested data from the MySQL database.
Combines this with dynamic pricing or promotions from Google Sheets, if available.

Response Delivery
Once the AI Agent formulates an answer, it sends a natural-sounding message back to the guest via WhatsApp, completing the interaction loop.

Setup & Requirements

Prerequisites
Before deploying this workflow, ensure the following:
**n8n Instance** (local or hosted)
**WhatsApp Cloud API** with messaging permissions
**Google Gemini API Key** (for both models)
**Redis Database** for user session and model routing
**MySQL Database** for hotel booking and guest data
**Google Sheets Account** (optional, for pricing or offer data)

Step-by-Step Setup
Configure Credentials: Add all API credentials in n8n → Settings → Credentials (WhatsApp, Redis, MySQL, Google).
Prepare Databases: MySQL table examples: bookings(id, guest_name, room_type, check_in, check_out) and rooms(id, type, rate, status). Ensure the MySQL user has read-only permissions.
Set Up Redis: Create Redis keys for each user: llm-user:<whatsapp_id> = { "modelIndex": 0 }, TTL: 3600 seconds (1 hour).
Connect Google Sheets (Optional): Add your sheet under Google Sheets OAuth2. Use it to manage room rates, discounts, or seasonal offers dynamically.
WhatsApp Webhook Configuration: In Meta's Developer Console, set the webhook URL to your n8n instance. Select message updates to trigger the workflow.
Testing the Workflow: Send messages like "Who booked today?" or a voice message. Confirm responses include real data from MySQL and contextual replies.

Key Features
**Text & voice support** for guest interactions
**Automatic AI model-switching** using Redis
**Session memory** for context-aware conversations
**Read-only SQL query generation** for database safety
**Google Sheets integration** for live pricing and availability
**Scalable design** supporting multiple LLM instances

Example Guest Queries

| Guest Query | AI Response Example |
|--------------|--------------------|
| "Who checked in today?" | "Two guests have checked in today: Mr. Ahmed (Room 203) and Ms. Priya (Room 410)." |
| "How much is a deluxe room for two nights?" | "A deluxe room costs $120 per night. The total for two nights is $240." |
| "Do you have any discounts this week?" | "Yes! We're offering a 10% weekend discount on all deluxe and suite rooms." |
| "Show me tomorrow's check-outs." | "Three check-outs are scheduled tomorrow: Mr. Khan (101), Ms. Lee (207), and Mr. Singh (309)." |

Customization Options

🧩 Model Assignment Logic
You can modify the Model Decider node to:
Assign models based on user load, region, or priority level.
Increase or decrease the TTL in Redis for longer model persistence.

🧠 AI Agent Prompt
Adjust the system prompt to control tone and response behavior — for example:
Add multilingual support.
Include upselling or booking confirmation messages.

🗂️ Database Expansion
Extend MySQL to include:
Staff schedules
Maintenance records
Restaurant reservations
Then link new queries in the AI Agent node for richer responses.

Tech Stack
**n8n** – Workflow automation & orchestration
**Google Gemini (PaLM)** – LLM for reasoning & generation
**Redis** – Model assignment & session management
**MySQL** – Booking & guest data storage
**Google Sheets** – Dynamic pricing reference
**WhatsApp Cloud API** – Messaging interface

Outcome
This workflow demonstrates how AI automation can transform hotel operations by combining WhatsApp communication, database intelligence, and multi-model AI reasoning. It's a production-ready foundation for scalable, cost-optimized, AI-driven hospitality solutions that deliver fast, accurate, and personalized guest interactions.
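The Redis-based model assignment described above (key llm-user:<whatsapp_id>, one-hour TTL) can be sketched as follows. The random assignment rule and the Node redis client are assumptions for illustration; in the workflow this logic is split across the Model Decider and Redis nodes.

```typescript
// Sketch of the Model Decider + Redis lookup. Key format and TTL follow the description above.
import { createClient } from "redis";

async function getAssignedModel(whatsappId: string, modelCount = 2): Promise<number> {
  const redis = createClient(); // assumes a default/local Redis connection
  await redis.connect();
  const key = `llm-user:${whatsappId}`;

  const existing = await redis.get(key);
  if (existing) { await redis.quit(); return JSON.parse(existing).modelIndex; }

  // No record yet: pick a model (simple random spread) and cache it for an hour.
  const modelIndex = Math.floor(Math.random() * modelCount);
  await redis.set(key, JSON.stringify({ modelIndex }), { EX: 3600 });
  await redis.quit();
  return modelIndex;
}
```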
by Yusuke Yamamoto
Daily AI News Summary & Gmail Delivery

This n8n template demonstrates how to build an autonomous AI agent that automatically scours the web for the latest news, intelligently summarizes the top stories, and delivers a professional, formatted news digest directly to your email inbox.

Use cases are many: Create a personalized daily briefing to start your day informed, keep your team updated on industry trends and competitor news, or automate content curation for your newsletter.

Good to know
At the time of writing, costs will depend on the LLM you select via OpenRouter and your usage of the Tavily Search API. Both services offer free tiers to get started.
This workflow requires API keys and credentials for OpenRouter, Tavily, and Gmail.
The AI Agent's system prompt is configured to produce summaries in Japanese. You can easily change the language and topics by editing the prompt in the "AI News Agent" node.

How it works
The workflow begins on a daily schedule, which you can configure to your preferred time.
A Code node dynamically generates a search query for the current day's most important news across several categories.
The AI Agent receives this query and uses its attached tools to perform the task: it uses the Tavily News Search tool to find relevant, up-to-date articles from the web, then uses the OpenRouter Chat Model to analyze the search results, identify the most significant stories, and write a summary for each.
The agent's output is strictly structured into a JSON format, containing a main title and an array of individual news stories.
Another Code node takes this structured JSON data and transforms it into a clean, professional HTML-formatted email (see the sketch after this listing).
Finally, the Gmail node sends the beautifully formatted email to your specified recipient.

How to use
Before you start, you must add your credentials for OpenRouter, Tavily, and Gmail in their respective nodes.
Customize the schedule in the "Schedule Trigger" node to set the daily delivery time.
Change the recipient's email address in the final "Send a message" (Gmail) node.

Requirements
OpenRouter account (for access to various LLMs)
Tavily AI account (for the real-time search API)
Google account with Gmail enabled for sending emails via OAuth2

Customising this workflow
**Change the delivery channel:** Easily swap the final Gmail node for a Slack, Discord, or Telegram node to send the news summary to a team channel.
**Focus the news topics:** Modify the "Prepare News Query" node to search for highly specific topics, such as "latest advancements in artificial intelligence" or "financial news from the European market."
**Archive the news:** Add a node after the AI Agent to save the structured JSON data to a database or Google Sheet, allowing you to build a searchable news archive over time.
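The Code node that turns the agent's structured JSON into the HTML email can be sketched like this; the story schema (title, summary, url) is an assumption about the agent's structured output, not the template's exact field names.

```typescript
// Sketch of the JSON-to-HTML email transformation.
interface NewsStory { title: string; summary: string; url: string; }
interface NewsDigest { mainTitle: string; stories: NewsStory[]; }

function toHtmlEmail(d: NewsDigest): string {
  const items = d.stories
    .map(s => `<li><strong>${s.title}</strong><br>${s.summary}<br><a href="${s.url}">${s.url}</a></li>`)
    .join("");
  return `<h1>${d.mainTitle}</h1><ul>${items}</ul>`;
}
```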
by Manav Desai
WhatsApp RAG Chatbot with Supabase, Gemini 2.5 Flash, and OpenAI Embeddings

This n8n template demonstrates how to build a WhatsApp-based AI chatbot that answers user questions using document retrieval (RAG) powered by Supabase for storage, OpenAI embeddings for semantic search, and the Gemini 2.5 Flash LLM for generating high-quality responses.

Use cases are many: Turn your WhatsApp into a knowledge assistant for FAQs, customer support, or internal company documents — all without coding.

Good to know
The workflow uses OpenAI embeddings for both document embeddings and query embeddings, ensuring accurate semantic search.
**Gemini 2.5 Flash LLM** is used to generate user-friendly answers from the retrieved context.
Messages are processed in real-time and sent back directly to WhatsApp.
The workflow is modular — you can split document ingestion and query handling for large-scale setups.
Supabase and WhatsApp API credentials must be configured before running.

How it works
Trigger: A new WhatsApp message triggers the workflow via webhook.
Message Check: Determines if the message is a query or a document upload.
Document Handling: Fetch the file URL from WhatsApp. Convert the binary to text. Generate embeddings with OpenAI and store them in Supabase.
Query Handling: Generate query embeddings with OpenAI. Retrieve relevant context from Supabase. Pass the context to the Gemini 2.5 Flash LLM to compose a response. (A sketch of this path follows after this listing.)
Response: Send the answer back to the user on WhatsApp.
Optional: Add a Gmail node to forward chat logs or daily summaries.

How to use
Configure the WhatsApp Business API webhook for incoming messages.
Add your Supabase and OpenAI credentials in n8n's credentials manager.
Upload documents via WhatsApp to populate the Supabase vector store.
Ask queries — the bot retrieves context and answers using Gemini 2.5 Flash.

Requirements
**WhatsApp Business API** (or Twilio WhatsApp Sandbox)
**Supabase account** (vector storage for embeddings)
**OpenAI API key** (for generating embeddings)
**Gemini API access** (for LLM responses)

Customising this workflow
Swap WhatsApp with Telegram, Slack, or email for different chat channels.
Extend ingestion to other sources like Google Drive or Notion.
Adjust the number of retrieved documents or the prompt style in Gemini for tone control.
Add a Gmail output node to send logs or alerts automatically.
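The query-handling path (embed the question, retrieve the nearest chunks from Supabase, hand the context to Gemini) can be sketched as below. The match_documents RPC is the common pgvector setup used with Supabase vector stores and the embedding model name is illustrative — both are assumptions about your Supabase schema and configuration.

```typescript
// Sketch of the retrieval step: question -> embedding -> nearest chunks from Supabase.
import OpenAI from "openai";
import { createClient } from "@supabase/supabase-js";

async function retrieveContext(question: string, topK = 4): Promise<string[]> {
  const openai = new OpenAI();                                               // reads OPENAI_API_KEY
  const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

  const emb = await openai.embeddings.create({
    model: "text-embedding-3-small", // illustrative choice; use the same model as ingestion
    input: question,
  });

  // Assumes a pgvector similarity function named match_documents exists in your project.
  const { data, error } = await supabase.rpc("match_documents", {
    query_embedding: emb.data[0].embedding,
    match_count: topK,
  });
  if (error) throw error;
  return (data ?? []).map((d: { content: string }) => d.content);
}
```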
by gotoHuman
This workflow automatically classifies every new email from your linked mailbox, drafts a personalized reply, and creates Linear tickets for bugs or feature requests. It uses a human-in-the-loop with gotoHuman and continuously improves itself by learning from approved examples.

How it works
The workflow triggers on every new email from your linked mailbox.
Self-learning Email Classifier: an AI model categorizes the email into defined categories (e.g., Bug Report, Feature Request, Sales Opportunity, etc.). It fetches previously approved classification examples from gotoHuman to refine decisions (a sketch of this few-shot prompting follows after this listing).
Self-learning Email Writer: the AI drafts a reply to the email. It learns over time by using previously approved replies from gotoHuman, with per-classification context to tailor tone and style (e.g., a different style for sales vs. bug reports).
Human Review in gotoHuman: review the classification and the drafted reply. Drafts can be edited or retried. Approved values are used to train the self-learning agents.
Send approved Reply: the approved response is sent as a reply to the email thread.
Create ticket: if the classification is Bug or Feature Request, a ticket is created by another AI agent in Linear.

How to set up
Most importantly, install the gotoHuman node before importing this template! (Just add the node to a blank canvas before importing)
Set up credentials for gotoHuman, OpenAI, your email provider (e.g. Gmail), and Linear.
In gotoHuman, select and create the pre-built review template "Support email agent" or import the ID: 6fzuCJlFYJtlu9mGYcVT. Select this template in the gotoHuman node.
In the "gotoHuman: Fetch approved examples" HTTP nodes you need to add your formId. It is the ID of the review template that you just created/imported in gotoHuman.

Requirements
gotoHuman (human supervision, memory for self-learning)
OpenAI (classification, drafting)
Gmail or your preferred email provider (for email trigger + replies)
Linear (ticketing)

How to customize
Expand or refine the categories used by the classifier. Update the prompt to reflect your own taxonomy.
Filter fetched training data from gotoHuman by reviewer so the writer adapts to their personalized tone and preferences.
Add more context to the AI email writer (calendar events, FAQs, product docs) to improve reply quality.
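The self-learning behaviour comes from folding previously approved examples into the prompt as few-shot demonstrations. A minimal sketch, with an assumed example shape — the actual gotoHuman response format and the template's prompts will differ:

```typescript
// Sketch of building a few-shot classification prompt from approved gotoHuman examples.
interface ApprovedExample { email: string; category: string; }

function buildClassifierPrompt(categories: string[], examples: ApprovedExample[], email: string): string {
  const shots = examples
    .map(ex => `Email:\n${ex.email}\nCategory: ${ex.category}`)
    .join("\n---\n");
  return [
    `Classify the email into exactly one of: ${categories.join(", ")}.`,
    `Previously approved classifications:\n${shots}`,
    `Email:\n${email}\nCategory:`,
  ].join("\n\n");
}
```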
by Ada
How it works:
This template demonstrates how to build a low-code, AI-powered data analysis workflow in n8n. It enables you to connect to various data sources (such as MySQL, Google Sheets, or local files), process and analyze structured data, and generate natural language insights and visualizations using external AI APIs.

Key Features:
Flexible data source selection (MySQL, Google Sheets, Excel/CSV, etc.)
AI-driven data analysis, interpretation, and visualization via HTTP Request nodes
Automated email delivery of analysis results (Gmail node)
Step-by-step sticky notes for credential setup and workflow customization

Step-by-step:
Apply for an API Key: You can easily create and manage your API Key on the ADA official website - API. To begin with, you need to register for an ADA account. Once on the homepage, click the bottom left corner to access the API management dashboard. Here, you can create new APIs and set the credit consumption limit for each API. A single account can create up to 10 APIs. After successful creation, you can copy the API Key to set credentials. You can also view the credit consumption of each API and manage your APIs.
Set credentials: In the HTTP nodes (DataAnalysis, DataInterpretation, and DataVisualization), select Authentication → Generic Credential Type, choose Header Auth → Create new credential, name the header Authorization (it must be exactly 'Authorization'), and fill in the previously applied API key. (A request sketch follows after this listing.)
Data Source: The workflow starts by extracting structured data from your chosen source (e.g., database, spreadsheet, or file).
AI Skills: Data is sent to external AI APIs for analysis, interpretation, and visualization, based on your configured queries.
Result Processing: The AI-generated results are converted to HTML or Markdown as needed.
Output: The final report or visualization is sent via email. You can easily adapt this step to other output channels.

API Keys Required:
Ada API Key: For AI data analysis
Gmail OAuth2: For sending emails (if using the Gmail node)
(Optional) Data source credentials: For MySQL, Google Sheets, etc.
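One of the HTTP Request calls (e.g., DataAnalysis) with the Header Auth credential can be sketched as below. The endpoint URL and request body are hypothetical placeholders — only the Authorization header convention comes from the setup notes above.

```typescript
// Sketch of an HTTP Request call using the Header Auth credential described above.
// The URL and payload shape are hypothetical; substitute the values from the ADA docs.
async function runDataAnalysis(apiKey: string, query: string, rows: object[]) {
  const res = await fetch("https://example-ada-endpoint/analyze", { // hypothetical URL
    method: "POST",
    headers: { Authorization: apiKey, "Content-Type": "application/json" },
    body: JSON.stringify({ query, data: rows }),                    // hypothetical body shape
  });
  if (!res.ok) throw new Error(`ADA request failed: ${res.status}`);
  return res.json();
}
```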