by Kevin Armbruster
Automatically add travel time blockers before appointments

This bot automatically adds travel time blockers to your calendar, so you never arrive late to an appointment again.

How it works

- **Trigger:** The workflow is initiated daily at 7 AM by a Schedule Trigger.
- **AI Agent:** An AI Agent node orchestrates the main logic.
- **Fetch events:** It uses the get_calendar_events tool to retrieve all events scheduled for the current day.
- **Identify events with location:** It then filters these events to identify those that have a specified location.
- **Check for existing travel time blockers:** For each event with a location, it checks whether a travel time blocker already exists. Events that do *not* have such a blocker are marked for processing.
- **Calculate travel time:** Using the Google Directions API, it determines how long it takes to get to the location of the event. The starting location defaults to your **Home Address**, unless there is a previous event within 2 hours before the event, in which case it uses the location of that previous event.
- **Create travel time blocker:** Finally, it uses the create_calendar_event tool to create the travel time blocker, with a duration equal to the calculated travel time plus a 10-minute buffer.

Set up steps

- Set variables: Home address, Blocker name, Mode of transportation
- Connect your LLM provider
- Connect your Google Calendar
- Connect your Google Directions API
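The origin-selection and buffer rules above can be sketched in plain JavaScript, as they might appear in an n8n Code node. This is a minimal illustration, not the workflow's actual code: the function names and event shape (millisecond timestamps, a `location` field) are assumptions.

```javascript
// Sketch of the origin-selection and blocker-duration logic.
// The 2-hour window and 10-minute buffer come from the description;
// everything else here is illustrative.
const TWO_HOURS_MS = 2 * 60 * 60 * 1000;
const BUFFER_MINUTES = 10;

function pickOrigin(event, todaysEvents, homeAddress) {
  // Use the location of the most recent prior event that ends
  // within 2 hours before this event; otherwise fall back to home.
  const prior = todaysEvents
    .filter(e => e.location && e.end <= event.start &&
                 event.start - e.end <= TWO_HOURS_MS)
    .sort((a, b) => b.end - a.end)[0];
  return prior ? prior.location : homeAddress;
}

function blockerDurationMinutes(travelMinutes) {
  // Travel time (from the Directions API) plus a fixed buffer.
  return travelMinutes + BUFFER_MINUTES;
}
```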
by Denis
What this workflow does

Complete Airtable database management system using MCP (Model Context Protocol) for AI agents. Create bases, tables with complex field types, manage records, and maintain state with Redis storage.

Setup steps

1. Add your Airtable Personal Access Token to credentials
2. Configure Redis connection for ID storage
3. Get your workspace ID from Airtable (starts with wsp...)
4. Connect to the MCP Server Trigger
5. Configure your AI agent with the provided instructions

Key features

- Create new Airtable bases and custom tables
- Support for all field types (date, number, select, etc.)
- Full CRUD operations on records
- Rename tables and fields
- Store base/workspace IDs to avoid repeated requests
- Generic operations work with ANY Airtable structure

Included operations

- create_base, create_custom_table, add_field
- get_table_ids, get_existing_records
- update_record, rename_table, rename_fields
- delete_record
- get/set base_id and workspace_id (Redis storage)

Notes

Check the sticky notes in the workflow for ID locations and field type requirements.
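The "store IDs to avoid repeated requests" feature boils down to a read-through cache keyed on names like base_id and workspace_id. A hedged sketch, using an in-memory Map as a stand-in for Redis (the key names and helper functions are assumptions, not the workflow's actual operations):

```javascript
// Illustrative read-through cache for Airtable IDs. In the real
// workflow this state lives in Redis; a Map stands in for it here.
const idStore = new Map();

function setId(key, value) {
  idStore.set(key, value);
}

function getId(key, fetchFn) {
  // Return the cached ID, or fetch it once and cache it, so the
  // agent never repeats the same Airtable lookup.
  if (!idStore.has(key)) {
    idStore.set(key, fetchFn());
  }
  return idStore.get(key);
}
```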
by Guillaume Duvernay
Stop duplicating your work! This template demonstrates a powerful design pattern to handle multiple triggers (e.g., Form, Webhook, Sub-workflow) within a single, unified workflow. By using a "normalize and consolidate" technique, your core logic becomes independent of the trigger that started it, making your automations cleaner, more scalable, and far easier to maintain.

Who is this for?

- **n8n developers & architects:** Build robust, enterprise-grade workflows that are easy to maintain.
- **Automation specialists:** Integrate the same core process with multiple external systems without repeating yourself.
- **Anyone who values clean design:** Apply the DRY (Don't Repeat Yourself) principle to your automations.

What problem does this solve?

- **Reduces duplication:** Avoids creating near-identical workflows for each trigger source.
- **Simplifies maintenance:** Update your core logic in one place, not across multiple workflows.
- **Improves scalability:** Easily add new triggers without altering the core processing logic.
- **Enhances readability:** A clear separation of data intake from core logic makes workflows easier to understand.

How it works (The "Normalize & Consolidate" Pattern)

1. Trigger: The workflow starts from one of several possible entry points, each with a unique data structure.
2. Normalize: Each trigger path immediately flows into a dedicated Set node. This node acts as an adapter, reformatting the unique data into a standardized schema with consistent key names (e.g., mapping body.feedback to feedback).
3. Consolidate: All "normalize" nodes connect to a single Set node. This node uses the generic {{ $json.key_name }} expression to accept the standardized data from any branch. From here, the workflow is a single, unified path.

Setup

This template is a blueprint. To adapt it:

- Replace the triggers with your own.
- Normalize your data: After each trigger, use a Set node to map its unique output to your common schema.
- Connect to the consolidator: Link all your "normalize" nodes to the Consolidate trigger data node.
- Build your core logic after the consolidation point, referencing the unified data.

Taking it further

- **Merge any branches:** Use this pattern to merge any parallel branches in a workflow, not just triggers.
- **Create robust error handling:** Unify "success" and "error" paths before a final notification step to report on the outcome.
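The normalize-and-consolidate pattern can be sketched in plain JavaScript. The two payload shapes below are hypothetical; only the body.feedback → feedback mapping comes from the description above.

```javascript
// Each "normalize" adapter maps a trigger-specific payload onto the
// same schema, so the core logic never sees trigger details.
function normalizeForm(payload) {
  // Form trigger adapter: body.feedback -> feedback
  return { feedback: payload.body.feedback, source: 'form' };
}

function normalizeWebhook(payload) {
  // Webhook adapter for a differently shaped payload (hypothetical)
  return { feedback: payload.message, source: 'webhook' };
}

function coreLogic(item) {
  // Trigger-independent logic only ever sees the unified schema.
  return `[${item.source}] ${item.feedback.trim()}`;
}

// Any branch can feed the same core logic:
coreLogic(normalizeForm({ body: { feedback: 'Great tool! ' } }));
coreLogic(normalizeWebhook({ message: 'Needs docs' }));
```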
by Guido X Jansen
AI Council: Multi-Model Consensus with Peer Review

Inspired by Andrej Karpathy's LLM Council, but rebuilt in n8n. This workflow creates a "council" of AI models that independently answer your question, then peer-review each other's responses before a final arbiter synthesizes the best answer.

Who is this for?

- Anyone who wants to prepare for an upcoming meeting with different people and prep for their different views, or find any "blind spots" in their view on a certain subject
- Researchers wanting more robust AI-generated answers
- Developers exploring multi-model architectures
- Anyone seeking higher-quality responses through AI consensus, potentially with faster/cheaper models
- Teams evaluating different LLM capabilities side-by-side

How it works

1. Ask a Question — Submit your query via the Chat Trigger
2. Individual Answers — Four different models (Gemini, Llama, Gemma, Mistral) independently generate responses
3. Peer Review — Each model reviews ALL answers, identifying pros, cons, and an overall assessment
4. Final Synthesis — DeepSeek R1 analyzes all peer reviews and produces a refined, consensus-based final answer

Setup Instructions

Prerequisites

- Access to an LLM provider (e.g., an OpenRouter account with API credits)

Steps

1. Create OpenRouter credentials in n8n: go to Settings → Credentials → Add Credential, select "OpenRouter", and paste your API key.
2. Connect all model nodes to your OpenRouter credential. In this example I used Gemini, Llama, Gemma, Mistral and DeepSeek, but you can use whatever you want. You can also use the same models but change their parameters. Play around to find out what suits you best.
3. Activate the workflow and open the Chat interface to test.

Customization Ideas

- You can add as many answer and review models as you want. Note that each AI node is executed in series, so each one adds to the total duration.
- Swap models via OpenRouter's model selector (e.g., use Claude, GPT-4, etc.)
- Adjust the peer review prompt to represent a certain persona, or add domain-specific evaluation criteria
- Add memory nodes for multi-turn conversations
- Connect to Slack/Discord instead of the Chat Trigger
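The three council stages can be sketched as a plain function. This is an illustration of the flow, not the workflow's code: callModel stands in for the actual LLM nodes (which are asynchronous in n8n), and the prompts are hypothetical.

```javascript
// Hedged sketch of the council flow: every model answers
// independently, every model reviews the full answer set, and an
// arbiter synthesizes the result.
function runCouncil(question, models, arbiter, callModel) {
  // Stage 1: independent answers (the workflow runs these in series).
  const answers = models.map(m =>
    ({ model: m, answer: callModel(m, question) }));
  // Stage 2: every council member reviews all answers.
  const reviews = models.map(m =>
    ({ reviewer: m, review: callModel(m, `Review: ${JSON.stringify(answers)}`) }));
  // Stage 3: the arbiter produces the consensus answer.
  return callModel(arbiter, JSON.stringify({ question, answers, reviews }));
}
```

Note the call count: with N answer models, the pattern makes 2N + 1 model calls, which is why each added model lengthens the total run time.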
by higashiyama
AI Team Morale Monitor

Who's it for

For team leads, HR, and managers who want to monitor the emotional tone and morale of their teams based on message sentiment.

How it works

1. Trigger: Runs every Monday at 9 AM.
2. Config: Defines your Teams and Slack channels.
3. Fetch: Gathers messages for the week.
4. AI Analysis: Evaluates tone and stress levels.
5. Aggregate: Computes team sentiment averages.
6. Report: Creates a readable morale summary.
7. Slack Post: Sends the report to your workspace.

How to set up

1. Connect Microsoft Teams and Slack credentials.
2. Enter your Team and Channel IDs in the Workflow Configuration node.
3. Adjust the schedule if desired.

Requirements

- Microsoft Teams and Slack access.
- Gemini (or OpenAI) API credentials set in the AI nodes.

How to customize

- Modify the AI prompts for different insight depth.
- Replace Gemini with other LLMs if preferred.
- Change the posting platform or format.

Note: This workflow uses only linguistic data — no personal identifiers or private metadata.
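The "Aggregate" step above reduces per-message scores to one average per team. A minimal sketch of that reduction, assuming a numeric sentiment score per message (the -1..1 scale and field names are assumptions, not the workflow's actual schema):

```javascript
// Average per-message sentiment scores by team, rounding to two
// decimal places for a readable report.
function aggregateByTeam(scoredMessages) {
  const totals = {};
  for (const { team, sentiment } of scoredMessages) {
    if (!totals[team]) totals[team] = { sum: 0, count: 0 };
    totals[team].sum += sentiment;
    totals[team].count += 1;
  }
  return Object.fromEntries(
    Object.entries(totals).map(([team, t]) =>
      [team, Math.round((t.sum / t.count) * 100) / 100])
  );
}
```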
by LeeWei
⚙️ Sales Assistant Build: Automate Prospect Research and Personalized Outreach for Sales Calls

🚀 Steps to Connect

Google Sheets Setup

- Connect your Google account via OAuth2 in the "Review Calls", "Product List", "Testimonials Tool", "Update Sheet", and "Update Sheets 2" nodes.
- Duplicate the mock Google Sheet (ID: 1u3WMJwYGwZewW1IztY8dfbEf5yBQxVh8oH7LQp4rAk4) to your drive and update the documentId in all Google Sheets nodes to match your copy's ID.
- Ensure the sheet has tabs for "Meeting Data", "Products", and "Success Stories" populated with your data.
- Setup time: ~5 minutes.

OpenAI API Key

- Go to OpenAI and generate your API key.
- Paste this key into the credentials for both the "OpenAI Chat Model" and "OpenAI Chat Model1" nodes.
- Setup time: ~2 minutes.

Tavily API Key

- Sign up at Tavily and get your API key.
- In the "Tavily" node, replace the placeholder api_key in the JSON body with your key (e.g., "api_key": "your-tavily-key-here").
- Setup time: ~3 minutes.

How it Works

- Triggers on a new sales call booking (manual for testing).
- Pulls prospect details from Google Sheets and researches their company, tech stack, and recent updates using Tavily.
- Matches relevant products/solutions from your product list and updates the sheet.
- Generates a personalized email confirmation (subject + body) and SMS, using testimonials for relevance.
- Updates the sheet with the outreach content for easy follow-up.

Setup takes ~10-15 minutes total. All nodes are pre-configured—edit only the fields above. Detailed notes (e.g., prompt tweaks) are in sticky notes within the workflow.
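The Tavily setup step amounts to placing your key inside the JSON body the node sends. A hedged sketch of building that body: only the api_key field is taken from the notes above; the query field and the helper function are illustrative assumptions, so check the node's actual body for the full field list.

```javascript
// Build the JSON body for the Tavily research request. The
// "api_key": "your-tavily-key-here" placeholder is what you replace
// during setup; the query wording here is hypothetical.
function buildTavilyBody(apiKey, companyName) {
  return JSON.stringify({
    api_key: apiKey, // <-- paste your Tavily key here
    query: `${companyName} company tech stack recent news`,
  });
}
```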
by Davide
This workflow automates the creation of realistic multi-speaker podcasts using the ElevenLabs v3 API, by reading a script from Google Sheets and saving the final MP3 file to Google Drive.

- Data Source – Dialogue scripts are stored in a Google Sheet. Each row contains: speaker name (optional), voice ID (from ElevenLabs), and the text to be spoken.
- Data Preparation – The workflow transforms the spreadsheet content into the proper JSON format required by the ElevenLabs API.
- Podcast Generation – ElevenLabs' Eleven v3 model converts the prepared text into expressive, natural-sounding dialogue. It supports not only speech but also non-verbal cues and audio effects (e.g., [laughs], [whispers], [clapping]).
- File Storage – The generated audio file is automatically uploaded to Google Drive, organized by timestamped filenames.

Key Advantages

- **Seamless Automation** – From dialogue writing to final audio upload, everything runs automatically in one workflow.
- **Multi-Speaker Support** – Easily assign different voices to multiple characters for dynamic conversations.
- **Expressive & Realistic Output** – Supports emotions, speech styles, and ambient effects, making podcasts more immersive.
- **Flexible Content Input** – Scripts can be collaboratively written and edited in Google Sheets, with no technical knowledge required.
- **Scalable & Reusable** – Can generate multiple podcast episodes in seconds, ideal for content creators, educators, or businesses.
- **Cloud Integration** – Final audio files are securely stored in Google Drive, ready to be shared or published.

How It Works

The workflow processes a structured script from a spreadsheet and uses AI to generate a realistic conversation.

- Manual Trigger: The workflow is started manually by a user clicking "Execute workflow" in n8n.
- Get Dialogue: The "Get dialogue" node fetches the podcast script data from a specified Google Sheet. The sheet should contain columns for Speaker (optional), Voice ID, and the dialogue Input/Text.
- Prepare Dialogue: The "Code" node transforms the raw sheet data into the precise JSON format required by the ElevenLabs API. It creates an array of objects where each object contains the text and the corresponding voice_id for each line of dialogue.
- Generate Podcast: The "HTTP Request" node sends a POST request to the ElevenLabs Text-to-Dialogue API endpoint (/v1/text-to-dialogue). It sends the transformed dialogue array in the request body, instructing the API to generate a single audio file with a conversation between the specified voices.
- Upload File: The "Upload file" node takes the audio file response from ElevenLabs and saves it to a designated folder in Google Drive.

Set Up Steps

To use this workflow, you must complete the following configuration steps:

1. Prepare the Google Sheet:
   - Clone the template: Duplicate the provided Google Sheet template into your own Google Drive.
   - Fill the script:
     - Column A (Speaker): Optional. Add speaker names for your reference (e.g., "Host", "Guest").
     - Column B (Voice ID): Mandatory. Enter the unique Voice ID for each line from ElevenLabs.
     - Column C (Input): Mandatory. Write the dialogue text for each speaker. You can use non-speech audio events like [laughs] or [whispers] to add expression.
2. Configure ElevenLabs API Credentials:
   - Log in to or create a FREE account on ElevenLabs.
   - Edit the "Generate podcast" node's credentials.
   - Create an HTTP Header Auth credential named "ElevenLabs API". Set the Name to xi-api-key and the Value to your actual ElevenLabs API key.
3. Configure Google Services:
   - Google Sheets: Ensure the "Get dialogue" node has valid OAuth credentials and that the documentId points to your copy of the script sheet.
   - Google Drive: Ensure the "Upload file" node has valid OAuth credentials and that the folderId points to the correct Google Drive folder where you want the audio files saved.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
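The Prepare Dialogue step can be sketched like this: each sheet row becomes a {text, voice_id} object. The column names mirror the sheet described above; the top-level inputs field is my reading of the Text-to-Dialogue request body, so verify the exact shape against the ElevenLabs API docs.

```javascript
// Turn sheet rows into the dialogue array for the ElevenLabs
// Text-to-Dialogue request. Rows missing a Voice ID or text are
// skipped; the Speaker column is for human reference only.
function buildDialogueInputs(rows) {
  return rows
    .filter(row => row['Voice ID'] && row['Input'])
    .map(row => ({
      text: row['Input'],
      voice_id: row['Voice ID'],
    }));
}

const rows = [
  { Speaker: 'Host', 'Voice ID': 'abc123', Input: 'Welcome back! [laughs]' },
  { Speaker: 'Guest', 'Voice ID': 'def456', Input: 'Glad to be here.' },
];
const body = { inputs: buildDialogueInputs(rows) };
```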
by Candra Reza
Unleash the full potential of your website's search engine performance and user experience with this all-in-one n8n automation template. Designed for SEO professionals and webmasters, this suite provides meticulous on-page and technical SEO auditing, deep insights into Core Web Vitals (LCP & INP), and an intelligent AI-powered chatbot for instant insights and troubleshooting.

Key Features:

- **Comprehensive On-Page SEO Audit:** Automatically checks for missing or malformed titles, meta descriptions, H1s (including multiple H1s), missing alt text on images, and canonical tag issues.
- **Detailed Technical SEO Scan:** Verifies HTTPS implementation, robots.txt accessibility and content, and sitemap.xml presence.
- **Core Web Vitals Monitoring:** Leverages Google PageSpeed Insights to continuously track and alert on critical performance metrics like Largest Contentful Paint (LCP) and Interaction to Next Paint (INP).
- **AI-Powered Analysis & Recommendations:** Integrates advanced AI models (ChatGPT, Claude, or Gemini) to analyze audit findings, provide actionable recommendations for improvements, and even suggest better alt text for images based on content context.
- **Intelligent SEO Chatbot:** A dynamic chatbot triggered by webhooks understands natural language queries, extracts entities (URLs, keywords, SEO topics), and provides instant, AI-generated answers about SEO best practices, Core Web Vitals explanations, or even specific site data (via Google Search Console integration).
- **Automated Reporting & Alerts:** Logs all audit data to Google Sheets for historical tracking and sends real-time Slack alerts for critical SEO issues or performance degradations.

Streamline your SEO workflow, ensure optimal website health, and react swiftly to performance challenges. This template is your ultimate tool for staying ahead in the competitive digital landscape.
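The on-page checks above are the kind of thing a Code node can do over fetched HTML. A hedged sketch using simple regexes (a real audit would use a proper DOM parser; the issue labels are illustrative, not the template's own):

```javascript
// Minimal on-page audit: flags missing titles, missing or multiple
// H1s, and images without alt text, as described in the feature list.
function auditOnPage(html) {
  const issues = [];
  const titles = html.match(/<title>([\s\S]*?)<\/title>/gi) || [];
  if (titles.length === 0) issues.push('missing title');
  const h1s = html.match(/<h1[\s>]/gi) || [];
  if (h1s.length === 0) issues.push('missing H1');
  if (h1s.length > 1) issues.push('multiple H1s');
  const imgs = html.match(/<img\b[^>]*>/gi) || [];
  for (const img of imgs) {
    if (!/\balt=/.test(img)) issues.push('image missing alt text');
  }
  return issues;
}
```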
by Țugui Dragoș
This workflow automates customer support across multiple channels (Email, Live Chat, WhatsApp, Slack, Discord) using AI-powered responses enhanced with Retrieval Augmented Generation (RAG) and your product documentation. It intelligently handles incoming queries, provides instant and context-aware answers, and escalates complex or negative-sentiment cases to your human support team. All interactions are logged and categorized for easy tracking and reporting.

Key Features

- **Omnichannel Support:** Handles customer queries from Email, Live Chat, WhatsApp, Slack, and Discord.
- **AI-Powered Answers:** Uses RAG to generate accurate, context-aware responses based on your product documentation.
- **Automatic Escalation:** Detects low-confidence or negative-sentiment cases and escalates them to your human support team.
- **Conversation Logging:** Automatically logs and categorizes all conversations for future analysis.
- **Weekly Reporting:** Sends automated weekly summaries and metrics to your support team.

How It Works

1. Trigger: The workflow starts when a new message is received on any supported channel.
2. Normalization: Incoming messages are normalized into a common format for unified processing.
3. Context Management: Conversation history is fetched and merged with the new query for better AI context.
4. AI Response: The workflow uses RAG to generate a response, referencing your product documentation.
5. Confidence & Sentiment Analysis: The response is scored for confidence and sentiment.
6. Escalation Logic: If the response is low-confidence or negative, the workflow escalates the case to your support team and creates a ticket.
7. Response Delivery: The answer (or escalation notice) is sent back to the customer on the original channel.
8. Logging & Reporting: All interactions are logged, categorized, and included in weekly reports.

Configuration

- Connect Your Channels: Set up triggers for each channel you want to support (Email, Webhook, WhatsApp, Slack, Discord).
- Add Your Documentation: Integrate your product documentation source (e.g., Google Docs, Notion, or a knowledge base) for the RAG model.
- Configure AI Model: Set your preferred AI provider and model (e.g., OpenAI, Azure OpenAI, etc.).
- Set Escalation Rules: Adjust confidence thresholds and escalation logic to fit your support workflow.
- Integrate Support Tools: Connect your ticketing system (e.g., Zendesk) and reporting tools (e.g., Google Sheets, Slack).
- Test the Workflow: Send test queries from each channel to ensure correct routing, AI responses, and escalation.

Use Cases

- Provide instant, accurate answers to customer questions 24/7.
- Reduce manual workload for your support team by automating common queries.
- Ensure complex or sensitive cases are quickly escalated to human agents.
- Gain insights into support trends with automated logging and weekly reports.

Requirements

- n8n version 2.0.2 or later
- Accounts and credentials for your chosen channels and AI provider
- Access to your product documentation in a supported format

Notes

- Please review and customize the workflow to fit your company's privacy and data handling policies.
- For best results, keep your product documentation up to date and well-structured.
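The escalation rule described above is a simple predicate over the scored response. A minimal sketch, assuming a 0.7 confidence threshold and a categorical sentiment label (both are placeholder assumptions; the template lets you tune them):

```javascript
// Escalate when the AI's confidence is below threshold or the
// detected sentiment is negative, as described in the escalation
// logic step. Threshold and field names are illustrative.
const CONFIDENCE_THRESHOLD = 0.7;

function shouldEscalate(result) {
  return result.confidence < CONFIDENCE_THRESHOLD ||
         result.sentiment === 'negative';
}
```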
by WeblineIndia
AI-Powered Fake Review Detection Workflow Using n8n & Airtable

This workflow automates the detection of potentially fake or manipulated product reviews using n8n, Airtable, OpenAI and Slack. It fetches reviews for a given product, standardizes the data, generates a unique hash to avoid duplicates, analyzes each review using an AI model, updates the record in Airtable and alerts the moderation team if the review appears suspicious.

Quick Implementation Steps

1. Add Airtable, OpenAI and Slack credentials to n8n.
2. Create an Airtable base with a reviews table.
3. Connect the Webhook URL to your scraper or send sample JSON via Postman.
4. Test the workflow by passing product and review URLs.
5. Activate the workflow for continuous automated review screening.

What It Does

This workflow provides an automated pipeline to analyze product reviews and determine whether they may be fake or manipulated. It begins with a webhook that accepts product information and a scraper API URL. Using this information, the workflow fetches associated reviews. Each review is then expanded into separate items and normalized to maintain a consistent structure. The workflow generates a hash for deduplication, preventing multiple entries of the same review. New reviews are stored in Airtable and subsequently analyzed by OpenAI. The resulting risk score, explanation and classification are saved back into Airtable. If a review's score exceeds a predefined threshold, a structured Slack alert is sent to the moderation team. This ensures that high-risk reviews are escalated promptly while low-risk reviews are simply stored for recordkeeping.
Who's It For

- eCommerce marketplaces monitoring review integrity
- Sellers seeking automated fraud detection for product reviews
- SaaS platforms that accept user-generated reviews
- Trust & Safety and compliance teams
- Developers looking for an automated review-quality pipeline

Requirements

- n8n (Cloud or Self-Hosted)
- Airtable Personal Access Token
- OpenAI API Key
- Slack Bot Token or Webhook
- Review Scraper API
- Basic understanding of Airtable field setup

How It Works & How To Set Up

1. Receive Product Data: The workflow starts with the Webhook – Receive Product Payload, which accepts a list of products and their scraper URLs.
2. Extract and Process Each Product: Extract products separates the list into individual items; Process Each Product ensures that each product's reviews are processed one at a time.
3. Fetch and Validate Reviews: Fetch Product Reviews calls the scraper API; IF – Has Reviews? determines whether any reviews were returned.
4. Expand and Normalize Reviews: Expand reviews[] to items splits reviews into individual items; Prepare Review Fields ensures a consistent review structure.
5. Generate Review Hash: Generate Review Hash1 produces a deterministic hash based on review text, reviewer ID, and date.
6. Airtable Deduplication Check: Search Records by Hash checks whether the review already exists; Normalize Airtable Result cleans Airtable's inconsistent empty output; Is New Review? decides if the review should be inserted or skipped.
7. Store New Reviews: Create Review Record inserts new reviews into Airtable.
8. AI-Based Fake Review Analysis: AI Fake Review Analysis sends relevant review fields to OpenAI; Parse AI Response ensures the output is valid JSON.
9. Update Airtable With AI Results: Update Review Record stores the AI's score, classification, and reasoning.
10. Moderation Alert: Check Suspicious Score Threshold evaluates whether the fake score exceeds a defined limit. If so, Send Moderation Alert posts a detailed message to Slack.
How To Customize Nodes

- Fake Score Threshold: Modify the threshold in Check Suspicious Score Threshold.
- Slack Message Format: Adjust text fields in Send Moderation Alert.
- AI Prompt Instructions: Edit the instructions inside AI Fake Review Analysis.
- Airtable Fields: Update mappings in both Create Review Record and Update Review Record.
- Additional Checks: Insert enrichment steps before AI analysis, such as reviewer profile metadata, geolocation or reverse IP checks, and keyword density analysis.

Add-ons

- Notion integration for long-term review case tracking
- Jira or Trello integration for incident management
- Automated sentiment tagging
- Weekly review-risk summary reports
- Google Sheets backup for archived reviews
- Reviewer behavior modeling (number of reviews, frequency, patterns)

Use Case Examples

- Detecting manipulated Amazon product reviews.
- Flagging repetitive or bot-like reviews for Shopify stores.
- Screening mobile app reviews for suspicious content.
- Quality-checking user reviews on multi-vendor marketplaces.
- Monitoring competitor-driven false negative or positive reviews.

There can be many more scenarios where this workflow helps identify misleading product reviews.
Troubleshooting Guide

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| No data after review fetch | Scraper API returned empty response | Validate scraper URL and structure |
| Duplicate reviews inserted | Hash mismatch | Ensure Generate Review Hash1 uses the correct fields |
| Slack alert not triggered | Bot not added to channel | Add bot to the target Slack channel |
| AI response fails to parse | Model returned non-JSON response | Strengthen the "JSON only" prompt in AI analysis |
| Airtable search inconsistent | Airtable returns empty objects | Rely on Normalize Airtable Result for correction |

Need Help?

If you need assistance customizing this workflow, integrating additional systems or designing advanced review moderation solutions, our n8n workflow development team at WeblineIndia is available to help. We offer support for:

- Workflow setup and scaling
- Custom automation logic
- AI-driven enhancements
- Integration with third-party platforms
- And so much more.

Feel free to contact us for guidance, implementation or to build similar automated systems tailored to your needs.