by Davide
🤖📞 This workflow automates calling customers to remind them of their booking reservations, using AI-generated messages and a Twilio phone number. It can easily be adapted for other venues.

Key Benefits
- **Time-Saving Automation**: Eliminates the need for manual calls by staff, saving hours per week.
- **Human-like AI Messages**: Uses a custom language model to generate polite, natural phone messages tailored to each customer.
- **Multi-Channel Integration**: Google Sheets for reservation tracking, Twilio for automated calling, and OpenRouter (or other LLMs) for generating speech content.
- **Error Reduction**: Ensures all customers receive reminders exactly on the reservation day, minimizing no-shows.
- **Scalable**: Easily adapts to growing reservation lists and more complex message logic.
- **Suitable** for restaurants, hairdressers, offices, and any other business.

How It Works
1. Trigger: The workflow can be triggered manually (via "When clicking 'Execute workflow'") or automatically at 11 AM daily (via the Schedule Trigger).
2. Data Fetch: Retrieves today's reservations from a Google Sheet, filtering rows where DATE = today and CALLED is empty.
3. AI-Generated Call Script: For each reservation, the Secretary Agent (powered by OpenRouter's Grok-4) generates a phone script using the guest's name, time, and party size.
4. Twilio Call: The script is sent to Twilio, which calls the guest's phone number (from the sheet) and reads the message aloud using text-to-speech (see the sketch after this listing).
5. Update & Loop: Marks the reservation as called (CALLED = "x") in the sheet and waits 2 minutes between calls to avoid rate limits.

Set Up Steps
1. Twilio Configuration: Sign up for Twilio, buy a phone number, and:
   - Enable text-to-speech (set the language to Italian).
   - Configure geo permissions for the target country.
   - Add credentials to the Twilio node (sender number in the From field).
2. Google Sheets Setup: Clone the Google Sheet template and ensure:
   - Phone numbers include the international prefix (without "+").
   - Columns: DATE, TIME, NAME, N. PEOPLE, PHONE, CALLED.
3. OpenRouter API: Connect the OpenRouter Chat Model node to your account (using Grok-4 or another model).
4. Deploy: Activate the workflow and test it with a manual execution.

Note: The workflow ships inactive (active: false). Enable it after setup.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
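A minimal Code node sketch of the Twilio step, showing how the AI-generated script could be wrapped in TwiML so Twilio's text-to-speech reads it aloud in Italian. The field names (`script`, `PHONE`) are assumptions based on the description above; the template's own Twilio node may handle this internally.

```javascript
// n8n Code node sketch: wrap the Secretary Agent's script in TwiML so
// Twilio reads it aloud with Italian text-to-speech. Field names are
// assumptions; match them to your own node outputs.
const script = $json.script; // output of the Secretary Agent (assumed field)

// Escape XML special characters so the TwiML stays valid.
const escaped = script
  .replace(/&/g, '&amp;')
  .replace(/</g, '&lt;')
  .replace(/>/g, '&gt;');

return [{
  json: {
    to: `+${$json.PHONE}`, // the sheet stores the prefix without "+"
    twiml: `<Response><Say language="it-IT">${escaped}</Say></Response>`,
  },
}];
```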
by Tharwat Mohamed
Document-Aware WhatsApp AI Bot for Customer Support: a Google Docs-powered, 24/7 WhatsApp AI assistant with live knowledge from Google Docs.

📝 Description
Help customers instantly on WhatsApp with a smart AI assistant that reads your company's internal knowledge from a Google Doc in real time. Built for clubs, restaurants, agencies, or any business where clients ask questions based on a policy, FAQ, or services document.

⚙️ How it works
1. Users send free-form questions to your WhatsApp Business number (e.g. "What are the gym rules?" or "Are you open today?").
2. The bot automatically reads your company's internal Google Doc (policy, schedule, etc.).
3. It merges the document content with today's date and the user's question to craft a custom AI prompt (sketched after this listing).
4. The AI (Gemini or ChatGPT) replies on WhatsApp using natural, helpful language.
5. All conversations are logged to Google Sheets for reporting or audit.

> 💡 Bonus: The AI even understands dates inside the document and compares them to today's date — e.g. if your document says "Closed May 25 for 30 days," it will say "We're currently closed until June 24."

🧰 Set up steps
1. Connect your WhatsApp Cloud API account (Meta).
2. Add your Google account and grant access to the Doc containing your company info.
3. Choose your AI model (ChatGPT/OpenAI or Gemini).
4. Paste your document ID into the Google Docs node.
5. Connect your WhatsApp webhook to Meta (only takes 5 minutes).
6. Done — start receiving and answering customer questions!

> 📄 Works best with free-tier OpenAI/Gemini, Google Docs, and Meta's Cloud API (no phone required). Everything is modular, extensible, and low-code.

🔄 Customization Tips
- Change the Google Doc anytime to update answers — no retraining needed.
- Add your logo and business name in the AI agent's "System Prompt".
- Add fallback routes like "Escalate to human" if the bot can't help.
- Clone for multiple brands by duplicating the workflow and swapping in new docs.

🤝 Need Help Setting It Up?
If you'd like help connecting your WhatsApp Business API, setting up Google Docs access, or customizing this AI assistant for your business or clients, I offer setup, branding, and customization services:
- WhatsApp Cloud API setup & verification
- Google OAuth & Doc structure guidance
- AI model configuration (OpenAI / Gemini)
- Branding & prompt tone customization
- Logging, reporting, and escalation logic

Just send a message via:
Email: tharwat.elsayed2000@gmail.com
WhatsApp: +20 106 180 3236
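A minimal sketch of the prompt-merging step (step 3 above) as an n8n Code node. The node name `Google Docs` and the fields `docText` and `question` are assumptions; adjust them to your actual node outputs.

```javascript
// n8n Code node sketch: merge the Google Doc text, today's date, and the
// user's WhatsApp question into one AI prompt. Node and field names
// ('Google Docs', docText, question) are assumptions.
const docText = $('Google Docs').first().json.docText;
const question = $json.question;
const today = new Date().toISOString().slice(0, 10); // e.g. "2025-06-24"

const prompt = [
  `Today's date is ${today}.`,
  `Company knowledge base:\n${docText}`,
  `Customer question: ${question}`,
  'Answer helpfully using only the knowledge base. If the document mentions ' +
    "dates (e.g. closures), compare them against today's date before answering.",
].join('\n\n');

return [{ json: { prompt } }];
```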
by David Roberts
AI evaluation in n8n

This is a template for n8n's evaluation feature. Evaluation is a technique for gaining confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow performs well and where it doesn't.

How it works
This template shows how to calculate a workflow evaluation metric: whether an output matches an expected output (i.e. has the same meaning). The workflow takes questions about the causes of historical events and compares them with the reference answers in the dataset.
- We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular chat trigger so that the workflow can be started from either one.
- If we're evaluating (i.e. the execution started from the evaluation trigger), we calculate the correctness metric using AI.
- We pass this information back to n8n as a metric (see the sketch below).
- If we're not evaluating, we skip calculating the metric to reduce cost.
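A sketch of how the judge's verdict could be converted into the numeric metric passed back to n8n, assuming the judging LLM returns JSON with a boolean `correct` field (the field name is an assumption, not the template's exact schema).

```javascript
// Code node sketch: convert the AI judge's verdict into the numeric metric
// passed back to n8n's evaluation feature. Assumes the judge's parsed
// output has a boolean "correct" field (an assumption).
const verdict = $json.output;
return [{
  json: {
    correctness: verdict.correct ? 1 : 0, // 1 = same meaning as the reference answer
  },
}];
```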
by Don Jayamaha Jr
Instantly access real-time decentralized exchange (DEX) insights directly in Telegram! This workflow integrates the DexScreener API with GPT-4o-powered AI and Telegram, allowing users to fetch the latest blockchain token analytics, liquidity pools, and trending tokens effortlessly. Ideal for crypto traders, DeFi analysts, and investors who need actionable market data at their fingertips.

How It Works
1. A Telegram bot listens for user queries about tokens or trading pairs.
2. The workflow interacts with the DexScreener API (no API key required) to fetch real-time data (a request sketch follows this listing), including:
   - Token fundamentals (profiles, images, descriptions, and links)
   - Trending and boosted tokens (hyped projects, potential market movers)
   - Trading pair analytics (liquidity, price action, volumes, volatility)
   - Order and payment activity (transaction insights, investor movements)
   - Liquidity pool depth (market stability, capital flows)
   - Multi-chain pair comparisons (performance tracking across networks)
3. An AI-powered language model (GPT-4o-mini) enhances responses for better insights.
4. The workflow logs session data to improve user interaction tracking.
5. The requested DEX insights are sent back via Telegram in an easy-to-read format.

What You Can Do with This Agent
This AI-driven Telegram bot enables you to:
✅ Track trending and boosted tokens before they gain mainstream traction.
✅ Monitor real-time liquidity pools to assess token stability.
✅ Analyze active trading pairs across different blockchains.
✅ Identify transaction trends by checking paid orders for tokens.
✅ Compare market activity with detailed trading pair analysis.
✅ Receive instant insights with AI-enhanced responses for deeper understanding.

Set Up Steps
1. Create a Telegram Bot: Use @BotFather on Telegram to create a bot and obtain an API token.
2. Configure Telegram API Credentials in n8n: Add your Telegram bot token under Telegram API credentials.
3. Deploy and Test: Send a query (e.g., "SOL/USDC") to your Telegram bot and receive real-time insights instantly!

🚀 Unlock powerful, real-time DEX insights directly in Telegram—no API key required!

📺 Setup Video Tutorial
Watch the full setup guide on YouTube:
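For illustration, a Code node sketch querying DexScreener's public search endpoint for a pair like "SOL/USDC" and trimming the response to a few bot-friendly fields. The template uses its own HTTP/tool nodes; the Telegram message path is an assumption, while the response fields follow DexScreener's documented shape.

```javascript
// n8n Code node sketch: query DexScreener's public search endpoint (no API
// key required) and keep a few bot-friendly fields per pair.
const query = $json.message?.text ?? 'SOL/USDC'; // Telegram message text (assumed path)

const res = await this.helpers.httpRequest({
  url: 'https://api.dexscreener.com/latest/dex/search',
  qs: { q: query },
  json: true,
});

return (res.pairs ?? []).slice(0, 5).map((p) => ({
  json: {
    pair: `${p.baseToken.symbol}/${p.quoteToken.symbol}`,
    chain: p.chainId,
    priceUsd: p.priceUsd,
    liquidityUsd: p.liquidity?.usd,
    volume24h: p.volume?.h24,
  },
}));
```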
by Tarek Mustafa
Who is this for?
Jira users who want to automate generating a Lessons Learned or Retrospective report after an Epic is Done.

What problem is this workflow solving? / Use case
Lessons Learned / Retrospective reports are often omitted in Agile teams because they take time to write. With n8n and AI, this process can be automated.

What this workflow does
- Triggers automatically when an Epic reaches the "Done" status in Jira.
- Collects all related tasks and comments associated with the completed Epic (see the JQL sketch after this listing).
- Intelligently filters the gathered data to provide the LLM with the most relevant information.
- Utilizes an LLM with a structured System Message to generate insightful reports.
- Delivers the finalized report directly to your specified Google Docs document.

Setup
1. Create a Jira API key and follow the credentials setup in the Jira trigger node.
2. Create credentials for Google Docs and paste your document ID into the node.

How to customize this workflow to your needs
Change the System Message in the AI Agent to fit your needs.
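A sketch of the JQL you might use to collect the epic's issues. Whether `parent` or `"Epic Link"` applies depends on your Jira project type, so both clauses are shown, and the trigger payload path is an assumption.

```javascript
// Code node sketch: build a JQL query for a Jira "Get Many Issues" step to
// collect everything under the completed epic. $json.issue.key is the
// typical Jira webhook payload path (assumption).
const epicKey = $json.issue.key;
const jql = `parent = ${epicKey} OR "Epic Link" = ${epicKey}`;
return [{ json: { jql } }];
```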
by Agus Narestha
🔒 SSL Certificate Monitoring & Expiry Alert with Spreadsheet [FREE APIs]

✅ What This Workflow Does
This n8n template automatically monitors the SSL certificates of websites listed in a Google Sheet and sends email alerts if any are expiring within 14 days. It helps you avoid downtime, security issues, and trust warnings caused by expired certificates.

🧩 Key Features
- 📅 Weekly Automation: Runs every Monday at 7:00 AM (configurable).
- 📄 Google Sheets Integration: Fetches and updates data in a spreadsheet.
- 🔍 SSL Check via API: Uses ssl-checker.io to get certificate details.
- ⚠️ SSL Expiry Filter: Identifies certificates expiring within 14 days.
- 📧 Email Alerts: Sends notifications for certificates close to expiration.

📂 Input Spreadsheet Format
Your Google Sheet should have the following columns:

| No | Name | Link | SSL Issued On | SSL Expired On | SSL Status |
|----|-----------------|-----------------------|-------------------|-------------------|------------|
| 1 | Example Site | https://example.com | 2024-07-01 | 2025-07-01 | Valid |
| 2 | My Blog | https://myblog.org | 2024-07-05 | 2024-07-20 | Expiring |

Each row should include a valid website URL in the Link column.

🛠️ How It Works
1. Scheduled Trigger: Executes weekly (Monday, 7:00 AM).
2. Fetch Website List: Reads all website entries from the Google Sheet.
3. Check SSL Certificates: Uses the ssl-checker.io API to retrieve certificate details for each website.
4. Update Spreadsheet: Writes the "Issued On" and "Expired On" fields back to the spreadsheet.
5. Evaluate SSL Expiry: Filters for certificates expiring within 14 days (see the sketch after this listing).
6. Check Condition: Determines whether to send alerts based on the filtered results.
7. Send Email Alert: Notifies via email if any certificates are expiring soon.

📬 Example Email Output
Subject: ⚠️ ALERT!! SSL EXPIRED
SSL certificates expiring soon:
- example.com (expires in 5 days)
- anotherdomain.net (expires in 3 days)

🧰 Setup Requirements
- A Google Sheet with the correct columns and website links.
- SMTP credentials to send alert emails.
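A minimal sketch of the expiry filter (steps 5 and 6), assuming the dates written back to the sheet parse as ISO strings; the column name matches the spreadsheet format above.

```javascript
// n8n Code node sketch (run once for all items): compute days until expiry
// from the "SSL Expired On" column and keep certificates expiring in 14
// days or less. Assumes the dates parse as ISO strings (e.g. 2024-07-20).
const MS_PER_DAY = 24 * 60 * 60 * 1000;
const today = new Date();

return items
  .map((item) => {
    const expires = new Date(item.json['SSL Expired On']);
    const daysLeft = Math.ceil((expires - today) / MS_PER_DAY);
    return { json: { ...item.json, daysLeft } };
  })
  .filter((item) => item.json.daysLeft <= 14);
```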
by Gleb D
This n8n workflow automatically retrieves recent Reuters news articles related to a user-specified keyword, summarizes the main findings using Google Gemini, formats the output into styled HTML, and sends a clean email report to a specified address.

🚀 What It Does
- Collects news data from Bright Data's Reuters dataset.
- Sorts and filters the 10 most recent news articles by publication_date.
- Sends the structured news data to Gemini Flash for summarization.
- Converts Gemini's response (in Markdown) into styled HTML.
- Delivers a concise news briefing via email, including clickable source links and topic highlights.

🛠️ Step-by-Step Setup
1. User Form: Accepts a keyword from the user via an n8n form trigger.
2. Bright Data API: Posts a discover_new request to Bright Data's Reuters dataset using the keyword.
3. Snapshot Polling: Waits and checks for dataset readiness using the snapshot ID.
4. Data Retrieval: Downloads the news data once the snapshot is complete.
5. Parsing: Filters and sorts the latest 10 articles using a Python Code node (equivalent logic is sketched after this listing).
6. AI Analysis: Google Gemini summarizes the filtered content into one briefing.
7. Markdown → HTML: Converts the AI response into styled HTML using a Markdown node plus a Code node.
8. Email Delivery: Sends the briefing as an email to a predefined recipient.

🧠 How It Works
- Polling Control: Uses Wait and If nodes to handle Bright Data snapshot readiness.
- Date Sorting: Publication dates (ISO 8601 format) are parsed and used for sorting.
- AI Summarization: Gemini condenses multiple articles into one cohesive summary.
- Formatting: Clean HTML with readable styles is generated dynamically before sending.

📨 Final Output
The email includes:
- A brief summary of the most important developments
- The date range of the collected news
- The topics covered

🔐 Credentials Used
- Bright Data API (replace YOUR_API_KEY in the HTTP nodes)
- Google Gemini (Flash) API
- Email SMTP (configured in the Email Send node)

⚠️ Notes
- You must replace all YOUR_API_KEY placeholders in the Bright Data request headers with your actual Bright Data API key.
- You can customize the keyword prompt and output style freely.
- I recommend keeping the sort = relevance option for the best results; chronological sorting by date is handled later in the workflow.
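The template performs the parsing step in a Python Code node; the equivalent logic is sketched here in JavaScript for reference, assuming each item carries a `publication_date` field as described above.

```javascript
// Equivalent of the template's parsing step: sort articles by
// publication_date (ISO 8601) descending and keep the 10 most recent.
const articles = items.map((i) => i.json);
articles.sort(
  (a, b) => new Date(b.publication_date) - new Date(a.publication_date)
);
return articles.slice(0, 10).map((a) => ({ json: a }));
```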
by ARRE
Good to know: This workflow automatically processes product images from Google Drive, generates AI-powered background prompts using multiple AI models (ChatGPT, Claude, or Groq), creates professional background scenes using Pixelcut.ai, and saves the enhanced images back to your Google Drive. Perfect for e-commerce businesses and product photography workflows.

Who is this for?
➖ E-commerce store owners who need professional product backgrounds
➖ Product photographers looking to automate background generation
➖ Marketing teams creating consistent product imagery
➖ Small businesses wanting to enhance their product photos without expensive studio setups
➖ Anyone who needs to quickly transform transparent product images into commercial-ready photos

What problem is this workflow solving?
This workflow solves the challenge of creating professional product photography backgrounds at scale. Instead of manually editing each product image or setting up expensive photo shoots, it automatically generates contextually appropriate backgrounds for your products using AI technology. It eliminates the time-consuming process of background creation while maintaining professional quality and consistency across your product catalog.

What this workflow does:
✅ Automatically fetches product images from your Google Drive folder
✅ Downloads transparent/background-free product images
✅ Uses advanced AI models (ChatGPT, Claude, or Groq) to generate intelligent background prompts based on product analysis
✅ Creates professional backgrounds using the Pixelcut.ai API with AI-generated or custom prompts
✅ Saves enhanced product images back to Google Drive with organized naming
✅ Processes multiple images in batch automatically

How it works:
1️⃣ A Google Drive node searches for PNG product images in your specified folder
2️⃣ A binary download node retrieves the actual image files for processing
3️⃣ An optional AI agent analyzes products using your chosen AI model (OpenAI GPT-4, Claude, or Groq) and generates appropriate background prompts
4️⃣ The Pixelcut.ai API processes the images and adds professional backgrounds using AI-generated or manual prompts
5️⃣ Enhanced images are automatically saved back to Google Drive with an "enhanced-" prefix (a naming sketch follows this listing)

How to use:
1. Set up Google Drive OAuth2 credentials in n8n
2. Create a Pixelcut.ai account and get your API key
3. Configure your source folder ID in the Google Drive nodes
4. Set up your output folder ID for enhanced images
5. Choose and configure your preferred AI model credentials (OpenAI for ChatGPT, Anthropic for Claude, or Groq)
6. Replace placeholder API keys with your actual credentials
7. Execute the workflow to process your product images

Requirements:
✅ n8n instance (cloud or self-hosted)
✅ Google Drive account with OAuth2 access
✅ Pixelcut.ai API account and key
✅ Product images in PNG format (transparent backgrounds recommended)
✅ AI API credentials for automatic prompt generation (choose from): OpenAI API (for ChatGPT/GPT-4), Anthropic API (for Claude), or Groq API (for fast inference)
✅ Basic understanding of n8n workflows

Customizing this workflow:
🟢 Modify the image format filter to support JPG, WEBP, or other formats
🟢 Switch between different AI models (ChatGPT, Claude, Groq) for prompt generation
🟢 Customize background prompts for different product categories
🟢 Add a background removal step for products with existing backgrounds
🟢 Switch to different AI background services (Deep-Image.ai, Remove.bg, etc.)
🟢 Configure different AI model parameters for varied prompt creativity
🟢 Add image resizing or quality optimization steps
🟢 Create multiple output folders for different product categories
🟢 Add error handling and retry mechanisms for failed processes
🟢 Implement A/B testing with different AI models for prompt quality comparison
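A small Code node sketch of the naming and prompt-fallback logic described above. The input fields (`name`, `aiPrompt`) and the fallback prompt text are assumptions to adapt to your own nodes.

```javascript
// n8n Code node sketch: derive the output filename and pick between the
// AI-generated prompt and a manual fallback. Input field names (name,
// aiPrompt) and the fallback text are assumptions.
const originalName = $json.name; // e.g. "mug.png" from the Google Drive node
const aiPrompt = $json.aiPrompt; // from the optional AI agent, if enabled

return [{
  json: {
    outputName: `enhanced-${originalName}`,
    prompt: aiPrompt || 'clean white studio background with soft shadows',
  },
}];
```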
by Automate With Marc
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🧠 AI-Powered Blog Post Generator
Category: Content Automation / AI Writing / Marketing

Description:
This automated workflow helps you generate fresh, SEO-optimized blog posts daily using AI tools. It is perfect for solo creators, marketers, and content teams who want to stay on top of the latest AI trends without manual research or writing.

For more builds like this and step-by-step tutorial guides, check out: https://www.youtube.com/@Automatewithmarc

Here's how it works:
1. A Schedule Trigger kicks off the workflow daily (or at your preferred interval).
2. A Perplexity AI node researches the most interesting recent AI news, tailored for a non-technical audience.
3. An AI Agent (Claude via Anthropic) turns that news into a full-length blog post based on a structured prompt that includes a title, intro, 3+ section headers, takeaway, and meta description, designed for clarity, engagement, and SEO (a prompt sketch follows this listing).
4. Optional Memory & Perplexity Tool nodes enhance the agent's responses by allowing it to clarify facts or fetch more context.
5. A Google Docs node automatically saves the final blog post to your selected document, ready for review, scheduling, or publishing.

Key Features:
- Combines Perplexity AI + Claude AI (Anthropic) for research + writing
- Built-in memory and retrieval logic for deeper contextual accuracy
- Non-technical, friendly writing style ideal for general audiences
- Output saved directly to Google Docs
- Fully no-code, customizable, and extendable

Use Cases:
- Automate weekly blog content for your newsletter or site
- Repurpose content into social posts or scripts
- Keep your brand relevant in the fast-moving AI landscape

Setup Requirements:
- Perplexity API Key
- Anthropic API Key
- Google Docs (OAuth2 connected)
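A sketch of how the structured writing prompt described above might be assembled in a Code node. The `research` field name and the exact wording are assumptions, not the template's actual prompt.

```javascript
// Code node sketch: assemble the structured writing prompt from the
// Perplexity research output. Field name and wording are assumptions.
const research = $json.research;

const prompt = `Write a blog post for a non-technical audience based on the
research below. Include: a catchy title, an intro, at least 3 section
headers, a key takeaway, and an SEO meta description under 160 characters.

Research:
${research}`;

return [{ json: { prompt } }];
```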
by Joseph
This workflow automates invoice generation from form submissions: it ensures unique order IDs, creates PDF invoices, stores the files, emails customers, and logs invoice data, all seamlessly integrated.

🔹 Workflow Overview
1. Trigger (Webhook): Starts when an order form is submitted, capturing customer and order details.
2. Generate Random Order ID: A Function node creates a unique alphanumeric invoice ID (e.g., INV-X92B7D).
3. Check for Duplicate Order ID: Google Sheets looks up the generated order ID in your invoice log sheet to prevent duplicates.
4. Conditional Check (IF Node): If the ID already exists, a new ID is regenerated (loops back); if it is unique, the workflow proceeds to invoice creation.
5. Prepare Invoice Data: A Set node formats the customer info, date, order items, and the unique order ID to fit your invoice template.
6. Convert HTML to PDF: An HTTP Request node sends your invoice HTML to the RapidAPI HTML-to-PDF service and receives the PDF file.
7. Upload PDF to Cloud Storage: Saves the PDF in Google Drive or Dropbox with a clear file name like Invoice-INV-X92B7D.pdf.
8. Send Invoice Email to Customer: An Email node attaches the PDF and includes the order ID in the email subject/body.
9. Log Invoice Details: Appends the invoice data (customer info, order ID, total, PDF link) to your Google Sheet for tracking.

⚙️ Node Details & Setup
1. Webhook Trigger: Configure it to receive form submissions (order details like name, email, items, total).
2. Function: Generate Random Order ID: Sample JS code generates unique IDs prefixed with INV- (see the sketch after this listing).
3. Google Sheets: Lookup Row: Set up the connection to your invoice log sheet and search for the existing order ID to avoid duplicates.
4. IF Node: Check Order ID Existence: If the order ID is found, loop back to regenerate; else, continue the workflow.
5. Set Node: Prepare Invoice HTML: Define variables like customer name, date, items, and order ID. This data populates your HTML invoice template.
6. HTTP Request: Convert HTML to PDF: Get your key from the API's RapidAPI page, send the invoice HTML in the request body, and receive the PDF file blob or a download URL.
7. Google Drive (or Dropbox) Upload: Upload the PDF file using the file name format Invoice-{{$json["order_id"]}}.pdf.
8. Email Node: The recipient is the customer email from the form data; attach the generated PDF and include the order ID in the email subject or body for reference.
9. Google Sheets: Append Row: Log the invoice metadata to keep records updated.

📁 Google Sheets Template
You can make a copy of the invoice log template here. This sheet includes columns for order_id, customer name, email, total, and invoice PDF link. Customize it as needed.

📌 Additional Notes
- Customize the invoice HTML template inside the Set node to match your branding.
- Ensure the API credentials for RapidAPI, Google Drive/Dropbox, and email are properly set up in your n8n credentials.
- You can expand this workflow by adding payment processing or SMS notifications.

Need help or want a custom workflow? Reach out via email at joseph@uppfy.com.
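A minimal version of the Function node JS referenced in step 2, generating IDs in the INV-X92B7D format:

```javascript
// Function node sketch: generate a random alphanumeric order ID prefixed
// with "INV-", matching the INV-X92B7D format described above.
const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
let id = 'INV-';
for (let i = 0; i < 6; i++) {
  id += chars.charAt(Math.floor(Math.random() * chars.length));
}
return [{ json: { ...$json, order_id: id } }];
```

The duplicate check in steps 3 and 4 then guarantees uniqueness even in the unlikely event of a collision.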
by Stephan Koning
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. **Alternatively, you can delete the community node and use the HTTP node instead.**

Most email agent templates are fundamentally broken. They're stateless—they have no long-term memory. An agent that can't remember past conversations is just a glorified auto-responder, not an intelligent system. This workflow is Part 1 of building a truly agentic system: creating the brain. Before you can have an agent that replies intelligently, you need a knowledge base for it to draw from. This system uses a sophisticated parser to automatically read, analyze, and structure every incoming email. It then logs that intelligence into a persistent, long-term memory powered by mem0.

The Problem This Solves
Your inbox is a goldmine of client data, but it's unstructured, and manually monitoring it is a full-time job. This constant, reactive work prevents you from scaling. This workflow solves that "system problem" by creating an "always-on" engine that automatically processes, analyzes, and structures every incoming email, turning raw communication into a single source of truth for growth.

How It Works
This is an autonomous, multi-stage intelligence engine. It runs in the background, turning every new email into a valuable data asset.
1. Real-Time Ingest & Prep: The system is kicked off by the Gmail Trigger, which constantly watches your inbox. The moment a new email arrives, the workflow fires. That email is immediately passed to the Set Target Email node, which strips it down to the essentials: the sender's address, the subject, and the core text of the message (I prefer using the plain text or HTML-as-text for reliability). While this step is optional, it's good practice for keeping the data clean and orderly for the AI.
2. AI Analysis (The Brain): The prepared text is fed to the core of the system: the AI Agent. This agent, powered by the LLM of your choice (e.g., GPT-4), reads and understands the email's content. It's not just reading; it's performing analysis to:
   - Extract the core message.
   - Determine the sentiment (Positive, Negative, Neutral).
   - Identify potential red flags.
   - Pull out key topics and keywords.
   The agent uses Window Buffer Memory to recall the last 10 messages within the same conversation thread, giving it the context to provide a much smarter analysis.
3. Quality Control (The Parser): We don't trust the AI's first draft blindly. The analysis is sent to an Auto-fixing Output Parser. If the initial output isn't in a perfect JSON format, a second parsing LLM (e.g., Mistral) automatically corrects it (an example of the structured output follows this listing). This is our "twist" that guarantees your data is always perfectly structured and reliable.
4. Create a Permanent Client Record: This is the most critical step. The clean, structured data is sent to mem0, and the analysis is logged against the sender's email address. This moves beyond just tracking conversations; it builds a complete, historical intelligence file on every person you communicate with, creating an invaluable, long-term asset.

Optional Use: For back-filling historical data, you can disable the Gmail Trigger and temporarily connect a Gmail "Get Many" node to the Set Target Email node to process your backlog in batches.

Setup Requirements
To deploy this system, you'll need the following:
- An active n8n instance.
- Gmail API credentials.
- An API key for your primary LLM (e.g., OpenAI).
- An API key for your parsing LLM (e.g., Mistral AI).
- An account with mem0.ai for the memory layer.
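For reference, an example of the kind of structured JSON the parser could enforce before the record is written to mem0. The exact field names are illustrative, not the template's schema.

```json
{
  "sender": "client@example.com",
  "summary": "Asks for a revised quote before Friday.",
  "sentiment": "Neutral",
  "red_flags": ["tight deadline"],
  "keywords": ["quote", "pricing", "deadline"]
}
```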
by Vitorio Magalhães
Auto-publish NASA APOD to LinkedIn with AI translation and hashtags

Transform NASA's daily astronomical wonders into engaging LinkedIn content automatically. This workflow fetches NASA's Astronomy Picture of the Day, translates it into your target language (Brazilian Portuguese by default) using AI, generates strategic hashtags, and publishes everything to your LinkedIn profile with the stunning space image attached.

Who's it for
Content creators, astronomy enthusiasts, science communicators, and anyone who wants to share high-quality educational content consistently on LinkedIn. Perfect for Portuguese-speaking professionals who want to engage their network with fascinating space discoveries while building their personal brand as a science advocate.

How it works
The workflow runs on a daily schedule and handles the complete content pipeline automatically. It fetches the latest NASA APOD through the official API, including both the image and the detailed explanation (a fetch sketch follows this listing). The English description is professionally translated to the selected language using Google Gemini 2.5 Flash, maintaining scientific accuracy and terminology. Smart hashtag generation combines fixed branding tags with content-specific ones, mixing Portuguese and English for maximum reach. The final post includes the NASA image, the translated description, and strategic hashtags, and is then published to your LinkedIn profile automatically.

How to set up
You'll need accounts for Google AI Studio (free), LinkedIn Developer (free), and a Telegram bot for notifications. The setup takes about 15 minutes and uses only free services and APIs.
1. Create your Google AI Studio account and get an API key for the AI translation services.
2. Set up a LinkedIn OAuth2 application to enable posting permissions.
3. Create a Telegram bot through BotFather and get your chat ID for notifications.
4. Configure the Settings node with your Telegram chat ID and preferred language.
The workflow comes with all prompts and configurations ready to use. Test each component individually before activating the daily automation.

Requirements
- LinkedIn account with posting permissions
- Google AI Studio API key (free tier available)
- Telegram bot token and your chat ID
- Basic understanding of OAuth2 setup for LinkedIn
- NASA API key (optional; a demo key is included)
All services used have generous free tiers, making this workflow completely free to operate indefinitely.

How to customize the workflow
The centralized Settings node makes customization simple. Change the target language from Brazilian Portuguese to any other language by updating the translate_to_language variable. Modify the posting schedule in the CRON trigger to match your preferred timing. Customize the post template in the "Create Final Post Text" node to match your personal brand voice. Adjust the hashtag strategy by editing the AI prompt in the "Generate Hashtags" node. Add additional social platforms by duplicating the LinkedIn publisher with different credentials. The AI prompts can be fine-tuned for different writing styles or specific astronomical topics. You can also extend the workflow to include additional content processing, image enhancements, or cross-posting to multiple platforms while keeping the core NASA APOD automation intact.
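As a reference for the fetch step, a minimal Code node sketch against NASA's public APOD endpoint. The response field names match NASA's documented API; DEMO_KEY is the shared demo key mentioned above.

```javascript
// n8n Code node sketch: fetch today's APOD from NASA's public API.
// DEMO_KEY is NASA's shared demo key (rate-limited); use your own key
// for daily automation.
const apod = await this.helpers.httpRequest({
  url: 'https://api.nasa.gov/planetary/apod',
  qs: { api_key: 'DEMO_KEY' },
  json: true,
});

return [{
  json: {
    title: apod.title,
    explanation: apod.explanation, // English text sent to Gemini for translation
    imageUrl: apod.hdurl ?? apod.url,
    date: apod.date,
  },
}];
```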