by Julien DEL RIO
**Who's it for**

This template is designed for content creators, podcasters, businesses, and researchers who need to transcribe long audio recordings that exceed OpenAI Whisper's 25 MB file size limit (roughly 20 minutes of audio).

**How it works**

This workflow combines n8n, FileFlows, and the OpenAI Whisper API to transcribe audio files of any length:

1. The user uploads an MP3 file through a web form and provides an email address
2. n8n splits the file into 4 MiB chunks and uploads them to FileFlows
3. FileFlows uses FFmpeg to segment the audio into 15-minute chunks (safely under the 25 MB API limit)
4. Each segment is transcribed using OpenAI's Whisper API (configured for French by default)
5. All transcriptions are merged into a single text file
6. The complete transcription is automatically emailed to the user

Processing time: typically 10–15 minutes for a 1-hour audio file.

**Requirements**

- n8n instance (self-hosted or cloud)
- FileFlows with Docker and FFmpeg installed
- OpenAI API key (Whisper API access)
- Gmail account for email delivery
- Network access between n8n and FileFlows

**Setup**

Complete setup instructions, including FileFlows workflow import, credentials configuration, and storage setup, are provided in the workflow's sticky notes.

**Cost**

OpenAI Whisper API: $0.006 per minute. A 1-hour recording costs approximately $0.36.
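The 4 MiB chunking step can be sketched in plain Node.js, roughly as it might appear in an n8n Code node (the FileFlows upload itself is omitted; chunk size is the only value taken from the template):

```javascript
// Sketch of the 4 MiB chunking step. Everything except the chunk size
// is illustrative; the real workflow hands each chunk to a FileFlows
// upload node afterwards.
const CHUNK_SIZE = 4 * 1024 * 1024; // 4 MiB

function splitIntoChunks(buffer, chunkSize = CHUNK_SIZE) {
  const chunks = [];
  for (let offset = 0; offset < buffer.length; offset += chunkSize) {
    // subarray() creates views, so no data is copied here
    chunks.push(buffer.subarray(offset, offset + chunkSize));
  }
  return chunks;
}

// Example: a 10 MiB file becomes three chunks (4 + 4 + 2 MiB).
const fakeFile = Buffer.alloc(10 * 1024 * 1024);
console.log(splitIntoChunks(fakeFile).length); // 3
```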
by Rajeet Nair
📝 Description

This workflow automatically classifies incoming emails using a combination of conditional logic and minimal AI-based classification. The system checks email content, performs sentiment analysis, uses OpenAI for categorization, and routes emails accordingly, with smart but efficient use of LLMs and AI Agents.

⚙️ How it works

- **Trigger:** An IMAP Email Trigger initiates the workflow upon receiving a new email.
- **Code Block:** Parses essential data from the email.
- **Switch Node:** Routes emails based on classification.
- **LLM Chain:** Processes specific email cases (e.g., inquiries or complaints).
- **AI Agent (minimal):** Used only when other methods cannot determine intent.
- **Email Responses:** Sends tailored replies or routes to support/sales teams accordingly.
- **Sentiment Analysis:** Assists with tone evaluation for better response routing.

🧩 Set up steps

Estimated setup time: 10–15 minutes.

You'll need:

- An IMAP-compatible email account
- OpenAI or any compatible LLM provider
- Pinecone (optional, for vector memory)
- SMTP credentials for sending email

Replace the placeholder credentials in the sticky notes before running.
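The Code Block step can be pictured as a small parsing function. This is a hypothetical sketch, not the template's exact code: the field names (`subject`, `textPlain`, `textHtml`, `from`) and the keyword lists are assumptions.

```javascript
// Hypothetical sketch of the "Code Block" step: extract the fields the
// Switch node needs, plus a cheap keyword pre-classification so the AI
// Agent is only invoked when conditional logic cannot decide.
// Field names and keyword lists are illustrative assumptions.
function parseEmail(item) {
  const subject = (item.subject || '').trim();
  const body = (item.textPlain || item.textHtml || '').trim();
  const haystack = `${subject} ${body}`;

  let category = 'unknown'; // 'unknown' falls through to the AI Agent
  if (/refund|broken|complaint|not working/i.test(haystack)) {
    category = 'complaint';
  } else if (/price|quote|demo|inquiry/i.test(haystack)) {
    category = 'inquiry';
  }
  return { subject, body, from: item.from, category };
}
```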
by The O Suite
How the n8n OWASP Scanner Works & How to Set It Up

**How it works (simple flow):**

- **Input:** Enter a target URL + endpoint (e.g., https://example.com, /login)
- **Scan:** The workflow executes 5 parallel HTTP tests (Headers, Cookies, CORS, HTTPS, Methods)
- **Analyze:** Pure JS logic checks OWASP ASVS (Application Security Verification Standard) rules (no external tools)
- **Merge:** Combines all findings into one Markdown report
- **Output:** Auto-generates and downloads scan-2025-11-16_210900.md (example filename)
- **Email:** (Optional) Forward the report to an email address using Gmail

**Setup in 3 steps (2 minutes)**

1. Import the workflow: copy the full JSON (from "Export Final Workflow"), then in n8n → Workflows → Import from JSON → Paste → Import
2. (Optional) Connect your Gmail credentials in the last node to auto-email the report
3. Click Execute the workflow, enter a URL in the new window, then click "Submit"

You can alternatively download or receive the Markdown report directly from the Markdown to File node.

(Supports any HTTP/HTTPS endpoint. Works in n8n Cloud or self-hosted.)
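The pure-JS analysis step can be sketched as a header check like the one below. This is a minimal illustration, assuming the HTTP node exposes response headers as a lowercase-keyed object; the template's full ASVS rule set covers far more than these three headers.

```javascript
// Minimal sketch of the pure-JS header analysis. The rule list is
// illustrative, not the template's full OWASP ASVS coverage; header
// keys are assumed to be lowercased by the HTTP node.
function checkSecurityHeaders(headers) {
  const findings = [];
  if (!headers['strict-transport-security']) {
    findings.push('Missing Strict-Transport-Security (HSTS)');
  }
  if (!headers['content-security-policy']) {
    findings.push('Missing Content-Security-Policy');
  }
  if (!headers['x-content-type-options']) {
    findings.push('Missing X-Content-Type-Options: nosniff');
  }
  return findings; // merged later into the Markdown report
}
```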
by Fahmi Fahreza
Automated Crypto Forecast Pipeline using Decodo and Gmail

Sign up for Decodo HERE for a discount.

This template scrapes CoinGecko pages for selected coins, converts metrics into clean JSON, stores them in an n8n Data Table, generates 24-hour direction forecasts with Gemini, and emails a concise report.

**Who's it for?**

Crypto watchers who want automated snapshots, forecasts, and a daily email without managing a full data stack.

**How it works**

1. A 30-minute schedule loops over the coins, scrapes CoinGecko (via Decodo), parses the metrics, and upserts them to the Data Table.
2. An 18:00 schedule loads the last 48 hours of data.
3. Gemini estimates next-24-hour direction windows.
4. The email is rendered (HTML + plain text) and sent.

**How to set up**

1. Add Decodo, Gmail, and Gemini credentials.
2. Open Configure Coins to edit the tickers.
3. Set the Data Table ID.
4. Replace the recipient email.

(Self-hosted only) The Decodo community node is required: @decodo/n8n-nodes-decodo.
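The "parses metrics" step amounts to turning scraped strings into clean numeric JSON before the Data Table upsert. This sketch assumes an input shape (`coin`, `price`, `change24h` as scraped strings); the template's actual field names may differ.

```javascript
// Illustrative sketch of the metric-parsing step: scraped CoinGecko
// strings become clean numbers ready for the Data Table upsert.
// The input shape is an assumption, not the template's exact schema.
function parseMetrics(raw) {
  // strip currency symbols, thousands separators, percent signs
  const toNumber = (s) => parseFloat(String(s).replace(/[$,%\s]/g, ''));
  return {
    coin: raw.coin,
    priceUsd: toNumber(raw.price),         // "$64,250.12" -> 64250.12
    change24hPct: toNumber(raw.change24h), // "2.4%"       -> 2.4
    scrapedAt: new Date().toISOString(),
  };
}
```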
by Sayone Technologies
🚀 AI-Powered Email to Purchase Order Workflow

Automatically scan your inbox for new purchase order requests, extract order details using Gemini AI, and log them into Google Sheets, all without manual effort.

✨ Core Capabilities

- ⏱ Runs every minute to check unread emails
- 📧 Filters emails by subject
- 🤖 Uses Gemini AI to summarize email content and extract structured order details
- 📅 Formats dates into ISO calendar weeks
- 📊 Adds product data from Google Sheets to complete the order info
- ✅ Appends final purchase order records to a Google Sheet (without replacing previous ones)

🛠 Setup Essentials

- 📩 Gmail account for fetching unread emails
- 🔑 Google Gemini (PaLM) API credentials
- 📒 Google Sheet with predefined purchase order headers

📖 Activation Guide

1. ⚙️ Configure Gmail & Google Sheets credentials in n8n
2. 🎯 Adjust the subject filter to match your email rules
3. 🔌 Connect Gemini AI with your API credentials
4. 📑 Create a Google Sheet with the required headers
5. ▶️ Activate the workflow and let it run in the background

🎨 Customizing the Workflow

- 🔍 Email Filters → change keywords in the filter node to match your purchase order email subjects
- 🏷 Order Fields → modify the Set and Append to Google Sheet nodes if your schema differs
- ✍️ AI Instructions → adjust the AI Agent's prompt to fit your company's email style or product details
- ⏲ Frequency → update the Cron node if you want to scan emails less often
- 📂 Target Google Sheet → point to a different sheet or tab depending on your department or customer
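The "ISO calendar weeks" formatting can be done in plain JS without a library. This sketch follows the ISO-8601 rule (weeks start on Monday; week 1 is the week containing the year's first Thursday); the output format `YYYY-Www` is an assumption, as the template's exact format is not specified.

```javascript
// ISO-8601 week computation: shift the date to its week's Thursday,
// then count weeks from the start of that Thursday's year.
// The "YYYY-Www" output format is an assumption.
function isoWeek(date) {
  const d = new Date(Date.UTC(date.getFullYear(), date.getMonth(), date.getDate()));
  const day = d.getUTCDay() || 7;          // Mon=1 ... Sun=7
  d.setUTCDate(d.getUTCDate() + 4 - day);  // move to this week's Thursday
  const yearStart = new Date(Date.UTC(d.getUTCFullYear(), 0, 1));
  const week = Math.ceil(((d - yearStart) / 86400000 + 1) / 7);
  return `${d.getUTCFullYear()}-W${String(week).padStart(2, '0')}`;
}

console.log(isoWeek(new Date(2024, 0, 1))); // "2024-W01"
```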
by Abdullahi Ahmed
AI-Powered Lead Triage and Response System 🤖

This advanced workflow creates a customized, embedded lead-capture form, automatically logs client data to a spreadsheet, and uses AI to instantly analyze and summarize the lead for rapid human follow-up.

**How it works**

1. A potential client fills out the Gurey AI partnership form (built-in n8n form trigger).
2. The workflow immediately logs all submitted data to a designated Google Sheet.
3. An AI Agent receives the raw data and is instructed to condense it into a factual, concise client summary.
4. A second AI Agent generates a personalized welcome and confirmation email to the client, using the AI-generated summary and the client's original goals to make the email highly relevant.

**Set up steps (2–3 minutes) ⏱️**

1. Google Sheets: create a new Google Sheet to log your client data, making sure the column headers match the form field names (e.g., "First Name", "📧 Email", etc.).
2. Credentials: add two credentials to your n8n instance: Google Sheets OAuth2 API and OpenRouter API (for the AI Agents).
3. Update nodes: connect the new credentials to the "Log client data" and "OpenRouter Chat Model" nodes.
4. Finalize: open the "Log client data" node and select your newly created Google Sheet.

Detailed descriptions and links are available in the sticky notes within the workflow. 🤓
by Thesys
Analyze crypto markets with interactive graphs using CoinGecko and C1 by Thesys

This n8n template answers questions about real-time prices, market moves, trending coins, and token details with an interactive UI (cards, charts, buttons) instead of plain text, using C1 by Thesys. Data is fetched through the CoinGecko Free MCP tool. Check out a working demo of this template here.

**What this workflow does**

1. A user sends a message in the n8n Chat UI (public chat trigger).
2. The AI Agent interprets the request.
3. The agent calls CoinGecko Free MCP to fetch market data (prices, coins, trending, etc.).
4. The model responds through C1 by Thesys with a streaming UI answer.

**Example prompts you can try right away**

Copy/paste any of these into the chat:

- "What's the current price of Bitcoin and Ethereum?"
- "Give me today's market summary: total market cap, BTC dominance, top gainers/losers."
- "Compare ETH vs SOL over 30 days with a chart."

> Note: This template is for information and visualization, not financial advice.

**How it works**

1. The user sends a prompt.
2. Based on the prompt, the C1 model uses the CoinGecko MCP to fetch live data.
3. The C1 model generates a UI schema response.
4. The schema is rendered as UI using the Thesys GenUI SDK on the frontend.

**Setup**

Make sure you have the following:

1️⃣ Thesys API Key: you'll need an API key to authenticate and use Thesys services. 👉 Get your key here

**What is C1 by Thesys?**

C1 by Thesys is an API middleware that augments LLMs to respond with interactive UI (charts, buttons, forms) in real time instead of text.

**Facing setup issues?**

If you get stuck or have questions:

- 💬 Join the Thesys Community
- 📧 Email support: support@thesys.dev
by ICTS Automation
**Overview**

This workflow processes passport images submitted through a form, extracts structured data using OpenAI OCR, and generates QR codes with the extracted information. Results are displayed on the form completion page and sent via email.

The workflow reduces manual data entry and speeds up document processing where passport information needs to be captured consistently. It works best with clear, front-facing passport images.

**How it works**

1. A user submits a form with one or more passport images.
2. Files are split and processed individually.
3. Each file is validated as an image and resized for OCR accuracy.
4. The image is converted to base64 and sent to OpenAI for data extraction.
5. Extracted fields are cleaned, standardized, and validated.
6. A QR payload is generated from the standardized data and converted to a QR code image URL.
7. Results are aggregated into an HTML summary.
8. The summary is displayed on the form completion page and sent via email.

**Setup steps**

1. Add OpenAI API credentials to the OCR request node.
2. Connect your Gmail account for email delivery.
3. Replace the YOUR_EMAIL placeholder with the recipient address.
4. Test with 1–3 clear passport images.

**Customization**

- Modify the OCR prompt to support other document types.
- Adjust the QR payload format to match your system requirements.
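The base64-and-send step might look like the sketch below, which builds an OpenAI chat request with an inline data URL image. The model name, prompt wording, and field list are assumptions; the template's actual prompt lives in the OCR request node.

```javascript
// Hedged sketch of the base64 + OpenAI request step. Model name,
// prompt, and field list are assumptions, not the template's exact
// configuration.
function buildOcrRequest(imageBuffer, mimeType = 'image/jpeg') {
  const b64 = imageBuffer.toString('base64');
  return {
    model: 'gpt-4o-mini',
    messages: [{
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Extract passport fields as JSON: surname, given_names, ' +
                'passport_number, nationality, date_of_birth, expiry_date.',
        },
        // Inline image as a data URL, so no file hosting is needed
        { type: 'image_url', image_url: { url: `data:${mimeType};base64,${b64}` } },
      ],
    }],
  };
}
```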
by koichi nagino
**Description**

Start your day with the perfect outfit suggestion tailored to the local weather. This workflow runs automatically every morning, fetches the current weather forecast for your city, and uses an AI stylist to generate a practical, gender-neutral outfit recommendation. It then designs a clean, vertical image card with all the details (date, temperature, weather conditions, and the complete outfit advice) and posts it directly to your Slack channel. It's like having a personal stylist and weather reporter deliver a daily briefing right where your team communicates.

**Who's it for**

- Teams working in a shared office location who want a fun, daily update.
- Individuals looking to automate their morning routine and take the guesswork out of getting dressed.
- Community managers wanting to add engaging, automated content to their Slack workspace.
- Anyone interested in a practical example of combining weather data, AI, and dynamic image generation.

**How it works / What it does**

1. Triggers daily: the workflow automatically runs every day at 6 AM.
2. Fetches weather: it gets the current weather forecast for a specified city (default is Tokyo) using the OpenWeatherMap node.
3. Consults the AI stylist: the weather data is sent to an AI model, which acts as a stylist and returns a practical, gender-neutral outfit suggestion.
4. Designs an image card: it dynamically creates a vertical image and writes the date, detailed weather info, and the AI's full recommendation onto it.
5. Posts to Slack: finally, it uploads the completed image card to your designated Slack channel with a friendly morning greeting.

**Requirements**

- An n8n instance.
- An OpenWeatherMap API key.
- An OpenRouter API key (or credentials for another compatible AI model).
- A Slack workspace and the necessary permissions to connect an app.

**How to set up**

1. Set the weather location: in the Get Weather Data node, add your OpenWeatherMap API key and change the city name if you wish.
2. Configure the AI model: in the OpenRouter Chat Model node, add your API key.
3. Configure Slack: in the Upload a file node, add your Slack credentials and, most importantly, select the channel where you want the forecast to be posted.
4. Adjust the schedule (optional): you can change the trigger time in the Daily 6AM Trigger node.

**How to customize the workflow**

- Change the AI's personality: edit the system message in the Generate Outfit Advice node. You could ask the AI to be a pirate, a 90s fashion icon, or a formal stylist.
- Customize the image: in the Create Image Card node, you can change the background color, font sizes, colors, and the layout of the text.
- Use a different platform: swap the Slack node for a Discord, Telegram, or Email node to send the forecast to your preferred platform.
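The handoff from weather data to the AI stylist might look like the sketch below, which condenses an OpenWeatherMap response into a prompt. The response fields shown (`main.temp` in °C with `units=metric`, `weather[0].description`, `name`) follow OpenWeatherMap's current-weather API; the prompt wording itself is an assumption, not the template's actual system message.

```javascript
// Hedged sketch of condensing the OpenWeatherMap payload for the AI
// stylist. The prompt wording is illustrative; the template's real
// instructions live in the Generate Outfit Advice node.
function buildStylistPrompt(weather) {
  const temp = Math.round(weather.main.temp);
  const feels = Math.round(weather.main.feels_like);
  const desc = weather.weather[0].description;
  return `Today in ${weather.name}: ${desc}, ${temp}°C (feels like ${feels}°C). ` +
         `Suggest a practical, gender-neutral outfit in 3 short bullet points.`;
}
```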
by Sasikala Jayamani
**How it works**

- **Input:** Google Sheets provides "Expected Content" rows (one per block/section).
- **HTML Parse:** A JS/HTML step extracts the Actual Content from the email's HTML (from Gmail or any provided HTML source).
- **Merge:** Expected and Actual items are merged into aligned pairs for comparison.
- **Compare:** A JS node compares the strings and produces a Result (Pass/Fail). (This flow intentionally stops at the result and does not compute a mismatch reason.)
- **Log:** The workflow writes "Actual Content" + "Result" back to Google Sheets for reporting.

**Setup steps**

1. **Google Sheets:** Create a sheet with the columns SectionId, ExpectedContent, ActualContent, Result. Populate SectionId + ExpectedContent for each content block you want to verify.
2. **Email HTML source:** Use a Gmail node to pull the message HTML, or use HTTP Request / Read Binary File + HTML/JS to supply the HTML.
3. **Extraction logic (JS/HTML):** Implement selectors/XPaths/DOM parsing for each SectionId to extract the Actual Content. Normalize whitespace and trim HTML entities for a fair comparison.
4. **Merge & compare:** Merge on SectionId to align Expected ↔ Actual. In the Code (JS) node, compare the strings and set Result to Pass if they are equal (or meet your rule), otherwise Fail.
5. **Write back:** Use Google Sheets to update ActualContent and Result for each row.

**Requirements**

- n8n with access to Google Sheets, Code/HTML, and (optionally) Gmail nodes.
- A Google Sheets document with at least these columns: SectionId (or Key), ExpectedContent, ActualContent (output), Result (output: Pass/Fail).
- Access to the email HTML (Gmail node, HTTP fetch, or paste-in).
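The normalize-and-compare logic for the Code (JS) node can be sketched as below. The entity list is deliberately minimal, an assumption to extend for whatever entities your emails actually use.

```javascript
// Sketch of the compare step: strip tags, decode a few common
// entities, collapse whitespace, then compare. The entity list is a
// minimal assumption; extend it for your emails.
function normalize(html) {
  return html
    .replace(/<[^>]+>/g, ' ')   // strip HTML tags
    .replace(/&nbsp;/g, ' ')
    .replace(/&amp;/g, '&')
    .replace(/\s+/g, ' ')       // collapse whitespace runs
    .trim();
}

function compare(expected, actual) {
  return normalize(expected) === normalize(actual) ? 'Pass' : 'Fail';
}
```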
by Abdullah Al Shishani
This workflow helps support teams evaluate call quality and deliver structured feedback without manual review. Agents upload their recordings using an n8n Form, and the system handles transcription, scoring, risk checks, and coaching delivery automatically using Gemini, OpenAI, Google Drive, and Gmail.

**How it works**

1. Form submission: agents submit their name, email, and call recording using an n8n Form. The file is stored securely in Google Drive.
2. AI transcription: Gemini converts the audio into a structured transcript for analysis.
3. Performance scoring: an AI Agent evaluates the conversation across key criteria such as empathy, clarity, accuracy, and policy compliance, producing a weighted score out of 100.
4. Sentiment and risk detection: the workflow identifies customer sentiment and flags potential issues like missing consent or sensitive data exposure.
5. Coaching delivery: a personalized performance summary is generated and sent automatically to both the agent and the supervisor via Gmail.

**Setup steps**

1. Configure the n8n Form fields (agent name, email, file upload).
2. Connect Google Drive for file storage.
3. Add your Gemini and OpenAI API credentials.
4. Connect Gmail for automated feedback emails.
5. (Optional) Adjust the scoring criteria inside the AI Agent node.
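A weighted score out of 100 can be computed as in the sketch below. The criteria and weights are illustrative assumptions; the template's real rubric lives inside the AI Agent node and is adjustable there.

```javascript
// Illustrative weighted-scoring sketch. Criteria names and weights are
// assumptions (the template's rubric is defined in the AI Agent node).
// Each criterion is a 0-100 sub-score; the weights sum to 1.
const WEIGHTS = { empathy: 0.25, clarity: 0.25, accuracy: 0.3, compliance: 0.2 };

function weightedScore(scores) {
  return Math.round(
    Object.entries(WEIGHTS).reduce((sum, [k, w]) => sum + (scores[k] || 0) * w, 0)
  );
}

console.log(weightedScore({ empathy: 80, clarity: 90, accuracy: 70, compliance: 100 })); // 84
```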
by Rachel Stewart
This n8n template shows you how to create a basic Reddit scraper and email yourself the highest-scoring threads.

It is for founders, service providers, and anyone who wants to do more social listening but doesn't want to pay for an expensive tool. It uses a basic Google Sheet for configuration so you can manage and filter without updating any code.

**How it works**

1. We start with a scheduler (but you could trigger the workflow manually if you want).
2. We read in a Google Sheet with the configuration of which subreddits you want to search, as well as a minimum score so you can weight the importance of each subreddit.
3. An RSS feed fetches the content.
4. Next, we normalize the RSS feed so that we can extract the important information.
5. Then we go back to the Google Sheet (this time a different tab) that holds the keywords we want to look for. We also include keywords we don't want.
6. We score each post based on the keywords and additional pain points written into the scoring node.
7. Then we filter out the posts that don't score high enough, or that we've already "seen". We keep track of the posts we've seen in another tab of the Google Sheet, which prevents duplication.
8. Finally, we create the email, with just the title as a link, and send it via SMTP.

**Requirements**

- Google Sheets account & credentials
- A Google Sheet with the configuration
- Email account for SMTP

**How to customize**

- Create your own Google Sheets template like this: Google Doc Template
- In the scoring node, update the pain points (these could be added to the Google Sheet config if you want)
- Update the weights and scoring metrics in the scoring node
- Update with your email
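The scoring node might look like the sketch below: positive keywords and pain points add points, and any blocked keyword disqualifies a post. The point values and the function's input shape are illustrative assumptions; in the template, the keyword lists come from the Google Sheet config.

```javascript
// Hedged sketch of the scoring node. Point values (+10 per keyword,
// +5 per pain point) and the post shape are assumptions; keyword lists
// come from the Google Sheet config in the real workflow.
function scorePost(post, keywords, blocked, painPoints) {
  const text = `${post.title} ${post.content}`.toLowerCase();
  // Any unwanted keyword disqualifies the post outright
  if (blocked.some((k) => text.includes(k))) return 0;
  let score = 0;
  for (const k of keywords) if (text.includes(k)) score += 10;
  for (const p of painPoints) if (text.includes(p)) score += 5;
  return score; // compared later against the subreddit's minimum score
}
```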