by Shinji Watanabe
## Who’s it for

Learners, teachers, and content creators who track German vocabulary in Google Sheets and want automatic enrichment with synonyms, example sentences, and basic lexical info—without copy-and-paste.

## How it works / What it does

When a new row is added to your sheet (column `vocabulary`), the workflow looks up the word in OpenThesaurus and checks if any entries are found. If so, an LLM generates a strict JSON object containing `natural_sentence` (a clear German example), `part_of_speech`, `translation_ja` (a concise Japanese gloss), and `level` (a CEFR estimate); an example object is shown at the end of this description. The JSON is parsed and written back to the same row, keeping your spreadsheet the single source of truth. If no entry is found, the workflow writes a helpful “not found” note.

## How to set up

1. Connect Google Sheets and select your spreadsheet/tab. Confirm a `vocabulary` column exists.
2. Configure OpenThesaurus (no API key required).
3. Add your LLM credentials and keep the prompt’s “JSON only” constraint.
4. Rename nodes clearly and add a yellow sticky note with this description.

## Requirements

- Access to Google Sheets
- LLM credentials (e.g., OpenAI)
- A tab containing a `vocabulary` column

## How to customize the workflow

- Adjust the If condition (e.g., require `terms.length > 1` or fall back to the headword).
- Tweak the LLM prompt for tone, length, or level policy.
- Map extra fields in the Set node; add columns for difficulty tags or usage notes.
- Follow security best practices (no hardcoded secrets in HTTP nodes).
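A minimal example of the strict JSON object the LLM is asked to return, assuming the four fields named above (the word and values are illustrative):

```json
{
  "natural_sentence": "Ich möchte meinen Wortschatz jeden Tag erweitern.",
  "part_of_speech": "Substantiv",
  "translation_ja": "語彙",
  "level": "B1"
}
```

The “JSON only” constraint in the prompt matters here: any extra prose around the object would break the parsing step that writes these fields back to the row.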
by Rahul Joshi
## Description

Automate your financial reporting by pulling charge and refund data from Stripe, calculating key revenue and risk metrics, and delivering professional reports directly into Slack. This workflow runs on a monthly or quarterly schedule, processes Stripe data into insights, and formats a rich Slack message with revenue breakdowns, top customers, refund analysis, and payment method insights. 📊💰💬

## What This Template Does

- Runs automatically on a monthly (1st day) or quarterly schedule (every 3 months) at 9 AM. ⏱️
- Fetches Stripe charges and refunds for the reporting period. 💳
- Merges charge and refund data for a unified dataset. 🔄
- Calculates financial metrics: total revenue, net revenue, average transaction value, refund rate (see the sketch at the end of this description). 📈
- Estimates growth metrics: Monthly Recurring Revenue (MRR) and Annual Recurring Revenue (ARR). 🚀
- Identifies top 3 customers by revenue. 🏆
- Breaks down payment methods used (e.g., Visa, Mastercard, etc.). 💳
- Performs risk analysis on transactions by Stripe’s risk scores. ⚠️
- Analyzes refund reasons and generates insights. 🔄
- Formats all results into a clear, structured Slack message with sections for finance, growth, risk, and customers. 💬

## Key Benefits

- Eliminates manual Stripe report exports. ⚡
- Ensures timely financial reporting (monthly or quarterly). 📅
- Provides instant visibility of revenue, refunds, and risks in Slack. 📲
- Surfaces top customers and payment methods for strategic insights. 🏅
- Helps finance and ops teams catch anomalies early (high refunds or risky transactions). 🛡️
- Keeps leadership and teams aligned with automated reporting. 👩💻👨💻

## Features

- **Schedule Triggers** – Automates reporting on monthly or quarterly cycles.
- **Stripe Charges & Refunds** – Pulls transaction and refund data directly from the Stripe API.
- **Merge Node** – Combines charges and refunds into a single dataset.
- **Custom Code Metrics** – Calculates revenue, net revenue, refund rates, and growth metrics.
- **Top Customer Analysis** – Highlights top revenue-generating customers.
- **Payment Breakdown** – Shows revenue split by card brand/payment method.
- **Refund Analysis** – Summarizes refund reasons and rates.
- **Risk Analysis** – Categorizes payments by low, medium, or high risk scores.
- **Slack Integration** – Delivers insights in a professional report format.

## Requirements

- n8n instance (cloud or self-hosted).
- Stripe API credentials with read access to charges and refunds.
- Slack Bot token with `chat:write` permission.

## Target Audience

- Finance teams needing automated recurring Stripe reports. 💼
- SaaS companies monitoring MRR, ARR, and refunds. 🚀
- Founders/execs who want financial dashboards in Slack. 👩💼
- Operations teams tracking risk and refund trends. 🛠️
- Remote teams relying on Slack for reporting. 🌍

## Step-by-Step Setup Instructions

1. Connect your Stripe API credentials in n8n. 🔑
2. Connect your Slack API credentials and select your target channel. 💬
3. Adjust the schedule triggers (monthly/quarterly) if needed. ⏱️
4. Customize the Slack message formatting if you want branding or tone changes. 🎨
5. Test the workflow with sample data to confirm financial metrics. ✅
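A minimal sketch of what the "Custom Code Metrics" step might compute in an n8n Code node, assuming Stripe charge objects with `amount`, `amount_refunded` (both in cents), and `paid` fields; the naive "treat the period's net revenue as recurring" growth estimate is an assumption, not the template's exact logic:

```javascript
// Aggregate Stripe charges into the report's core metrics.
const charges = $input.all().map(item => item.json);
const paid = charges.filter(c => c.paid);

const totalRevenue = paid.reduce((sum, c) => sum + c.amount, 0) / 100;
const totalRefunded = paid.reduce((sum, c) => sum + (c.amount_refunded || 0), 0) / 100;

const netRevenue = totalRevenue - totalRefunded;
const avgTransaction = paid.length ? totalRevenue / paid.length : 0;
const refundRate = totalRevenue ? (totalRefunded / totalRevenue) * 100 : 0;

// Naive growth estimate: assume a monthly reporting period is recurring.
const mrr = netRevenue;
const arr = mrr * 12;

return [{ json: { totalRevenue, netRevenue, avgTransaction, refundRate, mrr, arr } }];
```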
by jellyfish
## Overview

This is a powerful n8n "meta-workflow" that acts as a Supervisor. Through a simple Telegram bot, you can dynamically create, manage, and delete countless independent, AI-driven market monitoring agents (Watchdogs). This template is a perfect implementation of the "Workflowception" (workflow managing workflows) concept in n8n, showcasing how to achieve ultimate automation by leveraging the n8n API.

## How It Works

**Telegram Bot Interface:** Execute all operations by sending commands to your own Telegram bot, which controls the Watchdog workflows described below:

- `/add SYMBOL INTERVAL PROMPT`: Add a new monitoring task.
- `/delete SYMBOL`: Delete an existing monitoring task.
- `/list`: List all currently running monitoring tasks.
- `/help`: Get help information.

**Dynamic Workflow Management:** Upon receiving an `/add` command, the Supervisor system reads a "Watchdog" template, fills in your provided parameters (such as the trading pair and time interval), and then automatically creates a brand new, independent workflow via the n8n API and activates it (see the API sketch at the end of this description).

**Persistent Storage:** All monitoring tasks are stored in a PostgreSQL database, ensuring your configurations are safe even if n8n restarts. The ID of each newly created workflow is also written back to the database to facilitate future deletion operations.

**AI-Powered Analysis:** Each created "Watchdog" workflow runs on a schedule. It fetches the latest candlestick chart by calling a self-hosted tradingview-snapshot service. This service, available at https://github.com/0xcathiefish/tradingview-snapshot, works by simulating a login to your account and then using TradingView's official snapshot feature to generate an unrestricted, high-quality chart image. An example of a generated snapshot can be seen here: https://s3.tradingview.com/snapshots/u/uvxylM1Z.png. To use this, download the Docker image from the packages in the GitHub repository mentioned above and run it as a container. The n8n workflow then communicates directly with this container via an HTTP API to request and receive the chart snapshot. After obtaining the image, the workflow calls a multimodal AI model (Gemini). It sends both the chart image and your custom text-based conditions (e.g., "breakout above previous high on high volume" or "break below 4-hour MA20") to the AI for analysis, enabling truly intelligent chart interpretation and alert triggering.

## Key Features

- **Workflowception:** A prime example of one workflow using an API to create, activate, and delete other workflows.
- **Full Control via Telegram:** Manage your monitoring bots from anywhere, anytime, without needing to log into the n8n interface.
- **AI Visual Analysis:** Move beyond simple price alerts. Let an AI "read" the charts for you to enable complex, pattern-based, and indicator-based intelligent alerts.
- **Persistent & Extensible:** Built on PostgreSQL for stability and reliability. You can easily add more custom commands.
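A hedged sketch of the n8n REST API calls the Supervisor pattern relies on. The endpoint paths follow n8n's public API v1 (`POST /api/v1/workflows`, `POST /api/v1/workflows/{id}/activate`, `DELETE /api/v1/workflows/{id}`); the environment variables and the `watchdogTemplate` object are placeholders to adapt to your instance:

```javascript
// Create a new Watchdog from a filled-in template JSON, then activate it.
// N8N_BASE_URL and N8N_API_KEY are placeholders for your instance's values.
const headers = {
  'X-N8N-API-KEY': process.env.N8N_API_KEY,
  'Content-Type': 'application/json',
};

const created = await fetch(`${process.env.N8N_BASE_URL}/api/v1/workflows`, {
  method: 'POST',
  headers,
  body: JSON.stringify(watchdogTemplate), // template with SYMBOL/INTERVAL filled in
}).then(r => r.json());

await fetch(`${process.env.N8N_BASE_URL}/api/v1/workflows/${created.id}/activate`, {
  method: 'POST',
  headers,
});

// Persist created.id in PostgreSQL so a later /delete command can call:
// DELETE /api/v1/workflows/{id}
```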
by Dataki
# BigQuery RAG with OpenAI Embeddings

This workflow demonstrates how to use Retrieval-Augmented Generation (RAG) with BigQuery and OpenAI. By default, you cannot directly use OpenAI cloud models within BigQuery.

## Try it

This template comes with access to a **public BigQuery table** that stores part of the n8n documentation (about nodes and triggers), allowing you to try the workflow right away: `n8n-docs-rag.n8n_docs.n8n_docs_embeddings`

⚠️ **Important:** BigQuery uses the **requester pays** model. The table is small (~40 MB), and BigQuery provides **1 TB of free processing per month**. Running 3–4 queries for testing should remain within the free tier, unless your project has already consumed its quota. More info here: BigQuery Pricing

## Why this workflow?

Many organizations already use BigQuery to store enterprise data, and OpenAI for LLM use cases. When it comes to RAG, the common approach is to rely on dedicated vector databases such as Qdrant, Pinecone, Weaviate, or PostgreSQL with pgvector. Those are good choices, but in cases where an organization already uses and is familiar with BigQuery, it can be more efficient to leverage its built-in vector capabilities for RAG.

Then comes the question of the LLM. If OpenAI is the chosen provider, teams are often frustrated that it is not directly compatible with BigQuery. This workflow solves that limitation.

## Prerequisites

To use this workflow, you will need:

- A good understanding of BigQuery and its vector capabilities
- A BigQuery table containing documents and an embeddings column
  - The embeddings column must be of type `FLOAT` and mode `REPEATED` (to store arrays)
- A data pipeline that generates embeddings with the OpenAI API and stores them in BigQuery

This template comes with a public table that stores part of the n8n documentation (about nodes and triggers), so you can try it out: `n8n-docs-rag.n8n_docs.n8n_docs_embeddings`

## How it works

The system consists of two workflows:

- **Main workflow** → Hosts the AI Agent, which connects to a subworkflow for RAG
- **Subworkflow** → Queries the BigQuery vector table (see the query sketch at the end of this description). The retrieved documents are then used by the AI Agent to generate an answer for the user.
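A hedged sketch of the kind of vector query the subworkflow could run, assembled as a string the way an n8n node would build it. It uses GoogleSQL's `VECTOR_SEARCH` function; the embedding column name (`embedding`), the `content` column, and `top_k` are assumptions to match to your table:

```javascript
// queryEmbedding: a FLOAT array produced by the OpenAI embeddings API for the
// user's question — it must come from the same model as the stored document
// embeddings, since VECTOR_SEARCH only compares vectors, it never creates them.
const sql = `
  SELECT base.content AS document, distance
  FROM VECTOR_SEARCH(
    TABLE \`n8n-docs-rag.n8n_docs.n8n_docs_embeddings\`,
    'embedding',
    (SELECT ${JSON.stringify(queryEmbedding)} AS embedding),
    top_k => 5,
    distance_type => 'COSINE'
  )
  ORDER BY distance
`;
return [{ json: { sql } }];
```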
by Rohit Dabra
# WooCommerce AI Agent — n8n Workflow (Overview)

**Description:** Turn your WooCommerce store into a conversational AI assistant — create products, place orders, run reports and manage coupons using natural language via n8n + an MCP Server.

## Key features

- Natural-language commands mapped to WooCommerce actions (products, orders, reports, coupons).
- Structured JSON outputs + lightweight mapping to avoid schema errors (see the example payload at the end of this overview).
- Calls routed through your MCP Server for secure, auditable tool execution.
- Minimal user prompts — agent auto-fetches context and asks only when necessary.
- Extensible: add new tools or customize prompts/mappings easily.

Demo of the workflow: Youtube Video

## 🚀 Setup Guide: WooCommerce + AI Agent Workflow in n8n

### 1. Prerequisites

- Running n8n instance
- WooCommerce store with REST API keys
- OpenAI API key
- MCP server (production URL)

### 2. Import Workflow

1. Open the n8n dashboard
2. Go to Workflows → Import
3. Upload/paste the workflow JSON
4. Save as "WooCommerce AI Agent"

### 3. Configure Credentials

**OpenAI**
- Create new credential → OpenAI API
- Add your API key → Save & test

**WooCommerce**
- Create new credential → WooCommerce API
- Enter Base URL, Consumer Key & Secret → Save & test

**MCP Client**
- In the MCP Client node, set Server URL to your MCP server production URL
- Add authentication if required

### 4. Test Workflow

1. Open the workflow in the editor
2. Run a sample request (e.g., create a test product)
3. Verify the product appears in WooCommerce

### 5. Activate Workflow

Once tested, click Activate in n8n. The workflow is now live 🎉

### 6. Troubleshooting

- **Schema errors** → Ensure fields match WooCommerce node requirements
- **Connection issues** → Re-check credentials and MCP URL
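To illustrate the "structured JSON outputs" idea, here is a hedged example of the payload the agent might emit for a "create a product" command, shaped like the body of WooCommerce's REST `POST /wp-json/wc/v3/products` endpoint (the values are illustrative; the exact field mapping lives in your workflow's tool definitions):

```json
{
  "name": "Classic Cotton T-Shirt",
  "type": "simple",
  "regular_price": "19.99",
  "description": "Soft, unisex cotton tee.",
  "manage_stock": true,
  "stock_quantity": 100,
  "categories": [{ "id": 15 }]
}
```

Keeping the agent's output constrained to a shape like this is what prevents the schema errors mentioned in Troubleshooting.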
by Rahul Joshi
## Description

Automatically compare candidate resumes to job descriptions (PDFs) from Google Drive, generate a 0–100 fit score with gap analysis, and update Google Sheets—powered by Azure OpenAI (GPT-4o-mini). Fast, consistent screening with saved reports in Drive. 📈📄

## What This Template Does

- Fetches job descriptions and resumes (PDF) from Google Drive. 📥
- Extracts clean text from both PDFs for analysis. 🧼
- Generates an AI evaluation (score, must-have gaps, nice-to-have bonuses, summary). 🤝
- Parses the AI output to structured JSON (an example shape appears at the end of this description). 🧩
- Delivers a saved text report in Drive and updates a Google Sheet. 🗂️

## Key Benefits

- Saves time with automated, consistent scoring. ⏱️
- Clear gap analysis for quick decisions. 🔍
- Audit-ready reports stored in Drive. 🧾
- Centralized tracking in Google Sheets. 📊
- No-code operation after initial setup. 🧑💻

## Features

- Google Drive search and download for JDs and resumes. 📂
- PDF-to-text extraction for reliable parsing. 📝
- Azure OpenAI (GPT-4o-mini) comparison and scoring. 🤖
- Robust JSON parsing and error handling. 🛡️
- Automatic report creation in Drive. 💾
- Append or update candidate data in Google Sheets. 📑

## Requirements

- n8n instance (cloud or self-hosted).
- Google Drive credentials in n8n with access to the JD and resume folders (e.g., “JD store”, “Resume_store”).
- Azure OpenAI access with a deployed GPT-4o-mini model and credentials in n8n.
- Google Sheets credentials in n8n to append or update candidate rows.
- PDFs for job descriptions and resumes stored in the designated Drive folders.

## Target Audience

- Talent acquisition and HR operations teams. 🧠
- Recruiters (in-house and agencies). 🧑💼
- Hiring managers seeking consistent shortlisting. 🧭
- Ops teams standardizing candidate evaluation records. 🗃️

## Step-by-Step Setup Instructions

1. Connect Google Drive and Google Sheets in n8n credentials and verify folder access. 🔑
2. Add Azure OpenAI credentials and select GPT-4o-mini in the AI node. 🧠
3. Import the workflow and assign credentials to all nodes (Drive, AI, Sheets). 📦
4. Set folder references for JDs (“JD store”) and resumes (“Resume_store”). 📁
5. Run once to validate extraction, scoring, report creation, and sheet updates. ✅
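A hedged sketch of the structured JSON the parsing step expects from the AI evaluation. The field names mirror the items listed above (score, must-have gaps, nice-to-have bonuses, summary) but are illustrative; match them to your prompt:

```json
{
  "score": 78,
  "must_have_gaps": ["No production Kubernetes experience"],
  "nice_to_have_bonuses": ["AWS Solutions Architect certification"],
  "summary": "Strong backend profile; meets most must-haves except container orchestration."
}
```

Constraining the model to exactly this shape, and failing the run when parsing breaks, is what keeps the Sheets columns consistent across candidates.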
by Connor Provines
# Analyze email performance and optimize campaigns with AI using SendGrid and Airtable

This n8n template creates an automated feedback loop that pulls email metrics from SendGrid weekly, tracks performance in Airtable, analyzes trends across the last 4 weeks, and generates specific recommendations for your next campaign. The system learns what works and provides data-driven insights directly to your email creation process.

## Who's it for

Email marketers and growth teams who want to continuously improve campaign performance without manual analysis. Perfect for businesses running regular email campaigns who need actionable insights based on real data rather than guesswork.

## Good to know

- After 4–6 weeks, expect 15–30% improvement in primary metrics
- Requires at least 2 weeks of historical data to generate meaningful analysis
- System improves over time as it learns from your audience
- Implementation time: ~1 hour total

## How it works

1. Schedule trigger runs weekly (typically Monday mornings)
2. Pulls the previous week's email statistics from SendGrid (delivered, opens, clicks, rates; a reduction sketch appears at the end of this description)
3. Updates the previous week's record in Airtable with actual performance data
4. GPT-4 analyzes trends across the last 4 weeks, identifying patterns and opportunities
5. Creates a new Airtable record for the upcoming week with specific recommendations: what to test, how to change it, expected outcome, and confidence level
6. Your email creation workflow pulls these recommendations when generating new campaigns
7. After sending, the actual email content is saved back to Airtable to close the loop

## How to set up

1. **Create Airtable base:** Make a table called "Email Campaign Performance" with fields for week_ending, delivered, unique_opens, unique_clicks, open_rate, ctr, decision, test_variable, test_hypothesis, confidence_level, test_directive, implementation_instruction, subject_line_used, email_body, icp, use_case, baseline_performance, success_metric, target_improvement
2. **Configure SendGrid:** Add the API key to the "SendGrid Data Pull" node and test the connection
3. **Set up Airtable credentials:** Add a Personal Access Token and select your base/table in all Airtable nodes
4. **Add OpenAI credentials:** Configure the GPT-4 API key in the "Previous Week Analysis" node
5. **Test with sample data:** Manually add 2–3 weeks of data to Airtable, or run if you have historical data
6. **Schedule weekly runs:** Set the workflow to trigger every Monday at 9 AM (or after your weekly campaign sends)
7. **Integrate with email creation:** Add an Airtable search node to your email workflow to retrieve current recommendations, and an update node to save what was sent

## Requirements

- SendGrid account with API access (or similar ESP with a statistics API)
- Airtable account with Personal Access Token
- OpenAI API access (GPT-4)

## Customizing this workflow

- **Use a different email platform:** Replace the SendGrid node with Mailchimp, Brevo, or any ESP that provides a statistics API—adjust field mappings accordingly
- **Add more metrics:** Extend Airtable fields to track bounce rate, unsubscribe rate, spam complaints, or revenue attribution
- **Change analysis frequency:** Adjust the schedule trigger for bi-weekly or monthly analysis instead of weekly
- **Swap AI models:** Replace GPT-4 with Claude or Gemini in the analysis node
- **Multi-campaign tracking:** Duplicate the workflow for different campaign types (newsletters, promotions, onboarding) with separate Airtable tables
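A small sketch of how the weekly SendGrid stats could be reduced to the Airtable fields listed above, assuming the v3 stats response shape (an array of date buckets, each with `stats[].metrics`); the node wiring is illustrative, not the template's exact code:

```javascript
// Collapse SendGrid's weekly stat buckets into one Airtable-ready record.
const buckets = $input.first().json; // e.g. GET /v3/stats?aggregated_by=week

const totals = { delivered: 0, unique_opens: 0, unique_clicks: 0 };
for (const bucket of buckets) {
  for (const s of bucket.stats) {
    totals.delivered += s.metrics.delivered;
    totals.unique_opens += s.metrics.unique_opens;
    totals.unique_clicks += s.metrics.unique_clicks;
  }
}

// Rates are expressed against delivered volume.
const open_rate = totals.delivered ? totals.unique_opens / totals.delivered : 0;
const ctr = totals.delivered ? totals.unique_clicks / totals.delivered : 0;

return [{ json: { ...totals, open_rate, ctr } }];
```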
by Avkash Kakdiya
## How it works

This workflow captures idea submissions from a webhook and enriches them using AI. It extracts key fields like Title, Tags, Submitted By, and Created date in IST format. The cleaned data is stored in a Notion database for centralized tracking. Finally, a confirmation message is posted in Slack to notify the team.

## Step-by-step

### 1. Capture and process submission

- **Webhook** – Receives idea submissions with text and user ID.
- **AI Agent & OpenAI Model** – Enrich and structure the input into Title, Tags, Submitted By, and Created fields.
- **Code** – Extracts clean data, formats tags, and prepares the entry for Notion (a sketch appears at the end of this description).

### 2. Store in Notion

- **Add to Notion** – Creates a new database entry with mapped fields: Title, Submitted By, Tags, Created.

### 3. Notify in Slack

- **Send Confirmation (Slack)** – Posts a confirmation message with the submitted idea title.

## Why use this?

- Centralizes idea collection directly into Notion for better organization.
- Eliminates manual formatting with AI-powered data structuring.
- Ensures consistency in tags, submitter info, and timestamps.
- Provides instant team-wide visibility via Slack notifications.
- Saves time while keeping idea management streamlined and transparent.
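A minimal sketch of what the Code step could do: normalize the AI's fields, coerce tags to an array, and stamp the Created date in IST. The field names are illustrative assumptions:

```javascript
// Normalize the AI output into the shape the Notion node expects.
const ai = $input.first().json;

const tags = Array.isArray(ai.tags)
  ? ai.tags
  : String(ai.tags || '').split(',').map(t => t.trim()).filter(Boolean);

// Format the current time in IST (Asia/Kolkata).
const created = new Date().toLocaleString('en-IN', { timeZone: 'Asia/Kolkata' });

return [{ json: { title: ai.title, tags, submittedBy: ai.user_id, created } }];
```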
by Moka Ouchi
## How it works

This workflow automates the creation and management of a daily space-themed quiz in your Slack workspace. It's a fun way to engage your team and learn something new about the universe every day!

- **Triggers Daily:** The workflow automatically runs at a scheduled time every day.
- **Fetches NASA's Picture of the Day:** It starts by fetching the latest Astronomy Picture of the Day (APOD) from the official NASA API, including its title, explanation, and image URL.
- **Generates a Quiz with AI:** Using the information from NASA, it prompts a Large Language Model (LLM) like OpenAI's GPT to create a unique, multiple-choice quiz question.
- **Posts to Slack:** The generated quiz is then posted to a designated Slack channel. The bot automatically adds numbered reactions (1️⃣, 2️⃣, 3️⃣, 4️⃣) to the message, allowing users to vote.
- **Waits and Tallies Results:** After a configurable waiting period, the workflow retrieves all reactions on the quiz message. A custom code node then tallies the votes, identifies the users who answered correctly, and calculates the total number of participants (a tally sketch follows the requirements below).
- **Announces the Winner:** Finally, it posts a follow-up message in the same channel, revealing the correct answer, a detailed explanation, and mentions all the users who got it right.

## Set up steps

This template should take about 10–15 minutes to set up.

1. **Credentials:**
   - NASA: Add your NASA API credentials in the Get APOD node. You can get a free API key from the NASA API website.
   - OpenAI: Add your OpenAI API credentials in the OpenAI: Create Quiz node.
   - Slack: Add your Slack API credentials to all the Slack nodes. You'll need to create a Slack App with the following permissions: `chat:write`, `reactions:read`, and `reactions:write`.
2. **Configuration:** In the Workflow Configuration node, set your channelId to the Slack channel where you want the quiz to be posted. You can also customize the quizDifficulty, llmTone, and answerTimeoutMin to fit your audience.
3. **Activate Workflow:** Once configured, simply activate the workflow. It will run automatically at the time specified in the Schedule Trigger node (default is 21:00 daily).

## Requirements

- An n8n instance
- A NASA API Key
- An OpenAI API Key
- A Slack App with the appropriate permissions and API credentials
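A hedged sketch of the vote-tally logic. Slack names the numbered reaction emoji `one` through `four`, each carrying a `users` array; the bot's own user ID must be excluded because it seeded the reactions. The response shape, bot ID, and correct-answer variable are illustrative assumptions:

```javascript
// Tally quiz votes from the Slack reactions on the quiz message.
const reactions = $input.first().json.message.reactions || [];
const answerEmoji = ['one', 'two', 'three', 'four'];
const botUserId = 'U0BOTID';  // hypothetical: your bot's user ID
const correct = 'two';        // hypothetical: taken from the generated quiz JSON

const votesByOption = {};
const voters = new Set();
for (const r of reactions) {
  if (!answerEmoji.includes(r.name)) continue;
  const users = r.users.filter(u => u !== botUserId);
  votesByOption[r.name] = users;
  users.forEach(u => voters.add(u));
}

const winners = votesByOption[correct] || [];
return [{ json: { totalParticipants: voters.size, winners } }];
```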
by Julien DEL RIO
## Who's it for

This template is designed for content creators, podcasters, businesses, and researchers who need to transcribe long audio recordings that exceed OpenAI Whisper's 25 MB file size limit (~20 minutes of audio).

## How it works

This workflow combines n8n, FileFlows, and the OpenAI Whisper API to transcribe audio files of any length:

1. User uploads an MP3 file through a web form and provides an email address
2. n8n splits the file into 4 MiB chunks and uploads them to FileFlows
3. FileFlows uses FFmpeg to segment the audio into 15-minute chunks, safely under the 25 MB API limit (see the sizing check at the end of this description)
4. Each segment is transcribed using OpenAI's Whisper API (configured for French by default)
5. All transcriptions are merged into a single text file
6. The complete transcription is automatically emailed to the user

Processing time: typically 10–15 minutes for a 1-hour audio file.

## Requirements

- n8n instance (self-hosted or cloud)
- FileFlows with Docker and FFmpeg installed
- OpenAI API key (Whisper API access)
- Gmail account for email delivery
- Network access between n8n and FileFlows

## Setup

Complete setup instructions, including FileFlows workflow import, credentials configuration, and storage setup, are provided in the workflow's sticky notes.

## Cost

OpenAI Whisper API: $0.006 per minute. A 1-hour recording costs approximately $0.36.
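A quick sizing check showing why 15-minute segments stay under the limit, assuming a typical 128 kbps CBR MP3 (higher bitrates shrink the safe segment length, so adjust accordingly):

```javascript
// Estimate the size of one 15-minute MP3 segment.
const bitrateKbps = 128;                 // assumption: common speech/podcast bitrate
const segmentSeconds = 15 * 60;
const segmentMB = (bitrateKbps * segmentSeconds) / 8 / 1024; // kilobits → MB
console.log(segmentMB.toFixed(1), 'MB'); // ≈ 14.1 MB, well under Whisper's 25 MB cap
```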
by Ali Muthana
## Who’s it for

This template is for professionals, students, and investors who want a simple daily finance briefing. It is useful for anyone who follows private equity, mergers & acquisitions, and general market news but prefers short summaries instead of reading long articles.

## How it works

1. The workflow runs twice a day using a schedule trigger (default 09:00 and 15:00).
2. It pulls articles from three RSS feeds: NYT Private Equity, DealLawyers M&A, and Yahoo Finance.
3. The items are merged and limited to the five most recent stories.
4. A code node formats them into a clean block of text (a sketch appears at the end of this description).
5. An AI Agent rewrites each article into a short, engaging 5–6 sentence summary.
6. The results are delivered directly to your inbox via Gmail.

## How to set up

1. Add your Gmail credential and replace `{{RECIPIENT_EMAIL}}` with your email.
2. Insert your OpenAI API key.
3. (Optional) Replace the RSS feed URLs with your preferred sources.
4. Adjust the schedule times if needed.

## Requirements

- n8n v1.112+
- Gmail credential
- OpenAI API key

## How to customize

You can add more feeds, increase the number of articles, or translate summaries into another language. You can also deliver the summaries to Slack, Notion, or Google Sheets instead of email.
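A minimal sketch of the formatting step, assuming RSS items with `title`, `link`, and `contentSnippet` fields (names vary by feed, so adjust to what your RSS Read node actually emits):

```javascript
// Turn the five merged RSS items into one prompt-ready text block.
const items = $input.all().map(i => i.json);

const block = items
  .map((a, i) => `${i + 1}. ${a.title}\n${a.contentSnippet || ''}\nSource: ${a.link}`)
  .join('\n\n');

return [{ json: { articles: block } }];
```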
by koichi nagino
## Description

Start your day with the perfect outfit suggestion tailored to the local weather. This workflow runs automatically every morning, fetches the current weather forecast for your city, and uses an AI stylist to generate a practical, gender-neutral outfit recommendation. It then designs a clean, vertical image card with all the details—date, temperature, weather conditions, and the complete outfit advice—and posts it directly to your Slack channel. It’s like having a personal stylist and weather reporter deliver a daily briefing right where your team communicates.

## Who’s it for

- Teams working in a shared office location who want a fun, daily update.
- Individuals looking to automate their morning routine and take the guesswork out of getting dressed.
- Community managers wanting to add engaging, automated content to their Slack workspace.
- Anyone interested in a practical example of combining weather data, AI, and dynamic image generation.

## How it works / What it does

1. **Triggers Daily:** The workflow automatically runs every day at 6 AM.
2. **Fetches Weather:** It gets the current weather forecast for a specified city (default is Tokyo) using the OpenWeatherMap node.
3. **Consults AI Stylist:** The weather data is sent to an AI model, which acts as a stylist and returns a practical, gender-neutral outfit suggestion.
4. **Designs an Image Card:** It dynamically creates a vertical image and writes the date, detailed weather info, and the AI's full recommendation onto it (a sketch of this step appears at the end of this description).
5. **Posts to Slack:** Finally, it uploads the completed image card to your designated Slack channel with a friendly morning greeting.

## Requirements

- An n8n instance.
- An OpenWeatherMap API Key.
- An OpenRouter API Key (or credentials for another compatible AI model).
- A Slack workspace and the necessary permissions to connect an app.

## How to set up

1. **Set Weather Location:** In the Get Weather Data node, add your OpenWeatherMap API Key and change the city name if you wish.
2. **Configure AI Model:** In the OpenRouter Chat Model node, add your API Key.
3. **Configure Slack:** In the Upload a file node, add your Slack credentials and, most importantly, select the channel where you want the forecast to be posted.
4. **Adjust Schedule (Optional):** You can change the trigger time in the Daily 6AM Trigger node.

## How to customize the workflow

- **Change the AI's Personality:** Edit the system message in the Generate Outfit Advice node. You could ask the AI to be a pirate, a 90s fashion icon, or a formal stylist.
- **Customize the Image:** In the Create Image Card node, you can change the background color, font sizes, colors, and the layout of the text.
- **Use a Different Platform:** Swap the Slack node for a Discord, Telegram, or Email node to send the forecast to your preferred platform.
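One way an image-card step like this can work without external services is to compose an SVG string in a Code node and pass it along as binary data; the template's actual Create Image Card node may differ, and the colors, sizes, and field names below are illustrative assumptions:

```javascript
// Build a simple vertical card as SVG and attach it as a binary file.
const { date, temp, conditions, outfit } = $input.first().json;

const svg = `
<svg xmlns="http://www.w3.org/2000/svg" width="600" height="1000">
  <rect width="100%" height="100%" fill="#1e2a3a"/>
  <text x="50%" y="90" fill="#fff" font-size="36" text-anchor="middle">${date}</text>
  <text x="50%" y="170" fill="#9cd" font-size="48" text-anchor="middle">${temp}°C ${conditions}</text>
  <text x="50%" y="260" fill="#eee" font-size="24" text-anchor="middle">${outfit}</text>
</svg>`;

return [{
  json: {},
  binary: {
    card: {
      data: Buffer.from(svg).toString('base64'),
      mimeType: 'image/svg+xml',
      fileName: 'daily-outfit.svg',
    },
  },
}];
```

Note that SVG `<text>` does not wrap, so a long outfit recommendation would need to be split across several `<text>` elements; this sketch keeps it to one line for brevity.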