by Olivier
This template is a pattern library (one importable workflow) that shows a repeatable way to structure n8n automations so they remain easy to extend, cheaper to run, and safer to scale. It's intentionally opinionated and dry: the goal is not "plug & play", but a set of proven building blocks you can copy into your own workflows.

Problems this framework solves
- **Spaghetti workflows that are hard to change**: A consistent split into Trigger → Manager → Function → Utility so changes don't ripple through everything.
- **Duplicate processing when runs overlap**: Uses "in progress / success / error" indicators so the trigger can skip items that are already being processed.
- **Unnecessary re-runs that keep failing**: Items that fail can be marked/parked, so you don't burn executions repeating the same error.
- **Execution costs exploding over time**: Offers polling + batching alternatives when "one event = one execution" becomes too expensive.
- **Rate limits and API throttling under load**: Includes rate-limited processing patterns (delays/throttling) to smooth spikes.
- **Missed items during downtime, deploys, or restarts**: Stores sync state (e.g., lastSync) in n8n Data Tables instead of relying on in-memory state.
- **Long-running pagination that becomes fragile**: Demonstrates manual "page-wise" pagination (fetch N → process N → checkpoint → repeat) to avoid huge in-memory batches.
- **Debugging incidents without visibility**: Includes an error workflow pattern (Error Trigger + notification) and structured error logging.

What you get in this template
- Trigger patterns (simple and rate-limited)
- Polling / batching patterns (basic → more robust → fully configurable with pagination)
- A "manager" pattern for stateful processing and overlap protection
- Function + utility workflow examples for reusability
- Error logging to a Data Table and an example Telegram alert

Requirements / setup
- An n8n version that includes the Data Table node
- Create/replace the Data Tables used in the template (e.g. Timestamps, Errors)
- Example nodes use ProspectPro, HubSpot, and Telegram (optional). Replace these with your own tools if you're not using them.

Important notes
- This is not a finished automation. Import it, then choose the pattern(s) you need and swap the example "get items / process item" steps for your own logic.
- Some patterns include looping/recursion options. Configure stop conditions carefully to avoid unintended infinite runs.
- This framework is one effective route to scalable n8n systems, not the only one.

Note: this is a living document that will be updated periodically.
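The "page-wise" pagination pattern described above (fetch N → process N → checkpoint → repeat) can be sketched as plain JavaScript, roughly as it might appear in an n8n Code node. This is a minimal illustration, not the template's actual implementation; the function names and the cursor shape are assumptions.

```javascript
// Hypothetical sketch of the page-wise pagination loop: fetch one page,
// process it, persist a checkpoint (e.g. lastSync in a Data Table), repeat.
async function runPaged(fetchPage, processItems, saveCheckpoint, startCursor) {
  let cursor = startCursor;
  let processed = 0;
  while (true) {
    const { items, nextCursor } = await fetchPage(cursor);
    if (items.length === 0) break;       // nothing left to do
    await processItems(items);           // handle only this page in memory
    processed += items.length;
    await saveCheckpoint(nextCursor);    // survive restarts/deploys
    if (!nextCursor) break;              // no further pages
    cursor = nextCursor;
  }
  return processed;
}
```

Because the checkpoint is written after every page, a crashed or interrupted run can resume from the last saved cursor instead of restarting from scratch.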
by Adam ABDELMOUMNI
Back up & restore n8n workflows with preserved folder structure with Google Drive

A. Back up workflows to Google Drive

✅ What problem does this workflow solve?
If you're building and managing multiple automations in a well-organized nested folder structure, losing a workflow to accidental deletion or misconfiguration can cost you hours of work and headaches. This is even more impactful for self-hosted n8n instances. This template solves that for any n8n setup (cloud or self-hosted) by exporting a perfect mirror of a whole n8n project, preserving the nested folder structure and the workflows within it. Everything is uploaded to Google Drive under one main backup folder per execution, so you can keep track of different setup versions over time.

🧑💻 Who's it for?
This workflow is ideal for any n8n user, from beginner to advanced solo/team "flowgrammer", who wants a reliable, safe, automated, and easy-to-access backup solution that is a perfect mirror of their n8n setup.

✨ What it does
**Scheduled Execution**: The workflow runs automatically on a schedule, up to X times a day (or can be triggered manually).
**Creates a backup folder**: It creates a new top-level backup folder named "n8n_backup_folder_structure_DDMMYYYY_HHmmss" (e.g. "n8n_backup_folder_structure_02022026_123343") where the whole nested n8n folder structure, along with the workflows (JSON files), is saved. For example, if an n8n instance looks like this, the same structure will be preserved and mirrored on Google Drive along with the workflows in each folder:

```
projects-root-folder/
└── Your-project-folder-name/
    ├── Utilities/
    │   ├── Error_management/
    │   │   └── error_alerting.json
    │   └── Log_analysis/
    ├── Reports/
    │   └── Clients/
    │       ├── Client_A/
    │       └── Client_B/
    │           └── client_reporting_standard.json
    ├── ...
    └── workflow_test.json
```

**Fetches the undocumented n8n API**: It connects to your n8n instance via the n8n API: not the documented one, but the one your UI uses when you manipulate your instance. This API is used instead of the native n8n node because it offers extra features that are needed here, such as:
- Retrieve a workflow's parent folder name
- Retrieve a workflow's description
- Retrieve the n8n instance's projectId
- Create an n8n data table
- Get the properties of an n8n data table

🛠️ How to set up
1. Configure credentials: Make sure you have valid credentials for Google Drive, to allow the workflow to create folders and upload files.
2. Set your variables: In the first Set node, named "n8n instance/project access details":
   - n8n_instance_URL: Paste your n8n instance URL without a trailing "/". For example: https://myautomations.app.n8n.cloud
   - emailOrLdapLoginId: your login email address
   - password: your password
3. Create your main backup folder: Create the main folder on your Google Drive where all backups will be uploaded (for example "my_n8n_backup"), then select it under "Parent Folder" in the node Create backup folder "n8n_backup_folder_structure_ddMMyyyy_HHmmss".
🚀 Activate the workflow, and you're all set!

B. Restore backed-up workflows to n8n from Google Drive
Run it manually after entering, in the node "Search for the backup folder in Drive", the name of the specific Google Drive backup folder you want to restore. After running the workflow, you'll see the backup folder restored in n8n with all your saved folders/subfolders and their workflows. Open that folder and you'll see your whole setup successfully restored!
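The folder naming convention above is easy to reproduce in a Code node. This is a sketch of the idea, not the template's actual node; the function name is an assumption.

```javascript
// Build the backup folder name in the DDMMYYYY_HHmmss convention,
// e.g. "n8n_backup_folder_structure_02022026_123343".
function backupFolderName(date = new Date()) {
  const p = (n, w = 2) => String(n).padStart(w, '0');
  const stamp =
    p(date.getDate()) + p(date.getMonth() + 1) + date.getFullYear() +
    '_' + p(date.getHours()) + p(date.getMinutes()) + p(date.getSeconds());
  return `n8n_backup_folder_structure_${stamp}`;
}
```

Because the timestamp sorts each execution into its own folder, older backups are never overwritten and you can restore any earlier version.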
by Aziz B
Overview
This workflow acts as an AI-powered research assistant that takes a topic from the user, performs multi-step intelligent research, and stores the final report in Notion. It uses advanced search, content extraction, and AI summarization to deliver a high-quality research report, fully automated from query to publication.

How It Works
**User Interaction**: The workflow starts by asking the user what topic they want to research. A "Strategy Agent" asks 2–3 clarifying questions to refine the scope. Once the user confirms, it creates a Notion database page with the research title.
**Search Query Generation**: Generates up to 3 relevant search queries for the given topic.
**Data Gathering** (loop over each query): Sends the query to the Tavily Search API to find the most relevant blogs/articles, picks the top-matched link, and uses Tavily again to extract its content. Repeats the process for all 3 queries.
**Report Compilation**: Aggregates the extracted content from all sources. A Final Report Agent creates a well-structured research report in Markdown, converts Markdown → HTML → splits it into chunks, and pushes each chunk into the Notion report page.
**Delivery**: Sends the final Notion report link back to the user.

How to Use
This workflow is triggered via Webhook. Attach the provided **webhook URL** to any application, form, or chatbot to collect the user's topic. Once triggered, the workflow runs automatically and delivers the research link without any manual steps.

Requirements
To use this workflow, you'll need:
- **n8n account** (self-hosted or cloud)
- **Notion account** with a database where reports will be stored
- **Tavily API Key** – for search & content extraction
- **OpenRouter API key** or **OpenAI API key** – for AI agents & report generation
- **Google Gemini API Key** – for converting Markdown to HTML and splitting content for Notion
- Notion database ID connected in n8n
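The "splits into chunks" step exists because Notion caps how much text a single block can hold (the Notion API limits rich text to roughly 2000 characters per block, which is the assumption used here). A minimal sketch of that splitter, not the template's actual node:

```javascript
// Split long report text into pieces small enough for one Notion block each.
// The 2000-character limit is an assumption based on Notion's API docs.
function splitIntoChunks(text, maxLen = 2000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += maxLen) {
    chunks.push(text.slice(i, i + maxLen));
  }
  return chunks;
}
```

Each chunk is then appended to the report page as its own block, so even very long reports publish cleanly.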
by InfyOm Technologies
✅ What problem does this workflow solve?
Teachers, coaches, and educators spend hours manually creating quizzes from study notes and PDFs. This workflow eliminates that effort by using AI to convert PDF study material into fully functional quizzes, complete with scoring, student tracking, and Google Form integration, all automatically.

⚙️ What does this workflow do?
- Accepts study material as a PDF upload
- Extracts educational content from the document
- Uses AI to generate high-quality MCQ questions
- Automatically creates a Google Quiz Form
- Saves all questions and correct answers for teacher reference
- Enables instant scoring and response tracking for students

🧠 How It Works – Step-by-Step

1. 📄 Upload Study Material (PDF)
Teachers upload a PDF via an n8n Form Trigger and specify how many quiz questions they want.

2. 📚 PDF Parsing & AI Question Generation
The workflow extracts text from the PDF. An AI Teacher Agent powered by OpenAI:
- Identifies key learning concepts
- Generates multiple-choice questions
- Ensures 4 options per question, one correct answer, and clear, student-friendly language

3. 📝 Quiz Creation (Google Forms)
A new Google Form is created automatically. Quiz mode is enabled with point values, correct/incorrect feedback, and option shuffling. Student detail fields (Name, Email, ID, Class) are added.

4. 📊 Teacher Reference & Record Keeping
All generated questions are logged in Google Sheets. Stored data includes the question text, all options, the correct answers, and the quiz URL.

5. 🎓 Student Submission & Scoring
Students take the quiz via Google Forms. Scores are calculated automatically, and teachers receive all responses in the connected Google Sheet.

🛠 Tools & Integrations Used
- **n8n Form Trigger** – File upload & inputs
- **PDF Parser** – Extracts text from documents
- **OpenAI Chat Model** – AI question generation
- **Google Forms API** – Quiz creation & scoring
- **Google Sheets** – Question storage & response tracking

💡 Key Benefits
- ⏱ Saves hours of manual quiz creation
- 📚 Ensures pedagogically sound questions
- 📊 Automatic grading & analytics
- 🧠 Consistent difficulty & coverage
- 🔁 Easily reusable for different subjects

👤 Who can use this?
Perfect for:
- 🧑🏫 Teachers & Professors
- 🏫 Coaching centers
- 🎓 EdTech & LMS platforms
- 🚀 SaaS founders building quiz tools

If you want to transform static PDFs into interactive, AI-generated assessments, this workflow is built for you.
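The generation step enforces a strict shape: four options and exactly one correct answer. A hypothetical validator that a Code node could run over the AI output before building the Google Form (the field names `question`, `options`, and `correctAnswer` are assumptions, not the template's actual schema):

```javascript
// Guard against malformed AI output before it reaches Google Forms:
// must have a question string, exactly 4 options, and a correct answer
// that is one of those options.
function isValidQuestion(q) {
  return typeof q.question === 'string'
    && Array.isArray(q.options) && q.options.length === 4
    && q.options.includes(q.correctAnswer);
}
```

Filtering with a check like this keeps a single badly formatted question from breaking the whole quiz-creation step.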
by zapgrow
AI E-E-A-T WordPress Blog Generator (n8n)
This workflow generates SEO-optimized, E-E-A-T compliant blog posts from a form input and publishes them as WordPress drafts with featured images.

Features
- Form-based blog brief
- SEO metadata + outline generation
- Full HTML blog writing
- Featured image generation
- WordPress draft creation

Requirements
- n8n v1.40+
- OpenAI API key
- WordPress REST API access

Environment Variables
WP_SITE_URL=https://example.com
SITE_NAME=Your Website Name
PROJECT_CONTEXT=Your niche description

How to Use
1. Import the workflow JSON
2. Configure OpenAI & WordPress credentials
3. Set the environment variables
4. Open the Form Trigger
5. Submit blog details
6. The draft appears in WordPress

Notes
- Content is created as a draft
- No credentials are included
by Alok Kumar
Parse, Normalize, Extract, and Store PDF Content for RAG in Pinecone
This workflow automates a full RAG pipeline for structured documents (like insurance policies).

What it does
- Watches a Google Drive folder for new PDFs
- Uploads to LlamaIndex Cloud for parsing → returns clean Markdown
- Normalizes the text (removes headers, footers, page numbers, and formatting artifacts)
- Splits the text into chunks (~1200 chars with 150 overlap)
- Generates embeddings with OpenAI
- Stores vectors in Pinecone with metadata
- Connects a Chat Agent that retrieves answers from Pinecone

Who's it for
- Developers building chatbots or Q&A systems for structured docs
- Teams working with insurance, compliance, or legal PDFs
- Anyone who needs to normalize & store documents for semantic search

Requirements
- Google Drive connected (for source PDFs)
- LlamaIndex Cloud account (parsing API key)
- Pinecone account (vector DB)
- OpenAI account (LLM and embeddings)

How to use and customize
- Update the folder name in the Google Drive trigger node.
- Place a PDF file in that Google Drive folder.
- Customize the Normalized Content function node to adjust the regex for headers/footers specific to your documents.
- Adjust the chunk size or metadata namespace in the Pinecone node to fit your project needs.
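The chunking step (~1200 characters with a 150-character overlap) can be sketched as a sliding window. This is an illustrative sketch of the technique, not the template's actual splitter node:

```javascript
// Sliding-window chunker: ~1200-char windows with a 150-char overlap so
// sentences near a boundary appear in both neighboring chunks and no
// context is lost at the cut.
function chunkText(text, size = 1200, overlap = 150) {
  const chunks = [];
  const step = size - overlap; // advance 1050 chars per chunk
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

Overlap is a common RAG trade-off: larger overlap improves recall for queries that straddle chunk boundaries, at the cost of storing and embedding more redundant text.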
by Yulia
This template helps you create an intelligent document assistant that can answer questions from uploaded files. It shows a complete single-vector RAG (Retrieval-Augmented Generation) system that automatically processes documents, lets you chat with them in natural language, and provides accurate, source-cited responses. The workflow consists of two parts: the data loading pipeline and the RAG AI Agent that answers your questions based on the uploaded documents. To test this workflow, you can use the following example files in a shared Google Drive folder. 💡 Find more information on creating RAG AI agents in n8n on the official page.

🔗 Example files
The template uses the following example files in Google Docs format:
- German data protection law: Bundesdatenschutzgesetz (BDSG)
- Computer Security Incident Handling Guide (NIST.SP.800-61r2)
- Berkshire Hathaway letter to shareholders from 2024

🚀 How to get started
- Copy or import the template to your n8n instance.
- Create your Google Drive credentials via the Google Cloud Console and add them to the trigger node "Detect New Files". A detailed walk-through can be found in the n8n docs.
- Create a Qdrant API key and add it to the "Insert into Vector Store" node credentials. The API key is displayed after you have logged into Qdrant and created a cluster.
- Create or activate your OpenAI API key.

1️⃣ Import your data and store it in a vector database
✅ Upload files to Google Drive. IMPORTANT: This template supports files in Google Docs format. New files are downloaded in HTML format and converted to Markdown. This preserves the overall document structure and improves the quality of responses.
- Open the shared Google Drive folder
- Create a new folder on your Google Drive
- Activate the workflow
- Copy the files from the shared folder to your new folder
The webhook will catch the added files and you will see the execution in your "Executions" tab.
Note: If the webhook doesn't see the files you copied, try adding them to your Google Drive folder from the opened shared files via the *Move to* feature.
✅ Chunk, embed, and store your data with the connected OpenAI embedding model and Qdrant vector store. A Qdrant collection – vector storage for your data – is created automatically after the n8n webhook has caught your data from Google Drive. You can name your collection in the "Insert into Vector Store" node.

2️⃣ Add retrieval capabilities and chat with your data
✅ Select the database with the imported data in the "Search Documents" sub-node of the AI Agent.
✅ Start a chat with your agent via the chat interface: it will retrieve data from the vector store and provide a response.
❓ You can ask the following questions based on the example files to test this workflow:
- What are the main steps in incident handling?
- What does Warren Buffett say about mistakes at Berkshire?
- What are the requirements for processing personal data?
- Do any documents mention data breach notification?

🌟 Adapt the workflow to your own use case
- **Knowledge management** – Query company docs, policies, and procedures
- **Research assistance** – Search through academic papers and reports
- **Customer support** – Build agents that reference product documentation
- **Legal/compliance** – Query contracts, regulations, and legal documents
- **Personal productivity** – Chat with your notes, articles, and saved content

The workflow automatically detects new files, processes them into searchable vector chunks, and maintains conversation context. Just drop files in your Google Drive folder and start asking questions. 💻
📞 Get in touch with me if you want to customise this workflow or have any questions.
by kote2
This workflow allows a LINE user to send either text or an image of food to a connected LINE bot. If text is sent, the AI agent responds directly via LINE. If an image is sent, the workflow downloads it from LINE's API, analyzes it using OpenAI's Vision model, estimates calories (only if the image contains food), and formats the result as JSON. Detected dishes and calories are appended to a Google Sheet, and a confirmation message is sent back to the user via LINE.

Key Features:
- Integrates the LINE Messaging API webhook with n8n
- Uses OpenAI Vision to detect food and estimate calories
- Automatically logs results to Google Sheets
- Sends real-time feedback to the LINE user

How to use:
1. Set up a LINE Messaging API channel and get your channel access token.
2. Add your OpenAI API credentials in n8n.
3. Replace the placeholders for {channel access token}, {your id}, and the Google Sheet IDs with your own.
4. Activate the workflow and send a food image or text message to your LINE bot.
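The "formats the result as JSON" step implies the Vision model is prompted to return a machine-readable structure. A hypothetical shape and a small parser that also honors the "only if the image contains food" rule (the `isFood` / `dishes` field names are assumptions, not the template's actual schema):

```javascript
// Parse the Vision model's JSON reply. If the image isn't food,
// return null so nothing is appended to the Google Sheet.
function parseCalorieResult(raw) {
  const res = JSON.parse(raw);
  if (!res.isFood) return null; // skip logging for non-food images
  return res.dishes.map((d) => ({ name: d.name, calories: d.calories }));
}
```

Returning `null` for non-food images gives the workflow a single branch point for deciding whether to write a sheet row or just reply to the user.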
by Billy Christi
Who is this for?
This workflow is ideal for:
- **Business analysts** and **data professionals** who need to quickly analyze spreadsheet data through natural conversation
- **Small to medium businesses** seeking AI-powered insights from their Google Sheets without complex dashboard setups
- **Sales teams** and **marketing professionals** who want instant access to customer, product, and order analytics

What problem is this workflow solving?
Traditional data analysis requires technical skills and time-consuming manual work. This AI data analyst chatbot solves that by:
- **Eliminating the need for complex formulas or pivot tables** – just ask questions in plain text
- **Providing real-time insights** from live Google Sheets data whenever you need them
- **Making data analysis accessible** to non-technical team members across the organization
- **Maintaining conversation context** so you can ask follow-up questions and dive deeper into insights
- **Combining multiple data sources** for comprehensive business intelligence

What this workflow does
This workflow creates an intelligent chatbot that can analyze data from Google Sheets in real time, providing AI-powered business intelligence and data insights through a conversational interface.
Step by step:
1. The Chat Trigger receives incoming chat messages, with session ID tracking for conversation context
2. Parallel Data Retrieval fetches live data from multiple Google Sheets simultaneously
3. Data Aggregation combines the data from each sheet into structured objects for analysis
4. AI Analysis processes user queries using OpenAI's language model with the combined data context
5. Intelligent Response delivers analytical insights, summaries, or answers back to the chat interface

How to set up
1. Connect your Google Sheets account to all Google Sheets nodes for data access
2. View & copy the example Google Sheet template here: 👉 Smart AI Data Analyst Chatbot – Google Sheet Template
3. Update the Google Sheets document ID in all Google Sheets nodes to point to your specific spreadsheet
4. Configure the sheet names to match your Google Sheets structure
5. Add your OpenAI API key to the OpenAI Chat Model node for AI-powered analysis
6. Customize the AI Agent system message to reflect your specific data schema and analysis requirements
7. Configure the chat trigger webhook for your specific chat interface implementation
8. Test the workflow by sending sample queries about your data through the chat interface
9. Monitor responses to ensure the AI is correctly interpreting and analyzing your Google Sheets data

How to customize this workflow to your needs
- **Replace with your own Google Sheets**: update the Google Sheets nodes to connect to your specific spreadsheets based on your use case.
- **Replace with different data sources**: swap the Google Sheets nodes for other data connectors like Airtable, databases (PostgreSQL, MySQL), or APIs to analyze data from your preferred platforms.
- **Modify AI instructions**: customize the Data Analyst AI Agent system message to focus on specific business metrics or analysis types.
- **Change AI model**: switch to different LLM models such as Gemini, Claude, and others based on your complexity and cost requirements.

Need help customizing?
Contact me for consulting and support: 📧 billychartanto@gmail.com
by AFK Crypto
Try It Out!
The Daily AI-Powered Global Trend Analysis Workflow transforms your Discord server into a real-time, AI-driven global intelligence dashboard. Every 6 hours, this automation gathers worldwide data from GDELT, Hacker News, and NewsAPI, analyzing patterns in technology, economics, and geopolitics to uncover emerging global narratives before they hit mainstream awareness. An integrated AI Trend Analyzer Agent distills this massive dataset into concise, actionable insights including:
- Top 5 emerging global trends
- A short AI-written daily summary
- **Regional intelligence highlights**
- **Notable mentions** in innovation, finance, and politics

Each insight is automatically posted to your Discord channel, formatted for quick scanning and decision-making, keeping your team or community ahead of the curve.

How It Works
1. Automated Trigger (Schedule Node) – Executes every 6 or 24 hours (customizable) to fetch the latest global data.
2. Multi-Source Intelligence Aggregation:
   - GDELT – Captures worldwide media signals and geopolitical movements.
   - Hacker News API – Surfaces trending stories in startups, AI, and innovation.
   - NewsAPI – Collects major headlines across global media outlets, filtered by defined keywords.
3. Data Normalization (JavaScript Node) – Cleans and merges all incoming data into a unified format with timestamps.
4. AI Trend Analyzer (LLM Node) – Evaluates data contextually to identify:
   - 📰 Top 5 Global Trends
   - 🌍 Regional Highlights
   - 💡 Key Industry Insights
   - 📈 A 100–150 Word Summary
5. Output Structuring Node – Parses and formats the AI responses into a clean, Discord-friendly layout.
6. Discord Delivery – Sends the compiled report to your specified channel using a webhook or bot token.

How to Use
1. Import the workflow into n8n.
2. Configure the following credentials:
   - NewsAPI Key – for aggregating headlines.
   - LLM API Key (OpenAI or Gemini) – for AI-based summarization.
   - Discord Webhook URL or Bot Token – for automated posting.
3. Edit the NewsAPI keywords to match your industry focus (e.g., "AI", "blockchain", "defense", "renewable energy").
4. Adjust the schedule trigger interval as desired (default: every 6 hours).
5. Activate the workflow, and start receiving continuous, AI-curated global intelligence in Discord.

(Optional) Extend This Workflow
- **Sector Prioritization:** Focus on AI, finance, energy, or web3 insights only.
- **Regional Filters:** Segment analysis by continent or language.
- **Trend Scoring:** Introduce a numeric score to rank importance.
- **Cross-Platform Broadcast:** Expand reports to Telegram, Slack, or X (Twitter).
- **Knowledge Archive:** Auto-store each daily report in Notion or Airtable.

Requirements
- **n8n instance** with HTTP Request, LLM, and Discord nodes
- **NewsAPI key**
- Access to **GDELT** (no authentication required)
- **OpenAI or Gemini key** for AI analysis
- **Discord Webhook URL or Bot Token**

APIs Used
GET https://api.gdeltproject.org/api/v2/doc/doc?query=crypto&format=json
GET https://hn.algolia.com/api/v1/search?query=startup%20OR%20trend&tags=story&hitsPerPage=10
GET https://newsapi.org/v2/everything?q=crypto OR bitcoin OR web3 OR AI&language=en&sortBy=publishedAt&pageSize=10

Summary
The Daily AI-Powered Global Trend Analysis Workflow (Discord Edition) delivers machine-curated global intelligence right where your community communicates. It combines AI-driven reasoning with real-time data aggregation from open sources, converting raw news into structured, actionable insights. Ideal for founders, analysts, researchers, and DAOs, this workflow turns your Discord server into a live intelligence hub, automatically updated with what truly matters worldwide.

Our Website: https://afkcrypto.com/
Check our blogs: https://www.afkcrypto.com/blog
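The Data Normalization step merges three differently shaped API responses into one list. A minimal sketch of what such a JavaScript node could do; the input field names (`title`, `url`) are assumptions and will need adapting to the actual GDELT, Algolia Hacker News, and NewsAPI response shapes:

```javascript
// Merge items from the three sources into a unified shape, tagging each
// item with its source and the time it was fetched.
function normalize(gdeltArticles, hnHits, newsArticles) {
  const fetchedAt = new Date().toISOString();
  const item = (title, url, source) => ({ title, url, source, fetchedAt });
  return [
    ...gdeltArticles.map((a) => item(a.title, a.url, 'gdelt')),
    ...hnHits.map((h) => item(h.title, h.url, 'hackernews')),
    ...newsArticles.map((n) => item(n.title, n.url, 'newsapi')),
  ];
}
```

A uniform item shape is what lets the downstream AI Trend Analyzer treat all three feeds as one dataset and still attribute each signal to its source.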
by Han
How it Works
This workflow automates Invoice & Payment Tracking (with approvals) across Notion and Slack.
1. Ingest – You drop invoices/receipts (PDF/IMG/JSON) into the flow.
2. Extract – OCR + parsing pulls out the key fields (invoice no, vendor, currency, totals, receipt paid amount/date).
3. De-dup & Match – We canonicalize vendor + invoice_no and search Notion:
   - Primary match: Invoice No (+ optional Currency / Vendor (Canon)).
   - Fallback: uses the document's Amount Total and dates.
4. Decide the action:
   - create_unpaid – new invoice (no payment).
   - create_paid – new invoice fully paid (unverified).
   - create_partial – new invoice with a first partial payment.
   - update_partial – add a partial payment to an existing invoice.
   - update_mark_paid – mark an existing invoice paid in full.
   - manual_review – currency mismatch / overpayment / ambiguous.
   - archive – push to archive logs (from manual review).
5. Slack approvals (one-click) – A message shows the previous paid amount, this receipt, and the new total, with Approve buttons (links to a Wait for Webhook resumeUrl). The reviewer picks: Approve Partial / Mark Paid / Manual Review / Archive.
6. Notion updates – We only write editable fields: Paid Amount (number), Status (select), Last Payment Date (date). Formulas (e.g., Amount Total, Amount Due) recompute automatically. Receipts are saved in a Receipts DB and related back to the invoice.
7. Notifications & duplicates – If duplicates are detected, Slack posts a simple list with clickable invoice names.
8. Archiving – From Manual Review, Archive goes straight to the Archived Invoice DB (and optional Archived Source File DB) as a log entry; no pre-checks needed.

Set up Steps

Prerequisites
- 4 Notion DBs
- Slack channels (Invoice Input, Notification, Manual Review, Duplicate Alert (optional))
- AI model (we use Claude 3.5 Haiku; feel free to use the latest model)
- OCR parsing (we used ocr.space; feel free to change to any OCR parser you have)

Create Notion DBs:
- Invoice DB: Title Invoice No; Number Paid Amount (editable); Select Status; Dates (Issue/Due/Last Payment Date); Formulas:
  Amount Total = round(Subtotal - Discount Amount + Tax Total, 2)
  Amount Due = max(0, round(Amount Total - Paid Amount, 2))
- Receipts DB: Invoice No, Vendor, Paid Amount (number), Currency (select), Paid Date (date), Receipt No, Source URL; Relation → Invoice.
- Archived Invoice DB: Invoice No, Vendor, Reason, Source URL, Original Page ID, Archived At (date).
- (Optional) Source File / Archived Source File DBs.
Share all DBs with your Notion integration (Add connections).

Add credentials in n8n: Notion (integration token) and Slack (bot token). Invite the bot to your channel.
Import the workflow/template: set each Notion node's Database ID and each Slack node's Channel/Credential.
Map updates: in the Invoice Update Page node, map Paid Amount, Status, and Last Payment Date. In Create Receipt, map the Invoice relation + receipt fields.
Test: run with a sample invoice/receipt → click a Slack button → verify the Invoice/Receipt updates in Notion → try Archive from Manual Review.
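The "canonicalize vendor + invoice_no" step in De-dup & Match can be sketched as a small normalization function. This is an illustration of the idea, not the template's exact rules:

```javascript
// Build a de-duplication key from vendor + invoice number: lowercase,
// collapse punctuation/whitespace runs, and trim, so minor formatting
// differences on a receipt still match the existing Notion record.
function canonKey(vendor, invoiceNo) {
  const canon = (s) => String(s).toLowerCase().replace(/[^a-z0-9]+/g, ' ').trim();
  return `${canon(vendor)}|${canon(invoiceNo)}`;
}
```

With a key like this, "ACME Corp. / INV-001" and "acme corp / inv 001" resolve to the same invoice, which is what lets the workflow route a second receipt to update_partial instead of creating a duplicate.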
by Viktor Klepikovskyi
Configurable Multi-Page Web Scraper

Introduction
This n8n workflow provides a robust and highly reusable solution for scraping data from paginated websites. Instead of building a complex series of nodes for every new site, you only need to update a simple JSON configuration in the initial Input Node, making your scraping tasks faster and more standardized.

Purpose
The core purpose of this template is to automate the extraction of structured data (e.g., product details, quotes, articles) from websites with multiple pages. It is designed to be fully recursive: it follows the "next page" link until no link is found, aggregates the results from all pages, and cleanly structures the final output into a single list of items.

Setup and Configuration
1. Locate the Input Node: The entire configuration for the scraper is held in the first node of the workflow.
2. Update the JSON: Replace the existing JSON content with your target website's details:
   - startUrl: The URL of the first page to begin scraping.
   - nextPageSelector: The CSS selector for the "Next" or "Continue" link element that leads to the next page. This is crucial for the pagination loop.
   - fields: An array of objects defining the data to extract on each page. For each field, specify the name (the output key), the selector (the CSS selector pointing to the data), and the value (the HTML attribute to pull, usually text or href).
3. Run the Workflow: After updating the configuration, execute the workflow. It will automatically loop through all pages and deliver a final, structured list of the scraped data.

For a detailed breakdown of the internal logic, including how the loop is constructed using the Set, If, and HTTP Request nodes, please refer to the original blog post: Flexible Web Scraping with n8n: A Configurable, Multi-Page Template
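As an illustration only (the template's shipped default may differ), a configuration for a typical paginated quotes listing could look like this, using the startUrl / nextPageSelector / fields keys described above. The selectors here are hypothetical examples for such a site:

```json
{
  "startUrl": "https://quotes.toscrape.com/",
  "nextPageSelector": "li.next a",
  "fields": [
    { "name": "quote", "selector": ".quote .text", "value": "text" },
    { "name": "author", "selector": ".quote .author", "value": "text" },
    { "name": "link", "selector": ".quote a", "value": "href" }
  ]
}
```

Swapping sites then means changing only these three keys: a new start URL, the site's own "next" link selector, and the field selectors, while the loop logic stays untouched.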