by Philflow
This n8n template lets you run prompts against 350+ LLM models and see exactly what each request costs, with real-time pricing from OpenRouter.

Use cases are many: compare costs across different models, plan your AI budget, optimize prompts for cost efficiency, or track expenses for client billing!

Good to know
- OpenRouter charges a platform fee on top of model costs. See OpenRouter Pricing for details.
- You need an OpenRouter account with API credits. Free signup is available, with some free models included.
- Pricing data is fetched live from OpenRouter's API, so costs are always up-to-date.

How it works
- All available models are fetched from OpenRouter's API when you start.
- You select a model and enter your prompt via the form (or just use the chat).
- The prompt is sent to OpenRouter and the response is captured.
- Token usage (input/output) is extracted from the response using a LangChain Code node.
- Real-time pricing for your selected model is fetched from OpenRouter.
- The exact cost is calculated and displayed alongside your AI response (see the sketch below).

How to use
- Chat interface: quick testing - just type a prompt and get the response with costs.
- Form interface: select from all available models via dropdown, enter your prompt, and get a detailed cost breakdown. Click "Show Details" on the result form to see the full breakdown (input tokens, output tokens, cost per type).

Requirements
- OpenRouter account with API key (Get one here)

Customising this workflow
- Add a database node to log all requests and costs over time
- Connect to Google Sheets for cost tracking and reporting
- Extend with LLM-as-Judge evaluation to also check response quality
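As a rough illustration of the cost-calculation step, here is a minimal Code-node sketch. It assumes the selected model's pricing has already been looked up from OpenRouter's models endpoint (`pricing.prompt` / `pricing.completion`, reported as per-token USD values) and that token usage was extracted from an OpenAI-compatible response; the field names are illustrative, not the template's exact schema.

```javascript
// Minimal sketch of the cost calculation (assumed field names, not the template's exact schema)
const { usage, pricing } = $input.first().json;
// usage:   e.g. { prompt_tokens: 812, completion_tokens: 241 }
// pricing: e.g. { prompt: "0.000003", completion: "0.000015" } (USD per token)

const inputCost = usage.prompt_tokens * parseFloat(pricing.prompt);
const outputCost = usage.completion_tokens * parseFloat(pricing.completion);

return [{
  json: {
    inputTokens: usage.prompt_tokens,
    outputTokens: usage.completion_tokens,
    inputCost: Number(inputCost.toFixed(6)),
    outputCost: Number(outputCost.toFixed(6)),
    totalCost: Number((inputCost + outputCost).toFixed(6)),
  },
}];
```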
by Avkash Kakdiya
How it works
This workflow runs daily to collect the latest funding round data from Crunchbase. It retrieves up to 100 recent funding events, including company, investors, funding amount, and industry details. The data is cleaned and filtered to only include rounds announced in the last 30 days. Finally, the results are saved into both Google Sheets for reporting and Airtable for structured database management.

Step-by-step

Trigger & Data Fetching
- Schedule Trigger node – Runs the workflow once a day.
- HTTP Request node – Calls the Crunchbase API to fetch the latest 100 funding rounds with relevant details.

Data Processing
- Code node – Parses the raw API response into clean fields such as company name, funding type, funding amount, investors, industry, and Crunchbase URL (a sketch of this step follows below).
- Filter node – Keeps only funding rounds from the last 30 days to ensure the dataset remains fresh and relevant.

Storage & Outputs
- Google Sheets node – Appends or updates the filtered funding records in a Google Sheet for easy sharing and reporting.
- Airtable node – Stores the same records in Airtable for more structured, database-style organization and management.

Why use this?
- Automates daily collection of startup funding data from Crunchbase.
- Keeps only the most recent and relevant records for faster insights.
- Ensures data is consistently stored in both Google Sheets and Airtable.
- Supports reporting, collaboration, and database management in one flow.
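A minimal sketch of what the Code node's parsing plus the 30-day cut-off might look like. The Crunchbase field names used here (`entities`, `properties`, `announced_on`, and so on) are assumptions for illustration only; adjust them to the payload your Crunchbase API plan actually returns.

```javascript
// Sketch: flatten Crunchbase funding rounds and keep only the last 30 days
// (field names are assumed for illustration; adapt to your actual API response)
const data = $input.first().json;
const cutoff = Date.now() - 30 * 24 * 60 * 60 * 1000;

const rounds = (data.entities || []).map((entity) => {
  const p = entity.properties || {};
  return {
    company: p.funded_organization_identifier?.value,
    fundingType: p.investment_type,
    amountUsd: p.money_raised?.value_usd,
    investors: (p.investor_identifiers || []).map((i) => i.value).join(', '),
    announcedOn: p.announced_on,
    crunchbaseUrl: p.permalink
      ? `https://www.crunchbase.com/funding_round/${p.permalink}`
      : undefined,
  };
});

// keep only rounds announced within the last 30 days
const recent = rounds.filter(
  (r) => r.announcedOn && new Date(r.announcedOn).getTime() >= cutoff
);

return recent.map((r) => ({ json: r }));
```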
by Viktor Klepikovskyi
Configurable Multi-Page Web Scraper

Introduction
This n8n workflow provides a robust and highly reusable solution for scraping data from paginated websites. Instead of building a complex series of nodes for every new site, you only need to update a simple JSON configuration in the initial Input Node, making your scraping tasks faster and more standardized.

Purpose
The core purpose of this template is to automate the extraction of structured data (e.g., product details, quotes, articles) from websites with multiple pages. It is designed to be fully recursive: it follows the "next page" link until no link is found, aggregates the results from all pages, and cleanly structures the final output into a single list of items.

Setup and Configuration
1. Locate the Input Node: the entire configuration for the scraper is held within the first node of the workflow.
2. Update the JSON: replace the existing JSON content with your target website's details (an example configuration is shown below):
   - startUrl: The URL of the first page to begin scraping.
   - nextPageSelector: The CSS selector for the "Next" or "Continue" link element that leads to the next page. This is crucial for the pagination loop.
   - fields: An array of objects defining the data to extract on each page. For each field, specify the name (the output key), the selector (the CSS selector pointing to the data), and the value (the HTML attribute to pull, usually text or href).
3. Run the Workflow: after updating the configuration, execute the workflow. It will automatically loop through all pages and deliver a final, structured list of the scraped data.

For a detailed breakdown of the internal logic, including how the loop is constructed using the Set, If, and HTTP Request nodes, please refer to the original blog post: Flexible Web Scraping with n8n: A Configurable, Multi-Page Template
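An illustrative Input Node configuration following the schema described above. The URL and CSS selectors below are placeholders for a hypothetical quotes site; replace them with values that match your target pages.

```json
{
  "startUrl": "https://example.com/quotes/page/1",
  "nextPageSelector": "li.next > a",
  "fields": [
    { "name": "quote", "selector": ".quote .text", "value": "text" },
    { "name": "author", "selector": ".quote .author", "value": "text" },
    { "name": "authorLink", "selector": ".quote a", "value": "href" }
  ]
}
```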
by Aziz B
Overview
This workflow acts as an AI-powered research assistant that takes a topic from the user, performs multi-step intelligent research, and stores the final report in Notion. It uses advanced search, content extraction, and AI summarization to deliver a high-quality research report, fully automated from query to publication.

How It Works

**User Interaction**
- The workflow starts by asking the user what topic they want to research.
- A "Strategy Agent" asks 2–3 clarifying questions to refine the scope.
- Once the user confirms, it creates a Notion database page with the research title.

**Search Query Generation**
- Generates up to 3 relevant search queries for the given topic.

**Data Gathering** (loop over each query)
- Sends the query to Tavily Search API to find the most relevant blogs/articles.
- Picks the top-matched link and uses Tavily again to extract its content.
- Repeats the process for all 3 queries.

**Report Compilation**
- Aggregates extracted content from all sources.
- A Final Report Agent creates a well-structured research report in Markdown.
- Converts Markdown → HTML → splits into chunks (see the sketch below).
- Pushes each chunk into the Notion report page.

**Delivery**
- Sends the final Notion report link back to the user.

How to Use
- This workflow is triggered via Webhook.
- Attach the provided **webhook URL** to any application, form, or chatbot to collect the user's topic.
- Once triggered, the workflow will run automatically and deliver the research link without any manual steps.

Requirements
To use this workflow, you'll need:
- **n8n account** (self-hosted or cloud)
- **Notion account** with a database where reports will be stored
- **Tavily API Key** – for search & content extraction
- **OpenRouter API key** or **OpenAI API key** – for AI agents & report generation
- **Google Gemini API Key** – for converting Markdown to HTML and splitting content for Notion
- Notion database ID connected in n8n
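A minimal sketch of the "split into chunks" step. It assumes the common approach of keeping each Notion block comfortably under its per-block character limit; the limit value and input field name are assumptions, not values taken from the template.

```javascript
// Sketch: split the report into chunks that fit Notion blocks
// (the ~2000-character per-block limit and the input field name are assumptions)
const text = $input.first().json.reportHtml || '';
const MAX_LEN = 1900; // stay safely under Notion's per-block character limit

const chunks = [];
let current = '';
for (const paragraph of text.split(/\n{2,}/)) {
  if (current && (current + '\n\n' + paragraph).length > MAX_LEN) {
    chunks.push(current);
    current = paragraph;
  } else {
    current = current ? current + '\n\n' + paragraph : paragraph;
  }
}
if (current) chunks.push(current);

// one item per chunk, so each can be appended to the Notion page in turn
return chunks.map((chunk, index) => ({ json: { index, chunk } }));
```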
by InfyOm Technologies
✅ What problem does this workflow solve?
Teachers, coaches, and educators spend hours manually creating quizzes from study notes and PDFs. This workflow eliminates that effort by using AI to convert PDF study material into fully functional quizzes, complete with scoring, student tracking, and Google Form integration — all automatically.

⚙️ What does this workflow do?
- Accepts study material as a PDF upload
- Extracts educational content from the document
- Uses AI to generate high-quality MCQ questions
- Automatically creates a Google Quiz Form
- Saves all questions and correct answers for teacher reference
- Enables instant scoring and response tracking for students

🧠 How It Works – Step-by-Step

1. 📄 Upload Study Material (PDF)
- Teachers upload a PDF via an n8n Form Trigger
- They specify how many quiz questions they want

2. 📚 PDF Parsing & AI Question Generation
- The workflow extracts text from the PDF
- An AI Teacher Agent powered by OpenAI:
  - Identifies key learning concepts
  - Generates multiple-choice questions
  - Ensures 4 options per question, one correct answer, and clear, student-friendly language

3. 📝 Quiz Creation (Google Forms)
- A new Google Form is created automatically (a sketch of the request built in this step appears at the end of this description)
- Quiz mode is enabled with point values, correct/incorrect feedback, and option shuffling
- Student detail fields (Name, Email, ID, Class) are added

4. 📊 Teacher Reference & Record Keeping
- All generated questions are logged in Google Sheets
- Stored data includes question text, all options, correct answers, and the quiz URL

5. 🎓 Student Submission & Scoring
- Students take the quiz via Google Forms
- Scores are calculated automatically
- Teachers receive all responses in the connected Google Sheet

🛠 Tools & Integrations Used
- **n8n Form Trigger** – File upload & inputs
- **PDF Parser** – Extracts text from documents
- **OpenAI Chat Model** – AI question generation
- **Google Forms API** – Quiz creation & scoring
- **Google Sheets** – Question storage & response tracking

💡 Key Benefits
- ⏱ Saves hours of manual quiz creation
- 📚 Ensures pedagogically sound questions
- 📊 Automatic grading & analytics
- 🧠 Consistent difficulty & coverage
- 🔁 Easily reusable for different subjects

👤 Who can use this?
Perfect for:
- 🧑🏫 Teachers & Professors
- 🏫 Coaching centers
- 🎓 EdTech & LMS platforms
- 🚀 SaaS founders building quiz tools

If you want to transform static PDFs into interactive, AI-generated assessments, this workflow is built for you.
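As a rough sketch of the quiz-creation step, here is how AI-generated questions could be mapped onto a Google Forms `batchUpdate` request body. The question object shape (`question`, `options`, `correctAnswer`) is an assumption for illustration, and the Forms request structure should be verified against the Google Forms API v1 reference before use.

```javascript
// Sketch: turn AI-generated MCQs into a Google Forms batchUpdate body
// (input field names are illustrative; verify the request shape against the Forms API docs)
const questions = $input.first().json.questions; // e.g. [{ question, options: [4 strings], correctAnswer }]

const requests = [
  // enable quiz mode so Forms can grade automatically
  {
    updateSettings: {
      settings: { quizSettings: { isQuiz: true } },
      updateMask: 'quizSettings.isQuiz',
    },
  },
  ...questions.map((q, index) => ({
    createItem: {
      item: {
        title: q.question,
        questionItem: {
          question: {
            required: true,
            grading: {
              pointValue: 1,
              correctAnswers: { answers: [{ value: q.correctAnswer }] },
            },
            choiceQuestion: {
              type: 'RADIO',
              options: q.options.map((value) => ({ value })),
              shuffle: true,
            },
          },
        },
      },
      location: { index },
    },
  })),
];

// an HTTP Request node can then POST { requests } to
// https://forms.googleapis.com/v1/forms/{formId}:batchUpdate
return [{ json: { requests } }];
```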
by Alok Kumar
Parse, Normalize, Extract, and Store PDF Content for RAG in Pinecone

This workflow automates a full RAG pipeline for structured documents (like insurance policies).

What it does
- Watches a Google Drive folder for new PDFs
- Uploads to LlamaIndex Cloud for parsing → returns clean Markdown
- Normalizes text (removes headers, footers, page numbers, formatting artifacts)
- Splits text into chunks (~1200 chars with 150 overlap)
- Generates embeddings with OpenAI
- Stores vectors in Pinecone with metadata
- Connects a Chat Agent that retrieves answers from Pinecone

Who's it for
- Developers building chatbots or Q&A systems for structured docs
- Teams working with insurance, compliance, or legal PDFs
- Anyone who needs to normalize & store documents for semantic search

Requirements
- Google Drive connected (for source PDFs)
- LlamaIndex Cloud account (parsing API key)
- Pinecone account (vector DB)
- OpenAI account (LLM and embeddings)

How to use and customize
- Update the folder name in the Google Drive trigger node.
- Place a PDF file in the same folder in Google Drive.
- Customize the Normalized Content function node to adjust the regex for headers/footers specific to your documents (a sketch of this node follows below).
- Adjust the chunk size or metadata namespace in the Pinecone node to fit your project needs.
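A minimal sketch of what the Normalized Content function node and the chunking step might look like. The regex patterns and the input field name are placeholders and will need tuning for your documents' actual headers and footers.

```javascript
// Sketch: normalize parsed Markdown and split it into ~1200-char chunks with 150 overlap
// (regex patterns and the input field name are placeholders; tune them for your documents)
const markdown = $input.first().json.markdown || '';

const normalized = markdown
  .replace(/^Page \d+( of \d+)?$/gim, '')   // standalone page numbers
  .replace(/^ACME Insurance.*$/gim, '')      // example repeated header line (placeholder)
  .replace(/^Confidential.*$/gim, '')        // example repeated footer line (placeholder)
  .replace(/\n{3,}/g, '\n\n')                // collapse leftover blank lines
  .trim();

const CHUNK_SIZE = 1200;
const OVERLAP = 150;
const chunks = [];
for (let start = 0; start < normalized.length; start += CHUNK_SIZE - OVERLAP) {
  chunks.push(normalized.slice(start, start + CHUNK_SIZE));
}

return chunks.map((text, index) => ({ json: { chunkIndex: index, text } }));
```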
by Yulia
This template helps you to create an intelligent document assistant that can answer questions from uploaded files. It shows a complete single-vector RAG (Retrieval-Augmented Generation) system that automatically processes documents, lets you chat with it in natural language, and provides accurate, source-cited responses.

The workflow consists of two parts: the data loading pipeline and a RAG AI Agent that answers your questions based on the uploaded documents. To test this workflow, you can use the example files in a shared Google Drive folder.

💡 Find more information on creating RAG AI agents in n8n on the official page.

🔗 Example files
The template uses the following example files in the Google Docs format:
- German Data Protection law: Bundesdatenschutzgesetz (BDSG)
- Computer Security Incident Handling Guide (NIST.SP.800-61r2)
- Berkshire Hathaway letter to shareholders from 2024

🚀 How to get started
- Copy or import the template to your n8n instance.
- Create your Google Drive credentials via the Google Cloud Console and add them to the trigger node "Detect New Files". A detailed walk-through can be found in the n8n docs.
- Create a Qdrant API key and add it to the "Insert into Vector Store" node credentials. The API key will be displayed after you have logged into Qdrant and created a Cluster.
- Create or activate your OpenAI API key.

1️⃣ Import your data and store it in a vector database

✅ Upload files to Google Drive.
IMPORTANT: This template supports files in Google Docs format. New files will be downloaded in HTML format and converted to Markdown. This preserves the overall document structure and improves the quality of responses.
- Open the shared Google Drive folder
- Create a new folder on your Google Drive
- Activate the workflow
- Copy the files from the shared folder to your new folder
- The webhook will catch the added files and you will see the execution in your "Executions" tab

Note: If the webhook doesn't see the files you copied, try adding them to your Google Drive folder from the opened shared files via the **Move to** feature.

✅ Chunk, embed, and store your data with a connected OpenAI embedding model and Qdrant vector store.
A Qdrant collection – vector storage for your data – will be created automatically after the n8n webhook has caught your data from Google Drive. You can name your collection in the "Insert into Vector Store" node.

2️⃣ Add retrieval capabilities and chat with your data

✅ Select the database with imported data in the "Search Documents" sub-node of the AI Agent.
✅ Start a chat with your agent via the chat interface: it will retrieve data from the vector store and provide a response.

❓ You can ask the following questions based on the example files to test this workflow:
- What are the main steps in incident handling?
- What does Warren Buffett say about mistakes at Berkshire?
- What are the requirements for processing personal data?
- Do any documents mention data breach notification?

🌟 Adapt the workflow to your own use case
- **Knowledge management** - Query company docs, policies, and procedures
- **Research assistance** - Search through academic papers and reports
- **Customer support** - Build agents that reference product documentation
- **Legal/compliance** - Query contracts, regulations, and legal documents
- **Personal productivity** - Chat with your notes, articles, and saved content

The workflow automatically detects new files, processes them into searchable vector chunks, and maintains conversation context. Just drop files in your Google Drive folder and start asking questions.
💻 📞 Get in touch with me if you want to customise this workflow or have any questions.
by Billy Christi
Who is this for?
This workflow is ideal for:
- **Business analysts** and **data professionals** who need to quickly analyze spreadsheet data through natural conversation
- **Small to medium businesses** seeking AI-powered insights from their Google Sheets without complex dashboard setups
- **Sales teams** and **marketing professionals** who want instant access to customer, product, and order analytics

What problem is this workflow solving?
Traditional data analysis requires technical skills and time-consuming manual work. This AI data analyst chatbot solves that by:
- **Eliminating the need for complex formulas or pivot tables** - just ask questions in plain text
- **Providing real-time insights** from live Google Sheets data whenever you need them
- **Making data analysis accessible** to non-technical team members across the organization
- **Maintaining conversation context** so you can ask follow-up questions and dive deeper into insights
- **Combining multiple data sources** for comprehensive business intelligence

What this workflow does
This workflow creates an intelligent chatbot that can analyze data from Google Sheets in real time, providing AI-powered business intelligence and data insights through a conversational interface.

Step by step:
1. Chat Trigger receives incoming chat messages with session ID tracking for conversation context
2. Parallel Data Retrieval fetches live data from multiple Google Sheets simultaneously
3. Data Aggregation combines data from each sheet into structured objects for analysis (see the sketch below)
4. AI Analysis processes user queries using OpenAI's language model with the combined data context
5. Intelligent Response delivers analytical insights, summaries, or answers back to the chat interface

How to set up
1. Connect your Google Sheets account to all Google Sheets nodes for data access
2. View & copy the example Google Sheet template here: 👉 Smart AI Data Analyst Chatbot – Google Sheet Template
3. Update the Google Sheets document ID in all Google Sheets nodes to point to your specific spreadsheet
4. Configure sheet names to match your Google Sheets structure
5. Add your OpenAI API key to the OpenAI Chat Model node for AI-powered analysis
6. Customize the AI Agent system message to reflect your specific data schema and analysis requirements
7. Configure the chat trigger webhook for your specific chat interface implementation
8. Test the workflow by sending sample queries about your data through the chat interface
9. Monitor responses to ensure the AI is correctly interpreting and analyzing your Google Sheets data

How to customize this workflow to your needs
- **Replace with your own Google Sheets**: update the Google Sheets nodes to connect to your specific spreadsheets based on your use case.
- **Replace with different data sources**: swap Google Sheets nodes with other data connectors like Airtable, databases (PostgreSQL, MySQL), or APIs to analyze data from your preferred platforms.
- **Modify AI instructions**: customize the Data Analyst AI Agent system message to focus on specific business metrics or analysis types.
- **Change AI model**: switch to different LLM models such as Gemini, Claude, and others based on your complexity and cost requirements.

Need help customizing? Contact me for consulting and support: 📧 billychartanto@gmail.com
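A minimal sketch of the data-aggregation step, combining the rows returned by several Google Sheets nodes into a single object for the AI Agent. The node names and sheet names ("Get Customers", "Get Products", "Get Orders") are placeholders, not the template's exact ones.

```javascript
// Sketch: combine rows from several Google Sheets nodes into one object for the AI Agent
// (node names below are placeholders, not the template's exact ones)
const customers = $('Get Customers').all().map((item) => item.json);
const products = $('Get Products').all().map((item) => item.json);
const orders = $('Get Orders').all().map((item) => item.json);

return [{
  json: {
    customers,
    products,
    orders,
    rowCounts: {
      customers: customers.length,
      products: products.length,
      orders: orders.length,
    },
  },
}];
```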
by AFK Crypto
Try It Out!

The Daily AI-Powered Global Trend Analysis Workflow transforms your Discord server into a real-time AI-driven global intelligence dashboard. Every 6 hours, this automation gathers worldwide data from GDELT, Hacker News, and NewsAPI — analyzing patterns in technology, economics, and geopolitics to uncover emerging global narratives before they hit mainstream awareness.

An integrated AI Trend Analyzer Agent distills this massive dataset into concise, actionable insights, including:
- Top 5 emerging global trends
- A short AI-written daily summary
- **Regional intelligence highlights**
- **Notable mentions** in innovation, finance, and politics

Each insight is automatically posted to your Discord channel, formatted for quick scanning and decision-making — keeping your team or community ahead of the curve.

How It Works
1. Automated Trigger (Schedule Node) – Executes every 6 or 24 hours (customizable) to fetch the latest global data.
2. Multi-Source Intelligence Aggregation:
   - GDELT – Captures worldwide media signals and geopolitical movements.
   - Hacker News API – Surfaces trending stories in startups, AI, and innovation.
   - NewsAPI – Collects major headlines across global media outlets filtered by defined keywords.
3. Data Normalization (JavaScript Node) – Cleans and merges all incoming data into a unified format with timestamps (a sketch of this node appears at the end of this description).
4. AI Trend Analyzer (LLM Node) – Evaluates data contextually to identify:
   - 📰 Top 5 Global Trends
   - 🌍 Regional Highlights
   - 💡 Key Industry Insights
   - 📈 100–150 Word Summary
5. Output Structuring Node – Parses and formats AI responses into a clean, Discord-friendly layout.
6. Discord Delivery – Sends the compiled report to your specified channel using a webhook or bot token.

How to Use
1. Import the workflow into n8n.
2. Configure the following credentials:
   - NewsAPI Key – for aggregating headlines.
   - LLM API Key (OpenAI or Gemini) – for AI-based summarization.
   - Discord Webhook URL or Bot Token – for automated posting.
3. Edit the NewsAPI keywords to match your industry focus (e.g., "AI," "blockchain," "defense," "renewable energy").
4. Adjust the schedule trigger interval as desired (default: every 6 hours).
5. Activate the workflow — and start receiving continuous, AI-curated global intelligence in Discord.

(Optional) Extend This Workflow
- **Sector Prioritization:** Focus on AI, finance, energy, or web3 insights only.
- **Regional Filters:** Segment analysis by continent or language.
- **Trend Scoring:** Introduce a numeric score to rank importance.
- **Cross-Platform Broadcast:** Expand reports to Telegram, Slack, or X (Twitter).
- **Knowledge Archive:** Auto-store each daily report in Notion or Airtable.

Requirements
- **n8n Instance** with HTTP Request, LLM, and Discord nodes
- **NewsAPI Key**
- Access to **GDELT** (no authentication required)
- **OpenAI or Gemini Key** for AI analysis
- **Discord Webhook URL or Bot Token**

APIs Used
- GET https://api.gdeltproject.org/api/v2/doc/doc?query=crypto&format=json
- GET https://hn.algolia.com/api/v1/search?query=startup%20OR%20trend&tags=story&hitsPerPage=10
- GET https://newsapi.org/v2/everything?q=crypto OR bitcoin OR web3 OR AI&language=en&sortBy=publishedAt&pageSize=10

Summary
The Daily AI-Powered Global Trend Analysis Workflow (Discord Edition) delivers machine-curated global intelligence right where your community communicates. It combines AI-driven reasoning with real-time data aggregation from open sources — converting raw news into structured, actionable insights.
Ideal for founders, analysts, researchers, and DAOs, this workflow ensures your Discord server becomes a live intelligence hub — automatically updated with what truly matters worldwide.

Our Website: https://afkcrypto.com/
Check our blogs: https://www.afkcrypto.com/blog
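As a rough illustration of the Data Normalization step, here is a sketch that merges the three feeds into one timestamped list. The node names (`GDELT`, `Hacker News`, `NewsAPI`) are placeholders, and the response field names reflect each API's typical JSON shape but should be verified against what your requests actually return.

```javascript
// Sketch: merge GDELT, Hacker News, and NewsAPI results into one normalized list
// (node names are placeholders; field names should be double-checked against real responses)
const gdelt = ($('GDELT').first().json.articles || []).map((a) => ({
  source: 'gdelt',
  title: a.title,
  url: a.url,
  publishedAt: a.seendate,
}));

const hackerNews = ($('Hacker News').first().json.hits || []).map((h) => ({
  source: 'hackernews',
  title: h.title,
  url: h.url,
  publishedAt: h.created_at,
}));

const newsApi = ($('NewsAPI').first().json.articles || []).map((a) => ({
  source: a.source?.name || 'newsapi',
  title: a.title,
  url: a.url,
  publishedAt: a.publishedAt,
}));

const merged = [...gdelt, ...hackerNews, ...newsApi]
  .filter((item) => item.title && item.url)
  .map((item) => ({ ...item, collectedAt: new Date().toISOString() }));

return merged.map((item) => ({ json: item }));
```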
by Han
How it Works
This workflow automates Invoice & Payment Tracking (with Approvals) across Notion and Slack.

1. Ingest — You drop invoices/receipts (PDF/IMG/JSON) into the flow.
2. Extract — OCR + parsing pulls out key fields (invoice no, vendor, currency, totals, receipt paid amount/date).
3. De-dup & Match — We canonicalize vendor + invoice_no and search Notion (a sketch of the canonicalization is shown after the setup steps):
   - Primary match: Invoice No (+ optional Currency / Vendor (Canon)).
   - Fallback: uses document Amount Total and dates.
4. Decide the action:
   - create_unpaid — new invoice (no payment).
   - create_paid — new invoice fully paid (unverified).
   - create_partial — new invoice with a first partial payment.
   - update_partial — add a partial to an existing invoice.
   - update_mark_paid — mark existing invoice paid in full.
   - manual_review — currency mismatch / overpayment / ambiguous.
   - archive — push to archive logs (from manual review).
5. Slack approvals (one-click) — A message shows previous paid, this receipt, new total, and Approve buttons (links to a Wait for Webhook resumeUrl). Reviewer picks: Approve Partial / Mark Paid / Manual Review / Archive.
6. Notion updates — We only write editable fields: Paid Amount (number), Status (select), Last Payment Date (date). Formulas (e.g., Amount Total, Amount Due) recompute automatically. Receipts are saved in a Receipts DB and related back to the invoice.
7. Notifications & duplicates — If duplicates are detected, Slack posts a simple list with clickable invoice names.
8. Archiving — From Manual Review, Archive goes straight to the Archived Invoice DB (and optional Archived Source File DB) as a log entry — no pre-checks needed.

Set up Steps

Prerequisites
- Notion DB
- 4 Slack Channels (Invoice Input, Notification, Manual Review, Duplicate Alert (Optional))
- AI Model (we use Claude 3.5 Haiku; feel free to use the latest model)
- OCR Parsing (we used ocr.space; feel free to change to any OCR parsing service you have)

**Create Notion DBs:**
- Invoice DB: Title Invoice No; Number Paid Amount (editable); Select Status; Dates (Issue/Due/Last Payment Date); Formulas:
  - Amount Total = round(Subtotal - Discount Amount + Tax Total, 2)
  - Amount Due = max(0, round(Amount Total - Paid Amount, 2))
- Receipts DB: Invoice No, Vendor, Paid Amount (number), Currency (select), Paid Date (date), Receipt No, Source URL; Relation → Invoice.
- Archived Invoice DB: Invoice No, Vendor, Reason, Source URL, Original Page ID, Archived At (date).
- (Optional) Source File / Archived Source File DBs.

Share all DBs with your Notion integration (Add connections).

**Add credentials in n8n:** Notion (integration token) and Slack (bot token). Invite the bot to your channel.

**Import the workflow/template:** set each Notion node's **Database ID** and each Slack node's **Channel/Credential**.

**Map updates:** in the Invoice **Update Page** node, map **Paid Amount**, **Status**, **Last Payment Date**. In Create Receipt, map the Invoice relation + receipt fields.

**Test:** run with a sample invoice/receipt → click a Slack button → verify Invoice/Receipt updates in Notion → try Archive from Manual Review.
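A minimal sketch of the vendor + invoice_no canonicalization used for de-duplication. The normalization rules (lowercasing, stripping punctuation, collapsing separators) are assumptions and may differ from the template's exact logic.

```javascript
// Sketch: canonicalize vendor and invoice number before searching Notion for duplicates
// (normalization rules here are assumptions, not the template's exact logic)
function canon(value) {
  return String(value || '')
    .toLowerCase()
    .normalize('NFKD')
    .replace(/[^a-z0-9]+/g, ' ') // drop punctuation, collapse separators
    .trim()
    .replace(/\s+/g, '-');
}

return $input.all().map(({ json }) => {
  const vendorCanon = canon(json.vendor);        // e.g. "ACME GmbH." -> "acme-gmbh"
  const invoiceNoCanon = canon(json.invoice_no); // e.g. "INV-2024/0031" -> "inv-2024-0031"
  return {
    json: {
      ...json,
      vendorCanon,
      invoiceNoCanon,
      dedupKey: `${vendorCanon}::${invoiceNoCanon}`,
    },
  };
});
```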
by Growth AI
Ultimate n8n Agentic RAG Template
Author: Cole Medin

What is this?
This template provides a complete implementation of an Agentic RAG (Retrieval Augmented Generation) system in n8n that can be extended easily for your specific use case and knowledge base. Unlike standard RAG, which only performs simple lookups, this agent can reason about your knowledge base, self-improve retrieval, and dynamically switch between different tools based on the specific question.

Why Agentic RAG?
Standard RAG has significant limitations:
- Poor analysis of numerical/tabular data
- Missing context due to document chunking
- Inability to connect information across documents
- No dynamic tool selection based on question type

What makes this template powerful:
- **Intelligent tool selection**: Switches between RAG lookups, SQL queries, or full document retrieval based on the question
- **Complete document context**: Accesses entire documents when needed instead of just chunks
- **Accurate numerical analysis**: Uses SQL for precise calculations on spreadsheet/tabular data
- **Cross-document insights**: Connects information across your entire knowledge base
- **Multi-file processing**: Handles multiple documents in a single workflow loop
- **Efficient storage**: Uses JSONB in Supabase to store tabular data without creating new tables for each CSV (see the sketch below)

Getting Started
1. Run the table creation nodes first to set up your database tables in Supabase
2. Upload your documents through Google Drive (or swap out for a different file storage solution)
3. The agent will process them automatically (chunking text, storing tabular data in Supabase)
4. Start asking questions that leverage the agent's multiple reasoning approaches

Customization
This template provides a solid foundation that you can extend by:
- Tuning the system prompt for your specific use case
- Adding document metadata like summaries
- Implementing more advanced RAG techniques
- Optimizing for larger knowledge bases

I do intend on making a local version of this agent very soon!
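A rough sketch of how CSV rows can be prepared for a single JSONB-backed table in Supabase instead of one table per file. The table and column names (`document_rows`, `dataset_id`, `row_data`) and the input field names are illustrative, not necessarily those used by the template.

```javascript
// Sketch: turn parsed CSV rows into records for one shared JSONB table in Supabase
// (table/column and input field names are illustrative; the template's schema may differ)
const { fileId, rows } = $input.first().json;
// fileId: identifier of the source spreadsheet/CSV
// rows:   e.g. [{ Month: "Jan", Revenue: 12500 }, { Month: "Feb", Revenue: 14100 }]

const records = rows.map((row) => ({
  dataset_id: fileId,
  row_data: row, // stored as JSONB, so each CSV keeps its own columns
}));

// Downstream, a Supabase/Postgres node inserts these records, and the agent can query
// them with SQL, e.g. summing (row_data->>'Revenue')::numeric filtered by dataset_id.
return records.map((record) => ({ json: record }));
```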
by zapgrow
AI E-E-A-T WordPress Blog Generator (n8n)

This workflow generates SEO-optimized, E-E-A-T compliant blog posts using a form input and publishes them as WordPress drafts with featured images.

Features
- Form-based blog brief
- SEO metadata + outline generation
- Full HTML blog writing
- Featured image generation
- WordPress draft creation (see the sketch below)

Requirements
- n8n v1.40+
- OpenAI API key
- WordPress REST API access

Environment Variables
WP_SITE_URL=https://example.com
SITE_NAME=Your Website Name
PROJECT_CONTEXT=Your niche description

How to Use
1. Import the workflow JSON
2. Configure OpenAI & WordPress credentials
3. Set environment variables
4. Open the Form Trigger
5. Submit blog details
6. Draft appears in WordPress

Notes
- Content is created as a draft
- No credentials are included
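A minimal sketch of the body that the draft-creation step could send to the WordPress REST API (`POST {WP_SITE_URL}/wp-json/wp/v2/posts`, authenticated with an application password). The input field names (`seoTitle`, `htmlBody`, etc.) are assumptions for illustration, not the workflow's exact schema.

```javascript
// Sketch: build the WordPress REST API body that creates the draft post
// (input field names are illustrative; an HTTP Request node sends this to /wp-json/wp/v2/posts)
const input = $input.first().json;

const post = {
  title: input.seoTitle,
  content: input.htmlBody,                 // full HTML produced by the writing step
  excerpt: input.metaDescription,
  status: 'draft',                         // keep it unpublished for review
  featured_media: input.featuredMediaId,   // ID returned by a prior /wp/v2/media upload
};

return [{ json: post }];
```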