by JJ Tham
Stop manually digging through Meta Ads data and spending hours trying to connect the dots. This workflow turns n8n into an AI-powered media buyer that automatically analyzes your ad performance, categorizes your creatives, and delivers insights directly into a Google Sheet.

➡️ Watch the full 4-part setup and tutorial on YouTube: https://youtu.be/hxQshcD3e1Y

**About This 4-Part Automation Series**
As a media buyer, I built this system to automate the heavy lifting of analyzing ad data and brainstorming new creative ideas. This template is the first foundational part of that larger system.
- ✅ **Part 1 (This Template): Pulling Ad Data & Getting Quick Insights.** Automatically pulls data into a Google Sheet and uses an LLM to categorize ad performance.
- ✅ **Part 2: Finding the Source Files for the Best Ads.** Fetches the image or video files for top-performing ads.
- ✅ **Part 3: Using AI to Understand Why an Ad Works.** Sends your best ads to Google Gemini for structured notes on hooks, transcripts, and visuals.
- ✅ **Part 4: Getting the AI to Suggest New Creative Ideas.** Uses all the insights to generate fresh ad concepts, scripts, and creative briefs.

**What This Template (Part 1) Does**
- **Secure Token Management:** Automatically retrieves and refreshes your Facebook long-term access token.
- **Fetch Ad Data:** Pulls the last 28 days of ad-level performance data from your Facebook Ads account.
- **Process & Clean:** Parses raw data, standardizes key e-commerce metrics (like ROAS), and filters for sales-focused campaigns.
- **Benchmark Calculation:** Aggregates all data to create an overall performance benchmark (e.g., average Cost Per Purchase); see the sketch after this section.
- **AI Analysis:** A “Senior Media Buyer” AI persona evaluates each ad against the benchmark and categorizes it as “HELL YES,” “YES,” or “MAYBE,” with justifications.
- **Output to Google Sheets:** Updates your Google Sheet with both raw performance data and AI-generated insights.

**Who Is It For?**
- E-commerce store owners
- Digital marketing agencies
- Facebook Ads media buyers

**How to Set It Up**
1. **Credentials.** Connect your Google Gemini and Google Sheets accounts in the respective nodes. The template uses NocoDB for token management: configure the “Getting Long-Term Token” and “Updating Token” nodes, or replace them with your preferred credential storage method.
2. **Update Your IDs.** In the “Getting Data For the Past 28 Days…” HTTP Request node, replace act_XXXXXX in the URL with your Facebook Ad Account ID. In both Google Sheets nodes (“Sending Raw Data…” and “Updating Ad Insights…”), update the Document ID with your target Google Sheet’s ID.
3. **Run the Workflow.** Click “Test workflow” to run your first AI-powered analysis!

**Tools Used**
- n8n
- Facebook for Developers
- Google AI Studio (Gemini)
- NocoDB (or any credential database of your choice)
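A minimal sketch of the Benchmark Calculation step described above, written as an n8n Code node. The field names (`spend`, `purchases`) are assumptions; match them to the columns your “Process & Clean” step actually produces.

```javascript
// n8n Code node ("Run Once for All Items"): compute the account-wide
// average Cost Per Purchase and flag each ad against that benchmark.
const items = $input.all();

const totals = items.reduce(
  (acc, item) => {
    acc.spend += Number(item.json.spend) || 0;
    acc.purchases += Number(item.json.purchases) || 0;
    return acc;
  },
  { spend: 0, purchases: 0 }
);

// Overall benchmark: average Cost Per Purchase across every ad pulled.
const benchmarkCpp = totals.purchases > 0 ? totals.spend / totals.purchases : null;

return items.map((item) => {
  const spend = Number(item.json.spend) || 0;
  const purchases = Number(item.json.purchases) || 0;
  const cpp = purchases > 0 ? spend / purchases : null;

  return {
    json: {
      ...item.json,
      cost_per_purchase: cpp,
      benchmark_cpp: benchmarkCpp,
      // Pre-computed signal the AI prompt can reference in its justification.
      beats_benchmark: cpp !== null && benchmarkCpp !== null && cpp <= benchmarkCpp,
    },
  };
});
```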
by Rahul Joshi
**Description**
Automatically extract a structured skill matrix from PDF resumes in a Google Drive folder and store results in Google Sheets. Uses Azure OpenAI (GPT-4o-mini) to analyze predefined tech stacks and filters for relevant proficiency. Fast, consistent insights ready for review. 🔍📊

**What This Template Does**
- Fetches all resumes from a designated Google Drive folder (“Resume_store”). 🗂️
- Downloads each resume file securely via the Google Drive API. ⬇️
- Extracts text from PDF files for analysis. 📄➡️📝
- Analyzes skills with Azure OpenAI (GPT-4o-mini), rating 1–5 and estimating years. 🤖
- Parses and filters to include only skills with proficiency > 2, then updates Google Sheets (“Resume store” → “Sheet2”); see the sketch after this section. ✅

**Key Benefits**
- Saves hours on manual resume screening. ⏱️
- Produces a consistent, structured skill matrix. 📐
- Focuses on intermediate to expert skills for faster shortlisting. 🎯
- Centralizes candidate data in Google Sheets for easy sharing. 🗃️

**Features**
- Predefined tech stack focus: React, Node.js, Angular, Python, Java, SQL, Docker, Kubernetes, AWS, Azure, GCP, HTML, CSS, JavaScript. 🧰
- Proficiency scoring (1–5) and estimated years of experience. 📈
- PDF-to-text extraction for robust parsing. 🧾
- JSON parsing with error handling for invalid outputs. 🛡️
- Manual Trigger to run on demand. ▶️

**Requirements**
- n8n instance (cloud or self-hosted).
- Google Drive access with credentials to the “Resume_store” folder.
- Google Sheets access to the “Resume store” spreadsheet and “Sheet2” tab.
- Azure OpenAI with GPT-4o-mini deployed and connected via secure credentials.
- PDF text extraction enabled within n8n.

**Target Audience**
- HR and Talent Acquisition teams. 👥
- Recruiters and staffing agencies. 🧑‍💼
- Operations teams managing hiring pipelines. 🧭
- Tech hiring managers seeking consistent skill insights. 💡

**Step-by-Step Setup Instructions**
1. Place candidate resumes (PDF) into Google Drive → “Resume_store”.
2. In n8n, add Google Drive and Google Sheets credentials and authorize access.
3. In n8n, add Azure OpenAI credentials (GPT-4o-mini deployment).
4. Import the workflow, assign credentials to each node, and confirm folder/sheet names.
5. Run the Manual Trigger to execute the flow and verify data in “Resume store” → “Sheet2”.
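A rough sketch of the parse-and-filter step for this template, as an n8n Code node. The expected output shape (`{ skills: [{ skill, proficiency, years }] }`) and the input property are assumptions; align them with your Azure OpenAI prompt and node output.

```javascript
// n8n Code node: parse the model's JSON reply and keep skills with proficiency > 2.
const raw = $json.message?.content ?? $json.output ?? '';

let parsed;
try {
  // Strip code fences the model sometimes wraps around JSON.
  parsed = JSON.parse(String(raw).replace(/```json|```/g, '').trim());
} catch (error) {
  // Invalid JSON: return an empty skill list instead of failing the run.
  return [{ json: { skills: [], parseError: error.message } }];
}

const relevant = (parsed.skills || []).filter((s) => Number(s.proficiency) > 2);

// One item per skill so the Google Sheets node can append one row each.
return relevant.map((s) => ({
  json: { skill: s.skill, proficiency: s.proficiency, years: s.years },
}));
```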
by Oneclick AI Squad
Automatically creates complete videos from a text prompt: script, voiceover, stock footage, and subtitles all assembled and ready.

**How it works**
Send a video topic via webhook (e.g., "Create a 60-second video about morning exercise"). The workflow uses OpenAI to generate a structured script with scenes, converts text to natural-sounding speech, searches Pexels for matching B-roll footage, and downloads everything. Finally, it merges audio with video, generates SRT subtitles, and prepares all components for final assembly. The workflow handles parallel processing: while generating the voiceover, it simultaneously searches and downloads stock footage to save time.

**Setup steps**
1. Add OpenAI credentials for script generation and text-to-speech.
2. Get a free Pexels API key from pexels.com/api for stock footage access.
3. Connect Google Drive for storing the final video output.
4. Install FFmpeg (optional) for automated video assembly, or manually combine the components.
5. Test the webhook by sending a POST request with your video topic (see the sketch below).

**Input format**
`{ "prompt": "Your video topic here", "duration": 60, "style": "motivational" }`

**What you get**
- ✅ AI-generated script broken into scenes
- ✅ Professional voiceover audio (MP3)
- ✅ Downloaded stock footage clips (MP4)
- ✅ Timed subtitles file (SRT)
- ✅ All components ready for final editing

Note: The final video assembly requires FFmpeg or a video editor. All components are prepared and organized by scene number for easy manual editing if needed.
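A quick way to test the webhook with the input format above, as a small Node.js (18+) script. The webhook path `/webhook/create-video` is a placeholder; copy the real production URL from the Webhook node in your n8n instance.

```javascript
// Send a test topic to the workflow's webhook and print the response.
async function testWebhook() {
  const response = await fetch('https://your-n8n-domain/webhook/create-video', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      prompt: 'Create a 60-second video about morning exercise',
      duration: 60,
      style: 'motivational',
    }),
  });

  console.log(response.status, await response.json());
}

testWebhook().catch(console.error);
```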
by Daniel Shashko
This workflow automates the creation of user-generated-content-style product videos by combining Gemini's image generation with OpenAI's SORA 2 video generation. It accepts webhook requests with product descriptions, generates images and videos, stores them in Google Drive, and logs all outputs to Google Sheets for easy tracking.

**Main Use Cases**
- Automate product video creation for e-commerce catalogs and social media.
- Generate UGC-style content at scale without manual design work.
- Create engaging video content from simple text prompts for marketing campaigns.
- Build a centralized library of product videos with automated tracking and storage.

**How it works**
The workflow operates as a webhook-triggered process, organized into these stages:
1. **Webhook Trigger & Input.** Accepts POST requests to the /create-ugc-video endpoint. The required payload includes: product prompt, video prompt, Gemini API key, and OpenAI API key.
2. **Image Generation (Gemini).** Sends the product prompt to Google's Gemini 2.5 Flash Image model and generates a product image based on the description provided.
3. **Data Extraction.** A Code node extracts the base64 image data from Gemini's response and preserves all prompts and API keys for subsequent steps.
4. **Video Generation (SORA 2).** Sends the video prompt to OpenAI's SORA 2 API and initiates video generation with these specifications: 720x1280 resolution, 8 seconds duration. Returns a video generation job ID for polling.
5. **Video Status Polling.** Continuously checks video generation status via the OpenAI API. If the status is "completed", the workflow proceeds to download; if the video is still processing, it waits 1 minute and retries (polling loop; see the sketch after this section).
6. **Video Download & Storage.** Downloads the completed video file from OpenAI, uploads the MP4 file to Google Drive (root folder), and generates a shareable Google Drive link.
7. **Logging to Google Sheets.** Records all generation details in a tracking spreadsheet: product description, video URL (Google Drive link), generation status, and timestamp.

**Summary Flow**
Webhook Request → Generate Product Image (Gemini) → Extract Image Data → Generate Video (SORA 2) → Poll Status → If Complete: Download Video → Upload to Google Drive → Log to Google Sheets → Return Response. If Not Complete: Wait 1 Minute → Poll Status Again.

**Benefits**
- Fully automated video creation pipeline from text to finished product.
- Scalable solution for generating multiple product videos on demand.
- Combines cutting-edge AI models (Gemini + SORA 2) for high-quality output.
- Centralized storage in Google Drive with automatic logging in Google Sheets.
- Flexible webhook interface allows integration with any application or service.
- Retry mechanism ensures videos are captured even with longer processing times.

Created by Daniel Shashko
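A generic sketch of the poll-wait-retry loop this workflow implements with IF and Wait nodes, written as plain Node.js. The endpoint path and status values are assumptions; mirror whatever the "Poll Status" HTTP Request node in the template actually calls.

```javascript
// Poll a video generation job until it completes, waiting one minute between checks.
async function waitForVideo(videoId, apiKey, { intervalMs = 60_000, maxAttempts = 20 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetch(`https://api.openai.com/v1/videos/${videoId}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const job = await res.json();

    if (job.status === 'completed') return job;            // ready to download
    if (job.status === 'failed') throw new Error('Video generation failed');

    // Still processing: wait before checking again.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for video generation');
}
```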
by Rahul Joshi
**Description**
Keep your CRM pipeline clean and actionable by automatically archiving inactive deals, logging results to Google Sheets, and sending Slack summary reports. This workflow ensures your sales team focuses on active opportunities while maintaining full audit visibility. 🚀📈

**What This Template Does**
- Triggers daily at 9 AM to check all GoHighLevel CRM opportunities. ⏰
- Filters deals that have been inactive for 10+ days using the last activity or update date (see the sketch after this section). 🔍
- Automatically archives inactive deals to keep pipelines clutter-free. 📦
- Formats and logs deal details into Google Sheets for record-keeping. 📊
- Sends a Slack summary report with total archived count, value, and deal names. 💬

**Key Benefits**
- ✅ Keeps pipelines organized by removing stale opportunities.
- ✅ Saves time through fully automated archiving and reporting.
- ✅ Maintains a transparent audit trail in Google Sheets.
- ✅ Improves sales visibility with automated Slack summaries.
- ✅ Easily adjustable inactivity threshold and scheduling.

**Features**
- Daily scheduled trigger (9 AM) with adjustable cron expression.
- GoHighLevel CRM integration for fetching and updating opportunities.
- Conditional logic to detect inactivity periods.
- Google Sheets logging with automatic updates.
- Slack integration for real-time reporting and team visibility.

**Requirements**
- GoHighLevel API credentials (OAuth2) with opportunity access.
- Google Sheets OAuth2 credentials with edit permissions.
- Slack Bot token with chat:write permission.
- A connected n8n instance (cloud or self-hosted).

**Target Audience**
- Sales and operations teams managing CRM hygiene.
- Business owners wanting automated inactive deal cleanup.
- Agencies monitoring client pipelines across teams.
- CRM administrators ensuring data accuracy and accountability.

**Step-by-Step Setup Instructions**
1. Connect your GoHighLevel OAuth2 credentials in n8n. 🔑
2. Link your Google Sheets document and replace the Sheet ID. 📋
3. Configure Slack credentials and specify your target channel. 💬
4. Adjust the inactivity threshold (default: 10 days) as needed. ⚙️
5. Update the cron schedule (default: 9 AM daily). ⏰
6. Test the workflow manually to verify end-to-end automation. ✅
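A minimal sketch of the inactivity filter as an n8n Code node. The field names (`lastStatusChangeAt`, `updatedAt`, `dateAdded`) are assumptions; match them to the fields your GoHighLevel opportunities request actually returns.

```javascript
// n8n Code node: keep only opportunities with no activity in the last 10 days.
const INACTIVITY_DAYS = 10; // adjust to your threshold
const cutoff = Date.now() - INACTIVITY_DAYS * 24 * 60 * 60 * 1000;

return $input.all().filter((item) => {
  const deal = item.json;
  const lastActivity = new Date(
    deal.lastStatusChangeAt || deal.updatedAt || deal.dateAdded
  ).getTime();

  // Deals older than the cutoff continue to the archive branch;
  // everything else is dropped from this run.
  return Number.isFinite(lastActivity) && lastActivity < cutoff;
});
```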
by Amine ARAGRAG
This n8n template automates the collection and enrichment of Product Hunt posts using AI and Google Sheets. It fetches new tools daily, translates content, categorizes them intelligently, and saves everything into a structured spreadsheet, making it ideal for building directories, research dashboards, newsletters, or competitive intelligence assets.

**Good to know**
- Sticky notes inside the workflow explain each functional block and required configurations.
- Uses cursor-based pagination to safely fetch Product Hunt data (see the sketch after this section).
- The AI agent handles translation, documentation generation, tech extraction, and function area classification.
- Category translations are synced with a Google Sheets dictionary to avoid duplicates.
- All enriched entries are stored in a clean “Tools” sheet for easy filtering or reporting.

**How it works**
1. A schedule trigger starts the workflow daily.
2. Product Hunt posts are retrieved via GraphQL and processed in batches.
3. A code node restructures each product into a consistent schema.
4. The workflow checks if a product already exists in Google Sheets.
5. For new items, the AI agent generates metadata, translations, and documentation.
6. Categories are matched or added to a Google Sheets dictionary.
7. The final enriched product entry is appended or updated in the spreadsheet.
8. Pagination continues until no next page remains.

**How to use**
- Connect Product Hunt OAuth2, Google Sheets, and OpenAI credentials.
- Adjust the schedule trigger to your preferred frequency.
- Optionally expand enrichment fields (tags, scoring, custom classifications).
- Replace the trigger with a webhook or manual trigger if needed.

**Requirements**
- Product Hunt OAuth2 credentials
- Google Sheets account
- OpenAI (or compatible) API access

**Customising this workflow**
- Add Slack or Discord notifications for new tools.
- Push enriched data to Airtable, Notion, or a database.
- Extend AI enrichment with summaries or SEO fields.
- Use the Google Sheet as a backend for dashboards or frontend applications.
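A sketch of the cursor-based pagination loop against the Product Hunt GraphQL API, as plain Node.js. The selected fields are an approximation of what the template requests; adjust the query to whatever your GraphQL HTTP Request node actually asks for.

```javascript
// Fetch Product Hunt posts page by page until no next page remains.
async function fetchAllPosts(token) {
  const query = `
    query ($after: String) {
      posts(first: 20, after: $after) {
        pageInfo { hasNextPage endCursor }
        edges { node { name tagline url createdAt } }
      }
    }`;

  const posts = [];
  let after = null;

  do {
    const res = await fetch('https://api.producthunt.com/v2/api/graphql', {
      method: 'POST',
      headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({ query, variables: { after } }),
    });
    const { data } = await res.json();

    posts.push(...data.posts.edges.map((edge) => edge.node));
    after = data.posts.pageInfo.hasNextPage ? data.posts.pageInfo.endCursor : null;
  } while (after); // stop once pagination is exhausted

  return posts;
}
```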
by Yasir
**🧠 Workflow Overview — AI-Powered Jobs Scraper & Relevancy Evaluator**
This workflow automates the process of finding highly relevant job listings based on a user’s resume, career preferences, and custom filters. It scrapes fresh job data, evaluates relevance using OpenAI GPT models, and automatically appends the results to your Google Sheet tracker, skipping any jobs already in your sheet so you don’t have to worry about duplicates. Perfect for recruiters, job seekers, or virtual assistants who want to automate job research and filtering.

**⚙️ What the Workflow Does**
1. Takes user input through a form, including resume, preferences, target score, and Google Sheet link.
2. Fetches job listings via an Apify LinkedIn Jobs API actor.
3. Filters and deduplicates results (removes duplicates and blacklisted companies).
4. Evaluates job relevancy using GPT-4o-mini, scoring each job (0–100) against the user’s resume and preferences.
5. Applies a relevancy threshold to keep only top-matching jobs (see the sketch after this section).
6. Checks your Google Sheet for existing jobs and prevents duplicates.
7. Appends new, relevant jobs directly into your provided Google Sheet.

**📋 What You’ll Get**
- A personal Job Scraper Form (public URL you can share or embed).
- Automatic job collection and filtering based on your inputs.
- **Relevance scoring** (0–100) for each job using your resume and preferences.
- A real-time job tracking Google Sheet that includes: Job Title, Company Name & Profile, Job URLs, Location, Salary, HR Contact (if available), and Relevancy Score.

**🪄 Setup Instructions**

1. **Required Accounts.** You’ll need:
   - ✅ n8n account (self-hosted or Cloud)
   - ✅ Google account (for Sheets integration)
   - ✅ OpenAI account (for GPT API access)
   - ✅ Apify account (to fetch job data)

2. **Connect Credentials.** In your n8n instance, go to Credentials → Add New:
   - Google Sheets OAuth2 API: connect your Google account.
   - OpenAI API: add your OpenAI API key.
   - Apify API: replace <your_apify_api> with your Apify API key.

   Set up the Apify API:
   - Get your Apify API key: visit https://console.apify.com/settings/integrations and copy your API key.
   - Rent the required Apify actor before running this workflow: go to https://console.apify.com/actors/BHzefUZlZRKWxkTck/input and click “Rent Actor”. Once rented, it can be used by your Apify account to fetch job listings.

3. **Set Up Your Google Sheet.**
   - Make a copy of this template: 📄 Google Sheet Template
   - Enable Edit Access for anyone with the link.
   - Copy your sheet’s URL; you’ll provide this when submitting the workflow form.

4. **Deploy & Run.**
   - Import this workflow (jobs_scraper.json) into your n8n workspace.
   - Activate the workflow.
   - Visit your form trigger endpoint (e.g., https://your-n8n-domain/webhook/jobs-scraper).
   - Fill out the form with: job title(s), location, contract type, experience level, working mode, date posted, target relevancy score, Google Sheet link, resume text, and job preferences or ranking criteria.
   - Submit; within minutes, new high-relevance job listings will appear in your Google Sheet automatically.

**🧩 Example Use Cases**
- Automate daily job scraping for clients or yourself.
- Filter jobs by AI-based relevance instead of keywords.
- Build a smart job board or job alert system.
- Support a career agency offering done-for-you job search services.

**💡 Tips**
- Adjust the “Target Relevancy Score” (e.g., 70–85) to control how strict the filtering is.
- You can add your own blacklisted companies in the Filter & Dedup Jobs node.
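A rough sketch of the Filter & Dedup Jobs logic as an n8n Code node. The field names (`jobUrl`, `companyName`, `relevancyScore`) are assumptions; map them to the Apify actor output and your scoring step.

```javascript
// n8n Code node: drop duplicate URLs, blacklisted companies, and low-score jobs.
const BLACKLIST = ['some agency inc'];  // add your own blacklisted companies
const TARGET_SCORE = 75;                // or read this from the form submission

const seen = new Set();

return $input.all().filter((item) => {
  const job = item.json;
  const url = (job.jobUrl || '').trim();
  const company = (job.companyName || '').toLowerCase();

  if (!url || seen.has(url)) return false;        // duplicate listing
  if (BLACKLIST.includes(company)) return false;  // blacklisted company
  if (Number(job.relevancyScore) < TARGET_SCORE) return false;

  seen.add(url);
  return true;
});
```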
by Lidia
**Who’s it for**
Teams who want to automatically generate structured meeting minutes from uploaded transcripts and instantly share them in Slack. Perfect for startups, project teams, or any company that collects meeting transcripts in Google Drive.

**How it works / What it does**
This workflow automatically turns raw meeting transcripts into well-structured minutes in Markdown and posts them to Slack:
1. **Google Drive Trigger**: watches a specific folder. Any new transcript file added will start the workflow.
2. **Download File**: grabs the transcript.
3. **Prep Transcript**: converts the file into plain text and passes the transcript downstream.
4. **Message a Model**: sends the transcript to OpenAI GPT for summarization using a structured system prompt (action items, decisions, N/A placeholders).
5. **Make Minutes**: formats GPT’s response into a Markdown file (see the sketch after this section).
6. **Slack: Send a message**: posts a Slack message announcing the auto-generated minutes.
7. **Slack: Upload a file**: uploads the full Markdown minutes file into the chosen Slack channel.

End result: your Slack channel always has clear, standardized minutes right after a meeting.

**How to set up**
1. **Google Drive**: create a folder where you’ll drop transcript files and configure the folder ID in the Google Drive Trigger node.
2. **OpenAI**: add your OpenAI API credentials in the Message a Model node and select a supported GPT model (e.g., gpt-4o-mini or gpt-4).
3. **Slack**: connect your Slack account and set the target channel ID in the Slack nodes.
4. Run the workflow and drop a transcript file into Drive. Minutes will appear in Slack automatically.

**Requirements**
- Google Drive account (for transcript upload)
- OpenAI API key (for text summarization)
- Slack workspace (for message posting and file upload)

**How to customize the workflow**
- **Change summary structure**: adjust the system prompt inside **Message a Model** (e.g., shorter summaries, a language other than English).
- **Different output format**: modify **Make Minutes** to output plain text, PDF, or HTML instead of Markdown.
- **New destinations**: add more nodes to send minutes to email, Notion, or Confluence in parallel.
- **Multiple triggers**: replace the Google Drive trigger with a Webhook if you want to integrate with Zoom or MS Teams transcript exports.

**Good to know**
- OpenAI API calls are billed separately. See OpenAI pricing.
- Files must be text-based (.txt or .md). For PDFs or docs, add a conversion step before summarization.
- Slack requires the bot user to be a member of the target channel, otherwise you’ll see a not_in_channel error.
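A minimal sketch of what a Make Minutes Code node could look like: it wraps the model's summary in a Markdown document and exposes it as a binary file for the Slack upload node. The input property (`$json.message.content`) is an assumption; adjust it to how your OpenAI node outputs its text.

```javascript
// n8n Code node: turn the GPT summary into a downloadable Markdown file.
const summary = $json.message?.content ?? $json.text ?? '';
const today = new Date().toISOString().slice(0, 10);

const markdown = `# Meeting Minutes (${today})\n\n${summary}\n`;

return [
  {
    json: { fileName: `minutes-${today}.md` },
    binary: {
      data: {
        // n8n binary data is stored base64-encoded with a MIME type and file name.
        data: Buffer.from(markdown, 'utf8').toString('base64'),
        mimeType: 'text/markdown',
        fileName: `minutes-${today}.md`,
      },
    },
  },
];
```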
by 小林幸一
**Template Description**

**📝 Template Title**
Analyze Amazon product reviews with Gemini and save to Google Sheets

**📄 Description**
This workflow automates the process of analyzing customer feedback on Amazon products. Instead of manually reading through hundreds of reviews, this template scrapes reviews (specifically targeting negative feedback), uses Google Gemini (AI) to analyze the root causes of dissatisfaction, and generates specific improvement suggestions. The results are automatically logged into a Google Sheet for easy tracking, and a Slack notification is sent to keep your team updated. This tool is essential for understanding "Voice of Customer" data efficiently without manual data entry.

**🧍 Who is this for**
- **Product Managers** looking for product improvement ideas.
- **E-commerce Sellers (Amazon FBA, D2C)** monitoring brand reputation.
- **Market Researchers** analyzing competitor weaknesses.
- **Customer Support Teams** identifying recurring issues.

**⚙️ How it works**
1. **Data Collection**: The workflow triggers the Apify actor (junglee/amazon-reviews-scraper) to fetch reviews from a specified Amazon product URL. It is currently configured to filter for 1- and 2-star reviews to focus on complaints.
2. **AI Analysis**: It loops through each review and sends the content to Google Gemini. The AI determines a sentiment score (1–5), categorizes the issue (Quality, Design, Shipping, etc.), summarizes the complaint, and proposes a concrete improvement plan.
3. **Formatting**: A Code node parses the AI's response to ensure it is in a clean JSON format (see the sketch after this section).
4. **Storage**: The structured data is appended as a new row in a Google Sheet.
5. **Notification**: A Slack message is sent to your specified channel to confirm the batch analysis is complete.

**🛠️ Requirements**
- **n8n** (Self-hosted or Cloud)
- **Apify Account**: You need to rent the junglee/amazon-reviews-scraper actor.
- **Google Cloud Account**: For accessing the Gemini (PaLM) API and Google Sheets API.
- **Slack Account**: For receiving notifications.

**🚀 How to set up**
1. **Apify Config**: Enter your Apify API token in the credentials. In the "Run an Actor" node, update the startUrls to the Amazon product page you want to analyze.
2. **Google Sheets**: Create a new Google Sheet with the following header columns: sentiment_score, category, summary, improvement. Copy the Spreadsheet ID into the Google Sheets node.
3. **AI Prompt**: The "Message a model" node contains the prompt. It is currently set to output results in Japanese. If you need English output, simply translate the prompt text inside this node.
4. **Slack**: Select the channel where you want to receive notifications in the Slack node.
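A sketch of the formatting Code node: it pulls the Gemini reply out of the item, strips any Markdown code fences, and emits the four columns the Google Sheet expects. The input property (`content.parts[0].text`) is an assumption; check the actual output of your "Message a model" node.

```javascript
// n8n Code node: normalize the AI response into clean JSON for Google Sheets.
const reply = $json.content?.parts?.[0]?.text ?? $json.text ?? $json.output ?? '';

let analysis;
try {
  analysis = JSON.parse(String(reply).replace(/```json|```/g, '').trim());
} catch (error) {
  // Fall back to a row that flags the parse failure instead of stopping the batch.
  analysis = { sentiment_score: null, category: 'parse_error', summary: String(reply), improvement: '' };
}

return [
  {
    json: {
      sentiment_score: analysis.sentiment_score,
      category: analysis.category,
      summary: analysis.summary,
      improvement: analysis.improvement,
    },
  },
];
```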
by iamvaar
Youtube Video: https://youtu.be/dEtV7OYuMFQ?si=fOAlZWz4aDuFFovH

**Workflow Pre-requisites**

**Step 1: Supabase Setup**
First, replace the keys in the "Save the embedding in DB" and "Search Embeddings" nodes with your new Supabase keys. After that, run the following code snippets in your Supabase SQL editor.

Create the table to store chunks and embeddings:

```sql
CREATE TABLE public."RAG" (
  id bigserial PRIMARY KEY,
  chunk text NULL,
  embeddings vector(1024) NULL
) TABLESPACE pg_default;
```

Create a function to match embeddings:

```sql
DROP FUNCTION IF EXISTS public.matchembeddings1(integer, vector);

CREATE OR REPLACE FUNCTION public.matchembeddings1(
  match_count integer,
  query_embedding vector
)
RETURNS TABLE (
  chunk text,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    R.chunk,
    1 - (R.embeddings <=> query_embedding) AS similarity
  FROM public."RAG" AS R
  ORDER BY R.embeddings <=> query_embedding
  LIMIT match_count;
END;
$$;
```

**Step 2: Create a Jotform with these fields**
- Your full name
- Email address
- Upload PDF Document [field where you upload the knowledgebase in PDF]

**Step 3: Get a Together AI API Key**
Get a Together AI API key and paste it into the "Embedding Uploaded document" node and the "Embed User Message" node.

Here is a detailed, node-by-node explanation of the n8n workflow, which is divided into two main parts.

**Part 1: Ingesting Knowledge from a PDF**
This first sequence of nodes runs when you submit a PDF through a Jotform. Its purpose is to read the document, process its content, and save it in a specialized database for the AI to use later.

1. **JotForm Trigger** (Type: Trigger): This node starts the entire workflow. It's configured to listen for new submissions on a specific Jotform. When someone uploads a file and submits the form, this node activates and passes the submission data to the next step.
2. **Grab New knowledgebase** (Type: HTTP Request): The initial trigger from Jotform only contains basic information. This node makes a follow-up call to the Jotform API using the submissionID to get the complete details of that submission, including the specific link to the uploaded file.
3. **Grab the uploaded knowledgebase file link** (Type: HTTP Request): Using the file link obtained from the previous node, this step downloads the actual PDF file. It's set to receive the response as a file, not as text.
4. **Extract Text from PDF File** (Type: Extract From File): This utility node takes the binary PDF file downloaded in the previous step and extracts all the readable text content from it. The output is a single block of plain text.
5. **Splitting into Chunks** (Type: Code): This node runs a small JavaScript snippet. It takes the large block of text from the PDF and chops it into smaller, more manageable pieces, or "chunks," each of a predefined length. This is critical because AI models work more effectively with smaller, focused pieces of text. (A sketch of this snippet follows after this list.)
6. **Embedding Uploaded document** (Type: HTTP Request): This is a key AI step. It sends each individual text chunk to an embeddings API. A specified AI model converts the semantic meaning of the chunk into a numerical list called an embedding or vector. This vector is like a mathematical fingerprint of the text's meaning.
7. **Save the embedding in DB** (Type: Supabase): This node connects to your Supabase database. For every chunk, it creates a new row in a specified table and stores two important pieces of information: the original text chunk and its corresponding numerical embedding (its "fingerprint") from the previous step.
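A minimal sketch of the "Splitting into Chunks" Code node. The chunk size and the input field (`$json.text`) are assumptions; use whatever length and property the Extract From File node actually produces.

```javascript
// n8n Code node: slice the extracted PDF text into fixed-size chunks.
const text = $json.text || '';
const CHUNK_SIZE = 1000; // characters per chunk

const chunks = [];
for (let i = 0; i < text.length; i += CHUNK_SIZE) {
  chunks.push(text.slice(i, i + CHUNK_SIZE));
}

// One n8n item per chunk, so each chunk gets embedded and stored separately.
return chunks.map((chunk) => ({ json: { chunk } }));
```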
**Part 2: Answering Questions via Chat**
This second sequence starts when a user sends a message. It uses the knowledge stored in the database to find relevant information and generate an intelligent answer.

1. **When chat message received** (Type: Chat Trigger): This node starts the second part of the workflow. It listens for any incoming message from a user in a connected chat application.
2. **Embed User Message** (Type: HTTP Request): This node takes the user's question and sends it to the exact same embeddings API and model used in Part 1. This converts the question's meaning into the same kind of numerical vector or "fingerprint."
3. **Search Embeddings** (Type: HTTP Request): This is the "retrieval" step. It calls a custom database function in Supabase. It sends the question's embedding to this function and asks it to search the knowledge base table to find a specified number of top text chunks whose embeddings are mathematically most similar to the question's embedding. (A sketch of this call follows after this list.)
4. **Aggregate** (Type: Aggregate): The search from the previous step returns multiple separate items. This utility node simply bundles those items into a single, combined piece of data. This makes it easier to feed all the context into the final AI model at once.
5. **AI Agent & Google Gemini Chat Model** (Type: LangChain Agent & AI Model): This is the "generation" step where the final answer is created. The AI Agent node is given a detailed set of instructions (a prompt). The prompt tells the Google Gemini Chat Model to act as a professional support agent. Crucially, it provides the AI with the user's original question and the aggregated text chunks from the Aggregate node as its only source of truth. It then instructs the AI to formulate an answer based only on that provided context, format it for a specific chat style, and say "I don't know" if the answer cannot be found in the chunks. This prevents the AI from making things up.
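A sketch of the "Search Embeddings" call: it invokes the matchembeddings1 function from Step 1 through Supabase's PostgREST RPC endpoint. The URL and key are placeholders for your project values.

```javascript
const SUPABASE_URL = 'https://YOUR-PROJECT.supabase.co'; // placeholder
const SUPABASE_KEY = 'YOUR-SUPABASE-KEY';                 // placeholder

// Ask Supabase for the top-N chunks most similar to the question's embedding.
async function searchEmbeddings(queryEmbedding, matchCount = 5) {
  const res = await fetch(`${SUPABASE_URL}/rest/v1/rpc/matchembeddings1`, {
    method: 'POST',
    headers: {
      apikey: SUPABASE_KEY,
      Authorization: `Bearer ${SUPABASE_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      match_count: matchCount,
      query_embedding: queryEmbedding, // the vector from "Embed User Message"
    }),
  });

  // Returns [{ chunk, similarity }, ...] ordered by similarity.
  return res.json();
}
```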
by Bhavy Shekhaliya
**Overview**
This n8n template demonstrates how to use AI to automatically analyze WordPress blog content and generate relevant, SEO-optimized tags for WordPress posts.

**Use cases**
Automate content tagging for WordPress blogs, maintain consistent taxonomy across large content libraries, save hours of manual tagging work, or improve SEO by ensuring every post has relevant, searchable tags!

**Good to know**
- The workflow creates new tags automatically if they don't exist in WordPress.
- Tag generation is intelligent: it avoids duplicates by mapping to existing tag IDs.

**How it works**
1. We fetch a WordPress blog post using the WordPress node, with sticky data enabled for testing.
2. The post content is sent to GPT-4.1-mini, which analyzes it and generates 5–10 relevant tags using a structured output parser.
3. All existing WordPress tags are fetched via HTTP Request to check for matches.
4. A smart loop processes each AI-generated tag: if the tag already exists, it maps to the existing tag ID; if it's new, it creates the tag via the WordPress API (see the sketch after this section).
5. All tag IDs are aggregated and the WordPress post is updated with the complete tag list.

**How to use**
- The manual trigger node is used as an example, but feel free to replace it with other triggers such as a webhook, schedule, or WordPress webhook for new posts.
- Modify the "Fetch One WordPress Blog" node to fetch multiple posts or integrate with your publishing workflow.

**Requirements**
- WordPress site with REST API enabled
- OpenAI API

**Customising this workflow**
- Adjust the AI prompt to generate tags specific to your industry or SEO strategy.
- Change the tag count (currently 5–10) based on your needs.
- Add filtering logic to only tag posts in specific categories.
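A sketch of the match-or-create tag loop against the WordPress REST API, condensed into one Node.js function. The site URL and Basic auth values are placeholders; the template itself does this with HTTP Request and WordPress nodes rather than a single script.

```javascript
const WP_URL = 'https://example.com'; // placeholder: your WordPress site
const AUTH = 'Basic ' + Buffer.from('user:app-password').toString('base64');

// Resolve AI-generated tag names to WordPress tag IDs, creating missing tags.
async function resolveTagIds(tagNames) {
  const headers = { Authorization: AUTH, 'Content-Type': 'application/json' };
  const ids = [];

  for (const name of tagNames) {
    // Look for an existing tag with the same name first.
    const found = await fetch(
      `${WP_URL}/wp-json/wp/v2/tags?search=${encodeURIComponent(name)}`,
      { headers }
    ).then((r) => r.json());

    const match = found.find((t) => t.name.toLowerCase() === name.toLowerCase());
    if (match) {
      ids.push(match.id);
      continue;
    }

    // No match: create the tag and use the new ID.
    const created = await fetch(`${WP_URL}/wp-json/wp/v2/tags`, {
      method: 'POST',
      headers,
      body: JSON.stringify({ name }),
    }).then((r) => r.json());
    ids.push(created.id);
  }

  return ids; // pass these as the post's tags array in the update step
}
```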
by Jadai kongolo
**🚀 n8n Local AI Agentic RAG Template**
Author: Jadai kongolo

**What is this?**
This template provides an entirely local implementation of an Agentic RAG (Retrieval Augmented Generation) system in n8n that can be extended easily for your specific use case and knowledge base. Unlike standard RAG, which only performs simple lookups, this agent can reason about your knowledge base, self-improve retrieval, and dynamically switch between different tools based on the specific question.

**Why Agentic RAG?**
Standard RAG has significant limitations:
- Poor analysis of numerical/tabular data
- Missing context due to document chunking
- Inability to connect information across documents
- No dynamic tool selection based on question type

**What makes this template powerful**
- **Intelligent tool selection**: Switches between RAG lookups, SQL queries, or full document retrieval based on the question.
- **Complete document context**: Accesses entire documents when needed instead of just chunks.
- **Accurate numerical analysis**: Uses SQL for precise calculations on spreadsheet/tabular data.
- **Cross-document insights**: Connects information across your entire knowledge base.
- **Multi-file processing**: Handles multiple documents in a single workflow loop.
- **Efficient storage**: Uses JSONB in Supabase to store tabular data without creating new tables for each CSV (see the sketch after this section).

**Getting Started**
1. Run the table creation nodes first to set up your database tables in Supabase.
2. Upload your documents to the folder on your computer that is mounted to /data/shared in the n8n container. By default this is the "shared" folder in the local AI package. The agent will process them automatically (chunking text, storing tabular data in Supabase).
3. Start asking questions that leverage the agent's multiple reasoning approaches.

**Customization**
This template provides a solid foundation that you can extend by:
- Tuning the system prompt for your specific use case
- Adding document metadata like summaries
- Implementing more advanced RAG techniques
- Optimizing for larger knowledge bases

The non-local ("cloud") version of this Agentic RAG agent can be found here.
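A rough sketch of the JSONB storage idea: every row of any CSV lands in one shared table as a JSON object, so no new table is needed per file. The table and column names (`document_rows`, `file_id`, `row_data`) are assumptions; use whatever your table creation nodes define.

```javascript
import { createClient } from '@supabase/supabase-js';
import { parse } from 'csv-parse/sync';
import { readFileSync } from 'node:fs';

const supabase = createClient('https://YOUR-PROJECT.supabase.co', 'YOUR-KEY'); // placeholders

// Store one CSV file as JSONB rows that the agent can later query with SQL.
async function storeCsv(fileId, path) {
  // Parse the CSV into an array of { column: value } objects.
  const records = parse(readFileSync(path, 'utf8'), { columns: true });

  // One JSONB row per CSV row, all tagged with the source file's ID.
  const rows = records.map((record) => ({ file_id: fileId, row_data: record }));
  const { error } = await supabase.from('document_rows').insert(rows);
  if (error) throw error;
}
```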