by iamvaar
YouTube Video: https://youtu.be/dEtV7OYuMFQ?si=fOAlZWz4aDuFFovH

Workflow Pre-requisites

Step 1: Supabase Setup
First, replace the keys in the "Save the embedding in DB" and "Search Embeddings" nodes with your new Supabase keys. After that, run the following code snippets in your Supabase SQL editor.

Create the table to store chunks and embeddings:

```sql
CREATE TABLE public."RAG" (
  id bigserial PRIMARY KEY,
  chunk text NULL,
  embeddings vector(1024) NULL
) TABLESPACE pg_default;
```

Create a function to match embeddings:

```sql
DROP FUNCTION IF EXISTS public.matchembeddings1(integer, vector);

CREATE OR REPLACE FUNCTION public.matchembeddings1(
  match_count integer,
  query_embedding vector
)
RETURNS TABLE (
  chunk text,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    R.chunk,
    1 - (R.embeddings <=> query_embedding) AS similarity
  FROM public."RAG" AS R
  ORDER BY R.embeddings <=> query_embedding
  LIMIT match_count;
END;
$$;
```

Step 2: Create a Jotform with these fields
- Your full name
- Email address
- Upload PDF Document (the field where you upload the knowledge base as a PDF)

Step 3: Get a Together AI API Key
Get a Together AI API key and paste it into the "Embedding Uploaded document" node and the "Embed User Message" node.

Here is a detailed, node-by-node explanation of the n8n workflow, which is divided into two main parts.

Part 1: Ingesting Knowledge from a PDF
This first sequence of nodes runs when you submit a PDF through Jotform. Its purpose is to read the document, process its content, and save it in a specialized database for the AI to use later.

JotForm Trigger
Type: Trigger
What it does: Starts the entire workflow. It is configured to listen for new submissions on a specific Jotform. When someone uploads a file and submits the form, this node activates and passes the submission data to the next step.

Grab New knowledgebase
Type: HTTP Request
What it does: The initial trigger from Jotform only contains basic information. This node makes a follow-up call to the Jotform API using the submissionID to get the complete details of that submission, including the link to the uploaded file.

Grab the uploaded knowledgebase file link
Type: HTTP Request
What it does: Using the file link obtained from the previous node, this step downloads the actual PDF file. It is set to receive the response as a file, not as text.

Extract Text from PDF File
Type: Extract From File
What it does: Takes the binary PDF file downloaded in the previous step and extracts all readable text content from it. The output is a single block of plain text.

Splitting into Chunks
Type: Code
What it does: Runs a small JavaScript snippet that takes the large block of text from the PDF and chops it into smaller, more manageable pieces, or "chunks," each of a predefined length. This is critical because AI models work more effectively with smaller, focused pieces of text.

Embedding Uploaded document
Type: HTTP Request
What it does: This is a key AI step. It sends each individual text chunk to an embeddings API, where a specified AI model converts the semantic meaning of the chunk into a numerical list called an embedding or vector. This vector is like a mathematical fingerprint of the text's meaning.

Save the embedding in DB
Type: Supabase
What it does: Connects to your Supabase database. For every chunk, it creates a new row in the specified table and stores two important pieces of information: the original text chunk and its corresponding numerical embedding (its "fingerprint") from the previous step.

Part 2: Answering Questions via Chat
This second sequence starts when a user sends a message. It uses the knowledge stored in the database to find relevant information and generate an intelligent answer.

When chat message received
Type: Chat Trigger
What it does: Starts the second part of the workflow. It listens for any incoming message from a user in a connected chat application.

Embed User Message
Type: HTTP Request
What it does: Takes the user's question and sends it to the exact same embeddings API and model used in Part 1, converting the question's meaning into the same kind of numerical vector or "fingerprint."

Search Embeddings
Type: HTTP Request
What it does: This is the "retrieval" step. It calls the custom database function in Supabase, sending the question's embedding and asking it to search the knowledge base table for a specified number of top text chunks whose embeddings are mathematically most similar to the question's embedding.

Aggregate
Type: Aggregate
What it does: The search from the previous step returns multiple separate items. This utility node bundles those items into a single, combined piece of data, making it easier to feed all the context into the final AI model at once.

AI Agent & Google Gemini Chat Model
Type: LangChain Agent & AI Model
What it does: This is the "generation" step where the final answer is created. The AI Agent node is given a detailed set of instructions (a prompt) telling the Google Gemini Chat Model to act as a professional support agent. Crucially, the prompt provides the AI with the user's original question and the aggregated text chunks from the Aggregate node as its only source of truth. It then instructs the AI to formulate an answer based only on that provided context, format it for a specific chat style, and say "I don't know" if the answer cannot be found in the chunks. This prevents the AI from making things up.
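The "Splitting into Chunks" Code node described above could look something like this minimal sketch. The 1000-character chunk size and the `$json.text` field name are assumptions; adapt them to your workflow's data:

```javascript
// Minimal sketch of a fixed-length chunking Code node.
// chunkSize is an assumption; tune it to your embedding model's limits.
function splitIntoChunks(text, chunkSize = 1000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}

// In an n8n Code node, each chunk would typically become its own item:
// return splitIntoChunks($json.text).map(chunk => ({ json: { chunk } }));

const demo = splitIntoChunks("a".repeat(2500), 1000);
console.log(demo.length);    // 3 chunks
console.log(demo[2].length); // the last chunk holds the remainder: 500
```

A more sophisticated splitter might break on sentence or paragraph boundaries with some overlap between chunks, which tends to improve retrieval quality.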
by Pedro Entringer
🧠 Export Tawk.to Help Center Articles to Google Drive as Markdown Files

Transform the way you manage your knowledge base with this fully automated N8N workflow! This automation connects directly to your Tawk.to Help Center, reads all published categories and articles, converts them to Markdown (.md) format, and uploads each file to Google Drive.

🔹 Key Benefits

🚀 Complete Extraction
Automatically captures all categories and articles from your Tawk.to Help Center, even without direct API integration.

🧩 Automatic Conversion
Transforms HTML content into clean Markdown files, perfect for editing, version control, or migration to another CMS.

☁️ Native Google Drive Integration
Saves each article with a structured filename, avoids duplicates, and organizes them by category.

🔁 Fully Customizable
Easily adapt the workflow to export to Notion, GitHub, Dropbox, or any other platform supported by N8N.

💡 Ideal Use Cases
- Migrating your Tawk.to Help Center
- Creating automated content backups
- Integrating documentation across multiple systems

⚙️ Prerequisites
Before running this workflow, make sure you have:
- An active Tawk.to account with access to your Help Center.
- A Google Drive account (personal or workspace).
- Access to N8N (self-hosted or cloud).

🧰 Setup Instructions
1. Import the Workflow: Download the JSON file from the provided link or your N8N community instance. In N8N, click Import Workflow and upload the file.
2. Authenticate Google Drive: Open the Google Drive node, click Connect, choose your Google account, and allow access.
3. Configure Output Folder: Choose or create a target folder in your Google Drive where articles will be saved.
4. Run the Workflow: Click Execute Workflow. The automation will read all Help Center articles, convert them to Markdown, and save them to your Drive.
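The HTML-to-Markdown conversion step could be sketched with a few basic regex replacements. A production workflow would likely use a dedicated converter node or library; the limited tag coverage below is purely illustrative:

```javascript
// Simplified HTML → Markdown sketch; handles only a few common tags.
// Real Help Center HTML would need a proper converter.
function htmlToMarkdown(html) {
  return html
    .replace(/<h1>(.*?)<\/h1>/g, "# $1\n")
    .replace(/<h2>(.*?)<\/h2>/g, "## $1\n")
    .replace(/<strong>(.*?)<\/strong>/g, "**$1**")
    .replace(/<em>(.*?)<\/em>/g, "*$1*")
    .replace(/<a href="(.*?)">(.*?)<\/a>/g, "[$2]($1)")
    .replace(/<li>(.*?)<\/li>/g, "- $1\n")
    .replace(/<\/?(p|ul|ol)>/g, "\n")
    .replace(/\n{3,}/g, "\n\n")
    .trim();
}

const md = htmlToMarkdown('<h2>Reset password</h2><p>Click <a href="https://example.com">here</a>.</p>');
console.log(md);
// → "## Reset password\n\nClick [here](https://example.com)."
```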
by Pixcels Themes
AI Assignment Grader with Automated Reporting

Who's it for
This workflow is designed for educators, professors, academic institutions, coaching centers, and edtech platforms that want to automate the grading of written assignments or test papers. It's ideal for scenarios where consistent evaluation, detailed feedback, and structured result storage are required without manual effort.

What it does / How it works
This workflow automates the end-to-end grading process for student assignments submitted as PDFs.
- A student's test paper is uploaded via a webhook endpoint.
- The workflow extracts text from the uploaded PDF file.
- Student metadata (name, assignment title) is prepared and combined with the extracted answers.
- A predefined answer script (model answers with marking scheme) is loaded into the workflow.
- An AI grading agent powered by Gemini compares the student's responses against the answer script. The AI:
  - Evaluates each question
  - Assigns marks based on correctness and completeness
  - Generates per-question feedback
  - Calculates total marks, percentage, and grade
- The structured grading output is converted into an HTML grading report and a CSV file for records.
- The final CSV grading report is automatically uploaded to Google Drive for storage and sharing.

All grading logic runs automatically once the test paper is submitted.

Requirements
- Google Gemini (PaLM) API credentials
- Google Drive OAuth2 credentials
- A webhook endpoint configured in n8n
- PDF test papers submitted in a supported format
- A predefined answer script with marks per question

How to set up
1. Connect your Google Gemini credentials in n8n.
2. Connect your Google Drive account and select the destination folder.
3. Enable and copy the webhook URL for test paper uploads.
4. Customize the Load Answer Script node with your assignment's correct answers and marking scheme.
5. (Optional) Adjust grading instructions or output format in the AI Agent prompt.
6. Test the workflow by uploading a sample PDF assignment.

How to customize the workflow
- Update the AI grading rubric to be stricter or more lenient.
- Modify feedback style (short comments vs detailed explanations).
- Change grading scales, total marks, or grade boundaries.
- Store results in additional systems (LMS, database, email notifications).
- Add plagiarism checks or similarity scoring before grading.
- Generate PDF reports instead of CSV/HTML if required.

This workflow enables fast, consistent, and scalable assignment grading while giving students clear, structured feedback and educators reliable records.
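The totals-percentage-grade step the AI performs can be reproduced deterministically. The grade boundaries below are assumptions for illustration; the template lets you set your own scales and boundaries:

```javascript
// Sketch: total marks, percentage, and letter grade from per-question scores.
// The 90/75/60/40 boundaries are assumptions; adjust to your marking scheme.
function gradeReport(perQuestionMarks, maxTotal) {
  const total = perQuestionMarks.reduce((sum, m) => sum + m, 0);
  const percentage = (total / maxTotal) * 100;
  let grade;
  if (percentage >= 90) grade = "A";
  else if (percentage >= 75) grade = "B";
  else if (percentage >= 60) grade = "C";
  else if (percentage >= 40) grade = "D";
  else grade = "F";
  return { total, percentage, grade };
}

console.log(gradeReport([8, 7, 9, 6], 40));
// → { total: 30, percentage: 75, grade: "B" }
```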
by Supira Inc.
Overview
This template automates invoice processing for teams that currently copy data from PDFs into spreadsheets by hand. It is ideal for small businesses, back-office teams, accounting, and operations who want to reduce manual entry, avoid human error, and never miss a payment deadline. The workflow watches a structured Google Drive folder, performs OCR, converts the text into clean structured JSON with an LLM, and appends one row per invoice into Google Sheets. It preserves a link back to the original file for easy review and audit.

- **Designed for small businesses and back-office teams.**
- **Eliminates manual typing** and reduces errors.
- **Prevents missed due dates** by centralizing data.
- Works with monthly subfolders like "2025年10月分" (meaning "October 2025").
- Keeps a Google Drive link to each invoice file.

How It Works
The workflow runs on a schedule, scans your Drive folder hierarchy, OCRs the PDFs/images, cleans the text, extracts key fields with an LLM, and appends a row to Google Sheets per invoice. Each step is modular, so you can swap services or tweak prompts without breaking the flow.

- **Scheduled trigger** runs on a recurring cadence.
- **Scan the parent folder** in Google Drive.
- **Auto-detect the current-month folder** (e.g., a folder named "2025年10月分" meaning "October 2025").
- **Download PDFs/images** from the detected folder.
- **Extract text** using the OCR.Space API.
- **Clean noise** and normalize with a Code node.
- **Use an OpenAI model** to extract invoice_date, due_date, client_name, line items, totals, and bank info to JSON.
- **Append one row per invoice** to Google Sheets.

Requirements
Before you start, make sure you have access to the required services and that your Drive is organized into monthly subfolders so the workflow can find the right files.
- **n8n account.**
- **Google Drive access.**
- **Google Sheets access.**
- **OCR.Space API key** (set as <your_ocr_api_key>).
- **OpenAI / LLM API credential** (e.g., <your_openai_credential_name>).
- **Invoice PDFs organized by month** on Google Drive (e.g., folders like "2025年10月分").

Setup Instructions
Import the workflow, replace placeholder credentials and IDs with your own, and enable the schedule. You can also run it manually for testing. The parent-folder query and sheet ID must reflect your environment.
1. Replace <your_google_drive_credential_id> and <your_google_drive_credential_name> with your Google Drive credential.
2. Adjust the parent folder search query to your invoice repository name.
3. Replace the Sheets document ID <your_google_sheet_id> with your spreadsheet ID.
4. Ensure your OpenAI credential <your_openai_credential_name> is selected.
5. Set your OCR.Space key as <your_ocr_api_key>.
6. **Enable the Schedule Trigger** after testing.

Customization
This workflow is easily extensible. You can adapt folder naming rules, enrich the spreadsheet schema, and expand the AI prompt to extract custom fields specific to your company. It also works beyond invoices, covering receipts, quotes, or purchase orders with minor changes.
- Change the monthly folder naming rule such as {{$now.setZone("Asia/Tokyo").format("yyyy年MM月")}}分 to match your convention.
- Modify or extend Google Sheets column mappings as needed.
- Tune the AI prompt to extract project codes, owner names, or custom fields.
- Repurpose for receipts, quotes, or purchase orders.
- Localize date formats and tax calculation rules to your standards.
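The current-month folder name that the `{{$now.setZone("Asia/Tokyo").format("yyyy年MM月")}}分` expression produces can be reproduced in plain JavaScript with `Intl.DateTimeFormat`. The "Asia/Tokyo" zone and the 「yyyy年MM月分」 pattern follow the template's example; swap in your own zone and convention:

```javascript
// Build the monthly folder name, e.g. "2025年10月分" for October 2025.
function monthlyFolderName(date, timeZone = "Asia/Tokyo") {
  const parts = new Intl.DateTimeFormat("ja-JP", {
    timeZone,
    year: "numeric",
    month: "2-digit",
  }).formatToParts(date);
  const year = parts.find(p => p.type === "year").value;
  const month = parts.find(p => p.type === "month").value;
  return `${year}年${month}月分`;
}

console.log(monthlyFolderName(new Date("2025-10-15T12:00:00+09:00")));
// → "2025年10月分"
```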
by Jitesh Dugar
Transform your educational business with a fully automated mobile storefront. This workflow manages the entire student journey, from browsing course catalogues to secure payment processing and enrollment tracking, all within WhatsApp by combining WATI, Razorpay, and Google Sheets.

🎯 What This Workflow Does
Turns WhatsApp into a 24/7 automated enrollment desk:

📝 Captures Student Intent
Receives text commands like courses, enroll, or pay via the WATI Trigger from the student's phone.

🚦 Smart Message Routing
A Switch node identifies the keyword to trigger the correct path:
- courses: Displays the full course catalogue.
- enroll: Shows specific course details and a payment CTA.
- pay: Generates a unique Razorpay payment link.
- mystatus: Fetches the student's personal enrollment history.

👁️ Dynamic Catalogue Generation
Fetches live data from Google Sheets to build a formatted WhatsApp message with course codes, prices, and durations.

💳 Instant Payment Processing
Integrates with the Razorpay API to create secure, short-URL payment links tailored to the specific course and student.

📊 Automated CRM Logging
Logs every enrollment attempt as "Pending" in Google Sheets, capturing timestamps, phone numbers, and unique payment IDs.

✨ Key Features
- **White-Label Automation:** Sell courses under your own brand without needing a complex website or LMS.
- **Real-Time Status Tracking:** Students can instantly view their active and pending enrollments with the mystatus command.
- **Native Razorpay Integration:** Uses a clean REST API approach (HTTP Request) to generate payment links without requiring external SDKs.
- **Formatted Course Cards:** Automatically generates detailed summaries for each course, including instructor info and start dates.
- **Multi-Category Support:** Organizes your catalogue by subject (e.g., Programming, Marketing) for a professional user experience.

💼 Perfect For
- **Independent Tutors:** Selling recorded workshops or live sessions without manual billing.
- **Coaching Institutes:** Automating the registration process for high-volume course launches.
- **Skill-Based Bootcamps:** Providing a low-friction "chat-to-pay" experience for mobile users.
- **Corporate Trainers:** Tracking employee registrations for internal certification programs.

🔧 What You'll Need

Required Integrations
- **WATI** – To handle WhatsApp message triggers and delivery.
- **Razorpay** – To generate unique payment links via REST API.
- **Google Sheets** – To manage your course database and enrollment logs.

Optional Customizations
- **Payment Confirmation:** Set up a Razorpay Webhook to automatically update enrollment status from "Pending" to "Enrolled" upon payment.
- **Automated Welcome:** Add a node to send a "Course Access Guide" PDF once the payment is verified.

🚀 Quick Start
1. Import Template – Copy the JSON and import it into your n8n instance.
2. Set Credentials – Connect your WATI, Razorpay (Basic Auth), and Google Sheets accounts.
3. Configure Sheets – Ensure your Google Sheet has headers for:
   - Courses Tab: name, code, category, price, duration, shortDesc, description, instructor, startDate
   - Enrollments Tab: timestamp, phone, courseCode, courseName, amount, status, paymentlinkId, paymentlinkUrl
4. Test Browsing – Send the word courses to your WATI number.
5. Simulate Payment – Send pay <course_code> to receive your first automated payment link.

🎨 Customization Options
- **Currency Setup:** Change the currency from INR to USD or EUR in the Razorpay Payload node.
- **Personalized Feedback:** Edit the Build Enrollment Status code to change how the student's history is displayed.
- **Custom CTA:** Modify the "Enroll Detail Card" to include links to your YouTube demo or LinkedIn profile.

📈 Expected Results
- 95% reduction in manual coordination for course registrations and link sharing.
- Faster conversions by allowing students to pay the moment they show interest.
- Organized data with every student interaction logged in a single spreadsheet.
- Professional image using automated, well-formatted WhatsApp cards and official payment links.

🏆 Use Cases
- Upskilling Bootcamps: A programming school sends the courses list to a leads group; students enroll and pay for "Python 101" entirely through the chat.
- Skill Progress Tracking: A student types mystatus to see which courses they have paid for and which enrollments are still pending.
- Flash Sales: Promote a course code on Instagram; when users message that code to your WhatsApp, the bot handles the sale 24/7.

💡 Pro Tips
- **Shorthand Commands:** The bot is built to handle case-insensitive commands, so PAY PY101 and pay py101 work equally well.
- **Razorpay Test Mode:** Always test your payment links using Razorpay's "Test Mode" keys before going live to ensure the links generate correctly.
- **Clean Database:** The Build Enrollment Status node uses phone number filtering to ensure students only see their own private history.

Ready to start enrolling students? Import this template and connect your Razorpay account to automate your sales today!
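The keyword routing the Switch node performs, including the case-insensitive handling of commands like PAY PY101, can be sketched like this. The return shapes are illustrative assumptions, not the template's exact node output:

```javascript
// Sketch of the message-routing logic: normalize the text, then match a command.
// Command names follow the template; the parsed result shape is an assumption.
function routeMessage(text) {
  const [command, ...args] = text.trim().toLowerCase().split(/\s+/);
  switch (command) {
    case "courses":  return { route: "catalogue" };
    case "enroll":   return { route: "courseDetail", courseCode: args[0] };
    case "pay":      return { route: "payment", courseCode: args[0] };
    case "mystatus": return { route: "status" };
    default:         return { route: "fallback" };
  }
}

console.log(routeMessage("PAY PY101"));
// → { route: "payment", courseCode: "py101" }
```

Normalizing with `toLowerCase()` before the switch is what makes `PAY PY101` and `pay py101` behave identically.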
by Atik
Automate video transcription and Q&A with async VLM processing that scales from short clips to long recordings.

What this workflow does
- Monitors Google Drive for new files in a specific folder and grabs the file ID on create
- Automatically downloads the binary to hand off for processing
- Sends the video to VLM Run for async transcription with a callback URL that posts results back to n8n
- Receives the transcript JSON via Webhook and appends a row in Google Sheets with the video identifier and transcript data
- Enables chat Q&A through the Chat Trigger + AI Agent. The agent fetches relevant rows from Sheets and answers only from those segments using the connected chat model

Setup
Prerequisites: Google Drive and Google Sheets accounts, VLM Run API credentials, OpenAI (or another supported) chat model credentials, and an n8n instance.

Install the verified VLM Run node by searching for VLM Run in the nodes list, then click Install. You can also confirm on npm if needed. After install, it integrates directly for robust async transcription.

Quick Setup:
1. Google Drive folder watch: Add a Google Drive Trigger and choose Specific folder. Set polling to every minute and the event to File Created. Connect Drive OAuth2.
2. Download the new file: Add a Google Drive node with Download. Map {{$json.id}} and save the binary as data.
3. Async transcription with VLM Run: Add the VLM Run node. Operation: video. Domain: video.transcription. Enable Process Asynchronously and set the Callback URL to your Webhook path (for example /transcript-video). Add your VLM Run API key.
4. Webhook to receive results: Add a Webhook node with method POST and path /transcript-video. This is the endpoint VLM Run calls when the job completes. Use When Last Node Finishes, or respond via a Respond node if you prefer.
5. Append to Google Sheets: Add a Google Sheets node with Append, pointed at your spreadsheet and sheet, and connect Google Sheets OAuth2. Map:
   - Video Name → the video identifier from the webhook payload
   - Data → the transcript text or JSON from the webhook payload
6. Chat entry point and Agent: Add a Chat Trigger to receive user questions. Add an AI Agent and connect a Chat Model (for example OpenAI Chat Model) and the Google Sheets Tool to read relevant rows. In the Agent system message, instruct it to use the Sheets tool to fetch transcript rows matching the question, answer only from those rows, and cite or reference row context as needed.
7. Test and activate: Upload a sample video to the watched Drive folder. Wait for the callback to populate your sheet. Ask a question through the Chat Trigger and confirm the agent quotes only from the retrieved rows. Activate your template and let it automate the task.

How to take this further
- **Team memory:** Ask "What did we decide on pricing last week?" and get the exact clip and answer.
- **Study helper:** Drop classes in, then ask for key points or formulas by topic.
- **Customer FAQ builder:** Turn real support calls into answers your team can reuse.
- **Podcast highlights:** Find quotes, tips, and standout moments from each episode.
- **Meeting catch-up:** Get decisions and action items from any recording, fast.
- **Marketing snippets:** Pull short, social-ready lines from long demos or webinars.
- **Team learning hub:** Grow a searchable video brain that remembers everything.

This workflow uses the VLM Run node for scalable, async video transcription and the AI Agent for grounded Q&A from Sheets, giving you a durable pipeline from upload to searchable answers with minimal upkeep.
by Nasser
Who's it for?
- Content Creators
- E-commerce Stores
- Marketing Teams

Description:
Generate unique UGC images for your products. Simply upload a product image into a Google Drive folder, and the workflow will instantly generate 50 unique, high-quality AI UGC images using Nano Banana via Fal.ai. All results are automatically saved back into the same folder, ready to use across social media, e-commerce stores, and marketing campaigns.

How it works
📺 YouTube Video Tutorial:
1. Trigger: Upload a new product image (with a white background) to a folder in your Google Drive
2. Generate 50 different image prompts for your product
3. Loop over each generated prompt
4. Generate UGC content with Fal.ai (Nano Banana)
5. Upload the UGC content to the initial Google Drive folder

Cost: $0.039 / image

How to set up?

1. Accounts & APIs
In the "Setup" Edit Fields node, replace every [YOUR_API_TOKEN] with your API token:
- Fal.ai (gemini-25-flash-image/edit): https://fal.ai/models/fal-ai/gemini-25-flash-image/edit/api
In Credentials on your n8n dashboard, connect the following account using a Client ID / Secret:
- Google Drive: https://docs.n8n.io/integrations/builtin/credentials/google/

2. Requirements
- The base image of your product should preferably have a white background
- Your Google Drive folder and every file it contains should be publicly available

3. Customizations
- Change the total amount of UGC generated: In Generate Prompts → Message → "Your task is to generate 50"
- Modify the instructions used to generate the UGC prompts: In Generate Prompts → Message
- Change the number of base images: In Generate Image → Body Parameters → JSON → image_urls
- Change the amount of UGC generated per prompt: In Generate Image → Body Parameters → JSON → num_images
- Modify the folder where generated UGC is stored: In Upload File → Parent Folder
by Anurag Patil
Geekhack Discord Updater

How It Works
This n8n workflow automatically monitors GeekHack forum RSS feeds every hour for new keyboard posts in the Interest Checks and Group Buys sections. When it finds a new thread (not replies), it:
1. Monitors RSS Feeds: Checks two GeekHack RSS feeds for new posts (50 items each)
2. Filters New Threads: Removes reply posts by checking for the "Re:" prefix in titles
3. Prevents Duplicates: Queries a PostgreSQL database to skip already-processed threads
4. Scrapes Content: Fetches the full thread page and extracts the original post
5. Extracts Images: Uses regex to find all images in the post content
6. Creates Discord Embed: Formats the post data into a rich Discord embed with up to 4 images
7. Sends to Multiple Webhooks: Retrieves all webhook URLs from the database and sends to each one
8. Logs Processing: Records the thread as processed to prevent duplicates

The workflow includes a webhook management system with a web form to add/remove Discord webhooks dynamically, allowing you to send notifications to multiple Discord servers or channels.

Steps to Set Up

Prerequisites
- n8n instance running
- PostgreSQL database
- Discord webhook URL(s)

1. Database Setup
Create the PostgreSQL tables.

Processed threads table:

```sql
CREATE TABLE processed_threads (
  topic_id VARCHAR PRIMARY KEY,
  title TEXT,
  processed_at TIMESTAMP DEFAULT NOW()
);
```

Webhooks table:

```sql
CREATE TABLE webhooks (
  id SERIAL PRIMARY KEY,
  url TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);
```

2. n8n Configuration
Import Workflow
- Copy the workflow JSON
- Go to n8n → Workflows → Import from JSON
- Paste the JSON and import

Configure Credentials
- PostgreSQL: Create a new PostgreSQL credential with your database connection details
- All PostgreSQL nodes should use the same credential

3. Node Configuration
Schedule Trigger
- Already configured for 1-hour intervals
- Modify if different timing is needed

PostgreSQL Nodes
- Ensure all PostgreSQL nodes use your PostgreSQL credential: "Check if Processed", "Update entry", "Insert rows in a table", "Select rows from a table"
- Database schema should be "public"
- Table names: "processed_threads" and "webhooks"

RSS Feed Limits
- Both RSS feeds are set to limit=50 items
- Adjust if you need more/fewer items per check

4. Webhook Management
Adding Webhooks via Web Form
- The workflow creates a form trigger for adding webhooks
- Access the form URL from the "On form submission" node
- Submit Discord webhook URLs through the form
- Webhooks are automatically stored in the database

Manual Webhook Addition
Alternatively, insert webhooks directly into the database:

```sql
INSERT INTO webhooks (url)
VALUES ('https://discord.com/api/webhooks/YOUR_WEBHOOK_URL');
```

5. Testing
Test the Main Workflow
- Ensure you have at least one webhook in the database
- Activate the workflow
- Use "Execute Workflow" to test manually
- Check Discord channels for test messages

Test Webhook Form
- Get the form URL from the "On form submission" node
- Submit a test webhook URL
- Verify it appears in the webhooks table

6. Monitoring
- Check execution history for errors
- Monitor both database tables for entries
- Verify all registered webhooks receive notifications
- Adjust schedule timing if needed

7. Managing Webhooks
- Use the web form to add new webhook URLs
- Remove webhooks by deleting from the database:

```sql
DELETE FROM webhooks WHERE url = 'webhook_url_to_remove';
```

The workflow will now automatically post new GeekHack threads to all registered Discord webhooks every hour, with the ability to dynamically manage webhook destinations through the web form interface.
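The "Extracts Images" step (regex over the post HTML, capped at 4 images for the Discord embed) could look like this sketch. The exact `src` attribute pattern is an assumption about GeekHack's markup:

```javascript
// Extract image URLs from post HTML via regex, capped for the Discord embed.
// The attribute pattern is a simplifying assumption about the forum's markup.
function extractImages(html, max = 4) {
  const urls = [];
  const re = /<img[^>]+src="([^"]+)"/g;
  let match;
  while ((match = re.exec(html)) !== null && urls.length < max) {
    urls.push(match[1]);
  }
  return urls;
}

const post = '<img src="https://i.example.com/a.png"><p>renders</p><img src="https://i.example.com/b.png">';
console.log(extractImages(post));
// → ["https://i.example.com/a.png", "https://i.example.com/b.png"]
```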
by Oneclick AI Squad
This AI-powered workflow automatically searches LinkedIn for relevant jobs, scores them using Claude AI based on your profile, sends personalized applications or connection requests, and logs everything to a Google Sheet for tracking.

How it works
1. Trigger - Runs on a schedule or via webhook to start a new job search
2. Search LinkedIn - Fetches job listings based on keywords, location, and filters
3. Filter & Deduplicate - Removes already-applied or seen jobs
4. Analyze with Claude AI - Scores each job against your resume/profile
5. Decision Gate - Only proceeds with jobs above your score threshold
6. Apply or Connect - Sends an Easy Apply application or a connection request to the recruiter
7. Log Results - Records all actions in Google Sheets for tracking

Setup Steps
1. Import this workflow into your n8n instance
2. Configure credentials:
   - LinkedIn OAuth2 - LinkedIn Developer Portal
   - Anthropic API - For Claude AI job scoring
   - Google Sheets - To track applications
3. Update your profile/resume text in the Build Search Context node
4. Set your job keywords and location preferences
5. Activate the workflow

Sample Trigger Payload

```json
{
  "keywords": "Product Manager",
  "location": "Bangalore, India",
  "experienceLevel": "mid-senior",
  "jobType": "full-time",
  "scoreThreshold": 70
}
```

Features
- **AI-powered job scoring** based on your skills and experience
- **Duplicate prevention** - tracks seen and applied jobs
- **Auto Easy Apply** for matching jobs
- **Recruiter outreach** with personalized messages
- **Full audit log** in Google Sheets

Explore More
LinkedIn & Social Automation: Contact us to design AI-powered lead nurturing, content engagement, and multi-platform reply workflows tailored to your growth strategy.
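The Decision Gate, filtering on the `scoreThreshold` from the trigger payload, can be sketched as a simple filter. The `aiScore` field name is an assumption about the shape Claude's scoring step produces:

```javascript
// Sketch of the decision gate: keep only jobs scoring at or above the threshold.
// The aiScore field is an assumed name for the Claude-assigned score.
function filterByScore(jobs, scoreThreshold = 70) {
  return jobs.filter(job => job.aiScore >= scoreThreshold);
}

const scored = [
  { title: "Product Manager", aiScore: 82 },
  { title: "Project Coordinator", aiScore: 55 },
];
console.log(filterByScore(scored, 70).map(j => j.title));
// → ["Product Manager"]
```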
by Oneclick AI Squad
This n8n template demonstrates how to use AI to automatically review technical documentation against predefined compliance standards and alert your team via Slack.

Use cases: Engineering teams maintaining API docs, product teams enforcing terminology standards, or localization teams checking readiness before translation handoff.

How it works
1. A document URL or raw text is passed in via manual trigger (or webhook).
2. The document content is fetched and extracted as plain text.
3. An AI model scores the document across 4 compliance dimensions: Structure, Terminology, Localization Readiness, and Completeness.
4. The AI returns a structured JSON report with scores and gap descriptions per dimension.
5. A compliance status is determined: PASS / WARNING / FAIL based on the overall score.
6. A formatted Slack alert is sent to the appropriate channel based on status severity.
7. All results are logged for historical tracking.

How to use
- Replace the document URL in the Set Document Input node with your own source.
- Update the AI prompt in the Score Documentation with AI node to match your standards.
- Add your Slack webhook URLs for each severity channel.
- Optionally connect to Google Sheets or PostgreSQL for audit logging.

Requirements
- OpenAI or Anthropic API key for AI scoring
- Slack incoming webhook URLs
- Document accessible via URL or passed as text
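The PASS / WARNING / FAIL determination from the four dimension scores could be sketched as follows. Averaging the dimensions and the 80/60 cut-offs are assumptions; the template lets you define your own thresholds:

```javascript
// Sketch: average the four dimension scores and map the result to a status.
// The 80/60 thresholds are assumptions, not the template's fixed values.
function complianceStatus(scores) {
  const dims = ["structure", "terminology", "localization", "completeness"];
  const overall = dims.reduce((sum, d) => sum + scores[d], 0) / dims.length;
  if (overall >= 80) return { overall, status: "PASS" };
  if (overall >= 60) return { overall, status: "WARNING" };
  return { overall, status: "FAIL" };
}

console.log(complianceStatus({ structure: 90, terminology: 70, localization: 60, completeness: 80 }));
// → { overall: 75, status: "WARNING" }
```

The same overall score can then drive which Slack webhook URL receives the alert, one channel per severity.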
by Ahmed Salama
Categories
Marketing Intelligence, Ad Operations, Competitive Research, Creative Analysis

Build a Facebook Ad Intelligence Pipeline with Apify, AI, and Google Sheets

This workflow creates an end-to-end Facebook Ad Intelligence pipeline that automatically collects live ads from the Facebook Ad Library, analyzes their creative and messaging using AI, and stores structured insights in Google Sheets for reuse. The system continuously pulls ads from specified Facebook Ad Library URLs, filters for high-signal advertisers, classifies ads by format (video, image, or text), and applies AI analysis to extract strategy, positioning, and reusable creative angles. The result is a reliable, no-manual-work system that turns competitor ads into structured, reusable intelligence instead of screenshots and guesswork.

Benefits
- Evidence-Based Creative Decisions: Learn from ads that are already live and funded, not assumptions.
- Faster Creative Iteration: Reuse proven angles, hooks, and formats without starting from zero.
- Centralized Ad Intelligence: All insights are stored in a searchable Google Sheet.
- Format-Aware Analysis: Images, videos, and text ads are analyzed using the correct method.
- Low Noise, High Signal: Filters remove weak advertisers and surface serious spenders.
How It Works

Ad Data Collection (Apify)
- Scrapes Facebook Ad Library URLs automatically
- Retrieves raw ad snapshots including copy and headlines, media URLs, page metadata, and engagement and page signals

Signal Filtering
- Filters ads by page popularity, for example pages with more than 5,000 likes
- Ensures only proven advertisers move forward

Creative Type Detection
- A switch node classifies ads as video, image, or text only
- Each format is routed to its own processing path

AI Creative Analysis
- **Image ads**: Converted to binary, visually analyzed, then summarized and reverse-engineered
- **Video ads**: Downloaded and analyzed frame- and content-wise, then described strategically
- **Text ads**: Summarized and rewritten directly
- Outputs include deep strategic summaries and repurposed ad copy

Structured Intelligence Storage (Google Sheets)
- All insights are appended automatically
- Ads are deduplicated by ad archive ID
- Each row becomes a reusable asset: summary, rewritten copy, image or video prompts, and page and source data

Required Setup
- Apify: Active Apify account with access to the Facebook Ad Library scraping actor
- OpenAI API key with text and image analysis access
- Google Gemini API access for video analysis
- Google Sheets: OAuth access and edit permissions on the target spreadsheet
- Dropbox (optional): OAuth access for video storage and reuse

Business Use Cases
- Performance Marketers: Identify winning creative patterns faster
- Media Buyers: Reduce testing waste and creative fatigue
- Agencies: Deliver competitive intelligence as a service
- Founders: Validate messaging before launching campaigns
- Growth Teams: Build creative systems instead of one-off ads

Difficulty Level
Intermediate

Estimated Build Time
60–90 minutes

Monthly Operating Cost
- Apify: Usage-based
- OpenAI: Usage-based
- Google Sheets: Free
- Dropbox: Free or paid plan
- n8n: Self-hosted or cloud
Typical range: $20–60/month depending on volume

Why This Workflow Works
- Live ads represent real budget decisions
- Filtering enforces signal quality
- Format-specific analysis improves insight accuracy
- Structured storage enables reuse at scale
- Automation replaces manual ad spying

Possible Extensions
- Auto-tag ads by funnel stage
- Add spend or duration heuristics
- Track creative trends over time
- Generate landing page hypotheses
- Trigger creative briefs from winning ads
by Kevin Meneses
What this workflow does
This workflow scrapes customer reviews from Trustpilot, analyzes them with AI, and keeps both Salesforce and Google Sheets automatically updated with customer sentiment insights. It uses Decodo to reliably extract review content from Trustpilot, processes the text with OpenAI, and orchestrates everything using n8n.

👉 Decodo

How it works (high level)
1. Reads Trustpilot review URLs and Salesforce Account IDs from Google Sheets
2. Scrapes Trustpilot reviews using Decodo
3. Uses AI to summarize sentiment, trends, and key positives/negatives
4. Generates two outputs in parallel

Outputs generated

1. Salesforce Account update
The workflow updates an existing Salesforce Account by writing the AI-generated sentiment summary into a custom text field (e.g. recent_trend_summary__c). This brings external customer feedback directly into Salesforce, allowing teams to work with real market perception inside the CRM.

2. Google Sheets analytics dataset
At the same time, structured review metrics are stored in Google Sheets:
- Ratings and sentiment distribution
- Top positive and negative keywords
- Trend summaries over time
This creates a reusable dataset for dashboards and reporting.

How to configure it (general)
- **Google Sheets**: add review URLs and Salesforce Account IDs
- **Decodo**: add your API key to scrape Trustpilot reliably
- **OpenAI**: add your API key for AI analysis
- **Salesforce**:
  - Create a custom Text (255) field on Account
  - Connect Salesforce credentials in n8n
  - Only existing Accounts are updated