by Easy8.ai
# Auto-Generate SEO FAQ Answers from Google Sheets with OpenAI

## Intro/Overview
This workflow automates the process of generating SEO-optimized FAQ answers using AI, pulling questions from a Google Sheet and writing answers back into the same sheet. It’s ideal for content marketers, SEO specialists, and digital teams looking to scale FAQ content generation with minimal manual input. By combining the power of Google Sheets, AI, and WordPress, the workflow transforms raw questions into structured, keyword-targeted answers tailored for specific audiences, ready for use on landing pages, blogs, or help centers, and automatically publishes them as WordPress posts.

## How it works
- **Schedule Trigger**: Executes the workflow at a set interval to check for new or unprocessed questions in the Google Sheet.
- **Get Questions from Sheet**: Reads from a specific Google Sheet, targeting columns for:
  - Question (FAQ prompt)
  - KW (target SEO keyword)
  - Audience (intended reader)
  - Article (desired WordPress post title)
- **Filter**: Ensures only rows without an existing answer are processed (i.e., an empty "Answer" column).
- **Generate FAQ Answer**: Passes the question, keyword, and audience to the OpenAI Chat Model using a structured prompt to generate:
  - A concise TL;DR-style summary
  - A detailed, SEO-optimized, markdown-formatted answer
- **OpenAI Chat Model**: Uses GPT-4 Turbo with a controlled temperature (0.7) and token limit (1000) to produce structured, on-brand, keyword-optimized content.
- **Parse FAQ Answer**: Extracts and formats the AI response into separate fields for writing back to the sheet (see the sketch at the end of this section).
- **Update Sheet with Answer**: Writes the AI-generated answer into the Answer column of the same row in the source Google Sheet.
- **WordPress Node**: Publishes each generated answer as a new WordPress post:
  - Uses the "Create Post" operation
  - Title: taken from the Article column in the sheet
  - Content: uses the detailed AI-generated answer
  - Requires valid WordPress credentials (REST API / Application Password)

## How to Use
1. **Importing the Workflow**: Download or import the workflow JSON into your n8n instance.
2. **Credential Setup**: Connect your Google Sheets credentials, add your OpenAI API key in the relevant node, and connect your WordPress credentials for content publishing.
3. **Node Assignment**: Update the Google Sheet ID and the sheet range (ensure it includes all relevant columns).
4. **Timezone & Schedule**: Adjust the Schedule Trigger node to match your preferred time and frequency (e.g., every weekday at 9 AM).
5. **Testing Guidance**: Add a few sample FAQ entries to your sheet, then run the workflow manually to verify prompt quality, answer accuracy, proper sheet updates, and successful WordPress post creation.

## Example Use Cases
- Marketing teams generating bulk FAQ content for landing pages
- SEO professionals creating keyword-optimized responses for user queries
- Agencies producing personalized FAQ sections for multiple client niches
- SaaS companies automating knowledge base content with targeted messaging
- Content teams publishing AI-generated FAQs directly to WordPress blogs

## Requirements
- ✅ Google account with access to the target Google Sheet
- ✅ OpenAI API key (GPT-4 Turbo or equivalent)
- ✅ WordPress account with REST API or Application Password access
- ✅ Google Sheet with the following columns:
  - Question: the FAQ prompt
  - KW: target keyword for SEO
  - Audience: intended reader persona
  - Article: desired WordPress post title
  - Answer: output column (leave empty initially)

## Customization (Optional)
- **Tone & Style**: Modify the system prompt to reflect your brand voice (e.g., friendly, expert, concise).
- **Model**: Use a different AI model (e.g., Gemini, Claude, or OpenAI GPT-4.1).
- **Output Format**: Adjust the markdown output to use different heading levels, bullet styles, or HTML if required.
- **Audience Logic**: Expand the input options to fine-tune responses for more specific demographics or buyer personas.
- **Multi-output Options**: Extend the workflow to post content to Notion, a CMS, or documentation platforms alongside Google Sheets and WordPress.

This automation accelerates content creation, keeps your FAQ sections SEO-friendly, and publishes the results directly to WordPress, keeping your content pipeline running hands-free once deployed.
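For the Parse FAQ Answer step, here is a minimal Code-node sketch of how the AI response could be split into separate fields. The `TL;DR:`/`Answer:` delimiters and the `$json.message.content` path are assumptions for illustration; match them to whatever structure your prompt actually requests and to the output shape of your OpenAI node:

```javascript
// Hypothetical n8n Code node (Run Once for Each Item).
// Assumes the prompt asked the model to label its two parts with
// "TL;DR:" and "Answer:" -- adjust the markers to your own prompt.
const raw = $json.message?.content ?? '';

const tldrMatch = raw.match(/TL;DR:\s*([\s\S]*?)(?=\nAnswer:|$)/i);
const answerMatch = raw.match(/Answer:\s*([\s\S]*)/i);

return {
  json: {
    tldr: (tldrMatch?.[1] ?? '').trim(),
    answer: (answerMatch?.[1] ?? raw).trim(), // fall back to the full text
  },
};
```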
by PTS
## Who this is for
Anybody using Firefly III, especially home/self-hosted users, who wants to add some automation to their transaction tracking, either alongside the Data Importer or because they can't or don't want to use it.

## How it works - posting transactions
1. The user sends a transaction screenshot/image or statement to a Telegram bot.
2. Gemini analyzes it based on the user's requirements (asset account IDs & categories).
3. The transaction information is parsed to create a suitable POST to a Firefly instance (a payload sketch appears at the end of this section).
4. The transaction(s) are posted to Firefly via its API, using an OAuth2 credential.

## How it works - requesting budget reports
1. The user sends the word 'Report' via Telegram.
2. A GET API request is sent to Firefly for all budgets between the beginning of the month and the request date, including remaining amounts for each.
3. This is converted to a CSV file.
4. The CSV is sent to the user via Telegram.

## Prerequisites
- Telegram, and knowledge of how to set up a bot (search for BotFather in Telegram)
- An existing instance of Firefly III with admin access for creating OAuth2 credentials

## How to set it up - credentials
1. Open Telegram and search for BotFather.
2. Create a new bot by following the instructions.
3. Save the API key provided.
4. In n8n, create a new Telegram credential using the info for the new bot.
5. Create an OAuth client in Firefly, using the redirect URL found in n8n's OAuth2 API credential creator.
6. Fill in the n8n OAuth2 API credential form as Authorization Code, filling in the remaining parameters from the info created in Firefly.
7. Create a Gemini credential following the instructions in n8n.

## How to set it up - the workflow
1. Set the credential in each Telegram node.
2. Set the Firefly credential in each HTTP node.
3. Set the correct base URL for the Firefly instance in each HTTP node.
4. Set the desired Gemini credential and model in each AI node.
5. Set the correct bank IDs (as per Firefly) and preferred categories in the AI node system message.

## Customization options
The user can specify all types of asset and expense accounts, as well as a specific list of categories and descriptions for Gemini to use. Gemini can also be swapped out for any other AI/LLM. Additionally, anyone can build on this by reviewing the Firefly III API documentation to automate almost any other part of the Firefly software.
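As a reference for the posting step, here is a minimal sketch of the body an HTTP Request node could send to Firefly's transaction endpoint. Field names follow the Firefly III API's transaction store schema as I understand it, but verify them against your instance's API documentation before relying on this:

```javascript
// Hypothetical n8n Code node output feeding the HTTP Request node.
// Target: POST {FIREFLY_BASE_URL}/api/v1/transactions
// Verify field names against your Firefly III version's API docs.
const parsed = $json; // fields extracted by Gemini upstream (names assumed)

return {
  json: {
    transactions: [
      {
        type: 'withdrawal',               // or 'deposit' / 'transfer'
        date: parsed.date,                // e.g. '2024-05-01'
        amount: String(parsed.amount),    // Firefly expects amounts as strings
        description: parsed.description,
        source_id: parsed.assetAccountId, // asset account ID from the system message
        destination_name: parsed.payee,
        category_name: parsed.category,
      },
    ],
  },
};
```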
by Miftah Rahmat
# Automate Water Bill Calculations with Telegram, Gemini AI, and Google Sheets

This workflow automates the calculation of monthly water bills. Residents send a photo of their water meter along with their name via Telegram. The workflow uses Gemini AI to extract the meter reading, calculates the usage difference compared to the previous month, and updates a Google Sheet with the billing details. Finally, the workflow sends a summary back via Telegram. Don’t hesitate to reach out if you have any questions or run into issues! 🙌

## Requirements
- A Telegram bot token (created via BotFather).
- A Google account with access to Google Sheets.
- A Gemini API key.
- A pre-created Google Sheet with the required columns.

## Google Sheet Setup
Create a new Google Sheet with the following columns: Nama (Name), Volume Sebelumnya (Previous Volume), Volume Saat Ini (Current Volume), Harga/m³ (Price per m³), Jumlah Bayar (Usage Charge), Beban (Fixed Fee), Total Bayar (Total Bill), Tanggal Input (Input Date).

## Workflow Setup Instructions
1. **Connect Google Sheets**: Add your Google Sheets credentials in n8n and link the workflow to your sheet with the structure above.
2. **Set Up Telegram Bot**: Create a Telegram bot via BotFather and copy your bot token into the Telegram Trigger node.
3. **Configure Gemini AI**: Obtain a Gemini API key from Google AI Studio and add it to your n8n credentials. The workflow will parse the meter reading from the uploaded image.

## Example Calculation
- Previous Volume: 535 m³
- Current Volume: 545 m³
- Usage: 10 m³
- Price per m³: Rp3.000
- Fixed cost: Rp3.000
- Total Bill: Rp33.000

## How It Works
1. A user sends a photo of the water meter with their name as the caption.
2. The Telegram Trigger receives the message.
3. Gemini AI reads the meter number from the photo.
4. The workflow fetches the previous volume from Google Sheets.
5. Usage and the total bill are calculated (see the sketch at the end of this section).
6. The data is stored back into Google Sheets.
7. The bot replies in Telegram with detailed bill info.

## Customization
- Change Harga/m³ in the sheet to match your community’s water price.
- Update Beban if your community uses a different fixed fee.
- Edit the Telegram reply message node to adjust wording.

With this workflow, you can streamline water billing for residents, ensure accuracy, and save time on manual calculations.
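Here is a minimal sketch of the billing arithmetic from the example above, as it might appear in an n8n Code node (the field names are illustrative, not the workflow's actual ones):

```javascript
// Hypothetical n8n Code node: compute usage and the total bill.
// Matches the example: (545 - 535) * 3000 + 3000 = Rp33.000
const previous = Number($json.volumeSebelumnya); // e.g. 535
const current = Number($json.volumeSaatIni);     // e.g. 545
const pricePerM3 = Number($json.hargaPerM3);     // e.g. 3000
const fixedFee = Number($json.beban);            // e.g. 3000

const usage = current - previous;                // 10 m³
const usageCharge = usage * pricePerM3;          // Rp30.000 (Jumlah Bayar)
const totalBill = usageCharge + fixedFee;        // Rp33.000 (Total Bayar)

return { json: { usage, usageCharge, totalBill } };
```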
by MUHAMMAD SHAHEER
## Who’s it for
This template is designed for creators, researchers, freelance writers, founders, and automation professionals who want a reliable way to generate structured, citation-backed research content without doing manual data collection. Anyone creating blog posts, reports, briefs, or research summaries will benefit from this system.

## What it does
This workflow turns a simple form submission into a complete research pipeline. It accepts a topic, determines what needs to be researched, gathers information from the web, writes content, fact-checks it against the collected sources, edits the draft for clarity, and compiles a final report. It behaves like a small agentic research team inside n8n.

## How it works
1. A form collects the research topic, depth, and desired output format.
2. A research agent generates focused search queries.
3. SERP API retrieves real-time results for each query.
4. The workflow aggregates and structures all findings (see the sketch at the end of this section).
5. A writing agent creates the first draft based on the data.
6. A fact-checking agent verifies statements against the sources.
7. An editor agent improves tone, flow, and structure.
8. A final review agent produces the completed research document with citations.

This workflow includes annotated sticky notes to explain each step and guide configuration.

## Requirements
- Groq API key for running the Llama 3.3 model.
- SERP API key for performing web searches.
- An n8n instance (cloud or self-hosted).
- No additional dependencies are required.

## How to set up
1. Add your Groq and SERP API credentials using n8n’s credential manager.
2. Update the form fields if you want custom depth or output formats.
3. Follow the sticky notes for detailed configuration.
4. Run the workflow and submit a topic through the form to generate your first research report.

## How to customize
- Replace the writer agent with a different model if you prefer a specific writing style.
- Adjust the number of search queries or SERP results for deeper research.
- Add additional steps such as PDF generation, sending outputs to Notion, or publishing to WordPress.
- Modify the form to suit industry-specific content needs.
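Here is a minimal sketch of the aggregation step, assuming SerpApi-style result objects with `organic_results`, `title`, `link`, and `snippet` fields. Check your SERP provider's actual response shape before reusing this:

```javascript
// Hypothetical n8n Code node: flatten per-query SERP results into
// one structured findings array for the writing agent.
const findings = [];

for (const item of $input.all()) {
  const query = item.json.query;                   // the search query used
  const results = item.json.organic_results ?? []; // SerpApi-style field (assumed)

  for (const r of results) {
    findings.push({
      query,
      title: r.title,
      url: r.link,
      snippet: r.snippet,
    });
  }
}

return [{ json: { findings } }];
```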
by Ilyass Kanissi
# 🤖 Simple RAG Customer Support Chatbot

## 📋 Overview
This intelligent customer support chatbot leverages Retrieval-Augmented Generation (RAG) to provide accurate, contextual responses by combining your knowledge base with AI capabilities. The system automatically retrieves relevant documents from your Pinecone vector store and uses them to generate informed responses through OpenAI's language models.

## ⚡ Quick Setup
1. **Import Workflow**: Import this workflow template into your n8n instance.
2. **Configure Credentials**: Add the following API credentials:
   - OpenAI API Key: for chat completions and embeddings
   - Pinecone API Key: for vector database operations
   - Google Drive: for automatic document ingestion
3. **Initialize Vector Store**: Use the "Insert documents into Pinecone" workflow to populate your knowledge base.
4. **Activate Workflow**: Enable the main chat workflow to start receiving requests.

## 🔧 How it Works

### Main Chat Flow (Agent Workflow)
User Message → Memory Retrieval → Vector Search → Context Assembly → AI Response → Memory Update → Response

Process flow:
1. **Message Reception**: Webhook receives user chat messages with session management.
2. **Memory Retrieval**: Loads conversation history for context continuity.
3. **Semantic Search**: Queries the Pinecone vector store for relevant documents.
4. **Context Assembly**: Combines retrieved documents with conversation history.
5. **AI Generation**: OpenAI generates a contextual response using the assembled context.
6. **Memory Storage**: Updates conversation memory for future interactions.
7. **Response Delivery**: Returns the formatted response to the user interface.

### Document Ingestion Flow
Document Source → Text Extraction → Chunking → Embedding → Vector Storage

Process flow:
1. **Document Trigger**: Google Drive or manual file upload detection.
2. **Content Extraction**: Extracts text from various file formats (PDF, DOC, TXT).
3. **Text Chunking**: Splits documents into optimal chunks for embedding (see the sketch below).
4. **Embedding Generation**: Creates vector embeddings using OpenAI.
5. **Vector Storage**: Stores embeddings in Pinecone with metadata.
6. **Index Update**: Updates the search index for immediate availability.
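To illustrate the chunking step, here is a minimal character-based splitter with overlap, similar in spirit to what n8n's text splitter nodes do internally. The 1000/200 values are arbitrary placeholders, not recommendations:

```javascript
// Hypothetical sketch of fixed-size chunking with overlap.
// n8n's built-in text splitter nodes handle this for you; this only
// shows the idea behind the step.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - overlap; // each chunk repeats `overlap` chars of the last
  }
  return chunks;
}

const chunks = chunkText($json.documentText ?? ''); // field name assumed
return chunks.map((content, i) => ({ json: { content, chunkIndex: i } }));
```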
by Cheng Siong Chin
## Introduction
Generates complete scientific papers from a title and abstract using AI. Designed for researchers; automates literature search, content generation, and citation formatting.

## How It Works
Extracts input, searches academic databases (CrossRef, Semantic Scholar, OpenAlex), merges sources, processes citations, generates AI sections (Introduction, Literature Review, Methodology, Results, Discussion, Conclusion), and compiles the document.

## Workflow Template
Webhook → Extract Data → Search (CrossRef + Semantic Scholar + OpenAlex) → Merge Sources → Process References → Prepare Context → AI Generate (Introduction + Literature Review + Methodology + Results + Discussion + Conclusion via OpenAI) → Merge Sections → Compile Document

## Workflow Steps
1. **Input & Search**: Webhook receives the title/abstract; searches CrossRef, Semantic Scholar, and OpenAlex; merges and processes references (see the sketch at the end of this section).
2. **AI Generation**: OpenAI generates six sections with in-text citations using the retrieved references.
3. **Assembly**: Merges sections; compiles a formatted document with a reference list.

## Setup Instructions
- **Trigger & APIs**: Configure the webhook URL; add your OpenAI API key; customize prompts.
- **Databases**: Set up CrossRef, Semantic Scholar, and OpenAlex API access; configure search parameters.

## Prerequisites
OpenAI API, CrossRef API, Semantic Scholar API, OpenAlex API, webhook platform, n8n instance

## Customization
Adjust reference limits, modify prompts for specific research fields, add citation styles (APA/IEEE), integrate additional databases (PubMed, arXiv), customize outputs (DOCX/LaTeX/PDF)

## Benefits
Automates paper drafting, comprehensive literature integration, proper citations
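For the search step, here is a minimal sketch of querying CrossRef's public REST API. The `api.crossref.org/works` endpoint and its `message.items` response shape follow CrossRef's documentation; Semantic Scholar and OpenAlex follow the same request-then-map pattern with their own endpoints and field names:

```javascript
// Hypothetical n8n Code node: fetch candidate references from CrossRef.
const query = encodeURIComponent(`${$json.title} ${$json.abstract}`);
const url = `https://api.crossref.org/works?query=${query}&rows=10`;

const response = await fetch(url);
const data = await response.json();

// Keep only the fields the citation-processing step needs.
const references = (data.message?.items ?? []).map((item) => ({
  doi: item.DOI,
  title: item.title?.[0],
  authors: (item.author ?? []).map((a) => `${a.given} ${a.family}`),
  year: item.issued?.['date-parts']?.[0]?.[0],
}));

return [{ json: { references } }];
```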
by Anirudh Aeran
This template creates a comprehensive, production-ready Retrieval-Augmented Generation (RAG) system. It builds a sophisticated AI agent that can answer questions based on documents stored in a specific Google Drive folder, and it automatically keeps its knowledge base up-to-date as you add, update, or remove files.

## Who’s it for?
This workflow is perfect for developers, businesses, and AI agencies looking to:
- Create an internal knowledge base chatbot for employees (e.g., for HR policies, technical documentation, or project information).
- Build an intelligent support agent that uses your company's official documents as its source of truth.
- Develop advanced AI solutions for clients that require a self-maintaining knowledge base.

## How it works
This workflow is divided into three distinct, powerful systems:
1. **The RAG Agent**: This is the core chatbot. It receives a user's question, uses a Supabase Vector Store to find the most relevant document snippets, leverages a Cohere Reranker to improve accuracy, and uses a Postgres database to maintain conversation history (memory). It then uses Google Gemini to generate a final, context-aware answer.
2. **The Ingestion Pipeline**: This system automates the process of learning new information. It triggers whenever a file is created or updated in your designated Google Drive folder. It intelligently detects the file type (Google Doc or PDF), extracts the text, splits it into manageable chunks, generates embeddings using Gemini, and stores them in your Supabase vector database.
3. **The Cleanup System**: To ensure your knowledge base remains accurate, a scheduled process runs periodically to find and remove data from Supabase that corresponds to files deleted from the Google Drive folder (see the sketch at the end of this section). This prevents the agent from using outdated information.

## How to set up
To get this workflow running, you will need to configure the following:
1. **Credentials**: Connect your accounts in the n8n credential manager for:
   - Google Drive (OAuth2)
   - Supabase (API key)
   - Postgres
   - Google Gemini (API key from Google AI Studio)
   - Cohere (API key)
2. **Google Drive folder**: In the Search files and folders node, replace the placeholder folder ID with the ID of the Google Drive folder you want to monitor.
3. **Database setup**: Ensure your Supabase and Postgres instances are set up with the necessary tables. You'll need a documents table in Supabase for the vectors and a document_metadata table in Postgres.

## How to customize the workflow
This template is a powerful starting point. You can easily customize it by:
- Swapping out the LLM (e.g., use OpenAI or Anthropic instead of Gemini).
- Changing the vector database (e.g., Pinecone, Weaviate).
- Adding more data sources, such as Notion, Slack, or websites.
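Here is a minimal sketch of the cleanup logic, assuming each Supabase row stores the originating Drive file ID in its metadata. The input field names are assumptions; match them to the actual output of the upstream Drive and Supabase nodes:

```javascript
// Hypothetical n8n Code node: diff Drive against the vector store.
// Inputs from earlier nodes (names assumed):
//   $json.driveFileIds  - IDs currently present in the Drive folder
//   $json.storedFileIds - distinct file IDs recorded in Supabase metadata
const driveIds = new Set($json.driveFileIds);

// Any stored ID no longer present in Drive is stale; a downstream
// Supabase node deletes its rows from the documents table.
const staleIds = $json.storedFileIds.filter((id) => !driveIds.has(id));

return staleIds.map((fileId) => ({ json: { fileId } }));
```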
by Jaruphat J.
⚠️ Note: All sensitive credentials should be set via n8n Credentials or environment variables. Do not hardcode API keys in nodes.

## Who’s it for
Marketers, creators, and automation builders who want to generate UGC-style ad images and short videos automatically from a Google Sheet. Ideal for e‑commerce SKUs, agencies, or teams that need many variations quickly.

## What it does (Overview)
This template turns a spreadsheet row into ad images and, optionally, 5–8s videos.
- **Zone 0 — Image-only pipeline (Gemini/OpenRouter)**: Creates an ad image from a product link and prompt, uploads it to Drive, and updates the sheet (no video step).
- **Zone 1 — Create image (Fal nano‑banana) + prepare for video**: Generates an image via Fal.ai, polls status, fetches the URL, then analyzes the image with an LLM to prepare scene prompts.
- **Zone 2 — Generate video (WAN2.2 & Veo3)**: Uses the generated image + structured scene prompts to create short clips, uploads them to Drive, and writes the video URL back to the sheet.

## Requirements
- **Fal.ai API key** (env: FAL_KEY)
- **Google Sheets / Google Drive** OAuth2 credentials
- **OpenAI / Gemini (via OpenRouter)** for image analysis or alternative image generation
- A Google Sheet with columns, e.g.: product | presenter | prompt | img_url | video_url
- Google Drive files set to Anyone with link → Viewer so APIs can fetch them

## How to set up
1. **Credentials**: Add Google Sheets + Google Drive (OAuth2), Fal.ai (Header Auth with Authorization: Key {{$env.FAL_KEY}}), and OpenAI/OpenRouter.
2. **Google Sheet**: Create the columns above. Paste product image Drive links (the workflow converts them to direct links automatically).
3. **Import the workflow**: Use the provided JSON. Confirm node credentials resolve.
4. **Run**: Start with Zone 0 to verify the image-only flow, then test Zone 1 + Zone 2 for the full image→video pipeline.

## Zone 0 — Create Ad Image (Image-only)
This path creates just an image and stops. It reads the Gemini tab in the sheet, generates an image via OpenRouter/Gemini, converts base64 to a file, uploads to Drive, and writes back img_url.

Key nodes:
- **Get Data1 (Google Sheets)** → reads the Gemini tab
- **setImgeURL (Set)** → converts Drive URLs to direct (uc?export=view&id=...)
- **CreateImagebyOpernRouter (Gemini)** → calls google/gemini-2.5-flash-image-preview:free
- **wait20sec (Wait)** → small delay
- **setBase64data (Code)** → splits the data URI into { data, mimeType, fileName }
- **Convert to File** → creates the binary
- **uploadImagetoGdrive (Google Drive)** → uploads the image
- **updateImageURL (Google Sheets)** → writes back img_url

## Zone 1 — Create Image (Fal nano‑banana) + Prepare for Video
Reads product rows, normalizes Drive links, generates an image with Fal nano‑banana, polls until complete, fetches the output image URL, then runs an image analysis (OpenAI Vision) to prepare structured text for the video step.

Key nodes:
- **Get Data (Google Sheets)** → reads the nanoBanana tab
- **Edit Fields (Set)** → converts Drive links to direct (uc?export=view&id=...)
- **Call Fal.ai API (nanoBanana)** → POST https://queue.fal.run/fal-ai/nano-banana/edit
- **Get image status / If / Wait / Get the image** → job polling until complete
- **Analyze image (OpenAI Vision)** → returns a structured description (brand text, colors, type, short description)

## Zone 2 — Generate Video (WAN2.2 & Veo3)
Creates a 5–8s UGC clip using the generated image + structured scene prompt.

Key nodes:
- **Describe Each Scene for Video (AI Agent)** → expands the analysis + user intent into detailed scene sections (Characters, Scene Background, Camera Movement, Movement in Scene, Sound Design)
- **Structured Output Parser2 (Schema)** → enforces a consistent JSON structure
- **Veo3 (HTTP)** → POST /fal-ai/veo3/image-to-video with prompt + image_url
- **Call Fal.ai API (WAN2.2) [Optional]** → POST /fal-ai/wan/v2.2-a14b/image-to-video
- **Wait for the video / Get the video status / Video status / Get the video** → polling loop (see the sketch after this section)
- **HTTP Request (Download File)** → downloads the MP4
- **uploadImagetoGdrive1 (Google Drive)** → uploads the video
- **updateVideoURL (Google Sheets)** → writes back video_url

## Node settings (high‑level)
**Drive Link Parser (Set)**

```
{{ (() => {
  const u = $json.product || '';
  const q = u.match(/[?&]id=([-\w]{25,})/);
  const d = u.match(/\/d\/([-\w]{25,})/);
  const any = u.match(/[-\w]{25,}/);
  const id = q?.[1] || d?.[1] || (any ? any[0] : '');
  return id ? 'https://drive.google.com/uc?export=view&id=' + id : '';
})() }}
```

## How to customize the workflow
- Adjust AI prompts to change the ad style (funny, luxury, cozy, techy).
- Change the video aspect ratio for TikTok/IG/Shorts (9:16, 1:1, 16:9).
- Extend the sheet schema with campaign labels, audiences, hashtags.
- Add distribution (Slack/LINE/Telegram) after the Drive upload.

## Troubleshooting
- **JSON parameter needs to be valid JSON** → Ensure expressions return objects, not strings.
- **403 on images** → Make Drive files public (Viewer) and convert links.
- **Video never completes** → Check status_url, retry with -fast models or off‑peak times.

## Template metadata
- **Uses**: Google Sheets, Google Drive, HTTP Request, Wait/If/Switch, Code, Convert to File, OpenAI/Gemini (optional), Fal.ai models (nano‑banana, WAN2.2, Veo3)
- **Source workflow JSON**: Gemini_NanoBanana_Template.json (node names and connections match)
- Demo assets: Product Image · Product Image - nano Banana · Product Video - Veo3 · Product Video - Wan2.2
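For reference, here is a condensed sketch of the submit-then-poll pattern the Fal.ai nodes in Zones 1 and 2 implement with Wait/If/HTTP nodes. The response fields (`status_url`, `response_url`, the `COMPLETED` status) and the `image_urls`/`prompt` payload are based on Fal's queue API as I understand it; verify them against Fal's current documentation:

```javascript
// Hypothetical single-node sketch of the Zone 1/2 polling loop.
const headers = { Authorization: `Key ${$env.FAL_KEY}` };

// 1. Submit the job to the queue.
const submit = await fetch('https://queue.fal.run/fal-ai/nano-banana/edit', {
  method: 'POST',
  headers: { ...headers, 'Content-Type': 'application/json' },
  body: JSON.stringify({ prompt: $json.prompt, image_urls: [$json.img_url] }),
});
const { status_url, response_url } = await submit.json();

// 2. Poll until the job completes.
let status = 'IN_QUEUE';
while (status !== 'COMPLETED') {
  await new Promise((r) => setTimeout(r, 5000)); // wait 5s between polls
  status = (await (await fetch(status_url, { headers })).json()).status;
}

// 3. Fetch the result, which contains the output image/video URL.
const result = await (await fetch(response_url, { headers })).json();
return [{ json: result }];
```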
by Pawan
## Who's it for?
This template is perfect for educational institutions, coaching centers (like UPSC, GMAT, or specialized technical training), internal corporate knowledge bases, and SaaS companies that need to provide instant, accurate, source-grounded answers based on proprietary documents. It's designed for users who want to leverage Google Gemini's powerful reasoning while ensuring its answers are strictly factual and based only on their verified knowledge repository.

## How it works / What it does
This workflow establishes a Retrieval-Augmented Generation (RAG) pipeline to build a secure, fact-based AI agent. It operates in two main phases:
1. **Knowledge Ingestion**: When a new document (e.g., a PDF, lecture notes, or a policy manual) is uploaded via a form or Google Drive, the Embeddings Google Gemini node converts the content into numerical vectors. These vectors are then stored in a secure MongoDB Atlas Vector Store, creating a private knowledge base.
2. **AI Query & Response**: A user asks a question via Telegram. The AI Agent uses the question to perform a semantic search on the MongoDB Vector Store, retrieving the most relevant, source-specific passages. It then feeds this retrieved context to the Google Gemini Chat Model to generate a precise, factual answer, which is sent back to the user on Telegram.

This process ensures the agent never "hallucinates" or falls back on general internet knowledge, making the responses accurate and trustworthy.

## Requirements
To use this template, you will need the following accounts and credentials:
- n8n account
- Google Gemini API key: for generating vector embeddings and powering the AI Agent.
- MongoDB Atlas cluster: a free-tier cluster is sufficient, configured with a Vector Search index.
- Telegram bot: a bot created via BotFather and a chat ID where the bot will listen for and send messages.
- Google Drive credentials (if using the Google Drive ingestion path).

## How to set up
1. **Set up MongoDB Atlas**: Create a free cluster and a database. Create a Vector Search index on your collection to enable efficient searching (see the sketch at the end of this section).
2. **Configure the ingestion path**: Set up the Webhook trigger for "On form submission" or connect your Google Drive credentials. Configure the Embeddings Google Gemini node with your API key. Connect the MongoDB Atlas Vector Store node with your database credentials, collection name, and index name.
3. **Configure the chat path**: Set up the Telegram Trigger with your bot token to listen for incoming messages. Configure the Google Gemini Chat Model with your API key. Connect the MongoDB Atlas Vector Store 1 node as a tool within the AI Agent, ensuring it points to the same vector store as the ingestion path.
4. **Final step**: Configure the Send a text message node with your **Telegram Bot Token** and **Chat ID**.

## How to customize the workflow
- **Change knowledge source**: Replace the Google Drive nodes with nodes for Notion, SharePoint, Zendesk, or another document source.
- **Change chat platform**: Replace the Telegram nodes with a Slack, Discord, or WhatsApp Cloud trigger and response node.
- **Refine the agent's persona**: Open the AI Agent node and edit the System Instruction to give the bot a specific role (e.g., "You are a senior UPSC coach. Answer questions politely and cite sources.").

## 💡 Example Use Case
A UPSC/JEE/NEET coaching center uploads NCERT summaries and previous-year notes to Google Drive. Students ask questions in the Telegram group, and the bot instantly replies with contextually accurate answers from the uploaded materials. The same agent can generate daily quizzes or concise notes from this curated content automatically.
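For setup step 1, here is a sketch of what the Atlas Vector Search index definition could look like. The `embedding` path and the dimension count are assumptions; the dimension must match the embedding model you use (Gemini's text-embedding-004 outputs 768-dimensional vectors, for example):

```javascript
// Hypothetical Atlas Vector Search index definition (this JSON object
// is pasted into the Atlas UI when creating the index).
const vectorSearchIndex = {
  fields: [
    {
      type: 'vector',
      path: 'embedding',    // the document field your workflow writes vectors to
      numDimensions: 768,   // must equal your embedding model's output size
      similarity: 'cosine', // or 'euclidean' / 'dotProduct'
    },
  ],
};
```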
by Ruben AI
# AI-Powered Flyer & Video Generator with Airtable, Klie.ai, and n8n

## Who is this for?
This template is perfect for e-commerce entrepreneurs, marketers, agencies, and creative teams who want to turn simple product photos and short descriptions into professional flyers or product videos, automatically and at scale. If you want to generate polished marketing assets without relying on designers or editors, this is for you.

## What problem is this workflow solving?
Creating product ads, flyers, or videos usually involves multiple tools and manual steps:
- Collecting and cleaning product photos
- Writing ad copy or descriptions
- Designing flyers or visuals for campaigns
- Producing animations or video ads
- Managing multiple revisions and approvals

This workflow automates the entire pipeline. Upload a raw product image into Airtable, type a quick description, and receive back a flyer or video animation tailored to your brand and context, ready to use for ads, websites, or campaigns.

## What this workflow does
- Uses Airtable as the central interface where you upload raw product photos and enter descriptions
- Processes the content automatically via n8n
- Generates flyers and visuals using OpenAI Image 1
- Produces custom product videos with Google’s VEO3
- Runs through Klie.ai to unify the image + video generation process
- Sends the final creative assets back into Airtable for review and download

## Setup
1. Download the n8n files and connect your Airtable token to n8n.
2. Duplicate the Airtable base and make sure you’re on an Airtable Team plan.
3. Add your API key on the Airtable interface under API setup.
4. Create your agency inside the interface.
5. Start generating concept images and videos instantly.

## How to customize this workflow to your needs
- Edit the prompts to match your brand voice and ad style
- Extend Airtable fields to include more creative parameters (colors, layout, target audience)
- Add approval steps via email, Slack, or Airtable statuses before finalizing
- Integrate with publishing platforms (social media, e-commerce CMS) for auto-posting
- Track generated assets inside Airtable for team collaboration

🎥 Demo Video
by JJ Tham
Stop manually digging through Meta Ads data and spending hours trying to connect the dots. This workflow turns n8n into an AI-powered media buyer that automatically analyzes your ad performance, categorizes your creatives, and delivers insights directly into a Google Sheet.

➡️ Watch the full 4-part setup and tutorial on YouTube: https://youtu.be/hxQshcD3e1Y

## About This 4-Part Automation Series
As a media buyer, I built this system to automate the heavy lifting of analyzing ad data and brainstorming new creative ideas. This template is the first, foundational part of that larger system.
- ✅ **Part 1 (This Template)**: Pulling Ad Data & Getting Quick Insights. Automatically pulls data into a Google Sheet and uses an LLM to categorize ad performance.
- ✅ **Part 2**: Finding the Source Files for the Best Ads. Fetches the image or video files for top-performing ads.
- ✅ **Part 3**: Using AI to Understand Why an Ad Works. Sends your best ads to Google Gemini for structured notes on hooks, transcripts, and visuals.
- ✅ **Part 4**: Getting the AI to Suggest New Creative Ideas. Uses all the insights to generate fresh ad concepts, scripts, and creative briefs.

## What This Template (Part 1) Does
1. **Secure Token Management**: Automatically retrieves and refreshes your Facebook long-term access token.
2. **Fetch Ad Data**: Pulls the last 28 days of ad-level performance data from your Facebook Ads account.
3. **Process & Clean**: Parses the raw data, standardizes key e-commerce metrics (like ROAS), and filters for sales-focused campaigns.
4. **Benchmark Calculation**: Aggregates all data to create an overall performance benchmark, e.g., average Cost Per Purchase (see the sketch at the end of this section).
5. **AI Analysis**: A “Senior Media Buyer” AI persona evaluates each ad against the benchmark and categorizes it as “HELL YES,” “YES,” or “MAYBE,” with justifications.
6. **Output to Google Sheets**: Updates your Google Sheet with both raw performance data and AI-generated insights.

## Who Is It For?
- E-commerce store owners
- Digital marketing agencies
- Facebook Ads media buyers

## How to Set It Up
1. **Credentials**: Connect your Google Gemini and Google Sheets accounts in the respective nodes. The template uses NocoDB for token management; configure the “Getting Long-Term Token” and “Updating Token” nodes, or replace them with your preferred credential storage method.
2. **Update Your IDs**: In the “Getting Data For the Past 28 Days…” HTTP Request node, replace act_XXXXXX in the URL with your Facebook Ad Account ID. In both Google Sheets nodes (“Sending Raw Data…” and “Updating Ad Insights…”), update the Document ID with your target Google Sheet’s ID.
3. **Run the Workflow**: Click “Test workflow” to run your first AI-powered analysis!

## Tools Used
- n8n
- Facebook for Developers
- Google AI Studio (Gemini)
- NocoDB (or any credential database of your choice)
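Here is a minimal sketch of the benchmark step, assuming each incoming item carries `spend` and `purchases` fields parsed from the Meta Insights response (the field names are illustrative, not necessarily the template's own):

```javascript
// Hypothetical n8n Code node: aggregate all ads into one benchmark.
const ads = $input.all().map((item) => item.json);

const totalSpend = ads.reduce((sum, ad) => sum + Number(ad.spend || 0), 0);
const totalPurchases = ads.reduce((sum, ad) => sum + Number(ad.purchases || 0), 0);

// Average Cost Per Purchase across the account: the baseline the AI
// persona compares each individual ad against.
const avgCostPerPurchase =
  totalPurchases > 0 ? totalSpend / totalPurchases : null;

return [{ json: { totalSpend, totalPurchases, avgCostPerPurchase } }];
```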
by Guillaume Duvernay
Move beyond generic AI-generated content and create articles that are high-quality, factually reliable, and aligned with your unique expertise. This template orchestrates a sophisticated "research-first" content creation process. Instead of simply asking an AI to write an article from scratch, it first uses an AI planner to break your topic down into logical sub-questions. It then queries a Lookio assistant, which you've connected to your own trusted knowledge base of uploaded documents, to build a comprehensive research brief. Only then is this fact-checked brief handed to a powerful AI writer to compose the final article, complete with source links. This is the ultimate workflow for scaling expert-level content creation.

## Who is this for?
- **Content marketers & SEO specialists**: Scale the creation of authoritative, expert-level blog posts that are grounded in factual, source-based information.
- **Technical writers & subject matter experts**: Transform your complex internal documentation into accessible public-facing articles, tutorials, and guides.
- **Marketing agencies**: Quickly generate high-quality, well-researched drafts for clients by connecting the workflow to their provided brand and product materials.

## What problem does this solve?
- **Reduces AI "hallucinations"**: By grounding the entire writing process in your own trusted knowledge base, the AI generates content based on facts you provide, not on potentially incorrect information from its general training data.
- **Ensures comprehensive topic coverage**: The initial AI-powered "topic breakdown" step acts like an expert outliner, ensuring the final article is well-structured and covers all key sub-topics.
- **Automates source citation**: The workflow preserves and integrates source URLs from your knowledge base directly into the final article as hyperlinks, boosting credibility and saving you manual effort.
- **Scales expert content creation**: It effectively mimics the workflow of a human expert (outline, research, consolidate, write), but in an automated, scalable, and incredibly fast way.

## How it works
This workflow follows a sophisticated, multi-step process to ensure the highest-quality output:
1. **Decomposition**: You provide an article title and guidelines via the built-in form. An initial AI call then acts as a "planner," breaking the main topic down into an array of 5-8 logical sub-questions.
2. **Fact-based research (RAG)**: The workflow loops through each of these sub-questions and queries your Lookio assistant. This assistant, which you have pre-configured by uploading your own documents, finds the relevant information and source links for each point.
3. **Consolidation**: All the retrieved question-and-answer pairs are compiled into a single, comprehensive research brief (see the sketch at the end of this section).
4. **Final article generation**: This complete, fact-checked brief is handed to a final, powerful AI writer (e.g., GPT-4o). Its instructions are clear: write a high-quality article using only the provided information and integrate the source links as hyperlinks where appropriate.

## Building your own RAG pipeline vs. using Lookio or alternative tools
Building a RAG system natively within n8n offers deep customization, but it requires managing a toolchain for data processing, text chunking, and retrieval optimization. An alternative is to use a managed service like Lookio, which provides RAG functionality through an API. This approach abstracts away the backend infrastructure for document ingestion and querying, trading the granular control of a native build for a reduction in development and maintenance tasks.

## Implementing the template
1. **Set up your Lookio assistant (prerequisite)**: Lookio is a platform for building intelligent assistants that leverage your organization's documents as a dedicated knowledge base. First, sign up at Lookio; you'll get 50 free credits to get started. Upload the documents you want to use as your knowledge base. Create a new assistant, then generate an API key. Copy your Assistant ID and your API key for the next step.
2. **Configure the workflow**: Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes. In the Query Lookio Assistant (HTTP Request) node, paste your Assistant ID in the body and add your Lookio API key for authentication (we recommend using a Bearer Token credential).
3. **Activate the workflow**: Toggle the workflow to "Active" and use the built-in form to generate your first fact-checked article!

## Taking it further
- **Automate publishing**: Connect the final **Article result** node to a **Webflow** or **WordPress** node to automatically create a draft post in your CMS.
- **Generate content in bulk**: Replace the **Form Trigger** with an **Airtable** or **Google Sheet** trigger to automatically generate a whole batch of articles from your content calendar.
- **Customize the writing style**: Tweak the system prompt in the final **New content - Generate the AI output** node to match your brand's specific tone of voice, add SEO keywords, or include specific calls-to-action.
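Here is a minimal sketch of the consolidation step, assuming each loop iteration yields a `question`, `answer`, and optional `sources` array extracted from the Lookio response. These field names are assumptions; inspect the actual HTTP Request output to match them:

```javascript
// Hypothetical n8n Code node: merge per-question research into one brief.
const sections = $input.all().map(({ json }) => {
  const sources = (json.sources ?? [])
    .map((url) => `- ${url}`)
    .join('\n');
  return `## ${json.question}\n\n${json.answer}\n\n${sources}`;
});

// The final AI writer receives this single markdown research brief.
return [{ json: { researchBrief: sections.join('\n\n') } }];
```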