by Ranjan Dailata
This workflow automates brand intelligence analysis across AI-powered search results by combining SE Ranking's AI Search data with structured processing in n8n. It retrieves real AI-generated prompts, answers, and cited sources where a brand appears, then normalizes and consolidates this data into a clean, structured format. The workflow eliminates manual review of AI SERPs and makes it easy to understand how AI search engines describe, reference, and position a brand.

## Who this is for

This workflow is designed for:

- **SEO strategists and growth marketers** analyzing brand visibility in AI-powered search engines
- **Content strategists** identifying how brands are represented in AI answers
- **Competitive intelligence teams** tracking brand mentions and narratives
- **Agencies and consultants** building AI SERP reports for clients
- **Product and brand managers** monitoring AI-driven brand perception

## What problem is this workflow solving?

Traditional SEO tools focus on rankings and keywords but do not capture how AI search engines talk about brands. Key challenges this workflow addresses:

- No visibility into AI-generated prompts and answers mentioning a brand
- Difficulty extracting linked sources and references from AI SERPs
- Manual effort required to normalize and structure AI search responses
- Lack of export-ready datasets for reporting or downstream automation
## What this workflow does

At a high level, this workflow:

1. Accepts a brand name and AI search parameters
2. Fetches real AI search prompts, answers, and citations from SE Ranking
3. Extracts and normalizes:
   - Prompts with answers
   - Supporting reference links
   - Raw AI SERP JSON
4. Merges all outputs into a unified structured dataset
5. Exports the final result as structured JSON ready for analysis, reporting, or storage

This enables brand-level AI SERP intelligence in a repeatable and automated way.

## Setup

### Prerequisites

- n8n (self-hosted or cloud)
- Active SE Ranking API access
- HTTP Header authentication configured in n8n
- Local or server file system access for JSON export

### Setup steps

1. If you are new to SE Ranking, please sign up on seranking.com.
2. **Configure credentials** -- SE Ranking uses HTTP Header authentication. Set the header value to the word `Token`, followed by a space and your SE Ranking API key.
3. **Set input parameters:**
   - Brand name
   - AI engine (e.g., Perplexity)
   - Source/region
   - Sorting preferences
   - Result limits
4. **Configure output** -- update the file path in the "Write File to Disk" node and ensure write permissions are available.
5. **Execute workflow** -- click Execute Workflow; the generated brand intelligence is saved as structured JSON.

## How to customize this workflow

You can easily adapt this workflow to your needs:

- **Change brand focus** -- modify the brand input to analyze competitors or product names
- **Switch AI engines** -- compare brand narratives across different AI search engines
- **Add AI enrichment** -- insert OpenAI or Gemini nodes to summarize brand sentiment or themes
- **Classification & tagging** -- categorize prompts into awareness, comparison, pricing, reviews, etc.
- **Replace file export** -- send results to databases, Google Sheets, dashboards, webhooks, or APIs
- **Scale for monitoring** -- schedule runs to track brand perception changes over time

## Summary

This workflow delivers true AI SERP brand intelligence by combining SE Ranking's AI Search data with structured extraction and automation in n8n. It transforms opaque AI-generated brand mentions into actionable, exportable insights, enabling SEO, content, and brand teams to stay ahead in the era of AI-first search.
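As a sketch, the credential format described in the setup steps looks like this in code. The header name and the `SERANKING_API_KEY` environment variable are placeholders for illustration; the `Token <key>` value format is the part the template specifies.

```javascript
// Sketch of the SE Ranking HTTP Header credential: the value is the word
// "Token", a space, then your API key. The env var name is a placeholder.
const API_KEY = process.env.SERANKING_API_KEY ?? "your-api-key";

const headers = {
  Authorization: `Token ${API_KEY}`, // e.g. "Token abc123..."
};
```

In n8n you enter this once as an HTTP Header Auth credential rather than building it in code.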
by Robin Geuens
## Overview

Turn your keyword research into a clear, fact-based content outline with this workflow. It splits your keyword into 5-6 subtopics, makes research questions for those subtopics, and uses Tavily to pull answers from real search results. This way your outline is based on real data, not just AI training data, so you can create accurate and reliable content.

## How it works

1. Enter a keyword in the form to start the workflow.
2. The OpenAI node splits the keyword into 5-6 research subtopics and makes a research question for each one. These questions will be used to enrich the outline later on.
3. We split the research questions into separate items so we can process them one by one.
4. Each research question is sent to Tavily, which searches the web for answers and returns a short summary.
5. Next, we add the answers to our JSON sections.
6. We take all the separate items and join them into one list again.
7. The JSON outline is converted into Markdown using a Code node. The code takes the JSON headers, turns them into Markdown headings (level 2), and puts the answers underneath.

## Setup steps

1. Get an OpenAI API key and set up your credentials inside n8n.
2. Sign up for a Tavily account and get an API key -- you can use a free account for testing.
3. Install the Tavily community node. If you don't want to use a community node, you can call Tavily directly using an HTTP node; check their API reference for which endpoints to call.
4. Run the workflow and enter the keyword you want to target in the form.
5. Adjust the workflow to decide what to do with the Markdown outline.

## Requirements

- An OpenAI API key
- A Tavily account
- The Tavily community node installed
- (Optional) If you don't want to use the Tavily community node, use a regular HTTP node and call the API directly.
  Check their API reference for which endpoints to call.

## Workflow customizations

- Instead of using a form to enter your keyword, you can keep all your research in a Google Doc and go through it row by row.
- You can add another AI node at the end to turn the outline into a full article.
- You can put the outline in a Google Doc and send it to a writer using the Google Docs node and the Gmail node.
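The JSON-to-Markdown Code node described in "How it works" can be sketched like this. The `header` and `answer` field names are assumptions about the JSON shape; the template's actual Code node may differ.

```javascript
// Sketch of the Code node: turns each section header into a level-2
// Markdown heading with the Tavily answer underneath.
function toMarkdown(sections) {
  return sections
    .map(({ header, answer }) => `## ${header}\n\n${answer}`)
    .join("\n\n");
}

const outline = toMarkdown([
  { header: "What is keyword intent?", answer: "Tavily summary goes here." },
]);
```

In n8n, `sections` would come from the merged items of the previous node, and the resulting string is what you pass to Google Docs or a writer.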
by Eugene
## Who is this for

- SEO agencies doing competitor analysis for clients
- Content teams planning content strategies
- Marketing teams tracking competitive performance
- SEO professionals measuring AI search visibility

## What this workflow does

Automatically discover competitors, analyze keyword gaps, identify quick wins, and track your visibility across AI search engines (ChatGPT, Perplexity, Gemini, AI Overview).

## What you'll get

- Domain performance baseline (keywords, traffic, traffic value)
- Top 5 competitors discovered by keyword overlap
- Keyword gap analysis with up to 500 filtered opportunities
- Lost keywords you recently ranked for (quick wins)
- Topic expansion from related keyword research
- AI visibility metrics across 4 search engines
- Priority-scored opportunities (HIGH/MEDIUM/LOW)
- Actionable recommendations per keyword
- Automated export to Google Sheets

## How it works

1. Fetches your domain's worldwide performance metrics
2. Discovers the top 5 organic competitors automatically
3. Analyzes keyword gaps for each competitor
4. Identifies keywords you recently lost rankings for
5. Expands top opportunities into topic clusters
6. Tracks AI search visibility (ChatGPT, Perplexity, Gemini, AI Overview)
7. Scores and prioritizes all opportunities
8. Exports structured data to Google Sheets

## Requirements

- SE Ranking account with API access (Get one here)
- SE Ranking node v1.3.5+ installed (Install from npm)
- Google Sheets account (optional)

## Setup

1. Install the SE Ranking community node v1.3.5+
2. Add your SE Ranking API credentials
3. Update the Configuration node with:
   - Your domain and brand name
   - Target country code (us, uk, de, etc.)
   - Minimum search volume threshold
   - Maximum keyword difficulty
   - Known competitors for AI comparison (optional)
4. Connect Google Sheets credentials (optional)
5. Select or create a spreadsheet for export (optional)

## Customization

- Adjust min_volume and max_difficulty for more/fewer opportunities
- Change source for different countries (us, uk, de, fr, etc.)
- Modify competitor_count to analyze more or fewer competitors
- Add known_competitors for AI Leaderboard comparison
- Filter the ai_engines list to track specific AI platforms only
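The HIGH/MEDIUM/LOW priority scoring mentioned above can be sketched as a simple threshold function. The exact thresholds are assumptions for illustration, not the template's real weights; they map onto the min_volume and max_difficulty settings you configure.

```javascript
// Sketch of priority scoring: higher volume and lower difficulty rank higher.
// Thresholds are illustrative placeholders, not the template's actual values.
function scoreOpportunity({ volume, difficulty }) {
  if (volume >= 1000 && difficulty <= 40) return "HIGH";
  if (volume >= 300 && difficulty <= 60) return "MEDIUM";
  return "LOW";
}
```

Adjusting min_volume and max_difficulty in the Configuration node shifts which keywords even enter this scoring step.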
by Niksa Perovic
## Who is this for

Organizations using Keephub for form management and task tracking. Perfect for HR teams handling employee requests (leave, equipment, onboarding), retail managers reviewing store submissions, or any workflow where form responses need manager follow-up.

## What it does

Transforms form submissions into intelligent, contextual tasks for managers, automatically. When an employee submits a form, AI analyzes the data and generates a custom follow-up task with relevant fields, assigned directly to their supervisor. Zero manual configuration per form.

## 📋 How it works

1. **Capture** — Webhook receives the new form submission from Keephub
2. **Identify** — Fetches the submitter's profile and resolves their manager (parent org node)
3. **Analyze** — AI reads the form data plus schema and designs a contextual task
4. **Validate** — Checks the AI output for required fields and malformed data
5. **Create** — Posts the task to Keephub with proper targeting, dates, and permissions
6. **Link** — Includes a URL back to the original submission for manager reference

## How to set up

📦 Prerequisites:

- n8n-nodes-keephub verified node (v1.5+) installed
- Keephub account with API access (Login + Bearer tokens)
- OpenAI API key (GPT-4.1 recommended)
- Knowledge of Keephub orgchart relations and groups

Setup steps:

1. Install n8n-nodes-keephub from Community Nodes in n8n settings
2. Create a Keephub Login credential → connect it to the 4 blue nodes (Get Submitter, Get Parent Node, Get Root Node, Get Form Schema)
3. Create a Keephub Bearer credential → connect it to the "Create Task" node
4. Create an OpenAI credential → connect it to the "Design Task with AI" node
5. Open the ⚙️ Config node → set groupId to target specific user groups (find it in the Keephub admin)
6. Activate the workflow and copy the webhook URL
7. Configure the webhook in Keephub form settings
8. Test with a form submission and the n8n Webhook trigger
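The Validate step above can be sketched as a small check on the AI-designed task before it is posted to Keephub. The field names here are assumptions for illustration; the workflow's actual validation node may check a different schema.

```javascript
// Sketch of the Validate step: reject AI output that is missing required
// fields before the Create Task call. Field names are illustrative.
function validateTask(task) {
  const required = ["title", "description", "dueDate", "assigneeId"];
  const missing = required.filter((field) => !task?.[field]);
  return { valid: missing.length === 0, missing };
}
```

Validating before the API call keeps malformed LLM output from ever reaching Keephub, which is cheaper than handling a failed Create Task response.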
by Guillaume Duvernay
This template provides a straightforward technique to measure and raise awareness about the environmental impact of your AI automations. By adding a simple calculation step to your workflow, you can estimate the carbon footprint (in grams of CO₂ equivalent) generated by each call to a Large Language Model.

Based on the open methodology from Ecologits.ai, this workflow empowers you to build more responsible AI applications. You can use the calculated footprint to inform your users, track your organization's impact, or simply be more mindful of the resources your workflows consume.

## Who is this for?

- **Environmentally-conscious developers:** Build AI-powered applications with an awareness of their ecological impact.
- **Businesses and organizations:** Track and report on the carbon footprint of your AI usage as part of your sustainability goals.
- **Any n8n user using AI:** A simple and powerful snippet that can be added to almost any AI workflow to make its invisible environmental costs visible.
- **Educators and advocates:** Use this as a practical tool to demonstrate and discuss the real-world impact of AI technologies.

## What problem does this solve?

- **Makes the abstract tangible:** The environmental cost of a single AI call is often overlooked. This workflow translates it into a concrete, measurable number (grams of CO₂e).
- **Promotes responsible AI development:** Encourages builders to consider the efficiency of their prompts and models by showing the direct impact of the generated output.
- **Provides a standardized starting point:** Offers a simple, transparent, and extensible method for carbon accounting in your AI workflows, based on a credible, open-source methodology.
- **Facilitates transparent communication:** Gives you the data needed to transparently communicate the impact of your AI features to stakeholders and users.

## How it works

This template demonstrates a simple calculation snippet that you can adapt and add to your own workflows.
1. **Set conversion factor:** A dedicated Conversion factor node at the beginning of the workflow holds the gCO₂e-per-token value. This makes it easy to configure.
2. **AI generates output:** An AI node (in this example, a Basic LLM Chain) runs and produces a text output.
3. **Estimate token count:** The Calculate gCO₂e node takes the character length of the AI's text output and divides it by 4. This provides a reasonable estimate of the number of tokens generated.
4. **Calculate carbon footprint:** The estimated token count is then multiplied by the conversion factor defined in the first node. The result is the carbon footprint for that single AI call.

## Setup

1. **Set your conversion factor (critical step):** The default factor (0.0612) is for GPT-4o hosted in the US. Visit ecologits.ai/latest to find the specific conversion factor for your AI model and server region, then replace the default value in the Conversion factor node.
2. **Integrate the snippet into your workflow:** Copy the Conversion factor and Calculate gCO₂e nodes from this template. Place the Conversion factor node near the start of your workflow (before your AI node) and the Calculate gCO₂e node after your AI node.
3. **Link your AI output:** Click on the Calculate gCO₂e node. In the AI output field, replace the expression with the output from your AI node (e.g., `{{ $('My OpenAI Node').item.json.choices[0].message.content }}`). The carbon calculation will now work with your data.
4. **Activate your workflow.** The carbon footprint will now be calculated with each execution.

## Taking it further

- **Improve accuracy with token counts:** If your AI node (like the native OpenAI node) directly provides the number of output tokens (e.g., completion_tokens), use that number instead of estimating from the text length. This will give you a more precise calculation.
- **Calculate total workflow footprint:** If you have multiple AI nodes, add a calculation step after each one.
  Then, add a final **Set** node at the end of your workflow to sum all the individual gCO₂e values.
- **Display the impact:** Add the final AI output gCO₂e value to your workflow's results, whether it's a Slack message, an email, or a custom dashboard, to keep the environmental impact top-of-mind.
- **A note on AI agents:** This estimation method is difficult to apply accurately to AI Agents at this time, as the token usage of their intermediary "thinking" steps is not yet exposed in the workflow data.
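The whole calculation described above fits in a few lines: estimate tokens as characters divided by 4, then multiply by the conversion factor. This is a sketch of the Calculate gCO₂e node's logic, using the template's default factor for GPT-4o in the US.

```javascript
// Sketch of the Calculate gCO2e node: chars / 4 approximates the token
// count, which is then multiplied by the per-token conversion factor.
const GCO2E_PER_TOKEN = 0.0612; // default for GPT-4o hosted in the US

function estimateFootprint(aiOutput) {
  const estimatedTokens = aiOutput.length / 4;
  return estimatedTokens * GCO2E_PER_TOKEN;
}

// A 400-character answer is roughly 100 tokens, so about 6.12 gCO2e.
const footprint = estimateFootprint("x".repeat(400));
```

If your AI node exposes completion_tokens, replace `aiOutput.length / 4` with that number for a more precise figure.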
by Eugene
Track who wins AI search and find the topics you're missing with SE Ranking.

## Who is this for

- SEO teams comparing AI search visibility against competitors
- Content strategists planning editorial calendars around AI search gaps
- Marketing managers reporting share of voice across ChatGPT, Perplexity, and Gemini

## What this workflow does

See who's winning AI search across all major LLMs, then find the organic keyword gaps and unanswered questions your competitors are capturing that you're not — and save everything to Google Sheets.

## What you'll get

- AI search leaderboard with share of voice across ChatGPT, Perplexity, Gemini, AI Overviews, and AI Mode
- Organic keyword gaps where competitors rank but you don't, sorted by volume
- Question keywords your audience asks around your seed topic — ready to write against
- Prompts where you already appear in AI search results
- SEO topics where your competitor shows up in AI answers but you don't

## How it works

1. Add your domain and 2 competitors in the form — the workflow looks up the AI search leaderboard across all 5 LLM engines and shows who's winning
2. Pulls organic keyword gaps against both competitors, sorted by volume and filtered for English keywords
3. Finds question keywords your audience asks around your seed topic, filtered by informational intent
4. Gets the top prompts where you already show up in AI search results
5. Pulls your competitors' top SEO prompts and compares them against your footprint to find where they appear but you don't
6. Saves all five data sets to separate tabs in Google Sheets

## Requirements

- SE Ranking community node installed
- SE Ranking API token (Get one here)
- Google Sheets account (optional)

## Setup

1. Install the SE Ranking community node
2. Add your SE Ranking API credentials
3. Connect your Google Sheets account and set a spreadsheet URL in each export node
4. Activate the workflow — n8n generates a unique form URL you can share or embed
5. Open the form, fill in your domain and competitors, and the workflow runs automatically

## Customization

- Change volume_min in the Configuration node to raise or lower the keyword volume threshold
- Change source in the Configuration node for a different regional database (us, uk, de, fr, es, etc.)
- Swap seed_topic in the Configuration node to any keyword your business targets to get a fresh set of questions
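"Share of voice" in the leaderboard above is, conceptually, each domain's appearances as a fraction of all appearances across the tracked prompts. This illustrative sketch shows that idea only; it is not SE Ranking's actual formula, and the domain names are placeholders.

```javascript
// Illustrative share-of-voice calculation: appearances per domain divided
// by total appearances. Not SE Ranking's exact methodology.
function shareOfVoice(appearances) {
  const total = Object.values(appearances).reduce((a, b) => a + b, 0);
  return Object.fromEntries(
    Object.entries(appearances).map(([domain, n]) => [domain, n / total])
  );
}

const sov = shareOfVoice({ "yourdomain.com": 3, "competitor.com": 1 });
```

A domain at 0.75 share of voice appears in three out of every four brand-relevant AI answers in the sample.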
by Devon Toh
# Email List Personalization - Icebreaker & Subject Line Generator

Reads an enriched lead list from Google Sheets, generates a personalized icebreaker + subject line for each lead using OpenAI, and writes results back to the same sheet -- ready for your email sequencer.

## Who is this for?

SDRs, founders, and agency owners running cold outbound who want personalized first lines without spending 2-3 minutes per lead on manual research. Sits between your Apollo / Sales Nav export and Instantly / Smartlead / Lemlist.

## What problem does this solve?

Generic cold emails get ignored, and writing unique icebreakers for 200+ leads/day is not realistic by hand. This workflow does it at ~$0.002/lead with GPT-4.1-mini -- spartan, human-sounding copy that references real prospect details.

## How it works

1. **Read Lead Sheets** -- pulls leads from Google Sheets (enriched with LinkedIn data)
2. **Filter Empty Rows** -- keeps only rows where icebreakers or subjectLine is empty (safe to re-run)
3. **Limit To 200 Leads** -- batches to 200 leads per run
4. **Process One-By-One** -- with an IF safety check before each API call
5. **Generates Icebreaker** -- returns a JSON response, with few-shot examples locking the tone:
   - icebreaker -- spartan one-liner referencing their company/role/industry
   - subjectLine -- personalized with name + relevant hook
   - shortenedCompanyName -- strips "Agency", "Inc.", "LLC" for natural copy
   - verdict -- true for real people, false for company pages (filter before sending)
6. **Write Result To Sheet** -- writes back to the same row via the row_number match key
7. **Rate Limit Delay** -- pauses between calls to avoid throttling

## Setup

1. **Google Sheets** -- Connect OAuth2 in both sheet nodes. Point to your lead spreadsheet.
2. **OpenAI** -- Connect your API key. Default: GPT-4.1-mini. Swap to GPT-4o for premium campaigns.
3. **Sheet columns needed:** first_name, last_name, email, job_title, company, linkedin_industry, location, summary, linkedin_description, linkedin_specialities, linkedin_company_employee_count, linkedin_founded_year, icebreakers, subjectLine, row_number
4. **Customize the prompt** -- Swap the few-shot examples in the OpenAI node to match your voice and offer.
5. **Test first** -- Set Limit to 5, run manually, review the output, then scale.

## Customization tips

- **Trigger** -- Replace the Manual Trigger with a Schedule or Google Drive Trigger for automation
- **Batch size** -- Adjust the Limit node to match your daily send volume
- **Model** -- GPT-4.1-mini ($0.002/lead) vs GPT-4o ($0.01/lead)

Built by Devon Toh
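The shortenedCompanyName behavior described above (stripping "Agency", "Inc.", "LLC" so copy reads naturally) can be sketched like this. In the workflow the LLM does this; the function below is just an illustration of the rule, and the suffix list is an assumption.

```javascript
// Sketch of shortenedCompanyName: drop trailing legal/descriptive suffixes
// so the name reads naturally in email copy. Suffix list is illustrative.
function shortenCompanyName(name) {
  return name.replace(/[,\s]*(Agency|Inc\.?|LLC)\s*$/i, "").trim();
}
```

"Acme Inc." becomes "Acme", so a line like "saw what Acme is doing" sounds human rather than scraped.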
by Eugene
## Who is this for

- Marketing teams tracking AI SEO performance
- Content strategists planning editorial calendars
- SEO teams doing competitive intelligence

## What this workflow does

Identify content opportunities by analyzing where competitors outrank you in AI search and traditional SEO.

## What you'll get

- AI visibility gaps across ChatGPT, Perplexity, and Gemini
- Keyword gaps with search volume and difficulty
- Competitor backlink authority metrics
- Prioritized opportunities with HIGH/MEDIUM/LOW scoring
- Actionable recommendations for each gap

## How it works

1. Fetches AI search visibility for your domain and competitor
2. Compares metrics across ChatGPT, Perplexity, and Gemini
3. Extracts the competitor's top-performing prompts and keywords
4. Analyzes competitor backlink authority
5. Calculates opportunity scores and prioritizes gaps
6. Exports ranked opportunities to Google Sheets

## Requirements

- Self-hosted n8n instance
- SE Ranking community node installed
- SE Ranking API token (Get one here)
- Google Sheets account (optional)

## Setup

1. Install the SE Ranking community node
2. Add your SE Ranking API credentials
3. Update the Configuration node with your domain and competitor
4. Connect Google Sheets for export (optional)

## Customization

- Change source for different regions (us, uk, de, fr, etc.)
- Adjust volume/difficulty thresholds in the Code nodes
- Modify priority scoring weights
by Victor Manuel Lagunas Franco
Generate complete illustrated stories using AI. This workflow creates engaging narratives with custom DALL-E 3 images for each scene and saves everything to Firebase.

## How it works

1. User fills out a form with story topic, language, art style, and number of scenes
2. GPT-4 generates a complete story with scenes, characters, and image prompts
3. DALL-E 3 creates unique illustrations for each scene
4. Images are uploaded to Firebase Storage for permanent hosting
5. Complete story data is saved to Firestore
6. Returns JSON with the full story and image URLs

## Set up steps (10-15 minutes)

1. Add your OpenAI API credentials
2. Add Google Service Account credentials for Firebase
3. Update 3 variables in the Code nodes: OPENAI_API_KEY, FIREBASE_BUCKET, FIREBASE_PROJECT_ID
4. Activate and test with the form

## Features

- 12 languages supported
- 10 art styles to choose from
- 1-12 scenes per story
- Automatic image hosting on Firebase
- Full story saved to Firestore database
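As a sketch of the hosting step, this shows the two Firebase variables from the setup steps and the general shape of a Firebase Storage download URL. The bucket and project values are placeholders, and the URL form (which typically also carries an access token for non-public objects) is an assumption about Firebase's conventions, not part of this template.

```javascript
// Placeholders for the Code-node variables named in the setup steps.
const FIREBASE_BUCKET = "my-project.appspot.com";
const FIREBASE_PROJECT_ID = "my-project";

// Typical Firebase Storage download-URL shape (token query param omitted).
function storageUrl(bucket, path) {
  return `https://firebasestorage.googleapis.com/v0/b/${bucket}/o/${encodeURIComponent(path)}?alt=media`;
}

const imageUrl = storageUrl(FIREBASE_BUCKET, "stories/scene-1.png");
```

These are the URLs the workflow returns in its final JSON alongside the story text.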
by Biznova
# Transform Product Photos into Marketing Images with AI

Made by Biznova | TikTok

## 🎯 Who's it for

E-commerce sellers, social media marketers, small business owners, and content creators who need professional product advertising images without expensive photoshoots or graphic designers.

## ✨ What it does

This workflow automatically transforms simple product photos into polished, professional marketing images featuring:

- Professional models showcasing your product
- Aesthetically pleasing, contextual backgrounds
- Professional lighting and composition
- Lifestyle scenes that help customers envision using the product
- Commercial-ready quality suitable for ads and e-commerce

## 🚀 How it works

1. Upload your basic product photo via the web form
2. AI analyzes your product and generates a complete marketing scene
3. Download your professional marketing image automatically
4. Use it immediately in ads, social media, or product listings

## ⚙️ Setup Requirements

1. **OpenRouter account:** Create a free account at openrouter.ai
2. **API key:** Generate your API key from the OpenRouter dashboard
3. **Add credentials:** Configure the OpenRouter API credentials in the "AI Marketing Image Generator" node
4. **Test:** Upload a sample product image to test the workflow

## 🎨 How to customize

- **Edit the prompt** in the "AI Marketing Image Generator" node to match your brand style
- **Adjust file formats** in the upload form (currently accepts JPG/PNG)
- **Modify the response message** in the final form node
- **Add your branding** by including brand colors or style preferences in the prompt

## 💡 Pro Tips

- Use high-resolution product images for best results
- Test different prompt variations to find your ideal style
- Save successful prompts for consistent brand imagery
- Batch process multiple products by running the workflow multiple times

## 🔧 Quick Setup Guide

Prerequisites:

- OpenRouter account (Sign up here)
- API key from the OpenRouter dashboard

Configuration steps:

1. Click on the "AI Marketing Image Generator" node
2. Add your OpenRouter API credentials
3. Save and activate the workflow
4. Test with a product image

Customization -- to change the image style:

- Edit the prompt in the "AI Marketing Image Generator" node
- Add specific instructions about colors, mood, or setting
- Include brand-specific requirements

Example custom prompt additions:

- "Use a minimalist white background"
- "Feature a modern, urban setting"
- "Include warm, natural lighting"
- "Show the product in a luxury lifestyle context"
by PDF Vector
Legal professionals spend countless hours manually checking citations and building citation indexes for briefs, memoranda, and legal opinions. This workflow automates the extraction, validation, and analysis of legal citations from any legal document, including scanned court documents, photographed case files, and image-based legal materials (PDFs, JPGs, PNGs).

**Target Audience:** Attorneys, paralegals, legal researchers, judicial clerks, law students, and legal writing professionals who need to extract, validate, and manage legal citations efficiently across multiple jurisdictions.

**Problem Solved:** Manual citation checking is extremely time-consuming and error-prone. Legal professionals struggle to ensure citation accuracy, verify that case law is still good law, and build comprehensive citation indexes. This template automates the entire citation management process while ensuring compliance with citation standards like Bluebook format.

## Setup Instructions

1. Configure Google Drive credentials for secure legal document access
2. Install the PDF Vector community node from the n8n marketplace
3. Configure PDF Vector API credentials
4. Set up connections to legal databases (Westlaw, LexisNexis, if available)
5. Configure jurisdiction-specific citation rules
6. Set up validation preferences and citation format standards
7. Configure citation reporting and export formats

## Key Features

- Automatic retrieval of legal documents from Google Drive
- OCR support for handwritten annotations and scanned legal documents
- Comprehensive extraction of case law, statutes, regulations, and academic citations
- Bluebook citation format validation and standardization
- Automated Shepardizing to verify cases are still good law
- Pinpoint citation detection and parenthetical extraction
- Citation network analysis showing case relationships
- Support for federal, state, and international law references

## Customization Options

- Set jurisdiction-specific citation rules and formats
- Configure automated alerts for superseded statutes or overruled cases
- Customize citation validation criteria and standards
- Set up integration with legal research platforms (Westlaw, LexisNexis)
- Configure export formats for different legal document types
- Add support for specialty legal domains (tax law, patent law, etc.)
- Set up collaborative citation checking for legal teams

## Implementation Details

The workflow uses advanced legal domain knowledge to identify and extract citations in various formats across multiple jurisdictions. It processes both digital and scanned documents, validates citations against legal standards, and builds comprehensive citation networks. The system automatically checks citation accuracy and provides detailed reports for legal document preparation.

**Note:** This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
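To give a concrete feel for what "extracting citations" means, here is a deliberately minimal illustration that matches only U.S. reporter citations of the form "410 U.S. 113 (1973)". This is not the template's actual extraction logic, which handles far more formats, jurisdictions, and Bluebook rules than a single regex can.

```javascript
// Minimal illustration: match citations like "410 U.S. 113 (1973)".
// Real Bluebook-aware extraction covers many more reporters and formats.
const CASE_CITATION = /\b(\d{1,4})\s+(U\.S\.|F\.\d?d?|S\. Ct\.)\s+(\d{1,5})(?:\s+\((\d{4})\))?/g;

const text = "See Roe v. Wade, 410 U.S. 113 (1973).";
const matches = [...text.matchAll(CASE_CITATION)].map((m) => m[0]);
```

Each match carries the volume, reporter, page, and year groups, which is the raw material for the validation and network-analysis steps described above.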
by Joel Cantero
# 🚀 Try It Out! YouTube Transcript API Extractor (Any Public Video)

Extracts a clean transcript from a videoId using youtube-transcript.io.

## 🎯 Use Cases

- AI summaries, sentiment analysis, keyword extraction
- Internal indexing/SEO
- Content pipelines (blog/newsletter)
- Batch transcript processing

## 🔄 How It Works (5 Steps)

1. 📥 **Input:** youtubeVideoId, apiToken
2. 🌐 **API:** POST to youtube-transcript.io
3. 🧩 **Parse:** Normalizes the response format
4. 🧹 **Clean:** Normalizes text and whitespace
5. ✅ **Output:** Transcript + metrics (wordCount/charCount)

## 🚀 How to Use

Payload:

```json
{ "youtubeVideoId": "xObjAdhDxBE", "apiToken": "xxxxxxxxxx" }
```

⚙️ **Setup:** This sub-workflow is intended to be called from another workflow (Execute Workflow Trigger).
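The Clean and Output steps above can be sketched as a single function: collapse whitespace, then compute the wordCount/charCount metrics. The field names match the metrics named in the listing; the exact cleaning rules in the workflow's Code node may differ.

```javascript
// Sketch of the Clean + Output steps: normalize whitespace and attach
// the wordCount/charCount metrics to the transcript.
function cleanTranscript(raw) {
  const transcript = raw.replace(/\s+/g, " ").trim();
  return {
    transcript,
    wordCount: transcript === "" ? 0 : transcript.split(" ").length,
    charCount: transcript.length,
  };
}
```

The calling workflow receives this object back from the Execute Workflow Trigger and can route the transcript to summarization or indexing.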