by Mychel Garzon
Stop guessing if text came from ChatGPT. Let three AI agents argue about it using forensic data. Paste any text and get a verdict on whether it was written by a human, AI, or a hybrid mix. Instead of trusting one black-box score, this workflow runs your text through statistical analysis and a three-agent debate where each agent challenges the others using hard numbers.

This is not another "detect AI with AI" template. The workflow measures six forensic markers first, then makes three separate agents argue about what those numbers mean. You see the raw data, the debate, and the final verdict with confidence scores.

How it works

The workflow runs in five stages:

1. Extract forensic metrics: A code node measures burstiness (sentence length variation), type-token ratio (vocabulary diversity), hapax rate (words appearing once), repetition score (repeated phrases), transition density (filler words like "furthermore"), and AI fingerprints (100+ known LLM phrases stored in a data table). Short texts under 150 words get recalibrated because the metrics are less reliable.
2. Agent 1 - The Scanner: Reads the text cold with zero metrics. Gives a gut impression (human/AI/hybrid) based purely on instinct. Acts like an editor who has read thousands of manuscripts.
3. Agent 2 - Forensic Analyst: Gets the text, all metrics, and Agent 1's verdict. Writes a data-driven report that must cite specific numbers. Either agrees or disagrees with Agent 1 and explains why, using the forensic evidence.
4. Agent 3 - Devil's Advocate: Gets everything above and argues the opposite of whatever Agent 2 concluded. If Agent 2 said AI, Agent 3 must argue human. Finds holes in the logic and metrics that got ignored.
5. Weighted verdict: A code node scores all three agents (35% Analyst, 15% Scanner, 15% Devil's Advocate, 35% raw metrics) and classifies the text as human (score under 0.35), AI (score over 0.60), or AI-augmented (in between). Confidence is calculated separately, so you get verdicts like "AI with 67% confidence."
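Two of the six forensic markers can be sketched in plain JavaScript. This is a minimal sketch using standard definitions; the Code node's exact formulas may differ.

```javascript
// Burstiness: coefficient of variation of sentence lengths.
// Uniform sentence lengths (low burstiness) are a typical AI tell.
function burstiness(text) {
  const sentences = text.split(/[.!?]+/).map(s => s.trim()).filter(Boolean);
  const lengths = sentences.map(s => s.split(/\s+/).length);
  if (lengths.length < 2) return 0;
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  const variance = lengths.reduce((a, b) => a + (b - mean) ** 2, 0) / lengths.length;
  return Math.sqrt(variance) / mean;
}

// Type-token ratio: unique words / total words (vocabulary diversity).
function typeTokenRatio(text) {
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  return words.length === 0 ? 0 : new Set(words).size / words.length;
}
```

Hapax rate, repetition, and transition density follow the same pattern: simple counts over tokens, compared against thresholds.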
Chat output format

The chat response shows:

- **Verdict badge:** 🙎🏻 Human-Written, 🤖 AI-Generated, or 🦾 AI-Augmented
- **Confidence bar:** Visual bar (██████████ 85%) showing how certain the verdict is
- **Metrics table:** All six forensic markers with 🟥 AI or 🟩 Human flags
- **Agent debate:** Three verdicts with reasoning. Agent 1's gut check, Agent 2's forensic report, Agent 3's counter-argument. Each shows classification and confidence percentage.

Example output for AI text:

🤖 Verdict: AI-Generated
Confidence: ████████░░ 87%
📊 Stylometric Metrics:
Burstiness: 0.18 🟥 AI
Vocabulary Diversity: 0.36 🟥 AI
Hapax Rate: 0.32 🟥 AI
Repetition: 0.21 🟥 AI
Transition Density: 0.024 🟥 AI
🔎 Agent 1 (Gut Check): AI (90%) "Monotonous rhythm, corporate vocabulary, zero personality"
🔬 Agent 2 (Data): AI (95%) "Five of six metrics flag AI. Burstiness of 0.18 well below human threshold..."
😈 Agent 3 (Critic): AI-AUGMENTED (65%) "Could be human technical writing. Transition density alone not conclusive..."

Self-updating fingerprint database

A separate workflow branch runs weekly to keep the AI phrase list current:

1. Check existing words: Reads all fingerprint phrases from the data table
2. Find new AI tells: Asks an LLM what phrases modern models currently overuse
3. Filter duplicates: Removes words already in the database
4. Add to table: Stores new phrases for future detection

Requires: A data table (Google Sheets, Airtable, or n8n Data Table) to store fingerprint words. The workflow includes a starter list of 100+ phrases like "delve into," "it's worth noting," "as of my last update."

LLM writing patterns shift fast. What worked for GPT-3 detection does not work for GPT-4. This keeps the detector current without manual updates.

Key benefits

- **Three classifications instead of binary.** Human, AI, or AI-augmented. Most real content is hybrid.
- **You see the reasoning.** Full agent debate included. When verdicts are borderline, you can read which argument won.
- **Transparent metrics.** Raw numbers exposed with red/green flags. No hidden scoring.
- **Self-updating detection.** Weekly workflow finds new AI phrase patterns as models evolve.
- **Error resilient.** If one agent fails, the workflow continues and redistributes weights.

Who this is for

- Content teams verifying contractor submissions are not AI-generated
- Educators checking student essays for AI assistance
- Publishers screening submissions to maintain editorial standards
- SEO teams ensuring content meets Google's helpful content guidelines
- Researchers analyzing hybrid human-AI writing patterns

Setup

1. Add API credentials for at least one LLM provider (Groq, OpenAI, Gemini, or Anthropic)
2. Create a data table for AI fingerprint phrases or use n8n's built-in Data Table node
3. Populate the table with the starter list (included in workflow documentation)
4. Activate the workflow and open the chat interface
5. Paste text and wait 30-60 seconds for forensic analysis

Required APIs & credentials

- At least one LLM provider: OpenAI, Anthropic, Google Gemini, Groq, or any other provider with JSON output support. Each agent can use a different provider, or all can use the same one.
- Data storage for fingerprint phrases: n8n Data Table (built-in), Google Sheets, or Airtable. The workflow checks this table to identify known AI phrases during analysis.

How to customise it

- **Swap models:** Each agent node has a chat model sub-node. Replace with any provider. Scanner works with smaller models. Analyst needs strong reasoning. Devil's Advocate needs good instruction-following.
- **Tune thresholds:** Open the Extract Stylometric Metrics code. Burstiness under 0.3 flags AI. Type-token ratio under 0.4 flags AI. Adjust for stricter or looser detection.
- **Change agent weights:** Open the Final Verdict code. Default is 35% Analyst, 15% Scanner, 15% Devil's Advocate, 35% metrics. Increase the metric weight to trust data more.
- **Modify agent personas:** Edit the system prompts. Make Scanner more skeptical. Make Analyst cite sources. Make Devil's Advocate more aggressive.
- **Add quality gate:** Drop a Filter node after the verdict. Only proceed if confidence exceeds 70%.
- **Batch process:** Replace the Chat Trigger with a Schedule Trigger looping over a file list.

Known limitations

The workflow works best on long-form content (500+ words). Short texts under 100 words produce less reliable metrics because statistical patterns need more data to emerge. The recalibration helps but is not perfect.

AI fingerprint phrases evolve as models improve. GPT-5 might not use "delve into" but will have new tells. The self-updating workflow helps but lags current releases by a few weeks.

The three-agent debate architecture assumes disagreement is meaningful. For extremely niche topics where only one agent has relevant training data, the minority opinion might be correct but gets outvoted. Review the individual agent reasoning when dealing with specialized content.
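The agent weighting described above can be sketched as follows. Agent scores are assumed normalized to 0–1 (0 leans human, 1 leans AI); this is an illustration, not the Final Verdict node's exact code.

```javascript
// Combine the three agent scores and the raw metric score into one
// weighted verdict, using the default 35/15/15/35 split.
function finalVerdict({ scanner, analyst, devilsAdvocate, metrics }) {
  const score =
    0.35 * analyst +
    0.15 * scanner +
    0.15 * devilsAdvocate +
    0.35 * metrics;
  if (score < 0.35) return { score, label: "human" };
  if (score > 0.6) return { score, label: "ai" };
  return { score, label: "ai-augmented" };
}
```

Raising the metrics weight (and lowering the agent weights so they still sum to 1) shifts trust toward the hard numbers.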
by Cheng Siong Chin
How It Works

This workflow automates procurement fraud detection and supplier compliance monitoring for organizations managing complex purchasing operations. Designed for procurement teams, audit departments, and compliance officers, it solves the challenge of identifying fraudulent transactions, contract violations, and supplier misconduct across thousands of purchase orders and vendor relationships.

The system schedules continuous monitoring, generates sample transaction data, and analyzes patterns through dual AI agents: the Price Reasonableness agent validates pricing against market rates, while the Delivery Agent assesses fulfillment performance. An Orchestration Agent then performs a comprehensive risk evaluation and routes findings by severity (critical/high/medium/low), triggering multi-channel responses: critical issues activate immediate Slack/email alerts with detailed logging, high-priority cases receive escalation workflows, and medium/low findings generate routine compliance reports.

By combining AI-powered anomaly detection with intelligent routing and coordinated notifications, organizations can reduce fraud losses by up to 75%, ensure vendor compliance, maintain audit trails, and free procurement teams to focus on strategic sourcing rather than manual transaction reviews.
Setup Steps

1. Connect the Schedule Trigger and set the monitoring frequency
2. Configure procurement systems with API credentials
3. Add AI model API keys to the Price Reasonableness, Delivery, and Orchestration Agent nodes
4. Define fraud indicators and compliance thresholds in the agent prompts based on company policies
5. Link Slack webhooks for critical and high-priority fraud alerts to procurement and audit teams
6. Connect email credentials for stakeholder notifications and escalation workflows

Prerequisites: Procurement system API access, AI service accounts, market pricing databases for benchmarking

Use Cases: Invoice fraud detection, bid rigging identification, duplicate payment prevention

Customization: Modify agent prompts for industry-specific fraud patterns; adjust risk scoring algorithms

Benefits: Reduces fraud losses by up to 75%; automates compliance monitoring across unlimited transactions
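The severity routing described above might look like this in a Code node. The thresholds and action names here are illustrative assumptions, not values from the workflow.

```javascript
// Route a finding to response channels based on its risk score (0-1).
// Thresholds are hypothetical; tune them to your compliance policy.
function routeFinding(riskScore) {
  if (riskScore >= 0.9) return { severity: "critical", actions: ["slack", "email", "log"] };
  if (riskScore >= 0.7) return { severity: "high", actions: ["escalation"] };
  if (riskScore >= 0.4) return { severity: "medium", actions: ["report"] };
  return { severity: "low", actions: ["report"] };
}
```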
by Paul Karrmann
This n8n template helps you turn inbound messages into a clean, deduped queue of actionable tickets. It includes Slack and Gmail as ready-to-use examples, but the key idea is the universal intake normalizer: you can plug in other sources later (forms, webhooks, chat tools, other inboxes) as long as you map them into the same normalized schema.

Good to know

This workflow sends message content to an LLM for classification. Keep sensitive data out of the prompt, and only process messages you are allowed to process. Costs depend on message volume and length, so truncation and tight filters matter.

How it works

1. Collect inbound items (Slack and Gmail are included as examples).
2. Normalize each item into one shared JSON format so every source behaves the same.
3. Deduplicate items using a data table so repeats are skipped.
4. Use an AI agent with structured output to score urgency and importance, produce a summary, and draft a reply.
5. Create a Notion ticket for tracking, and optionally notify Slack for high-priority items.

Setup steps

1. Connect credentials for Slack, Gmail, Notion, and your LLM provider.
2. Choose your Slack channel and set a Gmail filter that keeps volume manageable.
3. Select your Notion database and ensure properties match the field mappings.
4. Create or select a data table and map the unique ID column for deduplication.
5. Adjust the notification threshold and schedule interval to match your workflow.

Requirements

- Slack workspace access (optional if you swap the source)
- Gmail access (optional if you swap the source)
- Notion database for ticket creation
- LLM API credentials

Customising this workflow

- Add new sources by mapping them into the normalizer schema.
- Truncate long messages before the AI step to reduce cost.
- Change categories, scoring, and thresholds to match your operating model.
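As an illustration, the shared normalized schema might look like this. Field names are an assumption; match them to the workflow's own Set/Code node.

```javascript
// Map a raw source item into the one normalized shape every
// downstream node expects. `externalId` doubles as the dedupe key
// stored in the data table.
function normalize(source, raw) {
  return {
    source,                                // "slack", "gmail", "webhook", ...
    externalId: `${source}:${raw.id}`,     // unique ID for the dedupe data table
    sender: raw.sender || "unknown",
    subject: raw.subject || "",
    body: (raw.body || "").slice(0, 4000), // truncate to keep LLM costs down
    receivedAt: raw.timestamp || new Date().toISOString(),
  };
}
```

Any new source only needs a small adapter that produces this shape; everything after the normalizer stays untouched.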
by Incrementors
Every Monday morning, this workflow pulls your top 10 keywords from Google Search Console, passes the data to GPT-4o-mini, and emails a polished 150–200 word SEO digest to your client — automatically. You configure it once with four values, and it runs on its own every week. Built for SEO agencies and freelancers who want to deliver consistent client reporting without spending an hour writing it manually.

What This Workflow Does

- Scheduled weekly trigger — Fires automatically every Monday at 8AM so you never miss a reporting week, with zero manual effort.
- Live GSC data pull — Fetches the top 10 keywords by clicks from Google Search Console for the past 7 days, directly from the API.
- Clean keyword formatting — Converts raw API data into a readable list showing keyword, clicks, impressions, CTR, and average position for every query.
- AI-written email body — GPT-4o-mini reads the keyword data and writes a professional, conversational 150–200 word digest — no templates, no copy-paste.
- One-config setup — All client details (site URL, client name, recipient email, agency name) live in a single node. Change it once to deploy for any client.
- Automated Gmail delivery — The final report is sent from your connected Gmail account with a dynamic subject line including the date range and client name.

Setup Requirements

Tools and accounts needed:

- n8n instance (self-hosted or cloud)
- Google account with Google Search Console access (OAuth2 credential)
- Gmail account for sending reports (OAuth2 credential — can be the same Google account)
- OpenAI account with GPT-4o-mini API access

Estimated Setup Time: 10–15 minutes

Step-by-Step Setup

1. Import the workflow — Open n8n → Workflows → Import from JSON. Paste the workflow JSON and import. Confirm all 9 nodes are connected in a straight line.
2. Connect your Google Search Console credential — In n8n, go to Credentials → New → Google OAuth2 API. Complete the OAuth flow with the Google account that has access to your GSC property.
Once connected, open the 3. HTTP — Fetch GSC Top Keywords node and select this credential under the OAuth2 field.

> ⚠️ Your GSC property URL must match exactly. If your property is https://www.example.com/, the URL in the config must be identical — including the trailing slash.

3. Connect your Gmail credential — Go to Credentials → New → Gmail OAuth2. Complete the OAuth flow with the Gmail account you want to send reports from. Open the 9. Gmail — Send Weekly Report node and select this credential.
4. Add your OpenAI API key — Go to Credentials → New → OpenAI API. Paste your API key from platform.openai.com. Open the 7. OpenAI — GPT-4o-mini Model node and select this credential.
5. Edit your config values — Open the 2. Set — Config Values node. This is the only node you need to change. Replace all four values:

| Field | What to enter |
|---|---|
| siteUrl | Your exact GSC property URL (e.g. https://www.example.com/) |
| clientName | Client business name (appears in the email greeting) |
| recipientEmail | The email address to receive the weekly report |
| agencyName | Your agency name (appears in the email footer) |

6. Activate the workflow — Toggle the workflow to Active. It will now run automatically every Monday at 8AM.

How It Works (Step by Step)

Step 1 — Schedule Trigger (Every Monday 8AM)
The workflow fires automatically using a cron schedule set to every Monday at 8AM. No manual action is needed once the workflow is active.

Step 2 — Set Config Values
Four variables are stored here: the site URL, client name, recipient email, and agency name. These are referenced by every subsequent step, so you only update them in one place.

Step 3 — HTTP Request (Google Search Console API)
An authenticated POST request goes to the Google Search Console Search Analytics API. It asks for the top 10 keywords by clicks for the past 7 days, automatically calculating the start and end dates based on today's date.
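Sketched in JavaScript, the date-range calculation and request body for Step 3 look roughly like this, assuming the standard Search Analytics query endpoint; the node's exact expressions may differ.

```javascript
// Build the Search Analytics request for the trailing 7-day window.
function buildGscRequest(siteUrl, today = new Date()) {
  const fmt = d => d.toISOString().slice(0, 10); // YYYY-MM-DD
  const end = new Date(today);
  end.setDate(end.getDate() - 1);                // yesterday, since GSC data lags
  const start = new Date(end);
  start.setDate(start.getDate() - 6);            // 7 days inclusive
  return {
    url: `https://www.googleapis.com/webmasters/v3/sites/${encodeURIComponent(siteUrl)}/searchAnalytics/query`,
    body: { startDate: fmt(start), endDate: fmt(end), dimensions: ["query"], rowLimit: 10 },
  };
}
```

Note the `encodeURIComponent` on the property URL: this is why the trailing slash in `siteUrl` matters.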
Step 4 — Set (Extract Fields)
The raw API response is captured alongside the config values. The keyword data is stored as a JSON string, and the date range (week start and week end) is formatted for display in the email.

Step 5 — Code (Format Data for GPT)
A short JavaScript block parses the keyword rows and builds a clean, numbered text list. Each keyword line includes the query, clicks, impressions, CTR percentage, and average position. If no data is found, a fallback message tells you to check your GSC URL and credentials.

Step 6 — AI Agent (Write SEO Report Email)
GPT-4o-mini receives the keyword list and a detailed prompt. It writes the full email body — a warm greeting, highlights of the top 3 keywords by name with stats, one positive observation about the week, and one actionable SEO tip for the following week. The output is plain text only, with no markdown or symbols.

Step 7 — OpenAI Model (GPT-4o-mini)
This is the language model powering the AI Agent in Step 6. It is set to GPT-4o-mini with a 500-token limit and a temperature of 0.6 for consistent, professional writing.

Step 8 — Set (Prepare Final Email)
The AI-written email body, subject line, recipient address, and agency name are assembled into one item. The subject line is dynamically built using the date range and client name.

Step 9 — Gmail (Send Weekly Report)
Gmail sends the final email to the recipient address. The body is the AI-written digest, followed by a footer identifying the agency, data source, and automation stack.

Key Features

✅ Zero-maintenance scheduling — Runs every Monday at 8AM without any manual trigger or login required.
✅ Dynamic date ranges — Start and end dates are calculated automatically each week. No hardcoded dates to update.
✅ Single config node — All four client-specific values live in one place. Duplicating this workflow for a new client takes under 2 minutes.
✅ Fallback message on empty data — If the GSC API returns no rows, the workflow still runs and sends an alert message instead of failing silently.
✅ AI-written in plain text — GPT-4o-mini is explicitly instructed to avoid markdown, asterisks, or symbols — producing clean, copy-paste-ready email content.
✅ Professional subject line — The email subject auto-includes the exact date range and client name, making reports easy to find in any inbox.
✅ Footer attribution — Every email ends with an auto-generated footer crediting your agency and the data source, reinforcing your brand on every send.

Customisation Options

- Increase the keyword count — In the 3. HTTP — Fetch GSC Top Keywords node, change "rowLimit": 10 to any number up to 25,000 to include more keywords in the AI's analysis.
- Change the send schedule — In the 1. Schedule — Every Monday 8AM node, edit the cron expression 0 8 * * 1 to any schedule you need. For example, 0 8 * * 5 sends on Fridays, or 0 9 1 * * sends on the 1st of every month.
- Add a dimension for pages or countries — In the 3. HTTP — Fetch GSC Top Keywords node, add "page" or "country" to the "dimensions" array alongside "query" to include page-level or geographic data in the report.
- Send a CC copy to yourself — In the 9. Gmail — Send Weekly Report node, expand the options and add your own email address to the CC field to keep a copy of every client send.
- Adjust the email tone — In the 6. AI Agent — Write SEO Report Email node, edit the writing instructions in the prompt to match your agency's voice — more formal, more casual, longer, or shorter.
- Deploy for multiple clients — Duplicate the entire workflow in n8n and update the 2. Set — Config Values node for each client. Each copy runs independently on the same schedule.

Troubleshooting

GSC API returns a 403 or permission error:
- Confirm your Google OAuth2 credential has access to the correct Search Console property
- Check that the siteUrl value in 2. Set — Config Values exactly matches the GSC property URL, including the protocol (https://) and trailing slash
- Re-authenticate the Google credential if it has expired

No keyword data in the email (fallback message appears):
- Verify the site had traffic in the past 7 days in your GSC dashboard
- Check that the siteUrl is the domain-level property and not a URL-prefix property with a different format
- Run the workflow manually and inspect the output of 3. HTTP — Fetch GSC Top Keywords to see the raw API response

Gmail node fails to send:
- Confirm your Gmail OAuth2 credential is properly connected and not expired
- Check that recipientEmail in 2. Set — Config Values is a valid email address
- Check your Gmail sending limits if you are running this for many clients from one account

AI Agent produces an empty or broken email body:
- Open the 7. OpenAI — GPT-4o-mini Model node and confirm the OpenAI credential is valid and has available API credits
- Check the n8n execution log for the AI Agent node to see if an OpenAI error message was returned

Workflow not triggering on schedule:
- Confirm the workflow is toggled to Active — saved workflows do not run unless activated
- Check your n8n instance timezone settings and compare to the cron expression — 8AM runs based on your server timezone

Support

Need help setting this up or want a custom version built for your team or agency?
📧 Email: info@incrementors.com
🌐 Website: https://www.incrementors.com/contact-us/
by Avkash Kakdiya
How it works

This workflow automatically generates and sends personalized sales proposals when a new row is added to Google Sheets. It uses AI to create the proposal content, updates contact details in HubSpot, and generates a formatted document. The document is converted into a PDF and emailed to the client. This eliminates manual proposal writing and ensures fast, consistent delivery.

Step-by-step

**Capture lead and generate AI content**
- Google Sheets Trigger – Detects new form submissions in your sheet.
- Loop Over Items – Processes each new entry individually.
- Message a model – Uses Gemini AI to generate the proposal content.
- Code in JavaScript – Cleans and splits the AI output into structured fields.

**Create contact and generate document**
- Create or update a contact – Stores or updates client data in HubSpot.
- Copy file – Duplicates a proposal template from Google Drive.
- Update a document – Replaces placeholders with real client and AI data.
- Download file – Converts the final document into a PDF file.

**Send proposal to client**
- Send a message – Emails the generated PDF proposal to the client.

Why use this?

- Automatically generates professional proposals without manual writing
- Ensures consistent formatting using templates and placeholders
- Saves time by combining CRM, AI, and document creation
- Improves response speed for leads and increases conversion chances
- Scales easily for handling multiple client requests simultaneously
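The "Code in JavaScript" step could be sketched like this, assuming the AI is prompted to return labelled markdown sections; the real delimiters depend on your prompt.

```javascript
// Split the model's output into named fields, one per "## Heading"
// section, ready to be mapped onto the document template's placeholders.
function splitProposal(aiText) {
  const fields = {};
  for (const part of aiText.split(/^##\s+/m).filter(Boolean)) {
    const [heading, ...rest] = part.split("\n");
    fields[heading.trim().toLowerCase()] = rest.join("\n").trim();
  }
  return fields;
}
```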
by Automate With Marc
Image to Video Social Media Reel Generator + Autopost Without AI Slop

Google Drive → AI Video Generation → Captions → Approval → Instagram & TikTok

Watch Step-By-Step Video: https://www.youtube.com/watch?v=jPOYxQF25ws

Turn a folder of images into fully-produced short-form social media reels—automatically. This workflow picks a random image, generates a cinematic AI video from it, adds text overlays and captions, waits for your approval, and then posts to Instagram and TikTok.

What this template does

On a scheduled basis (default: daily at 9:00 AM), this workflow:

1. Selects a random image from a Google Drive folder
2. Uploads the image for processing
3. Generates a cinematic image-to-video prompt using AI
4. Creates an 8-second vertical video using an image-to-video model (via Wavespeed)
5. Applies captions and text overlays using Submagic
6. Waits for human approval via email
7. Automatically posts the approved reel to Instagram and TikTok

If the video is not approved, the workflow loops and tries again on the next run.

Why this workflow is useful

- Converts static images into high-engagement video content
- Removes repetitive manual work in short-form content creation
- Keeps a human in the loop before anything is published

Perfect for: creators & solopreneurs, social media managers, small businesses & local brands, and AI-first content pipelines.

High-level flow

Schedule → Pick Image → Generate Video → Add Captions → Approve → Post

Node overview

- Schedule Trigger: Runs the workflow automatically at a fixed time (default: daily at 9 AM).
- Google Drive – Search Files: Fetches all images from a selected Drive folder.
- Randomizer (Code Node): Selects one random image to avoid repetitive posting.
- Upload Media: Uploads the selected image so it can be used by downstream tools.
- Prompt Generator (AI): Generates a high-quality cinematic prompt optimized for image-to-video models.
- Wavespeed – Image to Video: Creates an 8-second, 9:16 video from the image + prompt.
- Wait & Polling (IF Nodes): Waits and checks until video generation is completed.
- Submagic – Text Overlay & Captioning: Adds captions and overlays in a short-form style optimized for social platforms.
- Gmail – Send for Approval: Sends a preview link and caption to your inbox and waits for approval.
- IF (Approved?): Yes posts the reel automatically; No skips posting and retries in the next run.
- Blotato – Social Posting: Publishes the approved reel to Instagram and TikTok.

Requirements

Before running this template, you’ll need to configure:

- Google Drive OAuth (image source folder)
- OpenAI API key (prompt generation)
- Wavespeed API key (image-to-video generation)
- Submagic API key (captions & overlays)
- Gmail OAuth (approval workflow)
- Blotato account (Instagram & TikTok posting)

All credentials must be added manually after importing.

Setup instructions

1. Import the template into your n8n workspace
2. Connect your Google Drive account and set your image folder
3. Add credentials for OpenAI, Wavespeed, Submagic, Gmail, and Blotato
4. Adjust the Schedule Trigger if needed
5. Run the workflow once to test the full flow
6. Enable the workflow to start daily automated posting

Customization ideas

- Change video duration, aspect ratio, or style
- Modify the AI prompt to match your brand voice
- Post only after manual approval (already built in)
- Add a Slack or Telegram approval step
- Duplicate posting logic for YouTube Shorts or Facebook Reels
- Store generated videos in cloud storage or a content database

Troubleshooting

- No images found: check the Drive folder ID and permissions
- Video stuck generating: increase the wait time or polling interval
- Approval email not received: verify Gmail OAuth and inbox filters
- Posting fails: confirm your Blotato account and platform permissions
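The Randomizer step is essentially a one-liner in a Code node; a minimal sketch (the item shape from the Drive search is an assumption):

```javascript
// Pick one random file from the Google Drive search results so the
// same image is not posted every day.
function pickRandom(files) {
  if (files.length === 0) throw new Error("No images found in the folder");
  return files[Math.floor(Math.random() * files.length)];
}
```

Throwing on an empty list surfaces the "No images found" troubleshooting case as a visible node error instead of a silent skip.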
by Neloy Barman
Self-Hosted

This workflow provides a complete end-to-end system for capturing, analyzing, and routing customer feedback. By combining local multimodal AI processing with structured data storage, it allows teams to respond to customer needs in real time without compromising data privacy.

Who is this for?

This is designed for Customer Success Managers, Product Teams, and Community Leads who need to automate the triage of high-volume feedback. It is particularly useful for organizations that handle sensitive customer data and prefer local AI processing over cloud-based API calls.

🛠️ Tech Stack

- **Tally.so**: For front-end feedback collection.
- **LM Studio**: To host the local AI models (Qwen3-VL).
- **PostgreSQL**: For persistent data storage and reporting.
- **Discord**: For real-time team notifications.

✨ How it works

1. Form Submission: The workflow triggers when a new submission is received from Tally.so.
2. Multimodal Analysis: The OpenAI node (pointing to LM Studio) processes the input using the Qwen3-VL model across three specific layers:
   - Sentiment Analysis: Evaluates the text to determine if the customer is Positive, Negative, or Neutral.
   - Zero-Shot Classification: Categorizes the feedback into pre-defined labels based on instructions in the prompt.
   - Vision Processing: Analyzes any attached images to extract descriptive keywords or identify UI elements mentioned in the feedback.
3. Data Storage: The PostgreSQL node logs the user's details, the original message, and all AI-generated insights.
4. AI-Driven Routing: The same Qwen3-VL model makes the routing decision by evaluating the classification results and determining the appropriate path for the data to follow.
5. Discord Notification: The Discord node sends a formatted message to the corresponding channel, ensuring the support team sees urgent issues while the marketing team sees positive testimonials.

📋 Requirements

- **LM Studio** running a local server on port 1234.
- **Qwen3-VL-4B** (GGUF) model loaded in LM Studio.
- **PostgreSQL** instance with a table configured for feedback data.
- **Discord Bot Token** and specific Channel IDs.

🚀 How to set up

1. Prepare your Local AI: Open LM Studio and download the Qwen3-VL-4B model. Start the Local Server on port 1234 and ensure CORS is enabled. Disable the Require Authentication setting in the Local Server tab.
2. Configure PostgreSQL: Ensure your database is running. Create a table named customer_feedback with columns for name, email_address, feedback_message, image_url, sentiment, category, and img_keywords.
3. Import the Workflow: Import the JSON file into your n8n instance.
4. Link Services: Update the Webhook node with your Tally.so URL. In the Discord nodes, paste the relevant Channel IDs for your #support, #feedback, and #general channels.
5. Test and Activate: Toggle the workflow to Active. Send a test submission through your Tally form and verify the data appears in PostgreSQL and Discord.

🔑 Credential Setup

To run this workflow, you must configure the following credentials in n8n:

- **OpenAI API (Local)**: Create a new OpenAI API credential. API Key: enter any placeholder text (e.g., lm-studio). Base URL: set this to your machine's local IP address (e.g., http://192.168.1.10:1234/v1) so n8n can reach the local AI server, especially if n8n runs inside a Docker container.
- **PostgreSQL**: Create a new PostgreSQL credential. Enter your database Host, Database Name, User, and Password. If using the provided Docker setup, the host is usually db.
- **Discord Bot**: Create a new Discord Bot API credential. Paste your Bot Token obtained from the Discord Developer Portal.
- **Tally**: Create a new Tally API credential. Enter your API Key, which you can find in your Tally.so account settings.

⚙️ How to customize

- **Refine AI Logic**: Update the System Message in the AI node to change classification categories or sentiment sensitivity.
- **Switch to Cloud AI**: If you prefer not to use a local model, you can swap the LM Studio connection for any third-party API, such as OpenAI (GPT-4o), Anthropic (Claude), or Google Gemini, by updating the node credentials and Base URL.
- **Expand Destinations**: Add more Discord nodes or integrate Slack to notify different departments based on the AI's routing decision.
- **Custom Triggers**: Replace the Tally webhook with a Typeform, Google Forms, or custom Webhook trigger if your collection stack differs.
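Because LM Studio exposes an OpenAI-compatible API, swapping providers mostly means changing the Base URL. A sketch of the request the OpenAI node ends up sending (the model identifier and prompt here are illustrative):

```javascript
// Build an OpenAI-style chat-completions request against whatever
// Base URL the credential points to (LM Studio locally, or a cloud
// provider after the swap).
function buildChatRequest(baseUrl, text) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: {
      model: "qwen3-vl-4b", // the identifier of the model loaded in LM Studio
      messages: [
        { role: "system", content: "Classify the sentiment as Positive, Negative, or Neutral." },
        { role: "user", content: text },
      ],
    },
  };
}
```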
by Databox
Stop spending hours manually pulling paid ads data. This workflow connects to Databox via MCP, auto-discovers every connected paid platform, fetches 6 key metrics, and delivers a consolidated weekly report to Slack and email - every Monday at 9 AM, completely hands-free.

Who's it for

- **Performance marketers** managing campaigns across multiple platforms
- **Marketing managers** who need a weekly cross-platform overview
- **Agencies** automating paid ads reporting for clients

How it works

1. A Schedule Trigger fires every Monday at 9 AM
2. The AI Agent connects to Databox via MCP and discovers all connected paid platforms (Google Ads, Facebook Ads, LinkedIn Ads, TikTok Ads, and 6 more)
3. It fetches Spend, Clicks, CPC, CTR, Impressions, and Conversions for this week and last week
4. It calculates week-over-week changes and formats two outputs - a Slack summary and a color-coded HTML email
5. It delivers both simultaneously

Requirements

- **Databox account** with at least one paid ads platform connected (free plan works)
- OpenAI API key (or Anthropic)
- Slack account
- Gmail account

How to set up

1. Click the Databox MCP Tool - set Authentication to OAuth2 and authorize
2. Add your OpenAI API key to the Chat Model node
3. Connect Slack and update the channel ID in the Send to Slack node
4. Connect Gmail and set the recipient address in the Send Email node
5. Activate - your first report arrives next Monday
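The week-over-week math is simple enough to sketch; the agent does this per metric (a percentage-change helper, names illustrative):

```javascript
// Percent change from last week to this week; null when there is
// no baseline to compare against (avoids division by zero).
function weekOverWeek(thisWeek, lastWeek) {
  if (lastWeek === 0) return thisWeek === 0 ? 0 : null;
  return ((thisWeek - lastWeek) / lastWeek) * 100;
}
```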
by Davide
This workflow automates the process of receiving a post-call audio file and transcription from ElevenLabs, processing them, and generating a financial risk report.

Key Advantages

1. ✅ End-to-End Automation: The workflow fully automates the process from raw input (audio/transcript) to final delivery (email report), eliminating manual intervention.
2. ✅ AI-Powered Decision Making: It leverages language models to analyze qualitative interview responses, convert them into quantitative scores, and produce consistent, objective evaluations.
3. ✅ Structured Data Extraction: Automatically extracts critical business information, reducing human error and ensuring standardized outputs.
4. ✅ Scalability: The webhook-based architecture allows the system to handle large volumes of interviews in parallel without additional effort.
5. ✅ Modular & Extensible Design: Each step (audio processing, extraction, scoring, reporting) is modular, making it easy to replace models, add new analysis layers, or integrate additional services.
6. ✅ Professional Output Generation: Generates clean, ready-to-send HTML reports compatible with email clients, improving communication with stakeholders.
7. ✅ Data Traceability & Storage: Audio files are stored in Google Drive, ensuring auditability and easy retrieval of the original data.
8. ✅ Consistency & Standardization: The evaluation logic ensures that all interviews are assessed using the same criteria, reducing subjective bias.

How it works

1. Receiving and Routing Data: The workflow starts with a Webhook that listens for incoming data from ElevenLabs. A Switch node then routes the data based on the body.type field: post_call_audio goes down the audio path, post_call_transcription down the transcription path.
2. Audio Processing Path: For an audio file, a Code node extracts the Base64 audio data and the conversation_id from the webhook payload.
It converts the Base64 string into a binary audio buffer (MP3). This binary data is then passed to a Google Drive node, which uploads the file to a specified folder (the user's root folder).

Transcription Processing Path: For a transcription, a Set node extracts the transcript array from the payload. A subsequent Code node processes this array, combining all messages from the conversation into a single, readable full-text string, with each message prefixed by the speaker's role.

Data Enrichment and Analysis: The full transcript text is then used by two nodes in parallel:
- Information Extractor: this LangChain node uses an OpenAI model (gpt-5-mini) to extract structured data from the text, specifically the company_name, the CEO's name, the address, and the vat_number.
- Calculate Rating: this LangChain node uses another OpenAI model to perform a quantitative evaluation. It follows a provided system prompt to assign a numerical score, a final verdict (POSITIVE/NEUTRAL/NEGATIVE), and a reason based on the interviewee's responses. Its output is parsed by a Structured Output Parser to ensure it is valid JSON.

Report Generation and Delivery: The outputs from the Information Extractor and Calculate Rating nodes are merged into a single data object. This object is passed to the Financial Report Generator, a final LangChain node that acts as a professional analyst. Using the merged data (company details, score, verdict, etc.), it generates a polished, formatted HTML email body. Finally, a Gmail node sends this HTML report as an email to the specified recipient.

Set up steps
Configure Credentials:
- OpenAI: set up an OpenAI API credential for the three language model nodes. Ensure it has access to the gpt-5-mini model.
- Google Drive: configure OAuth2 credentials for the "Upload audio" node to allow file uploads.
- Gmail: set up OAuth2 credentials for the "Send report" node.

Configure Webhook: Note the webhook ID and path.
This URL must be configured in ElevenLabs to send post-call data to this n8n instance.

Update Node Parameters:
- Google Drive: modify the "Upload audio" node if the target folder (folderId) is not the root.
- Information Extractor: the extraction attributes (company, name, address, VAT) are pre-configured. No changes are needed unless the target data fields change.
- Gmail: update the Gmail node with the recipient email address (xxx@xxx.com) and verify the email subject line formatting.

Activate Workflow: Once all credentials and parameters are set, toggle the workflow from active: false to active: true in the n8n editor to start listening for webhook calls.

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
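The two Code-node steps described above (decoding the Base64 audio and flattening the transcript) can be sketched roughly as follows. The payload field names used here (full_audio, conversation_id, transcript, role, message) are assumptions about the ElevenLabs webhook body, so verify them against your actual payload.

```javascript
// Audio path: convert the webhook's Base64 payload into a binary MP3 buffer
// that the Google Drive node can upload. Field names are assumed.
function extractAudio(body) {
  const buffer = Buffer.from(body.full_audio, 'base64');
  return {
    fileName: `${body.conversation_id}.mp3`,
    byteLength: buffer.length,
    buffer,
  };
}

// Transcription path: combine all messages into one readable string,
// each line prefixed by the speaker's role.
function combineTranscript(transcript) {
  return transcript.map((m) => `${m.role}: ${m.message}`).join('\n');
}

// Example with a dummy payload (4 "audio" bytes, 2 transcript turns)
const audio = extractAudio({
  conversation_id: 'conv_123',
  full_audio: Buffer.from('abcd').toString('base64'),
});

const text = combineTranscript([
  { role: 'agent', message: 'How is your cash flow?' },
  { role: 'user', message: 'Stable this quarter.' },
]);
// audio.fileName === 'conv_123.mp3'; text begins with 'agent: ...'
```

The combined text is what the Information Extractor and Calculate Rating nodes consume downstream.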
by Incrementors
Description
This workflow automates AI Search Engine Optimization (ASEO) tracking for digital marketing agencies. It tests your client's website visibility across four major AI platforms—ChatGPT, Claude, DeepSeek, and Perplexity—using brand-neutral prompts, analyzes ranking position and presence strength on each platform, identifies top competitors, and returns a structured 27-field scorecard with actionable recommendations. Designed as a sub-workflow, it integrates directly into your existing client audit or reporting pipeline.

Key Features
- Brand-neutral prompt generation (no client name used—tests true organic AI discoverability)
- Sequential visibility testing across ChatGPT, Claude, DeepSeek, and Perplexity
- Presence strength scoring (0–100%) per platform
- Competitor identification across all four AI platforms
- Strongest and weakest platform detection
- AI-generated actionable recommendations for improvement
- Structured 27-field output ready for Google Sheets or database insertion
- Error handling on all agent nodes (partial results if any platform fails)
- Sub-workflow design—integrates cleanly into larger audit pipelines

What This Workflow Does
Input
This workflow is triggered by a parent workflow and receives two parameters:
- **Website**: the client's website URL (e.g., https://example.com)
- **Website Summary**: a plain-text description of what the business does and its core services

Processing
Stage 1 — Brand-Neutral Prompt Generation
GPT-4.1-mini generates a realistic search prompt that potential customers would type into an AI chatbot to find a company like the client. Critically, the prompt does not include the client's brand name—it focuses on their services and industry instead. For example, for a Los Angeles product photography studio, the prompt would be something like "best product photography studio for Amazon listings in Los Angeles" rather than the studio's name. This tests true organic discoverability, not brand recall.
Stage 2 — Four-Platform Sequential Testing
The same generated prompt is submitted sequentially to four AI platforms:
- ChatGPT via GPT-4o-mini
- Claude via Claude Sonnet 3.7
- DeepSeek
- Perplexity
Each platform agent runs independently with error handling enabled. If one platform API is down or throws an error, the workflow continues and returns partial results—it does not fail entirely.

Stage 3 — Cross-Platform Analysis
DeepSeek analyzes all four platform outputs together and produces a structured JSON report covering each platform's ranking (Yes/No), position (1–10 or null), presence strength percentage, key mentions, and top competitors. It also generates an overall summary comparing all platforms.

Stage 4 — Data Flattening
The nested JSON is flattened into 27 individual fields that can be directly inserted into a Google Sheets row or database, or passed back to the parent workflow for reporting.

Output
The workflow returns 27 structured data fields:
- Search prompt used (1 field)
- Per-platform metrics for ChatGPT, Claude, DeepSeek, and Perplexity: Ranking (Yes/No), Position, Presence Strength %, Key Mentions, Top Competitors (5 fields × 4 platforms = 20 fields)
- Overall summary: total platforms ranking, average presence strength, strongest platform, weakest platform, main competitors across all platforms, recommendations (6 fields)

Setup Instructions
Prerequisites
- Active n8n instance (self-hosted or n8n Cloud)
- Parent workflow with an Execute Workflow node (this workflow does not run standalone)
- OpenAI API key (used for prompt generation and ChatGPT testing)
- Anthropic API key (used for Claude testing)
- DeepSeek API key (used for DeepSeek testing and final analysis)
- Perplexity API key (used for Perplexity testing)
Estimated setup time: 20–25 minutes

Step 1: Understand how this workflow is triggered
This is a sub-workflow. It does not have its own schedule trigger. It runs when a parent workflow calls it using n8n's Execute Workflow node.
Setting up the parent workflow:
1. Open or create your parent workflow (e.g., a client audit scheduler, a Google Sheets loop, or a manual trigger)
2. Add an Execute Workflow node to your parent workflow
3. Inside the Execute Workflow node:
- Source: select "Database"
- Workflow: search for and select this AI Search Ranking Analyzer workflow
- Mode: choose "Run once for all items" or "Run once for each item" depending on your setup
4. Under Fields, add two parameters to pass:
- Name: Website | Value: your client's website URL expression (e.g., ={{ $json['Website URL'] }})
- Name: Website Summary | Value: your client's business description (e.g., ={{ $json['Business Description'] }})

Example parent workflow structure:
Schedule Trigger (Weekly / Monthly)
→ Read Client List from Google Sheets
→ Loop Over Each Client
→ Execute Workflow (this AI Search Ranking Analyzer)
   Pass: Website = {{ $json['Website URL'] }}
   Pass: Website Summary = {{ $json['Summary'] }}
→ Append 27 Fields to Reporting Sheet
→ Send Report Email or Slack Notification

Testing the trigger connection:
Open this workflow and click on the Receive Website and Summary from Parent node. You will see "Waiting for input from parent workflow...".
Go to your parent workflow and click Execute node on the Execute Workflow node; the data will flow into this workflow for testing. Both workflows must be set to Active for production use.

Step 2: Connect OpenAI credentials
This workflow uses two OpenAI models:
- **GPT-4.1-mini** — used by Generate Brand-Neutral Search Prompts, Parse Prompt as JSON, and GPT Model for Parser Support
- **GPT-4o-mini** — used by Test Visibility on ChatGPT
To connect:
1. In n8n, go to Credentials → Add credential → OpenAI API
2. Paste your API key from https://platform.openai.com/api-keys
3. Name it clearly (e.g., "OpenAI Main")
4. Open each of these nodes and select your credential:
- GPT Model for Prompt Generation → select your OpenAI credential, set model to gpt-4.1-mini
- GPT Model for Parser Support → select your OpenAI credential, set model to gpt-4.1-mini
- GPT-4o-mini for ChatGPT Test → select your OpenAI credential, set model to gpt-4o-mini

Step 3: Connect Anthropic credentials
Used by the Test Visibility on Claude agent via the Claude Sonnet 3.7 Model node. To connect:
1. Go to Credentials → Add credential → Anthropic API
2. Get an API key from https://console.anthropic.com/
3. Open the Claude Sonnet 3.7 Model node and select your credential
4. Verify the model is set to claude-3-7-sonnet-20250219

Step 4: Connect DeepSeek credentials
Used by two nodes: DeepSeek Model for Testing (platform test) and DeepSeek Model for Analysis (final summarizer). To connect:
1. Go to Credentials → Add credential → DeepSeek API
2. Get an API key from https://platform.deepseek.com/
3. Open the DeepSeek Model for Testing node → select your credential
4. Open the DeepSeek Model for Analysis node → select your credential

Step 5: Connect Perplexity credentials
Used by the Test Visibility on Perplexity node (Perplexity native node, not an AI agent).
To connect:
1. Go to Credentials → Add credential → Perplexity API
2. Get an API key from https://www.perplexity.ai/settings/api
3. Open the Test Visibility on Perplexity node and select your credential

Step 6: Test the complete workflow
1. Temporarily add a Manual Trigger node at the start and connect it to Generate Brand-Neutral Search Prompts (bypassing the executeWorkflowTrigger for isolated testing)
2. Set the Manual Trigger to pass test data:

```json
{
  "Website": "https://your-test-site.com",
  "Website Summary": "A company that provides [your service] in [your city]"
}
```

3. Execute and verify:
- Generate Brand-Neutral Search Prompts produces a sensible search query
- Each platform node returns output (or gracefully continues on error)
- Analyze All Platform Results produces structured JSON
- Flatten JSON to 27 Data Fields produces all 27 fields correctly
4. Remove the test Manual Trigger once testing is complete
5. Activate this workflow and your parent workflow

Workflow Node Breakdown
**Receive Website and Summary from Parent**: The entry point of this sub-workflow. Listens for execution from a parent workflow via n8n's Execute Workflow node. Receives two inputs: Website (client URL) and Website Summary (business description text). These values are referenced by subsequent nodes throughout the workflow.

**Generate Brand-Neutral Search Prompts**: An AI agent powered by GPT-4.1-mini that creates a realistic search query a potential customer might type into an AI chatbot to find a business like the client—without using the client's brand name. This tests organic discoverability based on services and industry positioning rather than brand recognition. The output is a single focused search prompt.

**Parse Prompt as JSON**: A Structured Output Parser that enforces the JSON schema {"Prompts": "..."} on the generated prompt. Uses GPT Model for Parser Support as its language model and has autoFix enabled, so malformed outputs are automatically retried and corrected.
**Test Visibility on ChatGPT**: An AI agent that submits the generated search prompt to ChatGPT (GPT-4o-mini) and records the response. This captures what ChatGPT currently recommends when users search for services like the client's.

**Test Visibility on Claude**: An AI agent powered by Claude Sonnet 3.7 (Anthropic) that receives the same prompt and records Claude's recommendations. Has onError: continueRegularOutput so the workflow continues if Claude's API is unavailable.

**Test Visibility on DeepSeek**: An AI agent powered by DeepSeek that tests the same prompt on DeepSeek's platform. Also has onError: continueRegularOutput for resilience.

**Test Visibility on Perplexity**: Uses n8n's native Perplexity node (not an AI agent) to submit the prompt to Perplexity's search-augmented AI. Perplexity is particularly important because it uses real-time web search, making its recommendations highly relevant for current visibility. Has onError: continueRegularOutput.

**Analyze All Platform Results**: A DeepSeek-powered AI agent that receives all four platform outputs simultaneously, along with the client website URL and the original search prompt. It analyzes each platform independently—determining whether the client appears (Yes/No), at what position (1–10), how strongly (0–100%), how they are mentioned, and which competitors appear. It also generates an overall summary comparing all platforms and provides specific improvement recommendations. Uses Parse Analysis as Structured JSON as its output parser.

**Flatten JSON to 27 Data Fields**: A Set node that extracts values from the nested JSON output of the analyzer into 27 flat fields. This makes the data ready for direct insertion into a Google Sheets row, Airtable record, or database table—or for return to the parent workflow.

**Output Data Complete**: A No Operation node marking the successful completion of the workflow. The parent workflow receives all 27 fields as the execution output.
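The template performs the flattening with a Set node, but the same 1 + 20 + 6 = 27 field mapping can be sketched as a function. The nested report shape and field names below are assumptions based on the description, not the workflow's exact keys.

```javascript
const PLATFORMS = ['chatgpt', 'claude', 'deepseek', 'perplexity'];

// Flatten the analyzer's nested JSON into 27 spreadsheet-ready fields:
// 1 prompt field + 5 fields x 4 platforms + 6 overall-summary fields.
function flattenReport(report) {
  const flat = { search_prompt: report.prompt };
  for (const platform of PLATFORMS) {
    const r = report.platforms[platform] || {}; // errored platforms stay null
    flat[`${platform}_ranking`] = r.ranking ?? null;            // "Yes" / "No"
    flat[`${platform}_position`] = r.position ?? null;          // 1-10 or null
    flat[`${platform}_presence_strength`] = r.presenceStrength ?? null;
    flat[`${platform}_key_mentions`] = r.keyMentions ?? null;
    flat[`${platform}_competitors`] = (r.competitors || []).join(', ');
  }
  Object.assign(flat, report.summary); // the 6 summary fields
  return flat;
}

const sample = {
  prompt: 'best product photography studio for Amazon listings in Los Angeles',
  platforms: {
    chatgpt: {
      ranking: 'Yes', position: 2, presenceStrength: 70,
      keyMentions: 'Recommended for Amazon product shots',
      competitors: ['Studio A', 'Studio B'],
    },
    // claude / deepseek / perplexity omitted here: their fields become null
  },
  summary: {
    total_platforms_ranking: 1, average_presence_strength: 17.5,
    strongest_platform: 'ChatGPT', weakest_platform: 'Claude',
    main_competitors: 'Studio A, Studio B',
    recommendations: 'Publish comparison and listicle content',
  },
};
const flat = flattenReport(sample);
// Object.keys(flat).length === 27
```

Because missing platforms fall back to null, the 27-field shape stays stable even when one platform agent errors out, which is what keeps the Google Sheets row aligned.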
Usage Guide
Adding clients for analysis
In your parent workflow, maintain a Google Sheet with columns:

| Client Name | Website URL | Business Description | Last Checked |
|---|---|---|---|
| Example Corp | https://example.com | A SaaS company that provides... | 2025-01-15 |

Your parent workflow reads each row, passes the Website URL and Business Description to this sub-workflow, and writes the 27 returned fields back into the sheet for tracking.

Understanding the output
After execution, check the Flatten JSON to 27 Data Fields node output. For each platform you get:
- **Ranking:** Yes (client appears) or No (client not mentioned)
- **Position:** numeric position in the AI's recommendations (1 being top)
- **Presence Strength:** 0–100%, measuring how positively and prominently the client is featured
- **Key Mentions:** how the AI described or mentioned the client
- **Ranking Competitors:** which competitors the AI recommended instead

The Overall Summary tells you:
- How many of the 4 platforms currently rank your client
- The average presence strength across all platforms
- Which platform is your strongest opportunity
- Which platform needs the most improvement
- The 3 main competitors appearing consistently
- Specific recommendations for improving AI discoverability

Tracking over time
Run this workflow monthly per client. Append results to a Google Sheet with a date column. Track whether presence strength is improving, whether the client appears on more platforms over time, and whether competitors are losing or gaining ground.

Customization Options
**Change the number of platforms**: Remove any platform agent node and update the Analyze All Platform Results prompt to exclude that platform's output reference.
**Add more platforms**: Add new AI agent nodes (e.g., Grok, Gemini) between Test Visibility on Perplexity and Analyze All Platform Results. Update the analyzer prompt to include the new platform's output.
**Generate multiple prompts**: Modify Generate Brand-Neutral Search Prompts to produce 3–5 different prompts. Loop through each and aggregate results for more comprehensive testing.
**Write results directly to Google Sheets**: After Flatten JSON to 27 Data Fields, add a Google Sheets Append node in your parent workflow to log each audit automatically.
**Add email or Slack notifications**: After the workflow completes in the parent, add a Send Email or Slack node that formats the key metrics (Overall Ranking, Average Presence Strength, Recommendations) into a readable client report.
**Adjust presence strength scoring**: Modify the Analyze All Platform Results prompt to change how the AI scores presence strength—for example, weighting first-position mentions more heavily.

Troubleshooting
Parent workflow not triggering this workflow
- Verify both workflows are toggled to Active
- In the Execute Workflow node, confirm the correct workflow is selected
- Check that the Mode is set (not left blank)
- Test by clicking Execute node directly on the Execute Workflow node in the parent

Website and Website Summary parameters not passing
- In the Execute Workflow node, confirm the field names are exactly Website and Website Summary (case-sensitive, with a space in the second parameter)
- Check that the parent workflow is actually passing values, not empty expressions
- Use the Receive Website and Summary from Parent node's input panel to verify received data

One platform returns empty output
- The workflow continues even if one platform fails (onError: continueRegularOutput is set)
- Check the specific platform node for the error message
- Verify API credentials are valid and have available credits
- Perplexity's free tier has strict rate limits—upgrade your plan if you hit them

Structured output parser fails
- Parse Prompt as JSON has autoFix enabled—it will retry malformed outputs automatically
- If Parse Analysis as Structured JSON fails, simplify the prompt in Analyze All Platform Results or increase max tokens
- Check that your DeepSeek credentials are active (DeepSeek handles the analysis output parsing)

Generated prompt includes client brand name
- The Generate Brand-Neutral Search Prompts agent prompt instructs GPT to avoid brand names
- If brand names slip through, add to the system prompt: "Never mention any specific company name, brand, or trademark in the generated prompt"

All 27 fields not appearing in output
- Run the workflow with test data and inspect the Analyze All Platform Results node output
- If a platform returned empty output due to an error, its fields will be null
- Check that the Flatten JSON to 27 Data Fields expressions reference the correct node names

Use Cases
**Digital marketing agencies offering ASEO services**: Run monthly AI visibility audits for 20–50 clients from one parent workflow. Generate client reports showing AI platform rankings, presence strength trends, and competitor comparisons. Position ASEO as a premium new service.
**SEO teams expanding beyond Google**: Use this alongside traditional Google ranking reports. Show clients their full search visibility picture—covering both Google and the AI chatbots that are increasingly influencing purchase decisions.
**Competitive intelligence**: Run this workflow for your own site and 3–5 competitors simultaneously. Identify which competitors dominate AI recommendations and reverse-engineer their content strategy.
**Brand monitoring**: Track how AI chatbots describe your brand over time. Detect whether competitors are gaining ground or negative associations are appearing in AI responses.
**New market entry research**: Before entering a new market or launching a new service line, test whether your website would appear in AI searches for that service category. Use results to guide content strategy before launch.
Expected Results
- **Time savings**: 45–60 minutes of manual AI testing per client eliminated per audit cycle
- **Coverage**: 4 major AI platforms tested in a single automated run
- **Output quality**: structured, consistent 27-field data format—ready for Google Sheets, dashboards, or PDF reports
- **Scalability**: process 50+ clients per parent workflow run with no additional manual effort
- **Competitive advantage**: one of the first systematic approaches to measuring AI Search Engine Optimization (ASEO)—a space with no established tooling yet

For any questions, custom development, or workflow integration support:
📧 Email: info@incrementors.com
🌐 Website: https://www.incrementors.com/
by Cheng Siong Chin
How It Works
Automates sales data analysis and strategic insight generation for sales managers and strategists who need actionable intelligence. The workflow fetches multi-source data from sales, marketing, and financial systems, validates data quality to prevent errors, applies advanced AI analysis via OpenAI to identify market trends and patterns, calculates comprehensive KPIs for performance measurement, generates prioritized recommendations, and automatically distributes detailed insights via Gmail alerts and Google Sheets dashboards—eliminating time-consuming manual analysis overhead.

Setup Instructions
- OpenAI API: add your key via credentials
- Gmail: authorize an account for email delivery
- Google Sheets: connect for dashboard logging
- Schedule: set the monthly trigger timing
- Data sources: replace the fetch nodes with your own APIs

Prerequisites
OpenAI API key, Gmail account with send permissions, Google Sheets access, n8n instance, and source data APIs (sales, marketing, and financial systems).

Use Cases
E-commerce platforms analyzing sales trends; SaaS companies generating strategy reports; multi-channel retailers routing recommendations.

Customization
Add data sources via fetch nodes; swap OpenAI for Claude or Gemini; modify the routing logic for different priority thresholds.

Benefits
Reduces analysis time from hours to minutes. Eliminates manual report creation.
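A minimal sketch of the KPI calculation and priority routing this workflow performs; the KPI names, row shape, and thresholds here are illustrative assumptions, not the template's exact logic.

```javascript
// Aggregate raw sales rows into a few headline KPIs.
function computeKpis(rows) {
  const revenue = rows.reduce((sum, r) => sum + r.revenue, 0);
  const orders = rows.length;
  return {
    revenue,
    orders,
    avgOrderValue: orders ? +(revenue / orders).toFixed(2) : 0,
  };
}

// Route a recommendation by revenue attainment against a target,
// mirroring the "priority thresholds" mentioned under Customization.
function routePriority(kpis, target) {
  const attainment = kpis.revenue / target;
  if (attainment < 0.8) return 'high';    // well under target: act now
  if (attainment < 1.0) return 'medium';  // slightly under target
  return 'low';                           // on or above target
}

const kpis = computeKpis([{ revenue: 100 }, { revenue: 50 }]);
// kpis → { revenue: 150, orders: 2, avgOrderValue: 75 }
// routePriority(kpis, 200) → 'high' (150 / 200 = 0.75)
```

Swapping the thresholds, or replacing revenue attainment with another KPI, is the kind of routing-logic change the Customization section refers to.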
by Wan Dinie
Automated Malaysian Weather Alerts with Perplexity AI, Firecrawl and Telegram
This n8n template automates daily weather monitoring by fetching official government warnings and searching for related news coverage, then delivering comprehensive reports directly to Telegram.

Use cases include monitoring severe weather conditions, tracking flood warnings across Malaysian states, staying updated on weather-related news, and receiving automated daily weather briefings for emergency preparedness.

Good to know
- Firecrawl's free tier allows a limited number of scraping requests per hour. Keep the 3-second interval between requests to avoid rate limits.
- OpenAI costs apply for content summarization; GPT-4.1 mini balances quality and affordability.
- After testing multiple AI models (GPT, Gemini), Perplexity Sonar Pro Search proved most effective for finding recent, relevant weather news from Malaysian sources.
- The workflow focuses on major Malaysian news outlets such as Utusan, Harian Metro, Berita Harian, and Kosmo.

How it works
1. A Schedule Trigger runs daily at 9 AM to fetch weather warnings from Malaysia's official data.gov.my API.
2. JavaScript code processes the weather data to extract warning types, severity levels, and affected locations.
3. Search queries are aggregated and combined with location information.
4. A Perplexity Sonar Pro AI Agent searches for recent news articles (within 3 days) from Malaysian news channels.
5. URLs are cleaned and processed one by one through a loop to manage API limits.
6. Firecrawl scrapes each news article and extracts summaries from the main content.
7. All summaries and source URLs are combined and sent to OpenAI for final report generation.
8. The polished weather report is delivered to your Telegram channel in English.

How to use
- The schedule trigger is set for 9 AM but can be adjusted to any preferred time.
- Replace the Telegram chat ID with your channel or group ID.
- The workflow automatically filters out "No Advisory" warnings to avoid unnecessary notifications.
Modify the search query timeout and batch processing based on your API limits.

Requirements
- OpenAI API key (get one at https://platform.openai.com)
- Perplexity API via OpenRouter (get access at https://openrouter.ai)
- Firecrawl API key (get the free tier at https://firecrawl.dev)
- Telegram Bot token and channel/group ID

Customizing this workflow
- **Expand news sources**: modify the AI Agent prompt to include additional Malaysian news outlets or social media sources.
- **Language options**: change the final report language from English to Bahasa Malaysia by updating the "Make a summary" system prompt.
- **Alert filtering**: adjust the JavaScript code to focus on specific warning types (e.g., only severe warnings or specific states).
- **Storage integration**: connect to Supabase or Google Sheets to maintain a historical database of weather warnings and news.
- **Multi-channel delivery**: add more notification nodes to send alerts via email, WhatsApp, or SMS alongside Telegram.
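The warning extraction and "No Advisory" filtering described above can be sketched as a Code-node-style function. The data.gov.my field names used here (warning_issue, heading, text_en) are assumptions for illustration and should be checked against the real API response.

```javascript
// Keep only actionable warnings and pull out the fields the rest of the
// workflow needs: type, severity, and affected locations. Field names
// are assumed, not taken from the actual data.gov.my schema.
function extractWarnings(apiItems) {
  return apiItems
    .filter((w) => (w.warning_issue || '').toLowerCase() !== 'no advisory')
    .map((w) => ({
      type: w.warning_issue,
      severity: w.heading,
      locations: w.text_en,
    }));
}

const sample = [
  { warning_issue: 'No Advisory', heading: '-', text_en: '-' },
  { warning_issue: 'Thunderstorm', heading: 'Severe', text_en: 'Selangor, Kuala Lumpur' },
];
// extractWarnings(sample) keeps only the Thunderstorm entry
```

Adjusting the filter predicate (for example, keeping only `Severe` headings or specific states) is the "Alert filtering" customization mentioned above.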