by Cj Elijah Garay
# Discord AI Content Moderator with Learning System

This n8n template demonstrates how to automatically moderate Discord messages using AI-powered content analysis that learns from your community standards. It continuously monitors your server, intelligently flags problematic content while allowing context-appropriate language, and provides a complete audit trail for all moderation actions.

Use cases are many: try moderating a forex trading community where enthusiasm runs high, protecting a gaming server from toxic behavior while keeping banter alive, or maintaining professional standards in a business Discord without being overly strict!

## Good to know

- This workflow uses OpenAI's GPT-5 Mini model, which incurs API costs per message analyzed (approximately $0.001-0.003 per moderation check, depending on message volume)
- The workflow runs every minute by default - adjust the Schedule Trigger interval based on your server activity and budget
- Discord API rate limits apply - the batch processor includes 1.5-second delays between deletions to prevent rate limiting
- You'll need a Google Sheet to store training examples - a template link is provided in the workflow notes
- The AI analyzes context and intent, not just keywords - "I f*cking love this community" won't be deleted, but "you guys are sh*t" will be
- Deleted messages cannot be recovered from Discord - the admin notification channel preserves the content for review

## How it works

1. The Schedule Trigger activates every minute to check for new messages requiring moderation
2. The workflow fetches training data from Google Sheets containing labeled examples of messages to delete (with reasons) and messages to keep
3. The workflow retrieves the last 10 messages from your specified Discord channel using the Discord API
4. A preparation node formats both the training examples and recent messages into a structured prompt with unique indices for each message
5. The AI Agent (powered by GPT-5 Mini) analyzes each message against your community standards, considering intent and context rather than just keywords
6. The AI returns a JSON array of message indices that violate guidelines (e.g., [0, 2, 5])
7. A parsing node extracts these indices, validates them, removes duplicates, and maps them to actual Discord message objects (see the sketch below)
8. The batch processor loops through each flagged message one at a time to prevent API rate limiting and ensure proper error handling
9. Each message is deleted from Discord using the exact message ID
10. A 1.5-second wait prevents hitting Discord's rate limits between operations
11. Finally, an admin notification is posted to your designated admin channel with the deleted message's author, ID, and original content for audit purposes
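For reference, the parsing step (item 7) can be implemented in an n8n Code node along these lines. This is a minimal sketch, assuming the AI output arrives as a JSON-array string in `$json.output` and the Discord fetch node is named "Get Messages" (both names are placeholders for your own nodes):

```javascript
// Minimal sketch of the index-parsing Code node (node names are placeholders).
// Assumes the AI Agent returned something like "[0, 2, 5]" in $json.output
// and the Discord fetch node is called "Get Messages".
const messages = $('Get Messages').all().map(item => item.json);

let indices = [];
try {
  // Tolerate extra prose around the JSON array by extracting the first [...] span
  const match = ($json.output || '').match(/\[[^\]]*\]/);
  indices = match ? JSON.parse(match[0]) : [];
} catch (e) {
  indices = []; // On malformed output, flag nothing rather than fail the run
}

// Validate, deduplicate, and map indices to actual Discord message objects
const flagged = [...new Set(indices)]
  .filter(i => Number.isInteger(i) && i >= 0 && i < messages.length)
  .map(i => ({ json: messages[i] }));

return flagged;
```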
## How to use

1. Replace the Discord Server ID, Moderated Channel ID, and Admin Channel ID in the "Edit Fields" node with your server's specific IDs
2. Create a copy of the provided Google Sheets template with columns: message_content, should_delete (YES/NO), and reason
3. Connect your Discord OAuth2 credentials (requires bot permissions for reading messages, deleting messages, and posting to channels)
4. Add your OpenAI API key to access GPT-5 Mini
5. Customize the AI Agent's system message to reflect your specific community standards and tone
6. Adjust the message fetch limit (default: 10) based on your server activity - higher limits cost more per run but catch more violations
7. Consider changing the Schedule Trigger from every minute to every 3-5 minutes if you have a smaller community

## Requirements

- Discord OAuth2 credentials for bot authentication with message read, delete, and send permissions
- Google Sheets API connection for accessing the training data knowledge base
- OpenAI API key for GPT-5 Mini model access
- A Google Sheet formatted with message examples, deletion labels, and reasoning
- Discord Server ID and Channel IDs (moderated + admin), which you can get by enabling Developer Mode in Discord

## Customising this workflow

- Try building an emoji-based feedback system where admins can react to notifications with ✅ (correct deletion) or ❌ (wrong deletion) to automatically update your training data
- Add a severity scoring system that issues warnings for minor violations before deleting messages
- Implement a user strike system that tracks repeat offenders and automatically applies temporary mutes or bans
- Expand the AI prompt to categorize violations (spam, harassment, profanity, etc.) and route different types to different admin channels
- Create a weekly digest that summarizes moderation statistics and trending violation types
- Add support for monitoring multiple channels by duplicating the Discord message fetch nodes with different channel IDs
- Integrate with a database instead of Google Sheets for faster lookups and more sophisticated training data management

## If you have questions

Feel free to contact me here: elijahmamuri@gmail.com or elijahfxtrading@gmail.com
by Cheng Siong Chin
## How It Works

MQTT ingests real-time sensor data from connected devices. The workflow normalizes the values and trains or retrains machine learning models on a defined schedule (a sketch of this step appears at the end of this listing). An AI agent detects anomalies, validates the results for accuracy, and ensures reliable alerts. Detected issues are then routed to dashboards for visualization and sent via email notifications to relevant stakeholders, enabling timely monitoring and response.

## Setup Steps

- MQTT: Configure the broker connection, set topic subscriptions, and verify data flow.
- ML Model: Define the retraining schedule and specify historical data sources for model updates.
- AI Agent: Connect Claude or OpenAI APIs and configure anomaly validation prompts.
- Alerts: Set the dashboard URL and email recipients to receive real-time notifications.

## Prerequisites

MQTT broker credentials; historical training data; OpenAI/Claude API key; dashboard access; email service

## Use Cases

IoT sensor monitoring; server performance tracking; network traffic anomalies; application log analysis; predictive maintenance alerts

## Customization

Adjust sensitivity thresholds; swap ML models; modify notification channels; add Slack/Teams integration; customize validation rules

## Benefits

Reduces detection latency by 95%; eliminates manual monitoring; prevents false alerts; enables rapid incident response; improves system reliability
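As a rough illustration of the normalization and anomaly-check step described above, an n8n Code node could compute a rolling z-score per sensor. This is a sketch under assumed payload shapes (`{ sensorId, value }`) and illustrative thresholds, not the template's actual model:

```javascript
// Rough sketch of the normalize + anomaly-check step, assuming incoming MQTT
// payloads like { sensorId: "temp-01", value: 72.4 } and a rolling history
// kept in workflow static data. Thresholds here are illustrative placeholders.
const data = $getWorkflowStaticData('global');
data.history = data.history || {};

const { sensorId, value } = $json;
const history = (data.history[sensorId] = data.history[sensorId] || []);
history.push(value);
if (history.length > 500) history.shift(); // bound memory use

// Normalize via z-score against the rolling window
const mean = history.reduce((a, b) => a + b, 0) / history.length;
const variance = history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
const std = Math.sqrt(variance) || 1;
const zScore = (value - mean) / std;

return [{
  json: {
    sensorId,
    value,
    zScore,
    // Flag as anomalous beyond 3 standard deviations (tune to your data)
    isAnomaly: history.length > 30 && Math.abs(zScore) > 3,
  },
}];
```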
by Feras Dabour
# Social Media Photo Creation Bot with Approval Loop

Create and share AI photos with Telegram and Gemini, and post to Facebook, Instagram, and X.

## Description

This n8n workflow turns your Telegram messenger into a complete AI photo content pipeline. You send your photo idea as a text or voice message to a Telegram bot, collaborate with an AI to refine the prompt and social media caption, let Gemini generate the image, and then automatically publish it after your approval to Facebook, Instagram, and X (Twitter), including status tracking and Telegram confirmations.

## What You Need to Get Started

This workflow connects several external services. You will need the following credentials:

- **Telegram Bot API Key**: Create a bot via BotFather and copy the bot token. This is used by the Listen for incoming events node and other Telegram nodes.
- **OpenAI API Key**: Required for Speech to Text (OpenAI Whisper) to transcribe voice notes. Also used by the AI Agent model (OpenAI Chat Model) for prompt creation.
- **Google Gemini API Key**: Used by the Generate an image node (model: models/gemini-2.5-flash-image) to create the AI image.
- **Google Drive & Sheets Access**: The generated image is temporarily stored in a Google Drive folder (Upload image1) and later retrieved by Blotato. Prompts and post texts are logged to Google Sheets (Save Prompt & Post-Text) for tracking.
- **Blotato API Key**: The layer for social media publishing. Uploads the image as a media asset (Upload media1) and creates posts for Facebook, Instagram, and X.

## How the Workflow Operates – Step by Step

### 1. Input & Initial Processing (Telegram + Voice Handling)

This phase receives your messages and prepares the input for the AI.

| Node Name | Role in Workflow |
| :--- | :--- |
| Listen for incoming events | Telegram Trigger node that starts the workflow on any incoming message. |
| Voice or Text | Set node that structures the incoming message into a unified text field. |
| A Voice? | IF node that checks if the message is a voice note. |
| Get Voice File | If voice is detected, this downloads the audio file from Telegram. |
| Speech to Text | Uses OpenAI Whisper to convert the voice note into a text transcript. |

The output of this stage is always a clean text string containing your image idea.

### 2. AI Core & Refinement Loop (Prompt + Caption via AI)

Here, the AI drafts the image prompt (for Gemini) and the social media caption (for all platforms) and enters an approval loop with you.

| Node Name | Role in Workflow |
| :--- | :--- |
| AI Agent | Central logic agent. Creates a videoPrompt (used for image generation) and socialMediaText based on your idea, and asks for feedback. |
| OpenAI Chat Model | The LLM backing the agent (e.g., GPT-4.1-mini). |
| Window Buffer Memory | Stores recent turns, allowing the agent to maintain context during revisions. |
| Send questions or proposal to user | Sends the AI's suggestion back to you for review. |
| Approved from user? | IF node that checks if the output is the approved JSON (meaning you replied with "ok" or "approved"). |
| Parse AI Output | Code node that extracts the videoPrompt and socialMediaText fields from the agent's final JSON output (see the sketch below). |
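The Parse AI Output step could look roughly like this in an n8n Code node. This is a minimal sketch, assuming the agent's final answer is a JSON object (possibly wrapped in a markdown code fence) containing videoPrompt and socialMediaText:

```javascript
// Minimal sketch of the "Parse AI Output" Code node. Assumes the agent's
// final reply sits in $json.output and contains a JSON object with the
// videoPrompt and socialMediaText fields, possibly wrapped in a code fence.
const raw = $json.output || '';

// Strip a markdown code fence if present, then isolate the outermost {...}
const cleaned = raw.replace(/```(?:json)?/g, '').trim();
const start = cleaned.indexOf('{');
const end = cleaned.lastIndexOf('}');
if (start === -1 || end === -1) {
  throw new Error('No JSON object found in agent output');
}

const parsed = JSON.parse(cleaned.slice(start, end + 1));

return [{
  json: {
    videoPrompt: parsed.videoPrompt || '',
    socialMediaText: parsed.socialMediaText || '',
  },
}];
```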
### 3. Content Generation & Final Approval

Once the prompt and caption are set, the image is created and sent to you for final approval before publishing.

| Node Name | Role in Workflow |
| :--- | :--- |
| Inform user about processing | Telegram node to confirm: "Okay. Your image is being prepared now..." |
| Save Prompt & Post-Text | Google Sheets node that logs the videoPrompt and socialMediaText. |
| Generate an image | Gemini node that creates the image based on the videoPrompt. |
| Send a photo message | Sends the generated image to Telegram for review. |
| Send message and wait for response | Telegram node that waits for your response to the image (e.g., "Good?" / "Approve"). |
| Upload image1 | Temporarily saves the generated image to Google Drive. |
| Download image from Drive | Downloads the image back from Drive. |
| If1 | IF node that checks if the image was approved in the previous step (approved == true). |

### 4. Upload & Publishing (Blotato)

After final approval, the image is uploaded to Blotato, and post submissions for the social media platforms are created.

| Node Name | Role in Workflow |
| :--- | :--- |
| Upload media1 | Blotato Media node. Uploads the approved image as a media asset and returns a public URL. |
| Create instagram Post | Creates an Instagram post using the media URL and socialMediaText. |
| Create x post | Creates an X (Twitter) post using the media URL and socialMediaText. |
| Create FB post | Creates a Facebook post using the media URL and socialMediaText. |

### 5. Status Monitoring & Retry Loops (X, Facebook, Instagram)

An independent loop runs for each platform, polling Blotato until the post is either published or failed.

| Node Name | Role in Workflow |
| :--- | :--- |
| Wait, Wait1, Wait2 | Initial pauses after post creation. |
| Check Post Status, Get post1, Check Post Status1 | Blotato Get operations to fetch the current status of the post. |
| Published to X?, Published to Facebook?, Published to Instagram? | IF nodes checking for the "published" status. |
| Confirm publishing to X, Confirm publishing to Facebook, Confirm publishing to Instagram | Telegram nodes that notify you of successful publication (often including the post link). |
| In Progress?, In Progress?1, In Progress?2 | IF nodes that check for "in-progress" status and loop back to the Wait nodes (giving Blotato another 5 seconds). |
| Send X Error Message, Send Facebook Error Message, Send Instagram Error Message | Telegram nodes that notify you if a failure occurs. |

## 🛠️ Personalizing Your Content Bot

The workflow is highly adaptable to your personal brand and platform preferences:

- **Tweak the AI Prompt & Behavior**: In the AI Agent node, within the System Message. Change the tone (casual, professional, humorous) and the level of detail required for the prompt generation or the social media captions.
- **Change Gemini Model or Image Options**: In the Generate an image node. Swap the model or adjust image options like aspect ratio or style based on Gemini's API capabilities.
- **Modify Which Platforms You Post To**: In the Blotato nodes (Create instagram Post, Create x post, Create FB post). Disable or delete branches for unused platforms, or add new platforms supported by Blotato.
by Tamas Demeter
This n8n template shows you how to turn outbound sales into a fully automated machine: scrape verified leads, research them with AI, and fire off personalized cold emails while you sleep.

Use cases are simple: scale B2B lead gen without hiring more SDRs, run targeted outreach campaigns that don't feel generic, and give founders or agencies a repeatable system that books more calls with less effort.

## Good to know

- At time of writing, each AI call may incur costs depending on your OpenAI plan.
- This workflow uses Apollo/Apify for lead scraping, which requires an active token.
- The Telegram approval flow is optional but recommended for quality control.

## How it works

1. Define your ICP (role, location, industry) in the workflow.
2. Generate Apollo search URLs and scrape verified contacts (see the sketch at the end of this listing).
3. AI enriches leads with personal and company research.
4. Hormozi-style cold emails are generated and queued for approval.
5. Approve drafts in Telegram, then Gmail automatically sends them out.

## How to use

- Start with the included Schedule Trigger or replace it with a Webhook/Form trigger.
- Adjust ICP settings in the Set node to fit your target audience.
- Test with a small batch of leads before scaling to larger runs.

## Requirements

- Google Sheets, Docs, Drive, and Gmail connected to n8n
- Apollo/Apify account and token
- OpenAI API key
- Telegram bot for approvals

## Customising this workflow

- Swap Apollo scraping with another data source if needed.
- Adapt the AI prompt for a different email tone (formal, friendly, etc.).
- Extend with a CRM integration to sync approved leads and outreach results.
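As a rough illustration of the search-URL step in "How it works", an n8n Code node could assemble an Apollo-style people-search URL from the ICP fields. The query parameter names below are hypothetical placeholders, not confirmed Apollo parameters; copy a real search URL from your Apollo account and mirror its structure:

```javascript
// Hypothetical sketch only: builds an Apollo-style people-search URL from ICP
// fields set earlier in the workflow. The query parameter names are
// placeholders; copy a real search URL from Apollo and adapt them.
const { roles, locations, industry } = $json; // e.g., from the ICP Set node

const params = new URLSearchParams();
for (const role of roles) params.append('personTitles[]', role);
for (const loc of locations) params.append('personLocations[]', loc);
if (industry) params.append('industryKeywords[]', industry);

return [{
  json: {
    searchUrl: `https://app.apollo.io/#/people?${params.toString()}`,
  },
}];
```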
by DIGITAL BIZ TECH
# Weekly Timesheet Report + Pending Submissions Workflow

## Overview

This workflow automates the entire weekly timesheet reporting cycle by integrating Salesforce, OpenAI, Gmail, and n8n. It retrieves employee timesheets for the previous week, identifies which were submitted or not, summarizes all line-item activities using OpenAI, and delivers a consolidated, manager-ready summary that mirrors the final email output. The workflow eliminates manual checking, reduces repeated follow-ups, and ensures leadership receives an accurate, structured, and consistent weekly report.

## Workflow Structure

### Data Source: Salesforce DBT Timesheet App

This workflow requires the Digital Biz Tech – Simple Timesheet managed package to be installed in Salesforce. Install the Timesheet App: https://appexchange.salesforce.com/appxListingDetail?listingId=a077704c-2e99-4653-8bde-d32e1fafd8c6

The workflow retrieves:

- dbt__Timesheet__c — weekly timesheet records
- dbt__Timesheet_Line_Item__c — project and activity entries
- dbt__Employee__c — employee reference and metadata
- Billable, non-billable, and absence hour details
- Attendance information

These combined objects form the complete dataset used for both the submitted and pending sections.

### Trigger

Weekly n8n Schedule Trigger — runs once every week.

### Submitted Path

Retrieve submitted timesheets → Fetch line items → Convert to HTML → OpenAI summary → Merge with employee details.

### Pending Path

Identify "New" timesheets → Fetch employee details → Generate pending submission list.

### Final Output

Merge both paths → Build formatted report → Gmail sends the weekly email to managers.

## Detailed Node-by-Node Explanation

### 1. Schedule Trigger

Runs weekly without manual intervention and targets the previous full week.

### 2. Timesheet – Salesforce GetAll

Fetches all dbt__Timesheet__c records matching: Timesheet for <week-start> to <week-end>

Extracted fields include:

- Employee reference
- Status
- Billable, non-billable, absence hours
- Total hours
- Reporting period

Feeds both processing paths.

## Processing Path A — Submitted Timesheets

### 3. Filter Submitted

Filters timesheets where dbt__Status__c == "Submitted".

### 4. Loop Through Each Submitted Record

Each employee's timesheet is processed individually.

### 5. Retrieve Line Items

Fetches all dbt__Timesheet_Line_Item__c entries:

- Project / Client
- Activity
- Duration
- Work description
- Billable category

### 6. Convert Line Items to HTML (Code Node)

Transforms line items into well-structured HTML tables for clean LLM input.
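A sketch of what this Code node might do, assuming the line items arrive as input items with project, activity, duration, and description fields. The field names are illustrative; map them to the actual dbt__ API field names in your org:

```javascript
// Illustrative sketch of the line-items-to-HTML Code node. Field names are
// placeholders; map them to the actual dbt__ API field names in your org.
const escape = (s) =>
  String(s ?? '').replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');

const rows = $input.all().map((item) => {
  const li = item.json;
  return `<tr>
    <td>${escape(li.project)}</td>
    <td>${escape(li.activity)}</td>
    <td>${escape(li.duration)}</td>
    <td>${escape(li.description)}</td>
  </tr>`;
});

const html = `<table border="1" cellpadding="4">
  <tr><th>Project</th><th>Activity</th><th>Hours</th><th>Description</th></tr>
  ${rows.join('\n')}
</table>`;

return [{ json: { lineItemsHtml: html } }];
```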
### 7. OpenAI — Weekly Activity Summary

OpenAI receives the HTML plus the Employee ID and returns a 4-point activity summary that avoids:

- Hours
- Dates
- Repeated or irrelevant metadata

### 8. Fetch Employee Details

Retrieves employee name, email, and additional fields if needed.

### 9. Merge Employee + Summary

Combines the timesheet data, employee details, and OpenAI summary into a unified object.

### 10. Prepare Submitted Section (Code Node)

Produces the formatted block used in the final email:

Employee: Name
Period: Start → End
Status: Submitted
Total Hours: ...
Timesheet Line Items Breakdown:
- summary point
- summary point
- summary point
- summary point

## Processing Path B — Not Submitted Timesheets

### 11. Identify Not Submitted

Timesheets still in dbt__Status__c == "New" are flagged.

### 12. Retrieve Employee Information

Fetches employee name and email.

### 13. Merge Pending Information

Maps each missing submission to its reporting period.

### 14. Prepare Pending Reporting Block

Creates formatted pending entries:

TIMESHEET NOT SUBMITTED
Employee Name
Email: user@example.com

## Final Assembly & Report Delivery

### 15. Merge Submitted + Pending Sections

Combines all processed data.

### 16. Create Final Email (Code Node)

Builds the subject, HTML body, section headers, and manager recipient group, matching the final email layout.

### 17. Send Email via Gmail

Automatically delivers the weekly summary to managers via Gmail OAuth. No manual involvement required.

## What Managers Receive Each Week

👤 Employee: Name
📅 Period: Start Date → End Date
📌 Status: Submitted
🕒 Total Hours: XX hrs
Billable: XX hrs
Non-Billable: XX hrs
Absence: XX hrs
Weekly Requirement Met: ✔️ / ❌
📂 Timesheet Line Items Breakdown:
• Summary point 1
• Summary point 2
• Summary point 3
• Summary point 4

🟥 TIMESHEET NOT SUBMITTED 🟥
Employee Name
📧 Email: user@example.com

## Data Flow Summary

Salesforce → Filter Submitted / Not Submitted
↳ Submitted → Line Items → HTML → OpenAI Summary → Merge
↳ Not Submitted → Employee Lookup → Merge
→ Code Node formats unified report
→ Gmail sends professional weekly summary

## Technologies & Integrations

| System | Purpose | Authentication |
|------------|----------------------------------|----------------|
| Salesforce | Timesheets, Employees, Timesheet Line Items | Salesforce OAuth |
| OpenAI | Weekly activity summarization | API Key |
| Gmail | Automated email delivery | Gmail OAuth |
| n8n | Workflow automation & scheduling | Native |

## Agent System Prompt Summary

> You are an AI assistant that extracts and summarizes weekly timesheet line items. Produce a clean, structured summary of work done for each employee. Focus only on project activities, tasks, accomplishments, and notable positives or negatives. Follow a strict JSON-only output format with four short points and no extra text or symbols.

## Key Features

- AI-driven extraction: Converts raw line items into clean weekly summaries.
- Strict formatting: Always returns controlled 4-point JSON summaries.
- Error-tolerant: Works even when timesheet entries are incomplete or messy.
- Seamless integration: Works smoothly with Salesforce, n8n, Gmail, and OpenAI.

## Setup Checklist

1. Install the DBT Timesheet App from the Salesforce AppExchange
2. Configure Salesforce OAuth
3. Configure Gmail OAuth
4. Set the OpenAI model for summarization
5. Update the manager recipient list
6. Activate the weekly schedule

## Summary

This unified workflow delivers a complete, automated weekly reporting system that:

- Eliminates manual timesheet checking
- Identifies missing submissions instantly
- Generates high-quality AI summaries
- Improves visibility into employee productivity
- Ensures accurate billable/non-billable tracking
- Automates end-to-end weekly reporting

## Need Help or More Workflows?

We can integrate this into your environment, tune the agent prompt, or extend it for more automation. We can also help you set it up for free, from connecting credentials to deployment.

Contact: anushapriya.subaskar@digitalbiz.tech
Website: https://www.digitalbiz.tech
LinkedIn: https://www.linkedin.com/company/digital-biz-tech/

You can also DM us on LinkedIn for any help.
by Mychel Garzon
Stop guessing if text came from ChatGPT. Let three AI agents argue about it using forensic data.

Paste any text and get a verdict on whether it was written by a human, AI, or a hybrid mix. Instead of trusting one black-box score, this workflow runs your text through statistical analysis and a three-agent debate where each agent challenges the others using hard numbers.

This is not another "detect AI with AI" template. The workflow measures six forensic markers first, then makes three separate agents argue about what those numbers mean. You see the raw data, the debate, and the final verdict with confidence scores.

## How it works

The workflow runs in five stages:

1. **Extract forensic metrics:** A code node measures burstiness (sentence length variation), type-token ratio (vocabulary diversity), hapax rate (words appearing once), repetition score (repeated phrases), transition density (filler words like "furthermore"), and AI fingerprints (100+ known LLM phrases stored in a data table). Short texts under 150 words get recalibrated because metrics are less reliable.
2. **Agent 1 - The Scanner:** Reads the text cold with zero metrics. Gives a gut impression (human/AI/hybrid) based purely on instinct. Acts like an editor who has read thousands of manuscripts.
3. **Agent 2 - Forensic Analyst:** Gets the text, all metrics, and Agent 1's verdict. Writes a data-driven report that must cite specific numbers. Either agrees or disagrees with Agent 1 and explains why using the forensic evidence.
4. **Agent 3 - Devil's Advocate:** Gets everything above and argues the opposite of whatever Agent 2 concluded. If Agent 2 said AI, Agent 3 must argue human. Finds holes in the logic and metrics that got ignored.
5. **Weighted verdict:** A code node scores all three agents (35% Analyst, 15% Scanner, 15% Devil's Advocate, 35% raw metrics) and classifies the text as human (score under 0.35), AI (score over 0.60), or AI-augmented (in between). Confidence is calculated separately, so you get verdicts like "AI with 67% confidence."

A sketch of stages 1 and 5 follows this list.
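As a rough, simplified illustration (not the template's actual code), the burstiness metric and the weighted-verdict scoring could be computed like this in an n8n Code node. The agent-score field names are placeholders, assumed to be normalized to a 0-1 "AI-likelihood" scale:

```javascript
// Simplified sketch of two of the stages above; the template's actual code
// computes more metrics and handles recalibration for short texts.
const text = $json.text || '';

// Stage 1 (excerpt): burstiness = variation in sentence length.
const sentences = text.split(/[.!?]+/).map(s => s.trim()).filter(Boolean);
const lengths = sentences.map(s => s.split(/\s+/).length);
const mean = lengths.reduce((a, b) => a + b, 0) / (lengths.length || 1);
const variance = lengths.reduce((a, b) => a + (b - mean) ** 2, 0) / (lengths.length || 1);
const burstiness = mean ? Math.sqrt(variance) / mean : 0; // low values flag AI

// Stage 5: weighted verdict. Agent scores are assumed normalized to 0..1,
// where 1 means "definitely AI" (field names are placeholders).
const { scannerScore, analystScore, devilScore, metricScore } = $json;
const weighted =
  0.35 * analystScore +
  0.15 * scannerScore +
  0.15 * devilScore +
  0.35 * metricScore;

const verdict =
  weighted < 0.35 ? 'human' : weighted > 0.6 ? 'AI' : 'AI-augmented';

return [{ json: { burstiness, weighted, verdict } }];
```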
## Chat output format

The chat response shows:

- **Verdict badge:** 🙎🏻 Human-Written, 🤖 AI-Generated, or 🦾 AI-Augmented
- **Confidence bar:** Visual bar (██████████ 85%) showing how certain the verdict is
- **Metrics table:** All six forensic markers with 🟥 AI or 🟩 Human flags
- **Agent debate:** Three verdicts with reasoning: Agent 1's gut check, Agent 2's forensic report, Agent 3's counter-argument. Each shows classification and confidence percentage.

Example output for AI text:

🤖 Verdict: AI-Generated
Confidence: ████████░░ 87%

📊 Stylometric Metrics:
Burstiness: 0.18 🟥 AI
Vocabulary Diversity: 0.36 🟥 AI
Hapax Rate: 0.32 🟥 AI
Repetition: 0.21 🟥 AI
Transition Density: 0.024 🟥 AI

🔎 Agent 1 (Gut Check): AI (90%) "Monotonous rhythm, corporate vocabulary, zero personality"
🔬 Agent 2 (Data): AI (95%) "Five of six metrics flag AI. Burstiness of 0.18 well below human threshold..."
😈 Agent 3 (Critic): AI-AUGMENTED (65%) "Could be human technical writing. Transition density alone not conclusive..."

## Self-updating fingerprint database

A separate workflow branch runs weekly to keep the AI phrase list current:

1. Check existing words: Reads all fingerprint phrases from the data table
2. Find new AI tells: Asks an LLM what phrases modern models currently overuse
3. Filter duplicates: Removes words already in the database
4. Add to table: Stores new phrases for future detection

Requires: A data table (Google Sheets, Airtable, or n8n Data Table) to store fingerprint words. The workflow includes a starter list of 100+ phrases like "delve into," "it's worth noting," and "as of my last update."

LLM writing patterns shift fast. What worked for GPT-3 detection does not work for GPT-4. This keeps the detector current without manual updates.

## Key benefits

- **Three classifications instead of binary.** Human, AI, or AI-augmented. Most real content is hybrid.
- **You see the reasoning.** The full agent debate is included. When verdicts are borderline, you can read which argument won.
- **Transparent metrics.** Raw numbers are exposed with red/green flags. No hidden scoring.
- **Self-updating detection.** A weekly workflow finds new AI phrase patterns as models evolve.
- **Error resilient.** If one agent fails, the workflow continues and redistributes weights.

## Who this is for

- Content teams verifying contractor submissions are not AI-generated
- Educators checking student essays for AI assistance
- Publishers screening submissions to maintain editorial standards
- SEO teams ensuring content meets Google's helpful content guidelines
- Researchers analyzing hybrid human-AI writing patterns

## Setup

1. Add API credentials for at least one LLM provider (Groq, OpenAI, Gemini, or Anthropic)
2. Create a data table for AI fingerprint phrases or use n8n's built-in Data Table node
3. Populate the table with the starter list (included in the workflow documentation)
4. Activate the workflow and open the chat interface
5. Paste text and wait 30-60 seconds for the forensic analysis

## Required APIs & credentials

- At least one LLM provider: OpenAI, Anthropic, Google Gemini, Groq, or any other provider with JSON output support. Each agent can use a different provider, or all can use the same one.
- Data storage for fingerprint phrases: n8n Data Table (built-in), Google Sheets, or Airtable. The workflow checks this table to identify known AI phrases during analysis.

## How to customise it

- **Swap models:** Each agent node has a chat model sub-node. Replace it with any provider. The Scanner works with smaller models. The Analyst needs strong reasoning. The Devil's Advocate needs good instruction-following.
- **Tune thresholds:** Open the Extract Stylometric Metrics code. Burstiness under 0.3 flags AI. Type-token ratio under 0.4 flags AI. Adjust for stricter or looser detection.
- **Change agent weights:** Open the Final Verdict code. The default is 35% Analyst, 15% Scanner, 15% Devil's Advocate, 35% metrics. Increase the metric weight to trust the data more.
- **Modify agent personas:** Edit the system prompts. Make the Scanner more skeptical, the Analyst cite sources, and the Devil's Advocate more aggressive.
- **Add a quality gate:** Drop a Filter node after the verdict. Only proceed if confidence exceeds 70%.
- **Batch process:** Replace the Chat Trigger with a Schedule Trigger looping over a file list.

## Known limitations

The workflow works best on long-form content (500+ words). Short texts under 100 words produce less reliable metrics because statistical patterns need more data to emerge. The recalibration helps but is not perfect.

AI fingerprint phrases evolve as models improve. GPT-5 might not use "delve into" but will have new tells. The self-updating workflow helps but lags current releases by a few weeks.

The three-agent debate architecture assumes disagreement is meaningful. For extremely niche topics where only one agent has relevant training data, the minority opinion might be correct but gets outvoted. Review the individual agent reasoning when dealing with specialized content.
by Cheng Siong Chin
## How It Works

This workflow automates procurement fraud detection and supplier compliance monitoring for organizations managing complex purchasing operations. Designed for procurement teams, audit departments, and compliance officers, it solves the challenge of identifying fraudulent transactions, contract violations, and supplier misconduct across thousands of purchase orders and vendor relationships.

The system schedules continuous monitoring and generates sample transaction data, then analyzes patterns through dual AI agents: a Price Reasonableness agent validates pricing against market rates, while a Delivery agent assesses fulfillment performance. An Orchestration Agent runs a comprehensive risk evaluation and routes findings by severity (critical/high/medium/low), triggering multi-channel responses: critical issues activate immediate Slack/email alerts with detailed logging, high-priority cases receive escalation workflows, and medium/low findings generate routine compliance reports (see the sketch at the end of this listing).

By combining AI-powered anomaly detection with intelligent routing and coordinated notifications, organizations reduce fraud losses by 75%, ensure vendor compliance, maintain audit trails, and enable procurement teams to focus on strategic sourcing rather than manual transaction reviews.

## Setup Steps

1. Connect the Schedule Trigger and set the monitoring frequency
2. Configure procurement systems with API credentials
3. Add AI model API keys to the Price Reasonableness, Delivery, and Orchestration Agent nodes
4. Define fraud indicators and compliance thresholds in the agent prompts based on company policies
5. Link Slack webhooks for critical and high-priority fraud alerts to procurement and audit teams
6. Connect email credentials for stakeholder notifications and escalation workflows

## Prerequisites

Procurement system API access, AI service accounts, market pricing databases for benchmarking

## Use Cases

Invoice fraud detection, bid rigging identification, duplicate payment prevention

## Customization

Modify agent prompts for industry-specific fraud patterns; adjust risk scoring algorithms

## Benefits

Reduces fraud losses by 75%; automates compliance monitoring across unlimited transactions
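To make the routing step concrete, here is a minimal, hypothetical sketch of how an n8n Code node could map the Orchestration Agent's risk score to a severity level and output channel. The thresholds and field names are illustrative, not the template's actual values:

```javascript
// Hypothetical sketch of severity routing. Assumes the Orchestration Agent
// emits a riskScore between 0 and 1; thresholds and channel names are
// illustrative placeholders to adapt to your own policies.
const { transactionId, riskScore } = $json;

let severity, channel;
if (riskScore >= 0.9) {
  severity = 'critical';
  channel = 'slack+email'; // immediate alert with detailed logging
} else if (riskScore >= 0.7) {
  severity = 'high';
  channel = 'escalation'; // escalation workflow
} else if (riskScore >= 0.4) {
  severity = 'medium';
  channel = 'report'; // routine compliance report
} else {
  severity = 'low';
  channel = 'report';
}

return [{ json: { transactionId, riskScore, severity, channel } }];
```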
by Paul Karrmann
This n8n template helps you turn inbound messages into a clean, deduped queue of actionable tickets. It includes Slack and Gmail as ready-to-use examples, but the key idea is the universal intake normalizer: you can plug in other sources later (forms, webhooks, chat tools, other inboxes) as long as you map them into the same normalized schema.

## Good to know

- This workflow sends message content to an LLM for classification. Keep sensitive data out of the prompt, and only process messages you are allowed to process.
- Costs depend on message volume and length, so truncation and tight filters matter.

## How it works

1. Collect inbound items (Slack and Gmail are included as examples).
2. Normalize each item into one shared JSON format so every source behaves the same (see the sketch at the end of this listing).
3. Deduplicate items using a data table so repeats are skipped.
4. Use an AI agent with structured output to score urgency and importance, produce a summary, and draft a reply.
5. Create a Notion ticket for tracking, and optionally notify Slack for high-priority items.

## Setup steps

1. Connect credentials for Slack, Gmail, Notion, and your LLM provider.
2. Choose your Slack channel and set a Gmail filter that keeps volume manageable.
3. Select your Notion database and ensure its properties match the field mappings.
4. Create or select a data table and map the unique ID column for deduplication.
5. Adjust the notification threshold and schedule interval to match your workflow.

## Requirements

- Slack workspace access (optional if you swap the source)
- Gmail access (optional if you swap the source)
- Notion database for ticket creation
- LLM API credentials

## Customising this workflow

- Add new sources by mapping them into the normalizer schema.
- Truncate long messages before the AI step to reduce cost.
- Change categories, scoring, and thresholds to match your operating model.
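The exact schema lives in the template, but a normalizer for a new source might look like this sketch in an n8n Code node. The field names here are illustrative assumptions, not the template's actual schema; match them to the fields used by the existing Slack and Gmail normalizers:

```javascript
// Illustrative sketch of mapping a new source (here: a generic webhook
// payload) into a shared normalized schema. Field names are assumptions;
// match them to the schema used by the template's existing normalizers.
const src = $json;

return [{
  json: {
    source: 'webhook',                      // which intake produced this item
    uniqueId: `webhook-${src.id}`,          // used by the dedup data table
    sender: src.email || src.user || 'unknown',
    subject: src.subject || '',
    // Truncate long bodies before the AI step to keep token costs down
    body: String(src.message || '').slice(0, 4000),
    receivedAt: src.timestamp || new Date().toISOString(),
  },
}];
```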
by Incrementors
Every Monday morning, this workflow pulls your top 10 keywords from Google Search Console, passes the data to GPT-4o-mini, and emails a polished 150-200 word SEO digest to your client, automatically. You configure it once with four values, and it runs on its own every week.

Built for SEO agencies and freelancers who want to deliver consistent client reporting without spending an hour writing it manually.

## What This Workflow Does

- **Scheduled weekly trigger** — Fires automatically every Monday at 8AM so you never miss a reporting week, with zero manual effort.
- **Live GSC data pull** — Fetches the top 10 keywords by clicks from Google Search Console for the past 7 days, directly from the API.
- **Clean keyword formatting** — Converts raw API data into a readable list showing keyword, clicks, impressions, CTR, and average position for every query.
- **AI-written email body** — GPT-4o-mini reads the keyword data and writes a professional, conversational 150-200 word digest — no templates, no copy-paste.
- **One-config setup** — All client details (site URL, client name, recipient email, agency name) live in a single node. Change it once to deploy for any client.
- **Automated Gmail delivery** — The final report is sent from your connected Gmail account with a dynamic subject line including the date range and client name.

## Setup Requirements

Tools and accounts needed:

- n8n instance (self-hosted or cloud)
- Google account with Google Search Console access (OAuth2 credential)
- Gmail account for sending reports (OAuth2 credential — can be the same Google account)
- OpenAI account with GPT-4o-mini API access

Estimated setup time: 10-15 minutes

## Step-by-Step Setup

1. **Import the workflow** — Open n8n → Workflows → Import from JSON. Paste the workflow JSON and import. Confirm all 9 nodes are connected in a straight line.
2. **Connect your Google Search Console credential** — In n8n, go to Credentials → New → Google OAuth2 API. Complete the OAuth flow with the Google account that has access to your GSC property. Once connected, open the 3. HTTP — Fetch GSC Top Keywords node and select this credential under the OAuth2 field.
   > ⚠️ Your GSC property URL must match exactly. If your property is https://www.example.com/, the URL in the config must be identical — including the trailing slash.
3. **Connect your Gmail credential** — Go to Credentials → New → Gmail OAuth2. Complete the OAuth flow with the Gmail account you want to send reports from. Open the 9. Gmail — Send Weekly Report node and select this credential.
4. **Add your OpenAI API key** — Go to Credentials → New → OpenAI API. Paste your API key from platform.openai.com. Open the 7. OpenAI — GPT-4o-mini Model node and select this credential.
5. **Edit your config values** — Open the 2. Set — Config Values node. This is the only node you need to change. Replace all four values:

| Field | What to enter |
|---|---|
| siteUrl | Your exact GSC property URL (e.g. https://www.example.com/) |
| clientName | Client business name (appears in the email greeting) |
| recipientEmail | The email address to receive the weekly report |
| agencyName | Your agency name (appears in the email footer) |

6. **Activate the workflow** — Toggle the workflow to Active. It will now run automatically every Monday at 8AM.

## How It Works (Step by Step)

**Step 1 — Schedule Trigger (Every Monday 8AM)**
The workflow fires automatically using a cron schedule set to every Monday at 8AM. No manual action is needed once the workflow is active.

**Step 2 — Set Config Values**
Four variables are stored here: the site URL, client name, recipient email, and agency name. These are referenced by every subsequent step so you only update them in one place.

**Step 3 — HTTP Request (Google Search Console API)**
An authenticated POST request goes to the Google Search Console Search Analytics API. It asks for the top 10 keywords by clicks for the past 7 days, automatically calculating the start and end dates based on today's date.
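For reference, the request body the HTTP node sends looks roughly like this. The endpoint and fields follow Google's Search Analytics query API; the date values are computed at runtime:

```javascript
// Sketch of what the HTTP Request node sends to the Search Analytics API:
// POST https://www.googleapis.com/webmasters/v3/sites/{siteUrl}/searchAnalytics/query
// Dates are computed from "today" so the window always covers the past 7 days.
const end = new Date();
const start = new Date(end.getTime() - 7 * 24 * 60 * 60 * 1000);
const fmt = (d) => d.toISOString().slice(0, 10); // YYYY-MM-DD

const requestBody = {
  startDate: fmt(start),
  endDate: fmt(end),
  dimensions: ['query'], // add "page" or "country" for more detail
  rowLimit: 10,          // top 10 keywords by clicks
};

return [{ json: { requestBody } }];
```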
**Step 4 — Set (Extract Fields)**
The raw API response is captured alongside the config values. The keyword data is stored as a JSON string, and the date range (week start and week end) is formatted for display in the email.

**Step 5 — Code (Format Data for GPT)**
A short JavaScript block parses the keyword rows and builds a clean, numbered text list. Each keyword line includes the query, clicks, impressions, CTR percentage, and average position. If no data is found, a fallback message tells you to check your GSC URL and credentials.
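A minimal sketch of what that JavaScript block does (the exact template code may differ; the row fields follow the Search Analytics response format):

```javascript
// Sketch of the Step 5 Code node: turn Search Analytics rows into a
// numbered, human-readable list for the AI prompt.
const rows = $json.rows || [];

if (rows.length === 0) {
  return [{ json: { keywordList: 'No data found - check your GSC URL and credentials.' } }];
}

const lines = rows.map((r, i) => {
  const ctr = (r.ctr * 100).toFixed(1);
  const pos = r.position.toFixed(1);
  return `${i + 1}. "${r.keys[0]}" - ${r.clicks} clicks, ${r.impressions} impressions, ${ctr}% CTR, avg position ${pos}`;
});

return [{ json: { keywordList: lines.join('\n') } }];
```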
**Step 6 — AI Agent (Write SEO Report Email)**
GPT-4o-mini receives the keyword list and a detailed prompt. It writes the full email body: a warm greeting, highlights of the top 3 keywords by name with stats, one positive observation about the week, and one actionable SEO tip for the following week. The output is plain text only, with no markdown or symbols.

**Step 7 — OpenAI Model (GPT-4o-mini)**
This is the language model powering the AI Agent in Step 6. It is set to GPT-4o-mini with a 500-token limit and a temperature of 0.6 for consistent, professional writing.

**Step 8 — Set (Prepare Final Email)**
The AI-written email body, subject line, recipient address, and agency name are assembled into one item. The subject line is dynamically built using the date range and client name.

**Step 9 — Gmail (Send Weekly Report)**
Gmail sends the final email to the recipient address. The body is the AI-written digest, followed by a footer identifying the agency, data source, and automation stack.

## Key Features

- ✅ **Zero-maintenance scheduling** — Runs every Monday at 8AM without any manual trigger or login required.
- ✅ **Dynamic date ranges** — Start and end dates are calculated automatically each week. No hardcoded dates to update.
- ✅ **Single config node** — All four client-specific values live in one place. Duplicating this workflow for a new client takes under 2 minutes.
- ✅ **Fallback message on empty data** — If the GSC API returns no rows, the workflow still runs and sends an alert message instead of failing silently.
- ✅ **AI-written in plain text** — GPT-4o-mini is explicitly instructed to avoid markdown, asterisks, or symbols, producing clean copy-paste-ready email content.
- ✅ **Professional subject line** — The email subject auto-includes the exact date range and client name, making reports easy to find in any inbox.
- ✅ **Footer attribution** — Every email ends with an auto-generated footer crediting your agency and the data source, reinforcing your brand on every send.

## Customisation Options

- **Increase the keyword count** — In the 3. HTTP — Fetch GSC Top Keywords node, change "rowLimit": 10 to any number up to 25,000 to include more keywords in the AI's analysis.
- **Change the send schedule** — In the 1. Schedule — Every Monday 8AM node, edit the cron expression 0 8 * * 1 to any schedule you need. For example, 0 8 * * 5 sends on Fridays, and 0 9 1 * * sends on the 1st of every month.
- **Add a dimension for pages or countries** — In the 3. HTTP — Fetch GSC Top Keywords node, add "page" or "country" to the "dimensions" array alongside "query" to include page-level or geographic data in the report.
- **Send a CC copy to yourself** — In the 9. Gmail — Send Weekly Report node, expand the options and add your own email address to the CC field to keep a copy of every client send.
- **Adjust the email tone** — In the 6. AI Agent — Write SEO Report Email node, edit the writing instructions in the prompt to match your agency's voice: more formal, more casual, longer, or shorter.
- **Deploy for multiple clients** — Duplicate the entire workflow in n8n and update the 2. Set — Config Values node for each client. Each copy runs independently on the same schedule.

## Troubleshooting

**GSC API returns a 403 or permission error:**
- Confirm your Google OAuth2 credential has access to the correct Search Console property
- Check that the siteUrl value in 2. Set — Config Values exactly matches the GSC property URL, including the protocol (https://) and trailing slash
- Re-authenticate the Google credential if it has expired

**No keyword data in the email (fallback message appears):**
- Verify the site had traffic in the past 7 days in your GSC dashboard
- Check that the siteUrl is the domain-level property and not a URL-prefix property with a different format
- Run the workflow manually and inspect the output of 3. HTTP — Fetch GSC Top Keywords to see the raw API response

**Gmail node fails to send:**
- Confirm your Gmail OAuth2 credential is properly connected and not expired
- Check that recipientEmail in 2. Set — Config Values is a valid email address
- Check your Gmail sending limits if you are running this for many clients from one account

**AI Agent produces an empty or broken email body:**
- Open the 7. OpenAI — GPT-4o-mini Model node and confirm the OpenAI credential is valid and has available API credits
- Check the n8n execution log for the AI Agent node to see if an OpenAI error message was returned

**Workflow not triggering on schedule:**
- Confirm the workflow is toggled to Active — saved workflows do not run unless activated
- Check your n8n instance timezone settings and compare to the cron expression — 8AM runs based on your server timezone

## Support

Need help setting this up, or want a custom version built for your team or agency?

📧 Email: info@incrementors.com
🌐 Website: https://www.incrementors.com/contact-us/
by Automate With Marc
# Image to Video Social Media Reel Generator + Autopost Without AI Slop

Google Drive → AI Video Generation → Captions → Approval → Instagram & TikTok

Watch the step-by-step video: https://www.youtube.com/watch?v=jPOYxQF25ws

Turn a folder of images into fully-produced short-form social media reels, automatically. This workflow picks a random image, generates a cinematic AI video from it, adds text overlays and captions, waits for your approval, and then posts to Instagram and TikTok.

## What this template does

On a scheduled basis (default: daily at 9:00 AM), this workflow:

1. Selects a random image from a Google Drive folder
2. Uploads the image for processing
3. Generates a cinematic image-to-video prompt using AI
4. Creates an 8-second vertical video using an image-to-video model (via Wavespeed)
5. Applies captions and text overlays using Submagic
6. Waits for human approval via email
7. Automatically posts the approved reel to Instagram and TikTok

If the video is not approved, the workflow loops and tries again on the next run.

## Why this workflow is useful

- Converts static images into high-engagement video content
- Removes repetitive manual work in short-form content creation
- Keeps a human in the loop before anything is published

Perfect for:

- Creators & solopreneurs
- Social media managers
- Small businesses & local brands
- AI-first content pipelines

## High-level flow

Schedule → Pick Image → Generate Video → Add Captions → Approve → Post

## Node overview

- **Schedule Trigger**: Runs the workflow automatically at a fixed time (default: daily at 9 AM).
- **Google Drive – Search Files**: Fetches all images from a selected Drive folder.
- **Randomizer (Code Node)**: Selects one random image to avoid repetitive posting (see the sketch below).
- **Upload Media**: Uploads the selected image so it can be used by downstream tools.
- **Prompt Generator (AI)**: Generates a high-quality cinematic prompt optimized for image-to-video models.
- **Wavespeed – Image to Video**: Creates an 8-second, 9:16 video from the image + prompt.
- **Wait & Polling (IF Nodes)**: Waits and checks until video generation is completed.
- **Submagic – Text Overlay & Captioning**: Adds captions and overlays in a short-form style optimized for social platforms.
- **Gmail – Send for Approval**: Sends a preview link and caption to your inbox and waits for approval.
- **IF (Approved?)**: Yes: posts the reel automatically. No: skips posting and retries in the next run.
- **Blotato – Social Posting**: Publishes the approved reel to Instagram and TikTok.
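The randomizer is a one-liner at heart; here is a minimal sketch of that Code node, assuming the Drive search results arrive as the node's input items:

```javascript
// Minimal sketch of the Randomizer Code node: pick one random file from the
// Google Drive search results so the same image isn't posted repeatedly.
const items = $input.all();

if (items.length === 0) {
  throw new Error('No images found - check the Drive folder ID and permissions');
}

const pick = items[Math.floor(Math.random() * items.length)];
return [pick];
```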
## Requirements

Before running this template, you'll need to configure:

- Google Drive OAuth (image source folder)
- OpenAI API key (prompt generation)
- Wavespeed API key (image-to-video generation)
- Submagic API key (captions & overlays)
- Gmail OAuth (approval workflow)
- Blotato account (Instagram & TikTok posting)

All credentials must be added manually after importing.

## Setup instructions

1. Import the template into your n8n workspace
2. Connect your Google Drive account and set your image folder
3. Add credentials for OpenAI, Wavespeed, Submagic, Gmail, and Blotato
4. Adjust the Schedule Trigger if needed
5. Run the workflow once to test the full flow
6. Enable the workflow to start daily automated posting

## Customization ideas

- Change the video duration, aspect ratio, or style
- Modify the AI prompt to match your brand voice
- Post only after manual approval (already built in)
- Add a Slack or Telegram approval step
- Duplicate the posting logic for YouTube Shorts or Facebook Reels
- Store generated videos in cloud storage or a content database

## Troubleshooting

- No images found: check the Drive folder ID and permissions
- Video stuck generating: increase the wait time or polling interval
- Approval email not received: verify Gmail OAuth and inbox filters
- Posting fails: confirm the Blotato account and platform permissions
by Neloy Barman
**Self-Hosted**

This workflow provides a complete end-to-end system for capturing, analyzing, and routing customer feedback. By combining local multimodal AI processing with structured data storage, it allows teams to respond to customer needs in real time without compromising data privacy.

## Who is this for?

This is designed for Customer Success Managers, Product Teams, and Community Leads who need to automate the triage of high-volume feedback. It is particularly useful for organizations that handle sensitive customer data and prefer local AI processing over cloud-based API calls.

## 🛠️ Tech Stack

- **Tally.so**: For front-end feedback collection.
- **LM Studio**: To host the local AI models (Qwen3-VL).
- **PostgreSQL**: For persistent data storage and reporting.
- **Discord**: For real-time team notifications.

## ✨ How it works

1. **Form Submission**: The workflow triggers when a new submission is received from Tally.so.
2. **Multimodal Analysis**: The OpenAI node (pointing to LM Studio) processes the input using the Qwen3-VL model across three specific layers:
   - **Sentiment Analysis**: Evaluates the text to determine if the customer is Positive, Negative, or Neutral.
   - **Zero-Shot Classification**: Categorizes the feedback into pre-defined labels based on instructions in the prompt.
   - **Vision Processing**: Analyzes any attached images to extract descriptive keywords or identify UI elements mentioned in the feedback.
3. **Data Storage**: The PostgreSQL node logs the user's details, the original message, and all AI-generated insights.
4. **AI-Driven Routing**: The same Qwen3-VL model makes the routing decision by evaluating the classification results and determining the appropriate path for the data to follow.
5. **Discord Notification**: The Discord node sends a formatted message to the corresponding channel, ensuring the support team sees urgent issues while the marketing team sees positive testimonials.

## 📋 Requirements

- **LM Studio** running a local server on port 1234.
- **Qwen3-VL-4B** (GGUF) model loaded in LM Studio.
- **PostgreSQL** instance with a table configured for feedback data.
- **Discord Bot Token** and specific Channel IDs.

## 🚀 How to set up

1. **Prepare your local AI**: Open LM Studio and download the Qwen3-VL-4B model. Start the Local Server on port 1234 and ensure CORS is enabled. Disable the Require Authentication setting in the Local Server tab.
2. **Configure PostgreSQL**: Ensure your database is running. Create a table named customer_feedback with columns for name, email_address, feedback_message, image_url, sentiment, category, and img_keywords.
3. **Import the workflow**: Import the JSON file into your n8n instance.
4. **Link services**: Update the Webhook node with your Tally.so URL. In the Discord nodes, paste the relevant Channel IDs for your #support, #feedback, and #general channels.
5. **Test and activate**: Toggle the workflow to Active. Send a test submission through your Tally form and verify the data appears in PostgreSQL and Discord.
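Before wiring the n8n credential, you can sanity-check that the LM Studio server is reachable. Here is a quick sketch using LM Studio's OpenAI-compatible endpoints; the host IP is a placeholder for your machine, and `/v1/models` is part of the standard OpenAI-compatible surface:

```javascript
// Quick reachability check for the LM Studio server from an n8n Code node.
// Adjust the host/IP to your setup; LM Studio exposes OpenAI-compatible
// endpoints such as GET /v1/models when the local server is running.
const baseUrl = 'http://192.168.1.10:1234/v1'; // placeholder host

const response = await fetch(`${baseUrl}/models`);
if (!response.ok) {
  throw new Error(`LM Studio server not reachable: HTTP ${response.status}`);
}

const { data } = await response.json();
// Expect the loaded model (e.g., a qwen3-vl variant) to appear in this list
return [{ json: { availableModels: data.map((m) => m.id) } }];
```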
## 🔑 Credential Setup

To run this workflow, you must configure the following credentials in n8n:

- **OpenAI API (Local)**: Create a new OpenAI API credential. API Key: enter any placeholder text (e.g., lm-studio). Base URL: set this to your machine's local IP address (e.g., http://192.168.1.10:1234/v1) so n8n can connect to the local AI server, especially if running within a Docker container.
- **PostgreSQL**: Create a new PostgreSQL credential. Enter your database Host, Database Name, User, and Password. If using the provided Docker setup, the host is usually db.
- **Discord Bot**: Create a new Discord Bot API credential. Paste your Bot Token obtained from the Discord Developer Portal.
- **Tally**: Create a new Tally API credential. Enter your API Key, which you can find in your Tally.so account settings.

## ⚙️ How to customize

- **Refine AI Logic**: Update the System Message in the AI node to change classification categories or sentiment sensitivity.
- **Switch to Cloud AI**: If you prefer not to use a local model, you can swap the local LM Studio connection for any third-party API, such as OpenAI (GPT-4o), Anthropic (Claude), or Google Gemini, by updating the node credentials and Base URL.
- **Expand Destinations**: Add more Discord nodes or integrate Slack to notify different departments based on the AI's routing decision.
- **Custom Triggers**: Replace the Tally webhook with a Typeform, Google Forms, or custom Webhook trigger if your collection stack differs.
by Avkash Kakdiya
## How it works

This workflow automatically monitors competitor product prices stored in Google Sheets. It scrapes product pages, extracts pricing and offer data using AI, and compares it with historical values. Based on changes, it updates records and generates a market intelligence report. The workflow then emails the report and resets data for the next execution cycle.

## Step-by-step

### Step 1: Database sync

- **Schedule Trigger** – Runs the workflow at a scheduled time.
- **Get row(s) in sheet** – Fetches competitor data and product URLs.

### Step 2: Scraping

- **Loop Over Items** – Processes each competitor entry.
- **HTTP Request3** – Retrieves raw HTML using ScraperAPI.
- **Clean Content** – Cleans and prepares text for AI processing.

### Step 3: Price extraction

- **AI Agent1** – Extracts product name, price, and offers.
- **Groq Chat Model1** – Provides AI extraction capability.
- **current Price and offer** – Converts AI output into structured data.
- **If2** – Checks if it's the first recorded entry.
- **First time price and offer added** – Stores initial values.
- **If1** – Compares current vs previous price and offers (see the sketch at the end of this listing).
- **Updated current price and offer in sheet** – Updates if changes are detected.
- **If No changes then update** – Updates the sheet even when no change is found.

### Step 4: Analysis

- **Get row(s) in sheet1** – Retrieves the updated dataset.
- **Data Aggregator** – Builds structured market comparison data.
- **AI Agent** – Generates strategic insights and recommendations.
- **Groq Chat Model** – Powers the analysis output.
- **Update row in sheet** – Saves the AI-generated summary in the sheet.

### Step 5: Reporting

- **Edit Fields1** – Formats the report into an HTML email layout.
- **Send a message** – Sends the final report via Gmail.

### Step 6: Reset

- **Get row(s) in sheet2** – Retrieves the final processed data.
- **Update row in sheet1** – Moves current data to history and clears fields.

## Why use this?

- Ensures all price scenarios (change or no change) are handled properly
- Keeps your Google Sheets always updated with accurate data
- Provides AI-powered competitive intelligence automatically
- Sends clean, formatted reports without manual effort
- Maintains structured historical tracking for better decision-making
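To illustrate the comparison logic in Step 3, here is a hypothetical sketch of how a Code node feeding the If1 branch could detect price and offer changes. The field names are placeholders for the sheet columns used in the template:

```javascript
// Hypothetical sketch of the price/offer comparison behind the If1 branch.
// Field names are placeholders; map them to your sheet's column names.
const { currentPrice, currentOffer, previousPrice, previousOffer } = $json;

const priceChanged = Number(currentPrice) !== Number(previousPrice);
const offerChanged = (currentOffer || '').trim() !== (previousOffer || '').trim();

return [{
  json: {
    ...$json,
    priceChanged,
    offerChanged,
    // Downstream nodes can branch on this to update or simply refresh the row
    hasChanges: priceChanged || offerChanged,
  },
}];
```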