by DIGITAL BIZ TECH
## Travel Reimbursement - OCR & Expense Extraction Workflow

### Overview
This is a lightweight n8n workflow that accepts chat input and uploaded receipts, runs OCR, stores parsed results in Supabase, and uses an AI agent to extract structured travel expense data and compute totals. Designed for zero-retention operation and fast integration.

### Workflow Structure
- **Frontend:** Chat UI trigger that accepts text and file uploads.
- **Preprocessing:** Binary normalization + per-file OCR request.
- **Storage:** Store OCR-parsed blocks in the Supabase `temp_table`.
- **Core AI:** Travel reimbursement agent that extracts fields, infers missing values, and calculates totals using the Calculator tool.
- **Output:** Agent responds in the chat with a concise expense summary and breakdowns.

### Chat Trigger (Frontend)
- **Trigger node:** When chat message received
- `public: true`, `allowFileUploads: true`; `sessionId` is used to tie uploads to the chat session.
- Custom CSS + initial messages configured for user experience.

### Binary Presence Check
- **Node:** CHECK IF BINARY FILE IS PRESENT OR NOT (IF)
- Checks whether the incoming payload contains files.
- If files are present → route to Split Out → NORMALIZE binary file → OCR (ANY OCR API) → STORE OCR OUTPUT → Merge.
- If no files → route directly to Merge → Travel reimbursement agent.

### Binary Normalization
- **Nodes:** Split Out and NORMALIZE binary file (Code)
- Split Out extracts binary entries into a `data` field.
- NORMALIZE binary file picks the first binary key and rewrites the payload to `binary.data` for a consistent downstream shape.

### OCR
- **Node:** OCR (ANY OCR API) (HTTP Request)
- Sends multipart/form-data to the OCR endpoint; expects JSONL or JSON with blocks.
- Body includes `mode=single`, `output_type=jsonl`, `include_images=false`.

### Store OCR Output
- **Node:** STORE OCR OUTPUT (Supabase)
- Upserts into `temp_table` with `session_id`, parsed blocks, and `file_name`.
- Used by the agent to fetch previously uploaded receipts for the same session.
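As an illustration, the normalization step described above can be sketched as a plain function. This is not the template's actual Code-node source; the field names and shapes are assumptions based on standard n8n item conventions.

```javascript
// Hypothetical sketch of the "NORMALIZE binary file" Code node: n8n items carry
// attachments under item.binary keyed by arbitrary names (file_0, attachment_1, ...).
// Picking the first binary key and rewriting it to binary.data lets every
// downstream node (e.g. the OCR HTTP Request) rely on one consistent property.
function normalizeBinary(item) {
  const keys = Object.keys(item.binary || {});
  if (keys.length === 0) return item; // nothing to normalize
  const first = keys[0];
  return {
    json: item.json,
    binary: { data: item.binary[first] }, // consistent downstream shape
  };
}

// In an n8n Code node this would typically be:
//   return $input.all().map(normalizeBinary);
const sample = {
  json: { sessionId: "abc" },
  binary: { file_0: { fileName: "receipt.pdf", mimeType: "application/pdf" } },
};
const normalized = normalizeBinary(sample);
```

Whatever the real key is called, the OCR node can then always read from `binary.data`.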
### Memory & Tooling
- **Nodes:** Simple Memory and Simple Memory1 (memoryBufferWindow): keep the last 10 messages for session context.
- **Node:** Calculator1 (toolCalculator): used by the agent to sum multiple charges and handle currency arithmetic and totals.

### Travel Reimbursement Agent (Core)
- **Node:** Travel reimbursement agent (LangChain agent)
- **Model:** Mistral Cloud Chat Model (mistral-medium-latest)
- **Behavior:**
  - Parse OCR blocks and non-file chat input.
  - Extract required fields: vendor_name, category, invoice_date, checkin_date, checkout_date, time, currency, total_amount, notes, estimated.
  - When fields are missing, infer logically and mark `estimated: true`.
  - Use the Calculator tool to sum totals across multiple receipts.
  - Fetch stored OCR entries from Supabase when the user asks for session summaries.
  - Always attempt extraction; never reply with "unclear" or ask for a reupload unless the user requests audit-grade precision.
- **Final output:** Clean expense table and Grand Total formatted for chat.

### Data Flow Summary
1. User sends a chat message with or without a file.
2. If a file is present → Split Out → Normalize → OCR → Store OCR output → Merge with chat payload.
3. The Travel reimbursement agent consumes the merged item, extracts fields, uses the Calculator tool for sums, and replies with a formatted expense summary.

### Integrations Used
| Service | Purpose | Credential |
|---------|---------|-----------|
| Mistral Cloud | LLM for agent | Mistral account |
| Supabase | Store parsed OCR blocks and session data | Supabase account |
| OCR API | Text extraction from images/PDFs | Configurable HTTP endpoint |
| n8n Core | Flow control, parsing, editing | Native |

### Agent System Prompt Summary
> You are a Travel Expense Extraction and Calculation AI. Extract vendor, dates, currency, category, and total amounts from uploaded receipts, invoices, hotel bills, PDFs, and images. Infer values when necessary and mark them as estimated. When asked, fetch session entries from Supabase and compute totals using the Calculator tool.
> Respond in a concise, business-professional format with a category-wise breakdown and a Grand Total. Never reply "unclear" or ask for a reupload unless explicitly asked.

Required final response format example:

### Key Features
- Zero-retention-friendly design: OCR output is stored only in `temp_table` per session.
- Robust extraction with inference when OCR quality is imperfect.
- Session-aware: the agent retrieves stored receipts for consolidated totals.
- Calculator integration for accurate numeric sums and currency handling.
- Configurable OCR endpoint so you can swap providers without changing logic.

### Setup Checklist
1. Add Mistral Cloud and Supabase credentials.
2. Configure the OCR endpoint to accept multipart uploads and return blocks.
3. Create the `temp_table` schema with `session_id`, `file`, `file_name`.
4. Test with single receipts, multipage PDFs, and mixed uploads.
5. Validate agent responses and Calculator totals.

### Summary
A practical n8n workflow for travel expense automation: accept receipts, run OCR, store parsed data per session, extract structured fields via an AI agent, compute totals, and return clean expense summaries in chat. Built for reliability and easy integration.

### Need Help or More Workflows?
We can integrate this into your environment, tune the agent prompt, or adapt it for different OCR providers. We can help you set it up for free — from connecting credentials to deploying it live.

- Contact: shilpa.raju@digitalbiz.tech
- Website: https://www.digitalbiz.tech
- LinkedIn: https://www.linkedin.com/company/digital-biz-tech/

You can also DM us on LinkedIn for any help.
by Trung Tran
## Try It Out: HireMind – AI-Driven Resume Intelligence Pipeline!

This n8n template demonstrates how to automate resume screening and evaluation using AI to improve candidate processing and reduce manual HR effort. A smart and reliable resume screening pipeline for modern HR teams. This workflow combines Google Drive (JD & CV storage), OpenAI (GPT-4-based evaluation), Google Sheets (position mapping + result log), and Slack/SendGrid integrations for real-time communication. Automatically extract, evaluate, and track candidate applications with clarity and consistency.

### How it works
1. A candidate submits their application using a form that includes name, email, CV (PDF), and a selected job role.
2. The CV is uploaded to Google Drive for record-keeping and later reference.
3. The Profile Analyzer Agent reads the uploaded resume, extracts structured candidate information, and transforms it into a standardized JSON format using GPT-4 and a custom output parser.
4. The corresponding job description PDF is automatically retrieved via a Google Sheet lookup on the selected job role.
5. The HR Expert Agent evaluates the candidate profile against the job description using another GPT-4 model, generating a structured assessment that includes strengths, gaps, and an overall recommendation.
6. The evaluation result is parsed and formatted for output.
7. The evaluation score is used to mark the candidate as qualified or unqualified; based on that, either an email is sent to the applicant or a message is sent to the hiring team for the next step in the process.
8. The final evaluation result is stored in a Google Sheet for long-term tracking and reporting.

### Google Drive structure

    ├── jd                              # Google Drive folder to store your JDs (PDF)
    │   ├── Backend_Engineer.pdf
    │   ├── Azure_DevOps_Lead.pdf
    │   └── ...
    │
    ├── cv                              # Google Drive folder where the workflow uploads candidate resumes
    │   ├── John_Doe_DevOps.pdf
    │   ├── Jane_Smith_FullStack.pdf
    │   └── ...
    │
    ├── Positions                       # 📋 Mapping table: Job Role ↔ Job Description (link)
    │   │   (Sample: https://docs.google.com/spreadsheets/d/1pW0muHp1NXwh2GiRvGVwGGRYCkcMR7z8NyS9wvSPYjs/edit?usp=sharing)
    │   └── Columns:
    │       - Job Role
    │       - Job Description File URL (PDF in jd/)
    │
    └── Evaluation form (Google Sheet)  # ✅ Final AI evaluation results

### How to use
1. Set up credentials and integrations:
   - Connect your OpenAI account (GPT-4 API).
   - Enable Google Cloud APIs:
     - Google Sheets API (for reading job roles and saving evaluation results)
     - Google Drive API (for storing CVs and job descriptions)
   - Set up SendGrid (to send email responses to candidates).
   - Connect Slack (to send messages to the hiring team).
2. Prepare your Google Drive structure. Create a root folder, then inside it create:
   - /jd → store all job descriptions in PDF format
   - /cv → this is where candidate CVs will be uploaded automatically
3. Create a Google Sheet named Positions with the following structure:

   | Job Role | Job Description Link |
   |------------------------------|----------------------------------------|
   | Azure DevOps Engineer | https://drive.google.com/xxx/jd1.pdf |
   | Full-Stack Developer (.NET) | https://drive.google.com/xxx/jd2.pdf |

4. Update your application form:
   - Use the built-in form, or connect your own (e.g., Typeform, Tally, Webflow, etc.).
   - Ensure the Job Role dropdown exactly matches the roles in the Positions sheet.
5. Run the AI workflow. When a candidate submits the form:
   - Their CV is uploaded to the /cv folder.
   - The job role is used to match the JD from /jd.
   - The Profile Analyzer Agent extracts candidate info from the CV.
   - The HR Expert Agent evaluates the candidate against the matched JD using GPT-4.
6. Distribute and store results:
   - Store the evaluation results in the Evaluation form Google Sheet.
   - Optionally notify your team:
     - ✉️ Send an email to the candidate using SendGrid.
     - 💬 Send a Slack message to the hiring team with a summary and next steps.

### Requirements
- OpenAI GPT-4 account for both the Profile Analyzer and HR Expert Agents
- Google Drive account (for storing CVs and the evaluation sheet)
- Google Sheets API credentials (for the JD source and evaluation results)

### Need Help?
Join the n8n Discord or ask in the n8n Forum!

Happy Hiring! 🚀
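The qualified/unqualified branch described in "How it works" can be sketched as a small routing function. The score field, the threshold of 70, and the action names are illustrative assumptions, not the template's exact schema.

```javascript
// Hypothetical routing step: the HR Expert Agent returns a structured assessment
// with a numeric score; a threshold decides whether to notify the hiring team
// (qualified) or email the applicant (unqualified). Threshold 70 is an assumption.
function routeCandidate(evaluation, threshold = 70) {
  const qualified = evaluation.score >= threshold;
  return {
    ...evaluation,
    status: qualified ? "qualified" : "unqualified",
    nextAction: qualified
      ? "notify_hiring_team" // e.g. Slack message with summary and next steps
      : "send_rejection_email", // e.g. SendGrid email to the applicant
  };
}

const result = routeCandidate({ candidate: "Jane Smith", score: 82 });
```

In n8n this logic would typically live in an IF or Switch node rather than code, but the decision rule is the same.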
by Davide
This workflow creates an AI-powered chatbot that generates custom songs through an interactive conversation, then uploads the results to Google Drive.

It transforms n8n into a complete AI music production pipeline by combining:
- Conversational AI
- Structured data validation
- Tool orchestration
- An external music generation API
- Cloud automation

It demonstrates a powerful hybrid architecture: LLM Agent + Tools + API + Storage + Async Control Flow.

## Key Advantages
1. ✅ **Fully Automated AI Music Production**: from idea to lyrics to full generated track to cloud storage, all handled automatically.
2. ✅ **Conversational UX**: users don't need technical knowledge; the AI collects missing information step-by-step.
3. ✅ **Smart Tool Selection**: the agent dynamically chooses the Songwriter tool (for original lyrics) or the Search tool (for existing lyrics), making the system adaptive and intelligent.
4. ✅ **Structured & Error-Safe Design**: strict JSON schema enforcement, output parsing and validation, and cleanup of malformed LLM responses reduce the failure rate dramatically.
5. ✅ **Asynchronous API Handling**: uses webhook-based resume, handles long-running AI generation, and supports multiple song outputs. Scalable and production-ready.
6. ✅ **Modular & Extensible**: the architecture allows switching the LLM provider, changing the music API, adding new tools (e.g., cover art generation), and supporting different vocal styles or languages.
7. ✅ **Memory-Enabled Conversations**: uses buffer memory (last 10 messages) to maintain conversational context and continuity.
8. ✅ **Automatic File Management**: generated songs are automatically downloaded, properly renamed, and stored in Google Drive. No manual file handling required.

## How it Works
1. **User Interaction:** The workflow starts with a chat trigger that receives user messages. A "Music Producer Agent" powered by Google Gemini engages with the user conversationally to gather all necessary song parameters.
2. **Data Collection:** The agent collects four essential pieces of information:
   - Song title
   - Musical style (genre)
   - Lyrics (prompt), either generated by calling the "Songwriter" tool or searched online via the "Search songs" tool
   - Negative tags (styles/elements to avoid)
3. **Validation & Formatting:** The collected data passes through an IF condition checking for valid JSON format, then a Code node parses and cleans the JSON output. A "Fix Json Structure" node ensures proper formatting with strict rules (no line breaks, no double quotes).
4. **Song Generation:** The formatted data is sent to the Kie.ai API (HTTP Request node), which generates the actual music track. The workflow includes a callback URL for asynchronous processing.
5. **Wait & Retrieve:** A Wait node pauses execution until the Kie.ai API sends a webhook callback with the generated songs. The "Get songs" node then retrieves the song data.
6. **Process Results:** The response is split out, and a Loop Over Items node processes each generated song individually. For each song, the workflow:
   - Downloads the audio file via HTTP request
   - Uploads it to a specified Google Drive folder with a timestamped filename

## Setup steps
1. **API credentials (3 required):**
   - Google Gemini (PaLM) API: configure in the two Gemini Chat Model nodes
   - Gemini Search API: set up in the "Search songs" tool node
   - Kie AI Bearer Token: add in the HTTP Request nodes (Create song and Get songs)
2. **Google Drive configuration:**
   - Authenticate Google Drive OAuth2 in the "Upload song" node
   - Verify/modify the folder ID if needed
   - Ensure the Drive has proper write permissions
3. **Webhook setup:**
   - The Wait node has a webhook ID that needs to be publicly accessible
   - Configure this URL in your Kie.ai API settings as the callback endpoint
4. **Optional customizations:**
   - Adjust the AI agent prompts in the "Music Producer Agent" and "Songwriter" nodes
   - Modify song generation parameters in the Kie.ai API call (styleWeight, weirdnessConstraint, etc.)
   - Update the Google Drive folder path for song storage
   - Change the vocal gender or other music generation settings in the "Create song" node
5. **Testing:** Activate the workflow and start a chat session to test song generation with sample requests like "Write a pop song about summer" or "Find lyrics for 'Bohemian Rhapsody' and make it in rock style".

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
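The strict formatting rules from the Validation & Formatting step (no line breaks, no double quotes) can be sketched as a small cleanup function. The field names here are illustrative assumptions, not the template's exact payload schema.

```javascript
// Rough sketch of the "Fix Json Structure" idea: strip line breaks and double
// quotes from string fields so the payload sent to the music-generation API is
// always valid single-line JSON. Field names are illustrative.
function fixJsonStructure(song) {
  const clean = (s) =>
    String(s)
      .replace(/[\r\n]+/g, " ") // rule: no line breaks
      .replace(/"/g, "'") // rule: no double quotes inside values
      .trim();
  return {
    title: clean(song.title),
    style: clean(song.style),
    prompt: clean(song.prompt),
    negativeTags: clean(song.negativeTags),
  };
}

const fixed = fixJsonStructure({
  title: 'Summer "Vibes"',
  style: "pop",
  prompt: "Line one\nLine two",
  negativeTags: "metal",
});
```

Normalizing LLM output like this before the HTTP Request node is what keeps malformed responses from breaking the API call.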
by Servify
Takes a product image from Google Sheets, adds a frozen effect with Gemini, generates an ASMR video with Veo3, writes captions with GPT-4o, and posts to 4 platforms automatically.

## How it works
1. A schedule trigger picks the first unprocessed row from the Google Sheet.
2. Downloads the product image and sends it to Gemini for the frozen/ice effect.
3. Uploads the frozen image to ImgBB (Veo3 needs a public URL).
4. Veo3 generates a 10-12s ASMR video with ice-cracking sounds.
5. GPT-4o writes platform-specific titles and captions.
6. Uploads simultaneously to YouTube, TikTok, Instagram, and Pinterest.
7. Updates the sheet status and sends a Telegram notification.

## Setup
Replace these placeholders in the workflow:
- YOUR_GOOGLE_AI_API_KEY (Gemini)
- YOUR_KIE_AI_API_KEY (Veo3)
- YOUR_IMGBB_API_KEY (free)
- YOUR_UPLOAD_POST_API_KEY
- YOUR_GOOGLE_SHEET_ID
- YOUR_PINTEREST_BOARD_ID
- YOUR_PINTEREST_USERNAME
- YOUR_TIKTOK_USERNAME
- YOUR_INSTAGRAM_USERNAME
- YOUR_TELEGRAM_CHAT_ID

## Google Sheet format
| topic | image_url | status |
|-------|-----------|--------|
| Dior Sauvage — Dior | https://example.com/img.jpg | |

Leave status empty. The workflow sets it to processing, then uploaded.

## Requirements
- Gemini API key (Google AI Studio)
- Kie.ai account (kie.ai)
- ImgBB API key (api.imgbb.com)
- OpenAI API key
- upload-post.com account with connected TikTok/IG/Pinterest
- YouTube channel with OAuth
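The "first unprocessed row" selection from step 1 can be sketched as a one-line filter over the sheet rows. This assumes the sheet columns shown above; the actual node configuration may differ.

```javascript
// Hypothetical sketch: after the Google Sheets node returns all rows, keep only
// the first row whose status column is empty (i.e. not "processing"/"uploaded").
function firstUnprocessed(rows) {
  return rows.find((r) => !r.status || r.status.trim() === "") || null;
}

const rows = [
  { topic: "Dior Sauvage — Dior", image_url: "https://example.com/a.jpg", status: "uploaded" },
  { topic: "Bleu de Chanel", image_url: "https://example.com/b.jpg", status: "" },
];
const next = firstUnprocessed(rows);
```

Processing one row per scheduled run and writing the status back is what makes the workflow safely resumable.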
by Pinecone
## Try it out
This n8n workflow template lets you chat with your Google Drive documents (.docx, .json, .md, .txt, .pdf) using OpenAI and Pinecone Assistant. It retrieves relevant context from your files in real time so you can get accurate, context-aware answers about your proprietary data—without the need to train your own LLM.

## What is Pinecone Assistant?
Pinecone Assistant allows you to build production-grade chat and agent-based applications quickly. It abstracts the complexities of implementing retrieval-augmented generation (RAG) systems by managing the chunking, embedding, storage, query planning, vector search, model orchestration, and reranking for you.

## Prerequisites
- A Pinecone account and API key
- A GCP project with the Google Drive API enabled and configured
  - Note: When setting up the OAuth consent screen, skip steps 8-10 if running on localhost
- An OpenAI account and API key

## Setup
1. Create a Pinecone Assistant in the Pinecone Console here:
   - Name your Assistant n8n-assistant and create it in the United States region. If you use a different name or region, update the related nodes to reflect these changes.
   - No need to configure a chat model or Assistant instructions.
2. Set up your Google Drive OAuth2 API credential in n8n:
   - In the File added node → Credential to connect with, select Create new credential.
   - Set the Client ID and Client Secret from the values generated in the prerequisites.
   - Set the OAuth Redirect URL from the n8n credential in the Google Cloud Console (instructions).
   - Name this credential Google Drive account so that other nodes reference it.
3. Set up the Pinecone API key credential in n8n:
   - In the Upload file to assistant node → PineconeApi section, select Create new credential.
   - Paste your Pinecone API key into the API Key field.
4. Set up the Pinecone MCP Bearer auth credential in n8n:
   - In the Pinecone Assistant node → Credential for Bearer Auth section, select Create new credential.
   - Set the Bearer Token field to the Pinecone API key used in the previous step.
5. Set up the OpenAI credential in n8n:
   - In the OpenAI Chat Model node → Credential to connect with, select Create new credential.
   - Set the API Key field to your OpenAI API key.
6. Add your files to a Drive folder named n8n-pinecone-demo in the root of your My Drive. If you use a different folder name, you'll need to update the Google Drive triggers to reflect that change.
7. Activate the workflow, or test it with a manual execution to ingest the documents.
8. Chat with your docs!

## Ideas for customizing this workflow
- Customize the System Message on the AI Agent node for your use case to indicate what kind of knowledge is stored in Pinecone Assistant.
- Change the top_k value of results returned from the Assistant by adding "and should set a top_k of 3" to the System Message to help manage token consumption.
- Configure the Context Window Length in the Conversation Memory node.
- Swap out the Conversation Memory node for one that is more persistent.
- Make the chat node publicly available, or create your own chat interface that calls the chat webhook URL.

## Need help?
You can find help by asking in the Pinecone Discord community, asking on the Pinecone Forum, or filing an issue on this repo.
by Don Jayamaha Jr
A fully autonomous HTX Spot Market AI Agent (Huobi AI Agent) built using GPT-4o and Telegram. This workflow is the primary interface, orchestrating all internal reasoning, trading logic, and output formatting.

## ⚙️ Core Features
- 🧠 **LLM-Powered Intelligence:** Built on GPT-4o with advanced reasoning
- ⏱️ **Multi-Timeframe Support:** 15m, 1h, 4h, and 1d indicator logic
- 🧩 **Self-Contained Multi-Agent Workflow:** No external subflows required
- 🧮 **Real-Time HTX Market Data:** Live spot price, volume, 24h stats, and order book
- 📲 **Telegram Bot Integration:** Interact via chat or schedule
- 🔄 **Autonomous Runs:** Support for webhook, schedule, or Telegram triggers

## 📥 Input Examples
| User Input | Agent Action |
| --------------- | --------------------------------------------- |
| btc | Returns 15m + 1h analysis for BTC |
| eth 4h | Returns 4-hour swing data for ETH |
| bnbusdt today | Full-day snapshot with technicals + 24h stats |

## 🖥️ Telegram Output Sample
📊 BTC/USDT Market Summary
💰 Price: $62,400
📉 24h Stats: High $63,020 | Low $60,780 | Volume: 89,000 BTC
📈 1h Indicators:
• RSI: 68.1 → Overbought
• MACD: Bearish crossover
• BB: Tight squeeze forming
• ADX: 26.5 → Strengthening trend
📉 Support: $60,200
📈 Resistance: $63,800

## 🛠️ Setup Instructions
1. Create your Telegram bot using @BotFather.
2. Add the bot token in n8n Telegram credentials.
3. Add your GPT-4o or OpenAI-compatible key under HTTP credentials in n8n.
4. (Optional) Add your HTX API credentials if expanding to authenticated endpoints.
5. Deploy this main workflow using:
   - ✅ Webhook (HTTP Request Trigger)
   - ✅ Telegram messages
   - ✅ Cron / scheduled automation

## 🎥 Live Demo

## 🧠 Internal Architecture
| Component | Role |
| ------------------ | -------------------------------------------------------- |
| 🔄 Telegram Trigger | Entry point for external or manual signal |
| 🧠 GPT-4o | Symbol + timeframe extraction + strategy generation |
| 📊 Data Collector | Internal tools fetch price, indicators, order book, etc. |
| 🧮 Reasoning Layer | Merges everything into a trading signal summary |
| 💬 Telegram Output | Sends formatted HTML report via Telegram |

## 📌 Use Case Examples
| Scenario | Outcome |
| -------------------------------------- | ------------------------------------------------------- |
| Auto-run every 4 hours | Sends new HTX signal summary to Telegram |
| Human requests "eth 1h" | Bot replies with real-time 1h chart-based summary |
| System-wide trigger from another agent | Invokes webhook and returns response to parent workflow |

## 🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.

🔗 For support: Don Jayamaha – LinkedIn
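The "symbol + timeframe extraction" step in the architecture table above can be sketched as a small parser over inputs like "btc" or "eth 4h". In the workflow this is done by GPT-4o inside a prompt, so the defaults below (15m + 1h when no timeframe is given, "today" mapping to 1d) are assumptions inferred from the input-examples table.

```javascript
// Illustrative parser: split the user text into a trading pair and a list of
// timeframes for the data-collection tools. Not the workflow's actual prompt logic.
function parseRequest(text) {
  const [symbol, tf] = text.trim().toLowerCase().split(/\s+/);
  const timeframes = { "15m": "15m", "1h": "1h", "4h": "4h", "1d": "1d" };
  return {
    symbol: symbol.endsWith("usdt") ? symbol : symbol + "usdt", // assume USDT quote
    timeframes:
      tf === "today" ? ["1d"] : tf in timeframes ? [timeframes[tf]] : ["15m", "1h"], // default pair
  };
}

const parsed = parseRequest("eth 4h");
```

Making this mapping explicit is useful if you ever move extraction out of the LLM and into a deterministic Code node.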
by Pratyush Kumar Jha
Inbox2Ledger is an end-to-end n8n template that turns a noisy finance inbox into a clean, structured ledger. It fetches emails, uses AI guardrails to keep only finance-relevant messages, extracts invoice/receipt fields via an OCR-style agent, validates and auto-categorizes each expense, generates a unique case ID, and appends the result to a Google Sheet for accounting or downstream automations.

## Key Features
- **Trigger:** Form submission or scheduled fetch (a sample "On form submission" node is included)
- **AI Filter:** A Guardrail node determines whether an email is finance-related (payments, invoices, receipts)
- **Keyword Filter:** Filters common invoice/bill/payment subject keywords
- **Extraction:** A language-model agent returns normalized JSON: vendor_name, invoice_date (YYYY-MM-DD), invoice_id, total_amount, tax_amount, currency, items_summary, vendor_tax_id
- **Validation:** A Code node checks required fields and amount formats, and flags extraction errors
- **Categorization:** Rule-based expense categorizer (software & hosting, subscriptions, travel, payroll, etc.) with MCC/vendor fallbacks
- **Output:** Appends structured rows to a Google Sheet with mapped columns: invoice_id, vendor_name, invoice_date, total_amount, currency, tax_amount, gl_category, approval_status, timestamp, case_id, items_summary, vendor_tax_id, processed_at
- **High Accuracy:** Low false-positive rate using combined AI guardrails + subject filtering
- **Quick Setup:** Example nodes and credentials pre-configured in the template

## Included Nodes & Flow Highlights
On form submission (date picker trigger) → Get Email Content (Gmail) → Guardrail: Is Finance? (LangChain Guardrails) → IF (Guardrail Passed) → Filter Finance Keywords → AI Agent (Email OCR) → Validate Extraction → Check for Errors → Apply Finance Rules → Log to Invoices Sheet (Google Sheets)

(The full node list and configuration are included in the template.)
## Requirements & Credentials
- **Gmail OAuth2 (read access):** for fetching emails
- **OpenAI API key (or compatible LLM):** for guardrails & extraction
- **Google Sheets OAuth2:** to append rows to the invoice sheet

Recommended: Use the Google Sheet ID included in the template, or replace it with your own Sheet ID and gid.

## Quick Setup Guide
👉 Demo & Setup Video
1. Import the template into n8n.
2. Connect and authorize credentials: Gmail, Google Sheets, OpenAI (or your preferred LLM).
3. Update the Google Sheet ID / sheet gid if using your own sheet.
4. (Optional) Adjust the Guardrail topicalAlignment threshold or the filter keywords.
5. Test using the form trigger or a single email, then enable the workflow.

## Configuration Tips
- The extraction agent outputs a strict JSON schema; keep it for reliable downstream mapping.
- Use a low LLM temperature (0.2) for deterministic extraction.
- For non-USD currencies, ensure your accounting system supports the currency field or add a conversion step.
- For high-volume inboxes, enable batching or rate-limit the Gmail node to avoid API quota issues.

## Privacy & Security
- This template processes real email content and financial data, so store credentials securely.
- Restrict access to the n8n instance to authorized users only.
- Review data-retention policies if using a hosted LLM service.

## Example Use Cases
- Auto-log vendor invoices from email into an accounting Google Sheet
- Build an audit trail with case IDs for finance teams
- Preprocess incoming receipts before forwarding to AP tools or ERPs

## Tags (Recommended)
finance, invoices, email, ai, ocr, google-sheets, automation, accounting, n8n-template
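The "Validate Extraction" Code node described above can be sketched along these lines: check required fields, verify the amount and date formats, and mint a case ID. The exact rules and the case-ID format are assumptions for illustration, not the template's actual code.

```javascript
// Hypothetical validation: required-field check, amount/date format check, and a
// generated case ID. Field names follow the extraction schema listed above.
function validateExtraction(inv) {
  const required = ["vendor_name", "invoice_date", "invoice_id", "total_amount", "currency"];
  const missing = required.filter((f) => inv[f] === undefined || inv[f] === "");
  const amountOk = /^\d+(\.\d{1,2})?$/.test(String(inv.total_amount ?? ""));
  const dateOk = /^\d{4}-\d{2}-\d{2}$/.test(String(inv.invoice_date ?? "")); // YYYY-MM-DD
  return {
    ...inv,
    valid: missing.length === 0 && amountOk && dateOk,
    errors: [
      ...missing.map((f) => `missing:${f}`),
      ...(amountOk ? [] : ["bad_amount"]),
      ...(dateOk ? [] : ["bad_date"]),
    ],
    case_id: `CASE-${inv.invoice_id || "UNKNOWN"}-${Date.now()}`, // illustrative format
  };
}

const checked = validateExtraction({
  vendor_name: "Acme Hosting",
  invoice_date: "2024-05-01",
  invoice_id: "INV-1042",
  total_amount: "129.99",
  currency: "USD",
});
```

Rows that fail validation can then be routed by the "Check for Errors" branch instead of being appended to the sheet.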
by Hyrum Hurst
## Who this workflow is for
Consulting firms in strategy, management, or IT that want to automate client onboarding and internal task assignment.

## What this workflow does
Automatically creates onboarding tasks and checklists using AI, routes them to the right consultant, logs client info in Google Sheets, and sends client welcome emails. Internal teams get Slack notifications, and kickoff meetings can be scheduled automatically.

## How the workflow works
1. A new client intake triggers the workflow.
2. AI generates an onboarding checklist.
3. Tasks are routed based on project type.
4. Client info is logged in Google Sheets.
5. Slack notifications are sent to consultants.
6. Optionally, a PDF of the onboarding plan is sent to the client.
7. An email confirmation is delivered to the client.
8. Optional CRM integration.

## Setup Instructions
1. Connect a Webhook/Form for intake.
2. Connect Google Sheets.
3. Connect OpenAI.
4. Connect Slack and email.
5. Configure the optional CRM integration.

Author: Hyrum Hurst, AI Automation Engineer
Company: QuarterSmart
Contact: hyrum@quartersmart.com
by WeblineIndia
# Facebook Page Comment Moderation Scoreboard → Team Report

This workflow automatically monitors Facebook Page comments, analyzes them with AI for intent, toxicity, and spam, stores moderation results in a database, and sends a clear summary report to Slack and Telegram.

The workflow runs every few hours to fetch Facebook Page comments and analyze them using OpenAI. Each comment is classified as positive, neutral, or negative; checked for toxicity, spam, and abusive language; and then stored in Supabase. A simple moderation summary is sent to Slack and Telegram.

You receive:
- Automated Facebook comment moderation
- AI-based intent, toxicity, and spam detection
- Database logging of all moderated comments
- Clean Slack & Telegram summary reports

Ideal for teams that want visibility into comment quality without manually reviewing every message.

## Quick Start – Implementation Steps
1. Import the workflow JSON into n8n.
2. Add your Facebook Page access token to the HTTP Request node.
3. Connect your OpenAI API key for comment analysis.
4. Configure your Supabase table for storing moderation data.
5. Connect Slack and Telegram credentials and choose target channels.
6. Activate the workflow; moderation runs automatically.

## What It Does
This workflow automates Facebook comment moderation by:
- Running on a scheduled interval (every 6 hours).
- Fetching recent comments from a Facebook Page.
- Preparing each comment for AI processing.
- Sending comments to OpenAI for moderation analysis.
- Extracting structured moderation data:
  - Comment intent
  - Toxicity score
  - Spam detection
  - Abusive language detection
- Flagging risky comments based on defined rules.
- Storing moderation results in Supabase.
- Generating a summary report.
- Sending the report to Slack and Telegram.

This ensures consistent, repeatable moderation with no manual effort.
## Who's It For
This workflow is ideal for:
- Social media teams
- Community managers
- Marketing teams
- Customer support teams
- Moderation and trust & safety teams
- Businesses managing high-volume Facebook Pages
- Anyone wanting AI-assisted comment moderation

## Requirements
To run this workflow, you need:
- **n8n instance** (cloud or self-hosted)
- **Facebook Page access token**
- **OpenAI API key**
- **Supabase project and table**
- **Slack workspace** with API access
- **Telegram bot** and chat ID
- A basic understanding of APIs and JSON (helpful but not required)

## How It Works
1. **Scheduled Trigger** – the workflow starts automatically every 6 hours.
2. **Fetch Comments** – Facebook Page comments are retrieved.
3. **Prepare Data** – comments are formatted for processing.
4. **AI Moderation** – OpenAI analyzes each comment.
5. **Normalize Results** – AI output is cleaned and standardized.
6. **Store Data** – moderation results are saved in Supabase.
7. **Aggregate Stats** – summary statistics are calculated.
8. **Send Alerts** – reports are sent to Slack and Telegram.

## Setup Steps
1. Import the workflow JSON into n8n.
2. Open the Fetch Facebook Page Comments node and add:
   - Page ID
   - Access token
3. Connect your OpenAI account in the AI moderation node.
4. Create a Supabase table and map the fields correctly.
5. Connect Slack and select a reporting channel.
6. Connect Telegram and set the chat ID.
7. Activate the workflow.
## How To Customize Nodes

### Customize Flagging Rules
Update the normalization logic to:
- Change toxicity thresholds
- Flag only spam or abusive comments
- Add custom moderation rules

### Customize Storage
You can extend the Supabase fields to include:
- Language
- AI confidence score
- Reviewer notes
- Resolution status

### Customize Notifications
Slack and Telegram messages can include:
- Emojis
- Mentions (@channel)
- Links to Facebook comments
- Severity labels

## Add-Ons (Optional Enhancements)
You can extend this workflow to:
- Auto-hide or delete toxic comments
- Reply automatically to positive comments
- Detect language and region
- Generate daily or weekly moderation reports
- Build dashboards using Supabase or BI tools
- Add escalation alerts for high-risk comments
- Track trends over time

## Use Case Examples
1. **Community Moderation** – automatically identify harmful or spam comments.
2. **Brand Reputation Monitoring** – spot negative sentiment early and respond faster.
3. **Support Oversight** – detect complaints or frustration in comments.
4. **Marketing Insights** – measure positive vs. negative engagement.
5. **Compliance & Auditing** – keep historical moderation logs in a database.

## Troubleshooting Guide
| Issue | Possible Cause | Solution |
|-----|---------------|----------|
| No comments fetched | Invalid Facebook token | Refresh token & permissions |
| AI output invalid | Prompt formatting issue | Use strict JSON prompt |
| Data not saved | Supabase mapping mismatch | Verify table fields |
| Slack message missing | Channel or credential error | Recheck Slack config |
| Telegram alert fails | Wrong chat ID | Confirm bot permissions |
| Workflow not running | Trigger disabled | Enable Cron node |

## Need Help?
If you need help customizing, scaling, or extending this workflow — such as advanced moderation logic, dashboards, auto-actions, or production hardening — our n8n workflow development team at WeblineIndia can assist with expert automation solutions.
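The flagging rule customized in the normalization step can be sketched as a small function. The threshold (0.7) and the field names are assumptions you would adapt to your own AI output schema.

```javascript
// Illustrative flagging rule: a comment is flagged when its toxicity score
// crosses a threshold, or when the AI marks it as spam or abusive.
function flagComment(analysis, toxicityThreshold = 0.7) {
  const flagged =
    analysis.toxicity >= toxicityThreshold ||
    analysis.spam === true ||
    analysis.abusive === true;
  return { ...analysis, flagged };
}

const risky = flagComment({ intent: "negative", toxicity: 0.85, spam: false, abusive: false });
const safe = flagComment({ intent: "positive", toxicity: 0.1, spam: false, abusive: false });
```

Tightening or loosening moderation then becomes a one-line change to the threshold rather than a prompt rewrite.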
by Cheng Siong Chin
## How It Works
This workflow automates end-to-end marketing campaign management for digital marketing teams and agencies executing multi-channel strategies. It solves the complex challenge of coordinating personalized content across email, social media, and advertising platforms while maintaining brand consistency and optimizing engagement.

The system processes scheduled campaign triggers through AI-powered content generation and personalization engines, then intelligently distributes tailored messages across six parallel channels: email campaigns, social media posts, paid advertising, influencer outreach, content marketing, and performance analytics. Each channel receives audience-specific messaging optimized for platform requirements, engagement patterns, and conversion objectives. This eliminates manual content adaptation, ensures consistent campaign timing, and delivers data-driven personalization at scale.

## Setup Steps
1. Configure the campaign schedule trigger or webhook integration with your marketing automation platform.
2. Add AI model API credentials for content generation, personalization, and A/B testing optimization.
3. Connect your email service provider with segmented audience lists and template configurations.
4. Set up social media management platform APIs for Facebook, Instagram, and LinkedIn.
5. Integrate advertising platforms (Google Ads, Meta Ads) with campaign tracking parameters.

## Prerequisites
Marketing automation platform access, AI service API keys, email service provider account

## Use Cases
Product launch campaigns coordinating announcements across channels

## Customization
Adjust AI prompts for brand voice consistency; modify channel priorities based on audience preferences.

## Benefits
Reduces campaign setup time by 80%; ensures consistent messaging across all channels.
by Dinakar Selvakumar
## Description
This n8n template generates high-quality, platform-ready hashtags for beauty and skincare brands by combining AI, live website analysis, and current social media trends. It is designed for marketers, agencies, and founders who want smarter hashtag strategies without manual research.

## Use cases
- Beauty & skincare brands building social media reach
- Agencies managing multiple client accounts
- Content teams creating Instagram, LinkedIn, or Facebook posts
- Founders validating brand positioning through hashtags

## What this template demonstrates
- Form-based user input in n8n
- Website scraping with HTTP Request
- AI-driven brand analysis using Gemini
- Structured AI outputs with output parsers
- Live trend research using search tools
- Automated storage in Google Sheets

## How it works
Users submit brand details through a form. The workflow scrapes the brand's website, analyzes it with AI, generates tailored hashtags, enriches them with platform-specific trends, and stores the final result in Google Sheets.

## How to use
1. Activate the workflow.
2. Open the form URL.
3. Enter brand details and the website URL.
4. Submit the form.
5. View the generated hashtags in Google Sheets.

## Requirements
- Google Gemini API credentials
- Google Sheets account
- SerpAPI account for trend research

## Good to know
- Website scraping is best suited for public, text-rich sites.
- Hashtags are generated dynamically based on brand tone and audience.
- You can reuse the Google Sheet as a hashtag library.

## Customising this workflow
- Change the number of hashtags generated
- Modify the AI prompt for different industries
- Add filters for banned or restricted hashtags
- Extend the workflow to auto-post to social platforms
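The "filters for banned or restricted hashtags" customization can be sketched as a simple blocklist filter added in a Code node before the Google Sheets step. The blocklist contents here are placeholders; maintain your own list per platform.

```javascript
// Illustrative blocklist filter: drop any generated hashtag that appears on a
// banned/restricted list, matching case-insensitively and ignoring the "#".
function filterHashtags(hashtags, banned) {
  const blocked = new Set(banned.map((h) => h.toLowerCase().replace(/^#/, "")));
  return hashtags.filter((h) => !blocked.has(h.toLowerCase().replace(/^#/, "")));
}

const kept = filterHashtags(
  ["#GlowUp", "#SkincareRoutine", "#follow4follow"],
  ["#Follow4Follow"] // example blocklist entry
);
```

Keeping the blocklist in the Google Sheet alongside the hashtag library would let non-technical teammates maintain it.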
by Pixcels Themes
## Who's it for
This template is perfect for content creators, marketers, solopreneurs, agencies, and social media strategists who want to understand what audiences are talking about online. It helps teams quickly turn broad topics into structured insights, trend opportunities, and actionable content ideas.

## What it does / How it works
This workflow begins with a form where the user enters a single topic. An AI agent expands the topic into subtopics and generates multiple relevant keywords. For each keyword, the workflow automatically gathers content from Reddit and X (formerly Twitter), extracting posts, titles, text, engagement metrics, and links.

Each collected post is then analyzed by an AI model to determine:
- Trend potential
- Audience relevance
- Platform suitability
- Recommended content formats
- Categories and keywords

Once all posts are processed, a final AI agent synthesizes the results, identifies the strongest emerging trends, groups similar insights, and generates strategic content recommendations.

## Requirements
- Google Gemini (PaLM) API credentials
- X / Twitter OAuth2 credentials
- Access to the n8n Form Trigger (publicly accessible URL)

## How to set up
1. Connect your Gemini API and Twitter API credentials.
2. Make sure the Form Trigger node is accessible.
3. Review and adjust the AI prompts if you want different output formats.
4. Run the form, enter a topic, and execute the workflow.

## How to customize the workflow
- Add more platforms (YouTube, TikTok, Instagram, Hacker News)
- Add sentiment scoring or engagement ranking
- Export insights to Google Sheets or Notion
- Generate ready-to-post content from the trends