by Khairul Muhtadin
Auto repost job with RAG is a workflow designed to automatically extract, process, and publish job listings from monitored sources using Google Drive, OpenAI, Supabase, and WordPress. This integration streamlines job reposting by intelligently extracting relevant job data, mapping categories and types accurately, managing media assets, and publishing posts seamlessly.

💡 Why Use Auto repost job with RAG?
- Automated Publishing: Slash manual entry time by automating job post extraction and publication, freeing hours every week.
- Error-Resistant Workflow: Avoid incomplete job posts with smart validation checks that ensure all necessary fields are ready before publishing.
- Consistent Content Quality: Maintain formatting, SEO, and style consistency with AI-driven article regeneration that adheres strictly to your guidelines.
- Competitive Edge: Get fresh jobs live faster than your competitors without lifting more than a finger—because robots don't take coffee breaks!

⚡ Perfect For
- Recruiters & HR Teams: Accelerate your job posting funnel with error-free automation.
- Content Managers: Keep your job boards fresh with AI-enriched, standardized listings.
- Digital Marketers: Automate content flows to boost SEO and engagement without the headache.

🔧 How It Works
- ⏱ Trigger: Job links arrive via Telegram.
- 📎 Process: Job documents are auto-downloaded; data is extracted with Jina AI and OpenAI's GPT-4 model to parse content and metadata.
- 🤖 Smart Logic: An AI agent regenerates articles based on strict RAG dataset rules; category and job type IDs are mapped to match the WordPress taxonomy; fallback attempts use default images for missing logos.
- 💌 Output: Job posts are formatted and published to WordPress; success or failure updates are sent back via Telegram notifications.
- 🗂 Storage: A Supabase vector store holds document embeddings for retrieval of formatting rules and job data.
🔐 Quick Setup
1. Import the provided JSON workflow into your n8n instance
2. Add credentials: Google Drive OAuth, OpenAI API, Supabase API, Telegram API, WordPress API
3. Customize: Set your Google Drive folder ID, WordPress endpoints, and Telegram chat IDs
4. Update: Confirm default logo URLs and fallback settings as needed
5. Test: Submit a new job link via Telegram or add a file to the watched Drive folder

🧩 You'll Need
- Active n8n instance
- Google Drive account with OAuth2 credentials
- OpenAI API access for GPT-4 processing
- Supabase account configured for vector storage
- WordPress API credentials for job listing publishing
- Telegram bot for notifications and job link inputs

🛠️ Level Up Ideas
- Integrate Slack, Gmail, or Teams notifications for team visibility
- Add a sentiment analysis step to prioritize certain jobs
- Automate social media posting of new job listings for wider reach

Made by: Khmuhtadin
Tags: automation, job-posting, AI, OpenAI, Google Drive, WordPress
Category: content automation
Need custom work? Contact me
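The category and job-type mapping step described above can be sketched as a small Code-node function. This is a minimal illustration, not the template's actual code: the term IDs, category names, and default-logo URL below are placeholders you would replace with the IDs from your own WordPress site.

```javascript
// Hypothetical mapping from AI-extracted strings to WordPress taxonomy
// term IDs. Replace these placeholder IDs with your site's own terms.
const CATEGORY_MAP = { engineering: 12, marketing: 15, design: 18 };
const TYPE_MAP = { 'full-time': 3, 'part-time': 4, contract: 5 };
const DEFAULT_CATEGORY = 1; // fallback, e.g. "Uncategorized"
const DEFAULT_LOGO = 'https://example.com/default-logo.png'; // placeholder

function mapToWordPressTerms(job) {
  const category =
    CATEGORY_MAP[(job.category || '').toLowerCase()] ?? DEFAULT_CATEGORY;
  const type = TYPE_MAP[(job.type || '').toLowerCase()] ?? null;
  return {
    title: job.title,
    categories: [category],          // WordPress category term IDs
    job_type: type ? [type] : [],    // custom taxonomy term IDs
    featured_media: job.logoUrl || DEFAULT_LOGO, // logo fallback
  };
}
```

A lookup table like this keeps the AI extraction step decoupled from your WordPress configuration: when the taxonomy changes, only the map changes.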
by Yahor Dubrouski
📌 How it works
This workflow turns voice or text messages from Telegram into structured tasks in Notion, using AI-powered intent detection and task generation. It supports:
- 🆕 Task creation
- ✏️ Task updates (like changing priority, title, or deadline)
- 🧠 Task analysis (e.g., workload, goal alignment, emotional fatigue)

The assistant uses OpenAI to:
- Detect intent (create, update, or analyze)
- Extract or update task fields (title, priority, due date, etc.)
- Auto-format list-style descriptions with bullet points
- Detect relevant tags like health, money, sport, etc.

⚙️ Setup steps
1. Clone the GitHub repo or import the .json into n8n manually.
2. Configure: OpenAI credentials, Telegram Bot Token, Notion credentials
3. Use Telegram to send messages like:
   - "Create a task to call mom tomorrow"
   - "Update the grocery task to add milk"
   - "Am I overbooked today?"
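The intent-detection step above can be sketched as a routing function. This assumes the OpenAI node returns JSON of the shape `{"intent": "...", "fields": {...}}`; that shape and the operation names are illustrative assumptions, not the template's actual contract.

```javascript
// Hypothetical router for the AI's structured output. The assumed JSON
// shape is {"intent": "create"|"update"|"analyze", "fields": {...}}.
function routeIntent(aiOutput) {
  const parsed =
    typeof aiOutput === 'string' ? JSON.parse(aiOutput) : aiOutput;
  switch (parsed.intent) {
    case 'create':
      return { operation: 'notion.createPage', fields: parsed.fields };
    case 'update':
      return { operation: 'notion.updatePage', fields: parsed.fields };
    case 'analyze':
      return { operation: 'analyzeTasks', question: parsed.question };
    default:
      throw new Error('Unknown intent: ' + parsed.intent);
  }
}
```

Forcing the model to emit one of three known intents keeps the downstream Notion branches simple: each branch handles exactly one operation.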
by Shadrack
This workflow deploys a fully customizable AI chatbot that can be embedded on any website, from custom-coded sites to platforms like WordPress. The chatbot is powered by n8n, uses Supabase for memory and RAG, and integrates SerpAPI, Google Calendar, SMTP, and Google Sheets to automate responses, collect leads, and follow up intelligently. Unlike typical widgets, this bot captures the visitor's name and email before chatting, enabling personalized, human-like conversations and smart lead tracking. Check demo here

🎯 Core Features
- 💡 Universal Embedding – Works on any site (custom HTML or WordPress) using a single embed snippet.
- 🧠 AI Agent Node + RAG – Powered by Gemini (or any AI model) with Supabase as memory for contextual replies.
- 🌐 SerpAPI Integration – Lets the agent search the internet for real-time information.
- 📅 Google Calendar & Sheets – Logs leads, appointments, and chat summaries.
- 📧 SMTP Node – Sends personalized follow-up emails directly to new leads.
- 🪪 Lead Capture – Requires users to enter their name and email before chatting, creating personalized sessions.

⚙️ How It Works
- Chat Trigger: The widget sends user input to your n8n webhook set to production mode.
- AI Processing: The AI Agent node handles the response logic with memory and RAG context from Supabase.
- Integrations: SerpAPI → real-time search. Google Calendar & Sheets → stores lead data and events. SMTP Node → sends automatic thank-you or follow-up emails.
- Response: The chatbot replies instantly on your website, maintaining session memory.

🧩 Quick Setup Steps
- Fork or use the open source repo: The widget script is hosted via CDN from your GitHub repo and is fully editable.
- Embed the widget: Copy and paste the following snippet into your site's `<head>` or footer (or use a plugin like Insert Headers and Footers on WordPress):

```html
<link href="https://cdn.jsdelivr.net/npm/@n8n/chat/dist/style.css" rel="stylesheet" />
<script>
  window.ChatWidgetConfig = {
    webhook: {
      url: '', // production webhook URL
      route: 'general'
    },
    branding: {
      logo: '', // your logo URL
      name: 'CustomCX Agent',
      welcomeText: 'Hi 👋, how can we help?',
      responseTimeText: 'We typically respond right away',
    },
    style: {
      primaryColor: '#854fff',
      secondaryColor: '#6b3fd4',
      position: 'right',
      backgroundColor: '#ffffff',
      fontColor: '#333333',
    }
  };
</script>
<script src="https://cdn.jsdelivr.net/gh/shadrack-ago/n8n/widget.js?v=2.6"></script>
```

- Connect integrations: Add your Supabase, SerpAPI, Google, and SMTP credentials in n8n, then update your webhook URL in the script above.
- Deploy: Activate the workflow, refresh your site, and start chatting with your AI assistant.

🚀 Why Use This Template
- Works with any website or CMS.
- Captures and stores qualified leads automatically.
- Open source — easily modify, brand, or extend it.
- Seamlessly integrates AI, CRM, and communication tools.
by Kelsey Brown
Never miss a workflow failure
Automatically capture, analyze, and debug n8n workflow errors using Claude Sonnet 4 with real-time documentation lookup via the Context7 MCP server.

Why this works better
- Documentation-first AI analysis: Context7 searches official n8n docs before diagnosing. Claude Sonnet 4 only responds after finding relevant documentation—no hallucinations.
- Complete error intelligence: Every error is stored in Supabase with full context: stack traces, execution data, workflow structure, AI analysis. Track patterns across all workflows.
- Production-ready emails: Professional HTML with inline code snippets, proper formatting, and one-click execution links.

What happens
- Root cause - plain-English explanation
- Specific fix - exact field names and values
- Prevention tip - avoid future errors
- Execution link - one-click debug access
- Statistics - error frequency tracking

How it works
1. Error → Your workflow fails
2. Capture → Full context retrieved
3. Research → Context7 searches n8n docs
4. Analysis → AI diagnoses with context
5. Email → Formatted alert delivered
6. Record → Error stored in database

Requirements
- **Supabase** - free tier works
- **OpenRouter** - $5 credit
- **Context7** - free API available
- **SMTP** - for email delivery
- **n8n API** - must be enabled

Setup: 15 minutes
Complete instructions with SQL script are included in the workflow sticky notes.
Activate: Settings → Error Workflow → Select this workflow

Customize
- Reduce cost: Remove workflow_json and execution_data from the prompt
- Change output: Swap the email node for Telegram/Slack/Discord—expressions provided in the notes

FAQ
- Works with community nodes? Yes. Context7 searches all n8n documentation.
- Handles sensitive data? Remove workflow_json and execution_data from the prompt to exclude content.
- Customizable design? Yes. The HTML template in the "Build HTML" node is fully editable.
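The "Capture" step can be sketched as a function that shapes the failed execution's context into a Supabase row. The field names below are illustrative assumptions, not the template's actual schema; the input shape loosely follows what n8n's Error Trigger provides.

```javascript
// Hypothetical sketch: build the record stored per failure. Column names
// are placeholders; adapt them to the SQL script in the sticky notes.
function buildErrorRecord(errorData) {
  return {
    workflow_name: errorData.workflow?.name ?? 'unknown',
    node_name: errorData.execution?.lastNodeExecuted ?? 'unknown',
    error_message: errorData.execution?.error?.message ?? '',
    stack_trace: errorData.execution?.error?.stack ?? '',
    execution_url: errorData.execution?.url ?? '',   // one-click debug link
    occurred_at: new Date().toISOString(),
  };
}
```

Storing the execution URL alongside the message is what makes the "one-click debug access" in the email possible, and the timestamp column supports the error-frequency statistics.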
by NODA shuichi
Description: Don't just get a recipe. Get a strategy. (Speed / Healthy / Creative) 🍳🤖
This workflow solves the "What should I eat?" problem by using Google Gemini to generate 3 distinct recipe variations simultaneously based on your fridge leftovers. It demonstrates advanced n8n concepts like array processing and data aggregation.

Key Features:
- Array processing: Demonstrates how to handle JSON lists (Gemini outputs an array → n8n splits it → API calls run for each item).
- Aggregation: Shows how to combine processed items back into a single summary email.
- Visual enrichment: Automatically searches for recipe images using Google Custom Search.

How it works:
1. Input: Enter ingredients via the Form Trigger.
2. Generate: Gemini creates 3 JSON objects: "Speed (5min)", "Healthy", and "Creative".
3. Process: The workflow iterates through the 3 recipes, searching for images and logging data to Google Sheets.
4. Aggregate: The results are combined into one HTML comparison table.
5. Deliver: You receive an email with 3 options to choose from.

Setup Requirements:
- Google Sheets: Create a sheet named Recipes with headers: date, ingredients, style, recipe_name, recipe_text, image_url.
- Credentials: Google Gemini API, Google Custom Search (API key & engine ID), Gmail, Google Sheets.
- Configuration: Enter your IDs in the "1. Configuration" node.
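The aggregation step above (recombining the 3 processed recipes into one HTML comparison table) can be sketched as follows. The recipe field names match the sheet headers listed in the setup; the table markup itself is an illustrative assumption.

```javascript
// Sketch of the aggregation step: after the split-out items have been
// enriched with image URLs, recombine them into one HTML table for the
// summary email. Field names follow the Recipes sheet headers.
function aggregateToHtmlTable(recipes) {
  const rows = recipes
    .map(
      r =>
        `<tr><td>${r.style}</td><td>${r.recipe_name}</td>` +
        `<td><img src="${r.image_url}" width="120"></td></tr>`
    )
    .join('');
  return (
    '<table><tr><th>Style</th><th>Recipe</th><th>Image</th></tr>' +
    rows +
    '</table>'
  );
}
```

This is the split → process → aggregate pattern the template demonstrates: the function receives the full array again only after every item has been processed individually.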
by Jay Emp0
🤖 Reddit Auto-Comment Assistant (AI-Driven Marketing Workflow)
Automate how you reply to Reddit posts using AI-generated, first-person comments that sound human, follow subreddit rules, and (optionally) promote your own links or products.

🧩 Overview
This workflow monitors Reddit mentions (via F5Bot Gmail alerts) and automatically:
1. Fetches the relevant Reddit post.
2. Checks the subreddit's rules for self-promotion.
3. Generates a comment using GPT-5-style prompting (human-like tone, <255 chars).
4. Optionally promotes your chosen product from Google Sheets.
5. Posts the comment automatically.

It's ideal for creators, marketers, or founders who want to grow awareness organically and authentically on Reddit — without sounding like a bot.

🧠 Workflow Diagram

🚀 Key Features

| Feature | Description |
|----------|--------------|
| AI-Generated Reddit Replies | Uses GPT-powered reasoning and a prompt structure that mimics a senior marketing pro typing casually. |
| Rule-Aware Posting | Reads subreddit rules and adapts tone — no promo where it's not allowed. |
| Product Integration | Pulls the product name + URL from your Google Sheet automatically. |
| Full Automation Loop | From Gmail → Google Sheets → Reddit. |
| Evaluation Metrics | Logs tool usage, link presence, and formatting to ensure output quality. |

🧰 Setup Guide

1️⃣ Prerequisites

| Tool | Purpose |
|------|----------|
| n8n Cloud or self-host | Workflow automation environment |
| OpenAI API key | For comment generation |
| Reddit OAuth2 credentials | To post comments |
| Google Sheets API | To fetch and evaluate products |
| Gmail API | To read F5Bot alerts |

2️⃣ Import the Workflow
1. Download Reddit Assistant.json
2. In n8n, click Import Workflow → From File
3. Paste your credentials in the corresponding nodes: Reddit account, Gmail account, Google Sheets account, OpenAI API

3️⃣ Connect Your Google Sheets
You'll need two Google Sheets:

| Sheet | Purpose | Example Tab |
|--------|----------|-------------|
| Product List | Contains all your product names, URLs, goals, and CTAs | promo |
| Reddit Evaluations | Logs AI performance metrics and tool usage | reddit evaluations |

4️⃣ Set Up Gmail Trigger (F5Bot)
1. Subscribe to F5Bot alerts for keywords like "blog automation" or your brand name.
2. Configure the Gmail Trigger to only pull from sender admin@f5bot.com.

5️⃣ Configure the AI Agent Prompt
The built-in prompt follows a GPT-5-style structured reasoning chain:
1. Reads the Reddit post + rules.
2. Determines if promotion is allowed.
3. Fetches product data from Google Sheets.
4. Writes a short, human comment (<255 chars).
5. Avoids buzzwords and fake enthusiasm.

📊 Workflow Evaluations
The workflow includes automatic evaluation nodes to track:

| Metric | Description |
|--------|--------------|
| contains link | Checks if the comment includes a URL |
| contains dash | Detects format breaks |
| Tools Used | Logs which AI tools were used in reasoning |
| executionTime | Monitors average latency |

💡 Why This Workflow Has Value

| Value | Explanation |
|--------|--------------|
| Saves time | Automates Reddit marketing without manual engagement. |
| Feels human | AI comments use a fast-typing, casual tone (e.g., "u," "ur," "idk"). |
| Follows rules | Respects subreddits where promo is banned. |
| Data-driven | Logs performance across 10 test cases for validation. |
| Monetizable | Can promote Gumroad, YouTube, or SaaS products safely. |

⚙️ Example Use Case
> "I used this automation to pull $1.4k by replying to Reddit posts about blog automation.
> Each comment felt natural and directed users to my n8n workflow."
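The evaluation metrics in this listing (link presence, format breaks, length) can be sketched as simple checks on the generated comment. The exact detection logic is an illustrative assumption; the 255-character limit comes from the description above.

```javascript
// Hypothetical sketch of the per-comment evaluation checks.
function evaluateComment(comment) {
  return {
    containsLink: /https?:\/\/\S+/.test(comment),          // URL present?
    containsDash:                                           // format break?
      comment.includes('—') || comment.includes(' - '),
    withinLimit: comment.length < 255,                      // length rule
  };
}
```

Logging these booleans per run (alongside tool usage and latency) is what makes the "Reddit Evaluations" sheet useful: you can spot a drifting prompt before it posts something off-brand.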
by nXsi
This n8n template builds an automated daily news digest powered by Claude AI. It monitors RSS feeds, Reddit, and Hacker News, extracts full article text, analyzes each piece with AI, and delivers a polished briefing to Discord and Slack. Stop drowning in newsletters -- Claude reads everything and surfaces only what matters, scored and ranked by importance.

Good to know
- Estimated cost is $0.03-0.10 per daily run using Claude Haiku + Sonnet. See Anthropic pricing for current rates.
- Works without a database out of the box. Optionally enable PostgreSQL for article history and cross-day deduplication.

How it works
1. A schedule trigger fires daily and fetches articles from 10 configurable sources (RSS, Atom, Reddit JSON, Hacker News API)
2. Articles are deduplicated by URL hash and fuzzy title matching
3. Jina Reader extracts full article text for deeper analysis
4. Claude Haiku scores each article 1-10 for importance, assigns categories, and writes a "why it matters" summary
5. Claude Sonnet compiles the top articles into a structured digest with lead story, top stories, quick hits, and trend detection
6. Formatted output is delivered to Discord (rich embeds) and Slack (Block Kit)

How to use
- Add your Anthropic API key as an n8n credential and set your Discord webhook URL in the config node -- that's the minimum to get running
- Edit the feed list in "Build feed source list" to add your own sources

Requirements
- Anthropic API key (setup guide)
- Discord webhook URL (setup guide) and/or Slack credential

Customizing this workflow
- Swap feed sources for any topic -- finance, gaming, research papers, industry news
- Adjust topic importance weights to prioritize what you care about
- Modify the Claude system prompt to change the digest's tone and style
by Akshay Chug
Overview
Stop wasting time on leads that will never convert. This workflow scores every inbound form submission 1-10 using Claude AI, then automatically replies and routes based on fit — hot leads get an instant email and Slack alert, warm leads get a follow-up prompt, cold leads get a polite decline.

How it works
1. A lead fills out your built-in n8n contact form
2. Claude AI scores them 1-10 against your ideal customer profile
3. Hot (7-10) → Slack alert + personalised email + logged to Sheets
4. Warm (4-6) → holding reply email + logged to Sheets
5. Cold (1-3) → polite decline email + logged to Sheets

Setup steps
1. Copy the form URL from the Inbound Lead Form node and share it as your contact form
2. Add your Anthropic API key to the Claude Sonnet sub-node
3. Connect Gmail to the three reply nodes and update the email signatures
4. Connect Slack to Notify Team - Hot Lead — or right-click and disable it
5. Create a Google Sheet with headers: Timestamp, Name, Email, Company, Size, Message, Score, Tier, Reasoning and connect it in all three Log nodes
6. Edit the scoring prompt in Score Lead Intent to describe your ideal customer
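The routing logic is simple enough to sketch directly from the thresholds above. The function is an illustration of the branch condition, not the template's actual node code; in the workflow this lives in a Switch-style branch on the AI's score.

```javascript
// Tier routing per the thresholds in the description:
// 7-10 hot, 4-6 warm, 1-3 cold.
function tierForScore(score) {
  if (score >= 7) return 'hot';  // Slack alert + personalised email
  if (score >= 4) return 'warm'; // holding reply email
  return 'cold';                 // polite decline email
}
```

Keeping the tier boundaries in one place makes it easy to tighten or loosen the funnel later without touching the email or Slack branches.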
by AI Solutions
🏆 Sports Digest — AI-Curated Weekly Newsletter
Automatically aggregates sports news for a configurable topic (e.g., "University of Florida Football" or "Atlanta Falcons") from Reddit, Google News, Yahoo Sports, NCAA.com, and BBC Sport, then curates and delivers a branded HTML email newsletter.

How it works
1. Schedule Trigger fires weekly on Friday at 9 AM.
2. Config — Topic & Recipients — set your topic, subreddit, and recipient_email in one node.
3. News Collection — five sources run in parallel: Reddit, Google News RSS, Yahoo Sports RSS, NCAA.com RSS, and BBC Sport RSS.
4. Normalization — each source is standardized into a uniform schema (title, date, link, source, score, description).
5. Article Selection — all sources are merged, deduplicated, filtered to the last 60 days, and 7 articles are randomly selected (≥1 per source where available).
6. AI Curation — GPT-4o-mini summarizes each article with a headline, 2-sentence summary, and "why it matters" note. Output is structured JSON.
7. Image Generation — Gemini 2.5 Flash generates 2 AI images (for the top 2 articles) in parallel, each uploaded to your WordPress media library.
8. HTML Assembly — images are attached to the first 2 articles; all articles are rendered into a responsive branded HTML email.
9. Delivery — the newsletter is sent via Microsoft Outlook to the configured recipient email.

Setup
- **Config node**: Set topic, subreddit, and recipient_email before activating.
- **Reddit**: Configure a Reddit OAuth2 credential.
- **OpenAI**: Add an OpenAI API credential — requires GPT-4o-mini access.
- **Gemini**: Add a Google PaLM API credential for Gemini 2.5 Flash image generation.
- **WordPress**: Add an HTTP Basic Auth credential for your WordPress site (for image upload).
- **Outlook**: Add a Microsoft Outlook OAuth2 credential for email delivery.

Customization
- Change the schedule in the trigger node (daily, weekly, etc.).
- Adjust the article count (default: 7) in the Select Articles node.
- Swap the WordPress upload for any image hosting service.
- Update the HTML template in Build Newsletter HTML to match your brand colors.

Community Nodes
⚠️ This workflow uses @n8n/n8n-nodes-langchain.googleGemini for AI image generation. This is a LangChain community node — requires a self-hosted n8n instance.
Visit https://iportgpt.com/n8n_assets/sportsdig.html for instructions.
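The normalization step above maps each source's item format into the uniform schema (title, date, link, source, score, description). Here is a hypothetical sketch for an RSS-style item; the input field names (`pubDate`, `contentSnippet`) are assumptions about the feed parser's output, not the template's actual code.

```javascript
// Sketch: normalize one feed item into the uniform schema used downstream.
function normalizeRssItem(item, source) {
  return {
    title: item.title ?? '',
    date: item.pubDate ? new Date(item.pubDate).toISOString() : null,
    link: item.link ?? '',
    source,                                        // e.g. 'Google News'
    score: item.ups ?? 0,                          // Reddit upvotes, else 0
    description: (item.contentSnippet ?? '').slice(0, 300),
  };
}
```

Once every source emits this shape, the merge, dedupe, and 60-day filter steps can treat all five feeds identically.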
by Port IO
RBAC for AI agents with n8n and Port
This workflow implements role-based access control for AI agent tools using Port as the single source of truth for permissions. Different users get access to different tools based on their roles, without needing a separate permission database. For example, developers might have access to PagerDuty and AWS S3, while support staff only gets Wikipedia and a calculator. The workflow checks each user's permissions in Port before letting the agent use any tools.

For the full guide with blueprint setup and detailed configuration, see RBAC for AI Agents with n8n and Port in the Port documentation.

How it works
The n8n workflow orchestrates the following steps:
1. Slack trigger — listens for @mentions and extracts the user ID from the message.
2. Get user profile — fetches the user's Slack profile to get their email address.
3. Port authentication — requests an access token from the Port API using client credentials.
4. Permission lookup — queries Port for the user entity (by email) and reads their allowed_tools array.
5. Unknown user check — if the user doesn't exist in Port, sends an error message and stops.
6. Permission filtering — the "Check permissions" node compares each connected tool against allowed_tools and replaces unauthorized ones with a stub that returns "You are not authorized to use this tool."
7. AI agent — runs with only permitted tools, using GPT-4 and chat memory.
8. Response — posts the agent output back to the Slack channel.

Setup
- [ ] Connect your Slack account and set the channel ID in the trigger node
- [ ] Add your OpenAI API key
- [ ] Register for free on Port.io
- [ ] Create the rbacUser blueprint in Port (see the full guide for blueprint setup)
- [ ] Add user entities using email as the identifier
- [ ] Replace YOUR_PORT_CLIENT_ID and YOUR_PORT_CLIENT_SECRET in the "Get Port access token" node
- [ ] Connect credentials for any tools you want to use (PagerDuty, AWS, etc.)
- [ ] Update the channel ID in the Slack nodes
- [ ] Invite the bot to your Slack channel
- [ ] You should be good to go!

Prerequisites
- You have a Port account and have completed the onboarding process.
- You have a working n8n instance (self-hosted) with LangChain nodes available.
- Slack workspace with bot permissions to receive mentions and post messages.
- OpenAI API key for the LangChain agent.
- Port client ID and secret for API authentication.
- (Optional) PagerDuty, AWS, or other service credentials for tools you want to control.

⚠️ This template is intended for self-hosted instances only.
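The permission-filtering step can be sketched as below. The tool objects and `invoke` method are an illustrative abstraction of the workflow's "Check permissions" node, not n8n's real tool interface; the stub message matches the one quoted in the description.

```javascript
// Sketch: keep tools listed in the user's allowed_tools array from Port,
// and replace every other tool with a denial stub.
function filterTools(connectedTools, allowedTools) {
  return connectedTools.map(tool =>
    allowedTools.includes(tool.name)
      ? tool
      : { ...tool, invoke: () => 'You are not authorized to use this tool.' }
  );
}
```

Stubbing rather than removing tools means the agent still sees a consistent tool list, and an unauthorized call returns a clear denial the agent can relay to the user instead of failing silently.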
by Ayaka Sato
Who is this for Business owners and service providers who want to reduce no-show rates for appointments booked via Google Calendar. What this workflow does This workflow fetches tomorrow's Google Calendar bookings, looks up each customer's no-show history from Google Sheets, and calculates a risk score (0-100) from four weighted signals. Based on the score, it routes to three automated responses: a Slack alert plus AI re-confirmation email for super high risk, a friendly AI reminder email for high risk, and a silent log for low risk. Setup Create a Google Sheets file with a tab named customer_history (columns: customer_email, customer_name, total_bookings, no_show_count, last_booking_date, last_status) Open the Set Configuration node and fill in your Sheet ID and Slack channel Connect Google Calendar, Google Sheets, Gmail, Slack, and OpenAI credentials Activate the workflow Requirements Google Calendar with appointment bookings Google Sheets for customer history Slack workspace for alerts OpenAI API key (used only for generating email text) Gmail account with OAuth2 access How to customize Adjust the risk thresholds in Set Configuration (default: 70 for Super High, 40 for High) Edit the AI prompts in Generate Urgent Message and Generate Reminder nodes to match your tone Replace Slack with Discord webhook if preferred Important note Risk scoring is 100% rule-based. AI is used only for generating reminder text, never for the risk judgment itself.
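A rule-based score from four weighted signals can be sketched as below. The signals and weights here are illustrative stand-ins, not the template's actual formula; only the 0-100 range, the customer_history columns, and the 70/40 thresholds come from the description.

```javascript
// Hypothetical sketch: four weighted signals summing to a 0-100 risk score.
// Replace the signals and weights with the workflow's actual rules.
function riskScore(c) {
  const noShowRate =
    c.total_bookings > 0 ? c.no_show_count / c.total_bookings : 0;
  let score = 0;
  score += noShowRate * 40;                           // past no-show rate
  score += c.total_bookings === 0 ? 20 : 0;           // first-time customer
  score += c.last_status === 'no_show' ? 25 : 0;      // most recent outcome
  score += c.days_since_last_booking > 180 ? 15 : 0;  // long inactivity
  return Math.min(100, Math.round(score));
}

// Thresholds match the defaults in Set Configuration (70 / 40).
function riskTier(score) {
  if (score >= 70) return 'super_high'; // Slack alert + AI re-confirmation
  if (score >= 40) return 'high';       // friendly AI reminder
  return 'low';                         // silent log
}
```

Because the scoring stays rule-based, the routing is fully deterministic and auditable; the AI only writes the email text for whichever branch fires.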
by Cheng Siong Chin
How It Works
This workflow automates end-to-end supply chain visibility and logistics coordination for manufacturers, distributors, and retailers managing complex multi-tier supply networks. Designed for supply chain planners, logistics managers, and operations directors, it solves the challenge of tracking inventory across procurement, warehousing, and transportation while optimizing decisions for cost, speed, and risk mitigation.

The system:
1. Schedules regular data collection from procurement and warehouse/transportation systems.
2. Consolidates supply chain data.
3. Analyzes patterns through dual AI agents: Signal Monitoring identifies anomalies and trends, and the Coordination Agent orchestrates optimization decisions.
4. Routes findings by risk level (critical/marginal/acceptable).
5. Triggers action-specific responses: critical issues send Slack alerts, escalation emails, and compliance audit logs with approval workflows; acceptable conditions generate standard reports.
6. Maintains complete traceability.

Organizations achieve a 50% reduction in stockouts, optimize logistics costs by 30%, enable proactive disruption management, and maintain real-time visibility across global supply networks.

Setup Steps
1. Connect the Schedule Trigger for monitoring frequency
2. Configure procurement system APIs with order and supplier data access credentials
3. Link warehouse management systems (WMS) and transportation platforms (TMS) for inventory and shipment tracking
4. Add AI model API keys to the Signal Monitoring and Coordination Agent nodes
5. Define optimization parameters in the agent prompts
6. Configure Slack webhooks for critical supply chain alerts to operations teams
7. Set up email credentials for escalation workflows

Prerequisites
Supply chain system API access (ERP, WMS, TMS), AI service accounts.

Use Cases
Inventory optimization across multi-tier networks, predictive stockout prevention

Customization
Modify agent prompts for industry-specific constraints (perishable goods, hazmat regulations)

Benefits
Reduces stockouts by 50% and optimizes logistics costs by 30%
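The risk-level routing described above can be sketched as a dispatch function. The action names are illustrative, and the response for the marginal tier is an assumption (the description only specifies the critical and acceptable paths).

```javascript
// Hypothetical sketch: map each risk level to the downstream actions.
// Action names are placeholders for the workflow's actual nodes; the
// 'marginal' branch is assumed, as the listing does not specify it.
function routeByRisk(level) {
  switch (level) {
    case 'critical':
      return ['slackAlert', 'escalationEmail', 'complianceAuditLog'];
    case 'marginal':
      return ['watchlistReport']; // assumed intermediate response
    case 'acceptable':
      return ['standardReport'];
    default:
      throw new Error('Unknown risk level: ' + level);
  }
}
```

Routing through an explicit dispatch table like this is also what preserves traceability: every finding maps to a known, auditable set of actions.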