by Ranjan Dailata
Notice: Community nodes can only be installed on self-hosted instances of n8n.

Who this is for

The Recipe Recommendation Engine with Bright Data MCP & OpenAI is a powerful automated workflow that combines Bright Data's MCP for scraping trending or regional recipe data with OpenAI GPT-4o mini to generate personalized recipe recommendations. This automated workflow is designed for:

- **Food Bloggers & Culinary Creators** who want to automate the extraction and curation of recipes from across the web to generate content, compile cookbooks, or publish newsletters.
- **Nutritionists & Health Coaches** who need structured recipe data to analyze ingredients, calories, and nutrition for personalized meal planning or dietary tracking.
- **AI/ML Engineers & Data Scientists** building models that classify cuisines, predict recipes from ingredients, or generate dynamic meal suggestions using clean, structured datasets.
- **Grocery & Meal Kit Platforms** that aim to extract recipes to power recommendation engines, ingredient lists, or personalized meal plans.
- **Recipe Aggregator Startups** looking to scale recipe data collection, filtering, and standardization across diverse cooking websites with minimal human intervention.
- **Developers** integrating cooking features into apps or digital assistants that offer recipe recommendations, step-by-step cooking instructions, or nutritional insights.

What problem is this workflow solving?

This workflow solves:

- Automated recipe data extraction from any public URL
- AI-driven structured data extraction
- Scalable looped crawling and processing
- Real-time notifications and data persistence

What this workflow does

1. **Set Recipe Extract URL** – Configure the recipe website URL in the input node. Set your Bright Data zone name and authentication.
2. **Paginated Data Extract** – Triggers a paginated extraction across multiple pages (recipe listing, index, or search pages) and returns a list of recipe links for processing.
3. **Loop Over Items** – Loops through the array of recipe links; each link is passed individually to the scraping engine.
4. **Bright Data MCP Client (Per Recipe)** – Scrapes each individual recipe page using scrape_as_html and bypasses common anti-bot protections via Bright Data Web Unlocker.
5. **Structured Recipe Data Extract (via OpenAI GPT-4o mini)** – Converts raw HTML to clean text using an LLM preprocessing node, then uses OpenAI GPT-4o mini to extract structured data.
6. **Webhook Notification** – Pushes the structured recipe data to your configured webhook endpoint. Format: JSON payload, ideal for Slack, internal APIs, or dashboards.
7. **Save Response to Disk** – Saves the structured recipe JSON to the local file system.

Pre-conditions

- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

Setup

- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
- In n8n, configure the OpenAI account credentials.
- Make sure to set the fields in the Set the Recipe Extract URL node.
- Remember to set the webhook_url to receive a webhook notification with the recipe response.
- Set the desired local path in the Write the structured content to disk node to save the recipe response.

How to customize this workflow to your needs

You can tailor the Recipe Recommendation Engine workflow to better fit your specific use case by modifying the following key components:

1. **Input Fields Node** – Update the Recipe URL to target specific cuisine sites or recipe types (e.g., vegan, keto, regional dishes).
2. **LLM Configuration** – Swap out the OpenAI GPT-4o mini model for another provider (like Google Gemini) if you prefer, and modify the structured data prompt to extract the custom fields you need.
3. **Webhook Notification** – Configure the Webhook Notification node to point to your preferred integration (e.g., Slack, Discord, internal APIs).
4. **Storage Destination** – Change the Save to Disk node to store the structured recipe data in: a cloud bucket (S3, GCS, Azure Blob, etc.), a database (MongoDB, PostgreSQL, Firestore), or Google Sheets/Airtable for spreadsheet-style access.
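As a rough illustration of the webhook-notification step, here is a minimal Python sketch of what a structured recipe record and its JSON payload could look like. The field names (title, ingredients, instructions, source_url) and the event wrapper are hypothetical; the actual schema depends entirely on the prompt you give GPT-4o mini.

```python
def build_webhook_payload(recipe):
    """Validate a structured recipe dict and wrap it for the webhook call.
    Field names are illustrative, not the template's fixed schema."""
    required = ("title", "ingredients", "instructions", "source_url")
    missing = [f for f in required if not recipe.get(f)]
    if missing:
        raise ValueError(f"recipe missing fields: {missing}")
    return {"event": "recipe.extracted", "data": recipe}

payload = build_webhook_payload({
    "title": "Pad Thai",
    "ingredients": ["rice noodles", "tamarind paste", "peanuts"],
    "instructions": ["Soak the noodles", "Stir-fry", "Garnish and serve"],
    "source_url": "https://example.com/pad-thai",
})
```

A validation step like this before the webhook call makes malformed LLM output fail loudly instead of polluting your downstream dashboard or Slack channel.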
by Trung Tran
📝 Smart Vendor Contract Renewal & Reminder Workflow With GPT 4.1 mini Never miss a vendor renewal again! This smart workflow automatically tracks expiring contracts, reminds your finance team via Slack, and helps initiate renewal with vendors through email — all with built-in approval and logging. Perfect for managing both auto-renew and manual contracts. 📌 Who’s it for This workflow is designed for Finance and Procurement teams responsible for managing vendor/service contracts. It ensures timely notifications for expiring contracts and automates the initiation of renewal conversations with vendors. ⚙️ How it works / What it does ⏰ Daily Trigger Runs every day at 6:00 AM using a scheduler. 📄 Retrieve Contract List Reads vendor contract data from a Google Sheet (or any data source). Filters for contracts nearing their end date, using a Notice Period (days) field. 🔀 Branch Based on Renewal Type Auto-Renew Contracts: Compose a Slack message summarizing the auto-renewal. Notify the finance contact via Slack. Manual Renewal Contracts: Use an OpenAI-powered agent to generate a meaningful Slack message. Send message and wait for approval from the finance contact (e.g., within 8 hours). Upon approval, generate a formal HTML email to the vendor. Send the email to initiate the contract extension process. 📊 (Optional) Logging Can be extended to log all actions (Slack messages, emails, approvals) to Google Sheets or other databases. 🛠️ How to set up Prepare your Google Sheet Include the following fields: Vendor Name, Vendor Email, Service Type, Contract Start Date, Contract End Date, Notice Period (days), Renewal Type, Finance Contact, Contact Email, Slack ID, Contract Value, Notes. Sample: https://docs.google.com/spreadsheets/d/1zdDgKyL0sY54By57Yz4dNokQC_oIbVxcCKeWJ6PADBM/edit?usp=sharing Configure Integrations 🟢 Google Sheets API: To read contract data. 🔵 Slack API: To notify and wait for approval. 🧠 OpenAI API (GPT-4): To generate personalized reminders. 
✉️ Email (SMTP/Gmail): To send emails to vendors.

Set the Daily Scheduler

Use a Cron node to trigger the workflow at 6:00 AM daily.

✅ Requirements

| Component | Required |
|----------------------------------|----------|
| Google Sheets API | ✅ |
| Slack API | ✅ |
| OpenAI API (GPT-4) | ✅ |
| Email (SMTP/Gmail) | ✅ |
| n8n (Self-hosted or Cloud) | ✅ |
| Contract Sheet with proper schema| ✅ |

🧩 How to customize the workflow

- **Adjust Reminder Period**: Modify the logic in the Find Expiring Vendors node (based on Contract End Date and Notice Period).
- **Change Message Tone or Format**: Customize the OpenAI agent's prompt or switch from plain text to branded HTML email.
- **Add Logging or Tracking**: Add a node to append logs to a Google Sheet, Notion, or database.
- **Replace Data Source**: Swap out Google Sheets for Airtable, PostgreSQL, or other CRM/database systems.
- **Adjust Wait/Approval Duration**: Modify the sendAndWait Slack node timeout (e.g., from 8 hours to 2 hours).

📦 Optional Extensions

- 🧾 Add PDF contract preview via Drive link
- 🧠 Use GPT to summarize renewal terms
- 🛠 Auto-create Jira task for contract review
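The date math behind the Find Expiring Vendors node can be sketched in plain Python. The workflow itself does this with n8n expressions over the Google Sheet rows; this sketch only illustrates the rule that a contract is "expiring" once today falls inside its notice window (column names match the sample sheet):

```python
from datetime import date, timedelta

def is_expiring(contract, today):
    """True when today is inside the contract's notice window:
    end_date - notice_period <= today <= end_date."""
    end = date.fromisoformat(contract["Contract End Date"])
    notice = timedelta(days=int(contract["Notice Period (days)"]))
    return end - notice <= today <= end

contracts = [
    {"Vendor Name": "Acme", "Contract End Date": "2025-07-10", "Notice Period (days)": 30},
    {"Vendor Name": "Globex", "Contract End Date": "2025-12-01", "Notice Period (days)": 15},
]
# Acme's 30-day notice window opened on 2025-06-10, so it matches on 2025-06-20.
expiring = [c for c in contracts if is_expiring(c, date(2025, 6, 20))]
```

Contracts past their end date are deliberately excluded here; depending on your process you may also want a branch for already-lapsed contracts.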
by Onur
Proactively retain customers predicted to churn with this automated n8n workflow. Running daily, it identifies high-risk customers from your Google Sheet, uses Google Gemini to generate personalized win-back offers based on their churn score and preferences, sends these offers via Gmail, and logs all actions for tracking.

What does this workflow do?

This workflow automates the critical process of customer retention by:

- **Running automatically every day** on a schedule you define.
- **Fetching customer data** from a designated Google Sheet containing metrics like predicted churn scores and preferred categories.
- **Filtering** to identify customers with a high churn risk (score > 0.7) who haven't recently received a specific campaign (based on the created_campaign_date field - *you might need to adjust this logic*).
- **Using Google Gemini AI** to dynamically generate one of three types of win-back offers, personalized based on the customer's specific churn score and preferred product categories:
  - Informational: (Score 0.7-0.8) Highlights new items in preferred categories.
  - Bonus Points: (Score 0.8-0.9) Offers points for purchases in a target category (e.g., Books).
  - Discount Percentage: (Score 0.9-1.0) Offers a percentage discount in a target category (e.g., Books).
- **Sending the personalized offer** directly to the customer via **Gmail**.
- **Logging** each sent offer or the absence of eligible customers for the day in a separate 'SYSTEM_LOG' Google Sheet for monitoring and analysis.

Who is this for?

- **CRM Managers & Retention Specialists:** Automate personalized outreach to at-risk customers.
- **Marketing Teams:** Implement data-driven retention campaigns with minimal manual effort.
- **E-commerce Businesses & Subscription Services:** Proactively reduce churn and increase customer lifetime value.
- **Anyone** using customer data (especially churn prediction scores) who wants to automate personalized retention efforts via email.
Benefits

- **Automated Retention:** Set it up once, and it runs daily to engage at-risk customers automatically.
- **AI-Powered Personalization:** Go beyond generic offers; tailor messages based on churn risk and customer preferences using Gemini.
- **Proactive Churn Reduction:** Intervene *before* customers leave by addressing high churn scores with relevant offers.
- **Scalability:** Handle personalized outreach for many customers without manual intervention.
- **Improved Customer Loyalty:** Show customers you value them with relevant, timely offers.
- **Action Logging:** Keep track of which customers received offers and when the workflow ran.

How it Works

1. **Daily Trigger:** The workflow starts automatically based on the schedule set (e.g., daily at 9 AM).
2. **Fetch Data:** Reads all customer data from your 'Customer Data' Google Sheet.
3. **Filter Customers:** Selects customers where predicted_churn_score > 0.7 AND created_campaign_date is empty (verify this condition fits your needs).
4. **Check for Eligibility:** Determines if any customers passed the filter.
5. **IF Eligible Customers Found:**
   - Loop: Processes each eligible customer one by one.
   - Generate Offer (Gemini): Sends the customer's predicted_churn_score and preferred_categories to Gemini. Gemini analyzes these and the defined rules to create the appropriate offer type, value, title, and detailed message, returning it as structured JSON.
   - Log Sent Offer: Records action_taken = SENT_WINBACK_OFFER, the timestamp, and customer_id in the 'SYSTEM_LOG' sheet.
   - Send Email: Uses the Gmail node to send an email to the customer's user_mail with the generated offer_title as the subject and offer_details as the body.
6. **IF No Eligible Customers Found:**
   - Set Status: Creates a record indicating system_log = NOT_FOUND.
   - Log Status: Records this 'NOT_FOUND' status and the current timestamp in the 'SYSTEM_LOG' sheet.
n8n Nodes Used

- Schedule Trigger
- Google Sheets (x3 - Read Customers, Log Sent Offer, Log Not Found)
- Filter
- If
- SplitInBatches (used for looping)
- Langchain Chain - LLM (Gemini Offer Generation)
- Langchain Chat Model - Google Gemini
- Langchain Output Parser - Structured
- Set (Prepare 'Not Found' Log)
- Gmail (Send Offer Email)

Prerequisites

- Active n8n instance (Cloud or Self-Hosted).
- **Google Account** with access to Google Sheets and Gmail.
- **Google Sheets API Credentials (OAuth2):** Configured in n8n.
- **Two Google Sheets:**
  - 'Customer Data' Sheet: Must contain columns like customer_id, predicted_churn_score (numeric), preferred_categories (string, e.g., ["Books", "Electronics"]), user_mail (string), and potentially created_campaign_date (date/string).
  - 'SYSTEM_LOG' Sheet: Should have columns like system_log (string), date (string/timestamp), and customer_id (string, optional for 'NOT_FOUND' logs).
- **Google Cloud Project** with the Vertex AI API enabled.
- **Google Gemini API Credentials:** Configured in n8n (usually via Google Vertex AI credentials).
- **Gmail API Credentials (OAuth2):** Configured in n8n with permission to send emails.

Setup

- Import the workflow JSON into your n8n instance.
- Configure Schedule Trigger: Set the desired daily run time (e.g., Hours set to 9).
- Configure Google Sheets Nodes: Select your Google Sheets OAuth2 credentials for all three Google Sheets nodes.
  - 1. Fetch Customer Data...: Enter your 'Customer Data' Spreadsheet ID and Sheet Name.
  - 5b. Log Sent Offer...: Enter your 'SYSTEM_LOG' Spreadsheet ID and Sheet Name. Verify column mapping.
  - 3b. Log 'Not Found'...: Enter your 'SYSTEM_LOG' Spreadsheet ID and Sheet Name. Verify column mapping.
- Configure Filter Node (2. Filter High Churn Risk...): Crucially, review the second condition: {{ $json.created_campaign_date.isEmpty() }}. Ensure this field and logic correctly identify customers who should receive the offer based on your campaign strategy. Modify or remove if necessary.
Configure Google Gemini Nodes: Select your configured Google Vertex AI / Gemini credentials in the Google Gemini Chat Model node. Review the prompt in the 5a. Generate Win-Back Offer... node to ensure the offer logic matches your business rules (especially category names like "Books"). Configure Gmail Node (5c. Send Win-Back Offer...): Select your Gmail OAuth2 credentials. Activate the workflow. Ensure your 'Customer Data' and 'SYSTEM_LOG' Google Sheets are correctly set up and populated. The workflow will run automatically at the next scheduled time. This workflow provides a powerful, automated way to engage customers showing signs of churn, using personalized AI-driven offers to encourage them to stay. Adapt the filtering and offer logic to perfectly match your business needs!
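The score-to-offer tiering that the Gemini prompt encodes can be sketched in Python. In the workflow, Gemini chooses the actual offer value, title, and message text; the point values and discount percentage below are illustrative placeholders, only the thresholds come from the workflow's rules:

```python
def pick_offer(score, preferred):
    """Map a predicted churn score to one of the three offer types.
    Thresholds follow the workflow (0.7-0.8 / 0.8-0.9 / 0.9-1.0);
    the concrete values are placeholders - Gemini picks them in practice."""
    if not 0.7 < score <= 1.0:
        return {}  # below the filter threshold: no win-back offer
    category = preferred[0] if preferred else "Books"
    if score <= 0.8:
        return {"type": "informational",
                "message": f"New arrivals in {category}!"}
    if score <= 0.9:
        return {"type": "bonus_points", "value": 200, "category": category}
    return {"type": "discount_percentage", "value": 15, "category": category}
```

Encoding the tiers explicitly like this also makes them easy to unit-test before you bake them into the prompt.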
by Trung Tran
📒 Telegram Expense Tracker to Google Sheets with GPT-4.1 👤 Who’s it for This workflow is for anyone who wants to log their daily expenses by simply chatting with a Telegram bot. Ideal for: Individuals who want a quick way to track spending Freelancers who log receipts and purchases on the go Teams or small business owners who want lightweight expense capture ⚙️ How it works / What it does User sends a text message on Telegram describing an expense (e.g., “Bought coffee for 50k at Highlands”) Message format is validated If the message is text, it proceeds to GPT-4.1 Mini for processing. If it's not text (e.g. image or file), the bot sends a fallback message. OpenAI GPT-4.1 Mini parses the message and returns: relevant: true/false expense_record: structured fields (date, amount, currency, category, description, source) message: a friendly confirmation or fallback If valid: The bot replies with a fun acknowledgment The data is saved to a connected Google Sheet If invalid: A fallback message is sent to encourage proper input 🛠️ How to set up 1. Telegram Bot Setup Create a bot using BotFather on Telegram Copy the bot token and paste it into the Telegram Trigger node 2. Google Sheet Setup Create a Google Sheet with these columns: Date | Amount | Currency | Category | Description | SourceMessage Share the sheet with your n8n service account email 3. OpenAI Configuration Connect the OpenAI Chat Model node using your OpenAI API key Use GPT-4.1 Mini as the model Apply a system prompt that extracts structured JSON with: relevant, expense_record, and message 4. Add Parser Use the Structured Output Parser node to safely parse the JSON response 5. Conditional Logic Nodes Is text message? Checks if the message is in text format Supported scenario? Checks if relevant = true in the LLM response 6. 
Final Actions

- **If relevant**: Send confirmation via Telegram and append a row to the Google Sheet.
- **If not relevant**: Send a fallback message via Telegram.

✅ Requirements

- Telegram bot token
- OpenAI GPT-4.1 Mini API access
- n8n instance (self-hosted or cloud)
- Google Sheet with access granted to n8n
- Basic understanding of n8n node configuration

🧩 How to customize the workflow

| Feature | How to Customize |
|----------------------------------|-------------------------------------------------------------------|
| Add multi-currency support | Update system prompt to detect and extract different currencies |
| Add more categories | Modify the list of categories in the system prompt |
| Track multiple users | Add username or chat ID column to the Google Sheet |
| Trigger alerts | Add Slack, Email, or Telegram alerts for specific expense types |
| Weekly summaries | Use a cron node + Google Sheet query + Telegram message |
| Visual dashboards | Connect the sheet to Looker Studio or Google Data Studio |

Built with 💬 Telegram + 🧠 GPT-4.1 Mini + 📊 Google Sheets + ⚡ n8n
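The Structured Output Parser step described above amounts to validating the model's JSON before anything is written to the sheet. A minimal Python sketch, using the field names from the setup section (relevant, expense_record, message; record fields date, amount, currency, category, description, source):

```python
import json

REQUIRED = ("date", "amount", "currency", "category", "description", "source")

def parse_llm_reply(raw):
    """Parse the model's JSON reply; return the expense record when the
    message was relevant and complete, else None (the fallback path)."""
    try:
        reply = json.loads(raw)
    except json.JSONDecodeError:
        return None
    record = reply.get("expense_record") or {}
    if reply.get("relevant") and all(k in record for k in REQUIRED):
        return record
    return None

raw = json.dumps({
    "relevant": True,
    "expense_record": {
        "date": "2024-05-01", "amount": 50000, "currency": "VND",
        "category": "Food & Drink", "description": "Coffee at Highlands",
        "source": "Bought coffee for 50k at Highlands",
    },
    "message": "Got it! 50,000 VND logged under Food & Drink.",
})
record = parse_llm_reply(raw)
```

Returning None for both malformed JSON and irrelevant messages lets a single conditional node route to the fallback reply.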
by Aryan Shinde
How it works This workflow automates the process of creating, approving, and optionally posting LinkedIn content from a Google Sheet. Here's a high-level overview: Scheduled Trigger: Runs automatically based on your defined time interval (daily, weekly, etc.). Fetch Data from Google Sheets: Pulls the first row from your sheet where Status is marked as Pending. Generate LinkedIn Post Content: Uses OpenAI to create a professional LinkedIn post using the Post Description and Instructions from the sheet. Format & Prepare Data: Formats the generated content along with the original instruction and post description for email. Send for Approval: Sends an email to a predefined user (e.g., marketing team) with a custom form for approval, including a dropdown to accept/reject and an optional field for edits. (Optional) Image Fetch: Downloads an image from a URL (if provided in the sheet) for future use in post visuals. Set up steps You’ll need the following before you start: A Google Sheet with the following columns: Post Description, Instructions, Image (URL), Status Access to an OpenAI API key A connected Gmail account for sending approval emails Your own Google Sheets and Gmail credentials added in n8n Steps: Google Sheet Preparation: Create a new Google Sheet with the mentioned columns (Post Description, Instructions, Image, Status, Output, Post Link). Add a row with test data and set Status to Pending. Credentials: In n8n, create OAuth2 credentials for: a. Google Sheets b. Gmail c. OpenAI (API Key) Assign these credentials to the respective nodes in the JSON. OpenAI Model: Choose a model like gpt-4o-mini (used here) or any other available in your plan. Adjust the prompt in the "Generate Post Content" node if needed. Email Configuration: In the Gmail node, set the recipient email to your own or your team’s address. Customize the email message template if necessary. Schedule the Workflow: Set the trigger interval (e.g., every morning at 9 AM). 
Testing: Run the workflow manually first to confirm everything works. Check Gmail for the approval form, respond, and verify the results.
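The "fetch the first Pending row" step can be sketched in a few lines of Python; in the workflow this is a Google Sheets node with a filter, so this is only an illustration of the selection rule (column names from the sheet layout above):

```python
def next_pending(rows):
    """Return the first sheet row whose Status is 'Pending';
    None when nothing is queued for posting."""
    for row in rows:
        if row.get("Status") == "Pending":
            return row
    return None

rows = [
    {"Post Description": "Launch recap", "Status": "Posted"},
    {"Post Description": "Hiring update", "Status": "Pending"},
]
```

Processing one row per run keeps the approval email flow simple: each scheduled trigger produces at most one post awaiting review.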
by Eduard
This workflow demonstrates three distinct approaches to chaining LLM operations using Claude 3.7 Sonnet. Connect to any section to experience the differences in implementation, performance, and capabilities. What you'll find: 1️⃣ Naive Sequential Chaining The simplest but least efficient approach - connecting LLM nodes in a direct sequence. Easy to set up for beginners but becomes unwieldy and slow as your chain grows. 2️⃣ Agent-Based Processing with Memory Process a list of instructions through a single AI Agent that maintains conversation history. This structured approach provides better context management while keeping your workflow organized. 3️⃣ Parallel Processing for Maximum Speed Split your prompts and process them simultaneously for much faster results. Ideal when you need to run multiple independent tasks without shared context. Setup Instructions: API Credentials: Configure your Anthropic API key in the credentials manager. This workflow uses Claude 3.7 Sonnet, but you can modify the model in each Anthropic Chat Model node, or pick an entirely different LLM. For Cloud Users: If using the parallel processing method (section 3), replace {{ $env.WEBHOOK_URL }} in the "LLM steps - parallel" HTTP Request node with your n8n instance URL. Test Data: The workflow fetches content from the n8n blog by default. You can modify this part to use a different content or a data source. Customization: Each section contains a set of example prompts. Modify the "Initial prompts" nodes to change the questions asked to the LLM. Compare these methods to understand the trade-offs between simplicity, speed, and context management in your AI workflows! Follow me on LinkedIn for more tips on AI automation and n8n workflows!
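The trade-off between section 1 (sequential) and section 3 (parallel) is easy to demonstrate outside n8n. This sketch stubs the LLM call with a short sleep; with three independent prompts, the threaded version finishes in roughly one call's latency instead of three:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_llm(prompt):
    """Stand-in for an Anthropic API call; sleeps to simulate latency."""
    time.sleep(0.2)
    return f"answer to: {prompt}"

prompts = ["Summarize the post", "List key takeaways", "Suggest a title"]

# Naive sequential chaining: total latency grows linearly with the chain.
t0 = time.perf_counter()
sequential = [fake_llm(p) for p in prompts]
seq_time = time.perf_counter() - t0

# Parallel processing: independent prompts run simultaneously.
t0 = time.perf_counter()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(fake_llm, prompts))
par_time = time.perf_counter() - t0
```

Note the caveat from section 2: parallel calls share no conversation history, which is exactly why the agent-with-memory approach exists for dependent steps.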
by Lakshit Ukani
One-way sync between Telegram, Notion, Google Drive, and Google Sheets

Who is this for?

This workflow is perfect for productivity-focused teams, remote workers, virtual assistants, and digital knowledge managers who receive documents, images, or notes through Telegram and want to automatically organize and store them in Notion, Google Drive, and Google Sheets—without any manual work.

What problem is this workflow solving?

Managing Telegram messages and media manually across different tools like Notion, Drive, and Sheets can be tedious. This workflow automates the classification and storage of incoming Telegram content, whether it's a text note, an image, or a document. It saves time, reduces human error, and ensures that media is stored in the right place with metadata tracking.

What this workflow does

- **Triggers on a new Telegram message** using the Telegram Trigger node.
- **Classifies the message type** using a Switch node:
  - Text messages are appended to a Notion block.
  - Images are converted to base64, uploaded to imgbb, and then added to Notion as toggle-image blocks.
  - Documents are downloaded, uploaded to Google Drive, and the metadata is logged in Google Sheets.
- **Sends a completion confirmation** back to the original Telegram chat.

Setup

- Telegram Bot: Set up a bot and get the API token.
- Notion Integration: Share access to your target Notion page/block. Use the Notion API credentials and block ID where content should be appended.
- Google Drive & Sheets: Connect the relevant accounts. Select the destination folder and spreadsheet.
- imgbb API: Obtain a free API key from imgbb.
- Replace placeholder credential IDs and asset URLs as needed in the imported workflow.

How to customize this workflow to your needs

- **Change Storage Locations**: Update the Notion block ID or Google Drive folder ID. Switch the Google Sheet to log to a different file or sheet.
- **Add More Filters**: Use additional Switch rules to handle other Telegram message types (like videos or voice messages).
- **Modify Response Message**: Personalize the Telegram confirmation text based on the file type or sender.
- **Use a different image hosting service** if you don’t want to use imgbb.
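The base64 conversion for the imgbb upload can be sketched in Python. This only builds the form payload; assuming imgbb's documented upload endpoint (POST to https://api.imgbb.com/1/upload with a key and a base64 image field), the actual request is the HTTP Request node's job in n8n:

```python
import base64

def imgbb_payload(image_bytes, api_key):
    """Build the form payload for imgbb's upload endpoint.
    The HTTP request itself is omitted; in n8n it is an HTTP Request node."""
    return {
        "key": api_key,
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }

payload = imgbb_payload(b"\x89PNG...", "YOUR_IMGBB_KEY")
```

If you swap imgbb for another host, only this payload shape (and the response field holding the hosted URL) should need to change.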
by Davide
This workflow automates the process of generating and scheduling social media posts using content from a WordPress blog. It leverages advanced AI (OpenAI & Anthropic Claude), Google Sheets, and the Postiz platform to create and publish platform-specific posts across LinkedIn, Facebook, Instagram, and Twitter (X). This system streamlines cross-platform social media publishing, ensuring consistent branding and AI-optimized content.

Key Features

- **Content Source: WordPress** – Automatically fetches the content of a WordPress post by its Post ID.
- **Content Transformation via AI** – Uses Anthropic Claude and OpenAI to generate unique, optimized captions for each platform:
  - LinkedIn: professional and insight-driven
  - Instagram: creative with emojis and storytelling
  - Facebook: community-oriented and friendly
  - Twitter (X): concise, hashtag-optimized
- **Visual Generation (Optional)** – Uses OpenAI's DALL·E (via OpenRouter) to generate custom images based on the AI-generated Instagram and Facebook/LinkedIn captions.
- **Post Management with Google Sheets** – Uses a Google Sheet as the control panel: simply input the WordPress Post ID; each post is marked as done by updating the corresponding columns (TWITTER, FACEBOOK, INSTAGRAM, LINKEDIN).
- **Publishing via Postiz** – Uses the Postiz API to schedule or immediately publish posts to your connected social accounts, handling image uploads and scheduling time for each platform.

Benefits

- 💡 Intelligent automation: Saves time by removing manual copywriting and platform formatting.
- 🎯 Platform optimization: Ensures posts are tailored to each platform's audience and algorithm.
- 🛠️ No-code friendly: Simple setup via Google Sheets + Postiz + WordPress.
- 🔁 Repeatable & Scalable: Ideal for agencies or content creators managing multiple posts per week.
- 🧪 20+ social media platforms: Easy to extend with further social integrations.

How It Works

Input & Data Fetching: The workflow starts with a manual trigger (e.g., "Test workflow") or scheduled execution.
It retrieves a WordPress post ID from a Google Sheets document, then fetches the full post content (title and body) via the WordPress API. AI-Powered Content Generation: The "Social Media Manager" node (powered by Claude Opus 4.1) analyzes the post and generates platform-optimized captions for: Twitter/X: Concise, hashtag-rich text (≤150 chars). Facebook/LinkedIn: Professional yet engaging copy with CTAs. Instagram: Visual-focused captions with emojis and hashtags. AI-generated images are created for Instagram (square) and Facebook/LinkedIn (landscape) using OpenAI’s image model. Publishing Automation: Captions and images are uploaded to Postiz, a social media scheduler. Postiz publishes the content to connected platforms (Twitter, Facebook, LinkedIn, Instagram) at the specified time. Google Sheets is updated with status markers (e.g., "x" in columns like TWITTER, FACEBOOK) to track published posts. Set Up Steps Prerequisites: Postiz Account: Sign up for Postiz (free trial available). API Keys: Configure Postiz API credentials in the "Postiz" and "Upload Image" nodes. Social Channels: Link your social accounts in Postiz’s dashboard and note their integrationId values (replace "XXX" in Postiz nodes). Google Sheets Setup: Clone the template Sheet and add WordPress post IDs to the "POST ID" column. Configure Nodes: WordPress: Add credentials for your WordPress site in the "Get Post" node. AI Models: Ensure API keys for Claude (Anthropic) and OpenAI (for images) are valid. Postiz Nodes: Replace placeholder integrationId values with your actual Postiz channel IDs. Test & Deploy: Trigger the workflow manually to verify captions, images, and Postiz scheduling. Activate the workflow for automation (e.g., run daily to publish new WordPress posts). Note: This workflow requires self-hosted n8n due to community nodes (Postiz, LangChain). Need help customizing? Contact me for consulting and support or add me on Linkedin.
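A small sketch of the per-platform length constraint mentioned above (Twitter/X captions kept to 150 characters or fewer). In the template the AI is prompted to respect the limit; this hard truncation is only an illustrative safety net you could add after generation:

```python
def fit_caption(text, platform):
    """Trim a generated caption to the per-platform limit used in the
    workflow's prompts. Only Twitter/X has a limit here; other
    platforms pass through unchanged."""
    limits = {"twitter": 150}
    limit = limits.get(platform)
    if limit is None or len(text) <= limit:
        return text
    # Reserve one character for the ellipsis marker.
    return text[: limit - 1].rstrip() + "…"
```

A post-generation guard like this keeps a single over-long model output from failing the Postiz publish step.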
by David Levesque
Dropbox Folder Monitoring Workflow

As we don't have (yet?) a Dropbox node "Watching new files" or "Watching folder", I created this central workflow to do it.

How it works

- Triggered by a Dropbox webhook
- I respond immediately to Dropbox to avoid the webhook being disabled
- Then I add/duplicate one branch per monitored folder, according to my needs

In my case, I need to monitor several folders, like "vocal notes to process", "transcriptions to LinkedIn posts" or "quotes to add". This workflow shows 2 types of folder monitoring:

Way #1: Each file in the monitored folder calls a sub-workflow.
Way #2: We get all files from the monitored folder and compare them to a database. If a file is not listed in the DB, I assume it's a new one.

Way #1 - We get all files from the monitored folder

- I set a variable folder_to_watch to indicate which folder to monitor. This step is here just to be homogeneous and allow setting the folder path only once in this branch.
- I list the folder files
- We keep only files (exclude folders)
- Then I call the specialized sub-workflow

Way #2 - We want only new files from the monitored folder

- I set a variable folder_to_watch to indicate which folder to monitor
- I list the folder files and keep only files
- Meanwhile, I query my DB to get the known files for this folder (I send the query (folder_to_watch,eq,{{ $json.folder_to_watch }}) to NocoDB)
- Now I can exclude old files and keep only new ones by merging (I compare on the Dropbox file id, as the file could have been renamed by the user)
- I add the new file to the DB to be sure to recognize it next time. I save the JSON Dropbox data:

```json
{
  "id": "{{ $json.id }}",
  "name": "{{ $json.name }}",
  "lastModifiedClient": "{{ $json.lastModifiedClient }}",
  "lastModifiedServer": "{{ $json.lastModifiedServer }}",
  "rev": "{{ $json.rev }}",
  "contentSize": {{ $json.contentSize }},
  "type": "{{ $json.type }}",
  "contentHash": "{{ $json.contentHash }}",
  "pathLower": "{{ $json.pathLower }}",
  "pathDisplay": "{{ $json.pathDisplay }}",
  "isDownloadable": {{ $json.isDownloadable }}
}
```

And now I can call my sub-workflow :)

My DB columns:

- folder_to_watch
- data (json/text)
- timestamp
- file_id (Dropbox file ID, to ease future searches)

My vision:

- I have only one workflow in my n8n that monitors Dropbox folders/files
- This workflow calls the required sub-workflow specialized for the task
- I will have as many branches as I have folders to monitor (if I have 5 different folders to watch, I will get 5 branches and 5 sub-workflows)
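Way #2's merge step boils down to a set difference on Dropbox file ids. A minimal Python sketch (in the workflow this is a Merge node against the NocoDB results):

```python
def new_files(dropbox_entries, known_ids):
    """Keep only listing entries whose Dropbox file id is not yet in the DB.
    Comparing on id rather than name survives user renames."""
    return [e for e in dropbox_entries if e["id"] not in known_ids]

listing = [
    {"id": "id:abc", "name": "note-1.m4a"},
    {"id": "id:def", "name": "renamed-note.m4a"},
]
known = {"id:abc"}  # ids already stored in NocoDB for this folder
fresh = new_files(listing, known)
```

Because each new file's id is written back to the DB immediately, the next webhook delivery sees it as known and the sub-workflow is never called twice for the same file.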
by WeWeb
This n8n template helps you build a full AI-powered LinkedIn content generator with just a few clicks. Paired with the free WeWeb UI template, it becomes a ready-to-use web app where users can:

- Add their own OpenAI API key
- Customize the prompt and define 6 content topics
- Edit the AI-generated topics
- Choose when to generate LinkedIn posts, complete with hashtags and an optional image

Who This Is For

Perfect for marketers, indie hackers, and solopreneurs who want to build their personal brand on LinkedIn while staying in control of what gets posted.

🧠 What Makes This Different

Unlike most AI agents, you stay fully in control:

- You define the tone and focus via the prompt.
- You choose which topics to keep or modify.
- You decide when to generate a post.
- You can build on top of this and create your own SaaS product.

It's also modular and extendable—hook it up to your backend, add user login, or feed AI improvements based on user input.

⚙️ How It Works

- Triggering Events: The app includes 3 pre-configured triggers, ready to be hooked into your WeWeb frontend. Just update the webhook URLs after duplicating the n8n workflow.
- Topic Generation: A call is made to OpenAI (GPT-4) to generate topic ideas based on your prompt.
- Post Creation: Once topics are approved or edited, GPT-4 writes full posts with suggested hashtags.
- Image Generation (Optional): If enabled, a DALL·E call generates a relevant image.
- Everything Stays Local: All data and images are handled locally; no cloud storage setup needed.

🧪 Requirements & Setup

No fancy infrastructure required. Here's what helps you get started:

- **Free WeWeb account** (recommended) to use the frontend UI template
- **OpenAI account** with API access (for GPT-4 and DALL·E)
- **n8n account** (self-hosted or cloud) to run the backend workflow

The template is completely free to use. Since each user adds their own OpenAI API key, you don't need to worry about usage costs or rate limits on your end.

🔧 Want to Go Further?
This setup is beginner-friendly, but developers can: Add user accounts Save post history Feed user feedback back into the prompt logic Launch their own branded version as a SaaS
by Muhammad Bello
Email Inbox Manager System

**Categories:** Email Automation, AI-Powered Operations, Internal Productivity Tools

This workflow builds a fully automated, AI-powered email categorization and response assistant. It intelligently processes, categorizes, labels, and drafts replies to incoming Gmail messages in real time using AI, with zero manual involvement. Perfect for support, sales, finance, and internal operations.

**Benefits**

- **Automated Email Triage** – Every unread email is instantly read, analyzed, and classified
- **AI-Powered Categorization** – Uses GPT-4 to understand content and apply the correct labels
- **Smart Response Generation** – Automatically drafts accurate replies based on category
- **Slack Notifications** – Instantly notifies your team about internal or sales-related messages
- **Seamless Gmail Integration** – Labels, drafts, and marks emails directly in your inbox
- **Custom Classification Rules** – Tailored to internal, support, sales, finance, and promotional needs

**How It Works**

- **Gmail Trigger:** Monitors the Gmail inbox in real time for new unread messages. Triggers the workflow every minute, with no need for manual refresh.
- **Smart Classification:** Feeds the email body to an AI-powered Text Classifier. Categories include Internal, Customer Support, Promotions, Admin/Finance, and Sales Opportunity. Classification is based on sender domain, keywords, and context.
- **AI-Based Labeling:** Applies the appropriate Gmail label based on the classification result, keeping the inbox clean, organized, and easily searchable.
- **AI Reply Generation:** Specialized GPT-4 agents generate replies tailored to each category:
  - **Internal:** Polished team replies
  - **Customer Support:** Clear and professional customer responses
  - **Promotions:** Summarizes and evaluates promotional value
  - **Admin/Finance:** Extracts invoice/payment information
  - **Sales Opportunity:** Drafts personalized replies and sales notifications
- **Auto-Drafting + Slack Alerts:** Replies are saved as Gmail drafts, ready for review or direct send. Slack notifications for Internal or Sales Opportunity messages include subject lines and quick message previews.
- **Smart Decision-Making:** Promotional emails are evaluated with AI for usefulness, and only valuable offers are flagged or responded to. Emails are automatically marked as read after processing.

**Business Use Cases**

- **Customer Support Teams** – Automatically categorize and prep replies to client messages
- **Sales Reps** – Instantly receive drafted responses to new inquiries
- **Operations Managers** – Keep internal comms clear and responsive
- **Finance Departments** – Auto-extract and review payment/invoice messages
- **Founders** – Never miss an important email while your AI sorts and replies for you

**Difficulty Level:** Intermediate
**Estimated Build Time:** 2–4 hours
**Monthly Operating Cost:** $20–80 (depending on OpenAI usage and Slack volume)

**Required Setup**

- **Gmail Integration**
  - Set up a Gmail OAuth2 connection
  - Create the labels: Internal, Customer Support, Promotions, Admin/Finance, Sales Opportunity
- **OpenAI Integration**
  - Connect a GPT-4 or GPT-4o account
  - Configure role-based prompts for each email category
  - Output structured response data (subject, body, notification)
- **Slack Integration**
  - Set up a Slack OAuth2 connection
  - Configure the target Slack channel
  - Notify the team when important categories are triggered

**System Architecture**

The workflow follows a powerful six-stage automation:

1. **Trigger** – Poll Gmail for new unread emails
2. **Classify** – AI categorizes the email into the right bucket
3. **Label** – Applies a Gmail label for search and visibility
4. **Generate Reply** – GPT-4 crafts a draft email reply
5. **Draft Email** – Saves the response in Gmail
6. **Notify** – Optional Slack alerts for priority emails

**Why This System Works**

- **Inbox Clarity** – Keeps your inbox categorized and organized
- **Human-Quality Replies** – AI-generated messages sound professional and personalized
- **Time-Saving Automation** – Handles support, sales, internal ops, and finance without touching your inbox
- **Multi-Agent Architecture** – Each email type is handled by a specialized GPT-4 prompt
- **Real-Time Reactions** – From email receipt to Slack notification in under a minute
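The classification and structured-output stages described above can be sketched in plain JavaScript (the language of n8n code nodes). This is an illustrative heuristic only — in the actual workflow a GPT-4-backed Text Classifier makes the routing decision from content and context, and everything here (the `yourcompany.com` domain, the keyword regexes, the `exampleReply` shape) is a hypothetical placeholder:

```javascript
// Hypothetical sketch of the five-bucket routing the AI classifier performs.
// The real workflow uses an n8n Text Classifier backed by GPT-4; this
// domain/keyword heuristic only illustrates the decision logic.
function classifyEmail({ from, subject, body }) {
  const text = `${subject} ${body}`.toLowerCase();
  // Internal: sender shares the company domain (placeholder domain)
  if (from.endsWith('@yourcompany.com')) return 'Internal';
  if (/invoice|payment|receipt|billing/.test(text)) return 'Admin/Finance';
  if (/unsubscribe|% off|limited time|promo/.test(text)) return 'Promotions';
  if (/pricing|demo|quote|partnership/.test(text)) return 'Sales Opportunity';
  return 'Customer Support'; // default bucket for client questions
}

// Each category then gets its own GPT-4 prompt; per the setup notes, the
// agent returns structured data (subject, body, notification flag) that
// feeds the Gmail draft and Slack alert steps. An illustrative shape:
const exampleReply = {
  category: classifyEmail({
    from: 'buyer@example.com',
    subject: 'Pricing question',
    body: 'Could we get a quote for 50 seats?',
  }),
  subject: 'Re: Pricing question',
  body: 'Thanks for reaching out — happy to put a quote together.',
  notify: true, // Internal / Sales Opportunity messages also alert Slack
};
```

Keeping the classifier's category list identical to the Gmail label names (Internal, Customer Support, Promotions, Admin/Finance, Sales Opportunity) is what lets the labeling step apply results directly without any mapping.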
by Adam Bertram
LintGuardian: Automated PR Linting with n8n & AI

What It Does

LintGuardian is an n8n workflow template that automates code quality enforcement for GitHub repositories. When a pull request is created, the workflow automatically analyzes the changed files, identifies linting issues, fixes them, and submits a new PR with corrections. This eliminates manual code style reviews, reduces back-and-forth comments, and lets your team focus on functionality rather than formatting.

How It Works

The workflow is triggered by a GitHub webhook when a PR is created. It fetches all changed files from the PR using the GitHub API, processes them through an AI-powered linting service (Google Gemini), and automatically generates fixes. The AI agent then creates a new branch with the corrected files and submits a "linting fixes" PR against the original branch. Developers can review and merge these fixes with a single click, keeping code consistently formatted with minimal effort.

Prerequisites

To use this template, you'll need:

- n8n instance: either self-hosted or on n8n.cloud
- GitHub repository: where you want to enforce linting standards
- GitHub Personal Access Token: with permissions for repo access (repo, workflow, admin:repo_hook)
- Google AI API Key: for the Gemini language model that powers the linting analysis
- GitHub webhook: configured to send PR creation events to your n8n instance

Setup Instructions

1. Import the template into your n8n instance.
2. Configure credentials:
   - Add your GitHub Personal Access Token under Credentials → GitHub API
   - Add your Google AI API key under Credentials → Google Gemini API
3. Update repository information: locate the "Set Common Fields" code node at the beginning of the workflow and change the gitHubRepoName and gitHubOrgName values to match your repository:

   ```javascript
   const commonFields = {
     'gitHubRepoName': 'your-repo-name',
     'gitHubOrgName': 'your-org-name'
   }
   ```

4. Configure the webhook: create a file named .github/workflows/lint-guardian.yml in your repository, replacing the URL in the "Trigger n8n Workflow" step with your own n8n webhook URL:

   ```yaml
   name: Lint Guardian
   on:
     pull_request:
       types: [opened, synchronize]
   jobs:
     trigger-linting:
       runs-on: ubuntu-latest
       steps:
         - name: Trigger n8n Workflow
           uses: fjogeleit/http-request-action@v1
           with:
             url: 'https://your-n8n-instance.com/webhook/1da5a6e1-9453-4a65-bbac-a1fed633f6ad'
             method: 'POST'
             contentType: 'application/json'
             data: |
               {
                 "pull_request_number": ${{ github.event.pull_request.number }},
                 "repository": "${{ github.repository }}",
                 "branch": "${{ github.event.pull_request.head.ref }}",
                 "base_branch": "${{ github.event.pull_request.base.ref }}"
               }
             preventFailureOnNoResponse: true
   ```

5. Customize linting rules (optional):
   - Modify the AI Agent's system message to specify your team's linting preferences
   - Adjust file handling if you have specific file types to focus on or ignore

Security Considerations

When creating your GitHub Personal Access Token, remember to:

- Choose the minimal permissions needed (repo, workflow, admin:repo_hook)
- Set an appropriate expiration date
- Treat your token like a password and store it securely
- Consider using GitHub's fine-grained personal access tokens for a more limited scope

As the GitHub documentation notes: "Personal access tokens are like passwords, and they share the same inherent security risks."

Extending the Template

You can enhance this workflow by:

- Adding Slack notifications when linting fixes are submitted
- Creating custom linting rules specific to your team's needs
- Expanding it to handle different types of code quality checks
- Adding approval steps for more controlled environments

This template provides an excellent starting point that you can customize to fit your team's exact workflow and code style requirements.
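As a rough sketch of what happens on the n8n side once the GitHub Action fires: the workflow parses the webhook body (the JSON payload the Action posts), then calls GitHub's list-pull-request-files endpoint (`GET /repos/{owner}/{repo}/pulls/{pull_number}/files`) to fetch the changed files for the AI agent. The helper names below are hypothetical — in the template these steps are performed by Webhook, Set, and GitHub/HTTP Request nodes:

```javascript
// Hypothetical helpers mirroring the first two stages of the workflow.
// 1) Parse the payload posted by the GitHub Action's data block.
function parseWebhookPayload(body) {
  const { pull_request_number, repository, branch, base_branch } = body;
  // "repository" arrives as "org/repo", matching ${{ github.repository }}
  const [owner, repo] = repository.split('/');
  return { prNumber: pull_request_number, owner, repo, branch, baseBranch: base_branch };
}

// 2) Build the GitHub REST request that lists the PR's changed files.
function changedFilesRequest({ owner, repo, prNumber }, token) {
  return {
    url: `https://api.github.com/repos/${owner}/${repo}/pulls/${prNumber}/files`,
    options: {
      method: 'GET',
      headers: {
        Authorization: `Bearer ${token}`,
        Accept: 'application/vnd.github+json',
      },
    },
  };
}

// Example: a payload for PR #42 on the placeholder repo from "Set Common Fields"
const info = parseWebhookPayload({
  pull_request_number: 42,
  repository: 'your-org-name/your-repo-name',
  branch: 'feature/new-thing',
  base_branch: 'main',
});
const req = changedFilesRequest(info, 'ghp_example_token');
```

The same token then covers the later write operations (creating the fix branch and opening the "linting fixes" PR), which is why the repo scope is required rather than read-only access.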