by Connor Provines
## [Meta] Multi-Format Documentation Generator for n8n Creators (+More)

**One-line description:** Transform n8n workflow JSON into five ready-to-publish documentation formats, including technical guides, social posts, and marketplace submissions.

### What it does
This workflow takes an exported n8n workflow JSON file and automatically generates a complete documentation package in five distinct formats: technical implementation guide, LinkedIn post, Discord community snippet, detailed use case narrative, and n8n Creator Commons submission documentation. All outputs are compiled into a single Google Doc for easy access and distribution.

### Who it's for
- **n8n creators** preparing workflows for the template library or community sharing
- **Automation consultants** documenting client solutions across multiple channels
- **Developer advocates** creating content about automation workflows for different audiences
- **Teams** standardizing workflow documentation for internal knowledge bases

### Key features
- **Parallel AI generation** - Creates all five documentation formats simultaneously using Claude, saving 2+ hours of manual writing
- **Automatic format optimization** - Each output follows platform-specific best practices (LinkedIn character limits, Discord casual tone, n8n marketplace guidelines)
- **Single Google Doc compilation** - All documentation consolidated with clear section separators and automatic workflow name detection
- **JSON upload interface** - Simple form-based trigger accepts workflow exports without technical setup
- **Smart content adaptation** - The same workflow data is transformed into technical depth for developers, engaging narratives for social media, and searchable descriptions for marketplaces
- **Ready-to-publish outputs** - No editing required; each format follows platform submission guidelines and style requirements

### How it works
1. User uploads an exported n8n workflow JSON through a web form interface
2. Five AI agents process the workflow data in parallel, each generating format-specific documentation (technical guide, LinkedIn post, Discord snippet, use case story, marketplace listing)
3. All outputs merge into a formatted document with section headers and separators
4. Google Docs creates a new document with an auto-generated title from the workflow name and timestamp
5. The final document is populated with all five documentation formats, ready for copying to the respective platforms

### Setup requirements
**Prerequisites:**
- **Anthropic API** (Claude AI) - Powers all documentation generation; requires paid API access or credits
- **Google Docs API** - Creates and updates documentation; free with a Google Workspace account
- **n8n instance** - Cloud or self-hosted with AI agent node support (v1.0+)

**Estimated setup time:** 20-25 minutes (15 minutes for API credentials, 5-10 minutes for testing with a sample workflow)

### Installation notes
- **API costs**: Each documentation run uses ~15,000-20,000 tokens across five parallel AI calls (approximately $0.30-0.50 per generation at current Claude pricing)
- **Google Docs folder**: Update the folderId parameter in the "Create a document" node to your target folder - the default points to a specific folder that won't exist in your Drive
- **Testing tip**: Use a simple 3-5 node workflow for your first test to verify all AI agents complete successfully before processing complex workflows
- **Wait node purpose**: The 5-second wait between document creation and content update prevents Google Docs API race conditions - don't remove this step
- **Form URL**: After activation, save the form trigger URL for easy access - bookmark it or share it with team members who need to generate documentation

### Customization options
**Swappable integrations:**
- Replace Google Docs with Notion, Confluence, or file system storage by swapping the final nodes
- Switch from Claude to GPT-4, Gemini, or other LLMs by changing the language model node (may require prompt adjustments)
- Add Slack/email notification nodes after completion to alert when documentation is ready

**Adjustable parameters:**
- Modify AI prompts in each agent node to match your documentation style preferences or add company-specific guidelines
- Add or remove documentation formats by duplicating or deleting agent nodes and updating the merge configuration
- Change the document formatting in the JavaScript Code node (section separators, headers, metadata)

**Extension possibilities:**
- Add automatic posting to LinkedIn/Discord by connecting their APIs after doc generation
- Create version history tracking by appending to existing docs instead of creating new ones
- Build an approval workflow by adding human-in-the-loop steps before final document creation
- Generate visual diagrams by adding Mermaid chart generation from the workflow structure
- Create multi-language versions by adding translation nodes after English generation

**Category:** Development
**Tags:** documentation, n8n, content-generation, ai, claude, google-docs, workflow, automation-publishing

### Use case examples
- **Marketplace contributors**: Generate complete n8n template submission packages in minutes instead of hours of manual documentation writing across multiple format requirements
- **Agency documentation**: Automation consultancies can deliver client workflows with a professional documentation suite - technical guides for client IT teams, social posts for client marketing, and narrative case studies for the portfolio
- **Internal knowledge base**: Development teams standardize workflow documentation across projects, ensuring every automation has consistent technical details, use case examples, and setup instructions for team onboarding
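The merge step described above can be sketched as a small function. This is a hypothetical illustration of the logic inside the JavaScript Code node, not the template's actual code; the field names (`format`, `output`) are assumptions about what the five agent branches emit.

```javascript
// Illustrative sketch of the section-separator merge logic.
// Field names (`format`, `output`) are assumed, not taken from the workflow.
const SEPARATOR = '\n\n' + '='.repeat(60) + '\n\n';

function compileDocument(items) {
  const sections = items.map(({ format, output }) =>
    `## ${(format || 'UNTITLED').toUpperCase()}\n\n${output || ''}`);
  return sections.join(SEPARATOR);
}

// Inside an n8n Code node this would be roughly:
//   return [{ json: { document: compileDocument($input.all().map(i => i.json)) } }];
```

Editing the `SEPARATOR` string or the section header template is where you would change the document formatting mentioned under "Adjustable parameters."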
by Cheng Siong Chin
## How it works
The webhook receives incoming profiles and extracts the relevant demographic, financial, and credential data. The workflow then queries the programs database to identify suitable options, while the AI generates personalized recommendations based on eligibility and preferences. A formal recommendation letter is created, followed by a drafted outreach email tailored to coordinators. Parsers extract structured data from the letters and emails, a Slack summary is prepared for internal visibility, and the final response is sent to the appropriate recipients.

## Setup steps
1. Configure the AI agents by adding OpenAI credentials and setting prompts for the Program Matcher, Letter Writer, and Email Drafter.
2. Connect the programs database (Airtable or PostgreSQL) and configure queries to retrieve matching program data.
3. Set up the webhook by defining the trigger endpoint and payload structure for incoming profiles.
4. Configure the JSON parsers to extract the relevant information from profiles, letters, and emails.
5. Add the Slack webhook URL and define the summary format for generated communications.

## Prerequisites
- OpenAI API key
- Financial programs database
- Slack workspace with webhook
- User profile structure (income, GPA, demographics)

## Use cases
- Universities automating 500+ annual applicant communications
- Scholarship foundations personalizing outreach at scale

## Customization
- Add multilingual support for international applicants
- Include PDF letter generation with signatures

## Benefits
Reduces communication time from 30 minutes to 2 minutes per applicant and ensures consistent professional quality.
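To make the webhook step concrete, here is a minimal sketch of the kind of extraction a parser node could perform on an incoming profile. The payload field names are assumptions based on the profile structure listed under Prerequisites (income, GPA, demographics); adapt them to your actual payload.

```javascript
// Hypothetical profile extraction - field names are illustrative only.
function extractProfile(body) {
  const { name, email, income, gpa, demographics = {} } = body;
  return {
    name,
    email,
    income: Number(income),
    gpa: Number(gpa),
    country: demographics.country || 'unknown',
    // A simple eligibility signal the Program Matcher prompt could use;
    // the threshold is an assumed example value.
    needBased: Number(income) < 40000,
  };
}
```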
by Robert Breen
This workflow introduces beginners to one of the most fundamental concepts in n8n: looping over items. Using a simple use case - generating LinkedIn captions for content ideas - it demonstrates how to split a dataset into individual items, process them with AI, and collect the output for review or export.

## ✅ Key Features
- 🧪 **Create Dummy Data**: Simulate a small dataset of content ideas.
- 🔁 **Loop Over Items**: Process each row independently using the SplitInBatches node.
- 🧠 **AI Caption Creation**: Automatically generate LinkedIn captions using OpenAI.
- 🧰 **Tool Integration**: Enhance AI output with creativity-injection tools.
- 🧾 **Final Output Set**: Collect the original idea and generated caption.

## 🧰 What You’ll Need
- ✅ An OpenAI API key
- ✅ The LangChain nodes enabled in your n8n instance
- ✅ Basic knowledge of how to trigger and run workflows in n8n

## 🔧 Step-by-Step Setup

### 1️⃣ Run Workflow
- **Node**: Manual Trigger (Run Workflow)
- **Purpose**: Manually start the workflow for testing or learning.

### 2️⃣ Create Random Data
- **Node**: Create Random Data (Code)
- **What it does**: Simulates incoming data with multiple content ideas.
- **Code**:

```javascript
return [
  { json: { row_number: 2, id: 1, Date: '2025-07-30', idea: 'n8n rises to the top', caption: '', complete: '' } },
  { json: { row_number: 3, id: 2, Date: '2025-07-31', idea: 'n8n nodes', caption: '', complete: '' } },
  { json: { row_number: 4, id: 3, Date: '2025-08-01', idea: 'n8n use cases for marketing', caption: '', complete: '' } }
];
```

### 3️⃣ Loop Over Items
- **Node**: Loop Over Items (SplitInBatches)
- **Purpose**: Sends one record at a time to the next node.
- **Why it matters**: Loops in n8n are created with this node when you want to iterate over multiple items.

### 4️⃣ Create Captions with AI
- **Node**: Create Captions (LangChain Agent)
- **Prompt**: `idea: {{ $json.idea }}`
- **System message**: You are a helpful assistant creating captions for a LinkedIn post. Please create a LinkedIn caption for the idea.
- **Model**: GPT-4o Mini or GPT-3.5
- **Credentials required**: OpenAI credential - go to OpenAI API Keys, create a key, and add it in n8n under credentials as “OpenAi account”.

### 5️⃣ Inject Creativity (Optional)
- **Node**: Tool: Inject Creativity (LangChain Tool)
- **Purpose**: Demonstrates optional LangChain tools that can enhance or manipulate input/output.
- **Why it’s cool**: A great way to show chaining tools to AI agents.

### 6️⃣ Output Table
- **Node**: Output Table (Set)
- **Purpose**: Combines the original ideas and generated captions into the final structure.
- **Fields**:
  - `idea: ={{ $('Create Random Data').item.json.idea }}`
  - `output: ={{ $json.output }}`

## 💡 Educational Value
This workflow demonstrates:
- Creating dynamic inputs with the Code node
- Using SplitInBatches to simulate looping
- Sending dynamic prompts to an AI model
- Using Set to structure the output data

Beginners will understand how item-level processing works in n8n and how powerful looping combined with AI can be.

## 📬 Need Help or Want to Customize This?
Robert Breen - Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🌐 ynteractive.com
🔗 LinkedIn

## 🏷️ Tags
n8n, loops, OpenAI, LangChain, workflow, training, beginner, LinkedIn, automation, caption generator
by Jasurbek
## How it works
This workflow runs on a daily schedule and automatically sends follow-up reminders to candidates who have received an application link but have not yet applied. It checks Airtable for eligible records, calculates how much time has passed since outreach was sent, and decides whether to send a first reminder, a second reminder, or no message.

All decision logic is handled in a single Code node, which outputs a simple routing value. This keeps the workflow easy to understand and prevents fragile conditional logic.

Each reminder is sent only once. After a reminder is sent, the workflow updates Airtable with a corresponding “sent” flag so the same reminder cannot be sent again on future runs.

## Setup steps
1. Connect your Airtable account and select the table containing candidate records.
2. Ensure Airtable includes a timestamp field indicating when outreach was sent.
3. Ensure checkbox fields exist for each reminder (for example, “Reminder 1 Sent” and “Reminder 2 Sent”).
4. Connect your email provider (Brevo) and SMS provider.
5. Set the Cron node to run once per day at your preferred time.

Initial setup typically takes 10-15 minutes.

## When to use this template
- You want automated follow-ups without manual chasing
- You need to avoid sending duplicate reminders
- You want Airtable to remain the source of truth
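The single routing Code node described above might look something like this. Treat it as a sketch: the field names (`Outreach Sent At`, `Reminder 1 Sent`, `Reminder 2 Sent`) match the example fields in the setup steps, but the 3-day and 7-day delays are assumed values to tune for your process.

```javascript
// Sketch of the routing logic - returns a value a Switch node can branch on.
// Delay windows (3 days / 7 days) are illustrative assumptions.
function routeReminder(record, now = new Date()) {
  const sentAt = new Date(record['Outreach Sent At']);
  const daysSince = (now - sentAt) / (1000 * 60 * 60 * 24);

  if (record['Reminder 2 Sent']) return 'none'; // both reminders already sent
  if (record['Reminder 1 Sent']) {
    return daysSince >= 7 ? 'reminder2' : 'none'; // second nudge after a week
  }
  return daysSince >= 3 ? 'reminder1' : 'none'; // first nudge after 3 days
}
```

Because the node outputs only one of three strings, the downstream branching stays trivial, which is what makes the conditional logic robust.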
by Valeriy Halahan
## AI Portfolio Generator for Freelancers
Automatically transform any website URL into a complete portfolio entry with professional screenshots and AI-generated Upwork project descriptions.

### 🎯 Perfect For
- **Freelancers** building their Upwork/portfolio from past projects
- **Agencies** documenting client work at scale
- **Web developers** showcasing their websites professionally
- **Anyone** who needs consistent, high-quality website screenshots

### ✨ What It Does
1. Submit a URL via a simple web form
2. AI analyzes the website (structure, niche, audience, services)
3. Smart screenshots capture hero, fullpage, individual sections, and mobile views
4. AI writes a compelling Upwork portfolio description with title, role, and skills
5. Everything auto-saves to Google Drive + Sheets, and a Telegram notification is sent

### 🔥 Key Features
- **JavaScript Rendering** - Works with React, Vue, Next.js, and any SPA (via Firecrawl)
- **Intelligent Section Detection** - AI identifies real content sections, not utility elements
- **Multiple Screenshot Types** - Hero (1920×1080), fullpage, custom sections, mobile (375×812)
- **Retina Quality** - 2x device scale factor for crisp images
- **Smart Error Handling** - Retries failed screenshots, filters invalid results
- **Rate Limit Protection** - Built-in delays to respect API limits
- **Complete Logging** - Every run logged to Google Sheets with all metadata

### 📸 Screenshots Captured
| Type | Resolution | Description |
|------|------------|-------------|
| Hero | 1920×1080 @2x | Above-the-fold view |
| Fullpage | 1920×auto @2x | Entire scrollable page |
| Sections | 1920×1080 @2x | Each detected content section |
| Mobile | 375×812 @2x | iPhone-style mobile view |

### 🤖 AI-Generated Upwork Content
- **Project Title** (max 50 chars)
- **Your Role** (e.g., "Full-Stack Developer", "Lead Designer")
- **Project Description** (goals, solution, impact - max 600 chars)
- **Skills** (5 relevant technical skills)

### 🔧 Services Used
| Service | Purpose | Free Tier |
|---------|---------|-----------|
| Firecrawl | JavaScript rendering | ✅ 500 pages/month |
| ScreenshotOne | Screenshot API | ✅ 100 screenshots/month |
| OpenAI | GPT-4o-mini analysis | Pay-as-you-go |
| Google Drive | Image storage | ✅ 15GB free |
| Google Sheets | Results logging | ✅ Free |
| Telegram | Notifications | ✅ Free |

### 📋 Setup Checklist
1. Import the workflow
2. Add your Firecrawl API key
3. Add your ScreenshotOne API key
4. Connect OpenAI credentials
5. Connect Google Drive (+ set your folder)
6. Connect Google Sheets (+ set your spreadsheet)
7. Set up a Telegram bot + chat ID
8. Activate & share the form URL!

### 💡 Pro Tips
- **Test with simple sites first** before complex SPAs
- **Increase the delay** in the Wait node if you hit rate limits
- **Change the AI model** to gpt-4o for better analysis quality
- **All instructions are included** as sticky notes inside the workflow!

### 📊 Output Example
After processing example.com:
- 📁 5 PNG screenshots in Google Drive
- 📊 Full analysis row in Google Sheets
- 📱 Telegram message with all links and AI-generated Upwork content

Built for freelancers, by a freelancer. Stop wasting hours on manual portfolio creation - let AI do the heavy lifting! 🚀

Tags: #portfolio, #screenshots, #upwork, #freelancer, #ai, #gpt, #automation, #firecrawl, #screenshotone, #google-drive, #google-sheets, #telegram, #website-analysis, #form-trigger
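The "filters invalid results" behavior under Key Features could look roughly like the sketch below. The item shape (`url`, `sizeBytes`, `error`) and the minimum-size heuristic are assumptions, not the workflow's actual schema; adapt the field names to whatever you store from the ScreenshotOne response.

```javascript
// Hypothetical filter/retry split for screenshot results.
// Blank or failed captures tend to be tiny files - the 5 KB cutoff is an
// assumed heuristic, not a documented value.
const MIN_BYTES = 5000;

function filterScreenshots(shots) {
  const valid = [];
  const retry = [];
  for (const shot of shots) {
    if (shot.error || !shot.url || (shot.sizeBytes ?? 0) < MIN_BYTES) {
      retry.push(shot); // queue for one more capture attempt
    } else {
      valid.push(shot);
    }
  }
  return { valid, retry };
}
```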
by Mantaka Mahir
## How it works
This workflow automates the process of converting Google Drive documents into searchable vector embeddings for AI-powered applications:
- Takes a Google Drive folder URL as input
- Initializes a Supabase vector database with the pgvector extension
- Fetches all files from the specified Drive folder
- Downloads and converts each file to plain text
- Generates 768-dimensional embeddings using Google Gemini
- Stores documents with embeddings in Supabase for semantic search

Built for the Study Agent workflow to power document-based Q&A, but it also works well for any RAG system, AI chatbot, knowledge base, or semantic search application that needs to query document collections.

## Setup steps
**Prerequisites:**
- Google Drive OAuth2 credentials
- Supabase account with Postgres connection details
- Google Gemini API key (free tier available)

**Setup time:** ~10 minutes

**Steps:**
1. Add your Google Drive OAuth2 credentials to the Google Drive nodes
2. Configure Supabase Postgres credentials in the SQL node
3. Add Supabase API credentials to the Vector Store node
4. Add your Google Gemini API key to the Embeddings node
5. Update the input with your Drive folder URL
6. Execute the workflow

**Note:** The SQL query will drop any existing "documents" table, so back up your data if needed. Detailed node-by-node instructions are in the sticky notes within the workflow.

**Works with:** Study Agent (main use case), custom AI agents, chatbots, documentation search, customer support bots, or any RAG application.
by Miki Arai
This workflow automates the daily content creation process by monitoring trends, generating drafts for multiple platforms using AI, and requiring human approval before saving. It acts as an autonomous "AI Content Factory" that turns raw news into polished content for SEO blogs, Instagram, and TikTok/Reels.

## How it works
1. **Trend Monitoring**: Fetches the latest trend data via RSS (e.g., Google News or Google Trends).
2. **AI Filtering**: An AI Agent acts as an "Editor-in-Chief," selecting only the most viral-worthy topics relevant to your niche.
3. **Multi-Format Generation**: Three specialized AI Agents (using gpt-4o-mini for cost efficiency) run in parallel to generate:
   - An SEO-optimized blog post structure
   - An Instagram carousel plan (5 slides)
   - A short video script (TikTok/Reels)
4. **Human-in-the-Loop**: Sends a formatted message with interactive buttons to Slack. The workflow waits for your decision.
5. **Final Storage**: If approved, the content is automatically appended to Google Sheets.

## Who is this for
- Social media managers & content creators
- Marketing agencies managing multiple accounts
- Anyone wanting to automate "research to draft" without losing quality control

## Requirements
- **n8n:** Version 1.19.0+ (requires AI Agent nodes)
- **OpenAI:** API key (works great with the low-cost gpt-4o-mini)
- **Slack:** A workspace to receive notifications
- **Google Sheets:** To store the approved content

## How to set up
1. **Configure credentials:** Set up your OpenAI, Slack, and Google Sheets credentials.
2. **Slack app:** Create a Slack app, enable "Interactivity," and set the Request URL to your n8n production webhook URL. Add the chat:write scope and install the app to your workspace.
3. **Google Sheet:** Create a sheet with columns for Blog, Instagram, and Script (row 1 as headers).
4. **RSS feed:** Change the RSS node URL to your preferred topic source.
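The human-in-the-loop Slack message could be built along these lines, using Slack's Block Kit `actions` block with two buttons. This is a sketch, not the template's actual payload; the `action_id` values are illustrative, and your n8n Interactivity webhook decides how to interpret them.

```javascript
// Hypothetical Block Kit payload for the approve/reject message.
function buildApprovalMessage(topic) {
  return {
    text: `Drafts ready for: ${topic}`, // plain-text fallback for notifications
    blocks: [
      {
        type: 'section',
        text: {
          type: 'mrkdwn',
          text: `*Drafts ready:* ${topic}\nApprove to save all three formats to Google Sheets.`,
        },
      },
      {
        type: 'actions',
        elements: [
          { type: 'button', text: { type: 'plain_text', text: '✅ Approve' }, style: 'primary', action_id: 'approve_content' },
          { type: 'button', text: { type: 'plain_text', text: '❌ Reject' }, style: 'danger', action_id: 'reject_content' },
        ],
      },
    ],
  };
}
```

When a button is clicked, Slack POSTs the interaction (including the `action_id`) to the Request URL configured in step 2 of the setup.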
by hayatofujita
## Manage expenses with AI insights through LINE

### 👥 Who’s it for
This workflow is designed for small business owners, freelancers, and finance teams who want to eliminate manual data entry for expense tracking. It is ideal for users who want not just a record of their spending, but real-time AI-driven financial insights to help manage their cash flow.

### 🚀 How it works
This template acts as an autonomous finance assistant that connects LINE, Google Workspace, and OpenAI.
1. **Data capture**: Receives receipt images directly through the LINE Messaging API.
2. **AI analysis**: Uses OpenAI (GPT-4o-mini) to perform OCR and extract Date, Store Name, Amount, and Category with high precision.
3. **Duplicate prevention**: Automatically searches your Google Sheet to verify whether the receipt has already been registered (based on Date and Amount) to prevent double-counting.
4. **Cloud storage**: Renames and saves the receipt image to Google Drive for tax compliance and easy retrieval.
5. **Automated ledger**: Appends the structured data to a Google Sheets master file.
6. **Financial insights**: Calculates the current month's total spending, compares it to previous data, and generates sharp management advice via AI to help you stay on budget.

### ⚙️ Setup steps
1. **Prepare the Google Sheet:** Create a sheet with the headers Date, Amount, Store, and Category.
2. **Prepare Google Drive:** Create a folder for receipt storage and copy its folder ID.
3. **Configure credentials:** Set up your n8n credentials for OpenAI (API key), Google Workspace (OAuth2), and the LINE Messaging API (channel access token).
4. **Update node settings:** In the Google Sheets nodes, select your spreadsheet. In the Google Drive node, paste your folder ID. In the LINE HTTP nodes, ensure the Authorization header is set to `Bearer YOUR_TOKEN`.
5. **Activate:** Set your n8n webhook URL in the LINE Developers Console and toggle the workflow to "Active."

### 📦 Requirements
- n8n version 1.0 or later
- OpenAI API key (GPT-4o / GPT-4o-mini)
- Google Workspace account (Drive, Sheets)
- LINE Developers account (Messaging API)

### 🎨 How to customize
- **Refine AI advice**: Edit the system message in the AI Agent node to change the tone of the advice (e.g., from "Strict Accountant" to "Friendly Growth Hacker").
- **Switch channels**: Replace the final LINE node with Slack or Discord nodes if your team uses those platforms.
- **Budget alerts**: Add a Filter node to trigger special notifications when monthly spending exceeds a certain threshold.
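The duplicate-prevention rule described in "How it works" (same Date + same Amount = already registered) reduces to a small comparison. This sketch assumes the Google Sheets lookup returns rows as objects keyed by the headers created in setup step 1; it is an illustration, not the workflow's actual code.

```javascript
// Hypothetical duplicate check against previously registered receipts.
// Matching rule follows the description: identical Date and Amount.
function isDuplicate(existingRows, receipt) {
  return existingRows.some(
    (row) =>
      String(row.Date) === String(receipt.Date) &&
      Number(row.Amount) === Number(receipt.Amount)
  );
}
```

An IF node can branch on this result to skip the append step, which is what prevents double-counting.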
by Oneclick AI Squad
This n8n workflow runs daily to analyze active customer behavior: it engineers relevant features from usage and transaction data, applies a machine learning or AI-based model to predict churn probability, classifies risk levels, triggers retention actions for at-risk customers, stores predictions for tracking, and notifies the relevant teams.

## Key insights
- Prediction accuracy depends heavily on feature quality - ensure login frequency, spend trends, support interactions, and engagement metrics are consistently captured and up to date.
- Start with simple rule-based scoring or AI prompting (e.g., OpenAI/Claude) before integrating full ML models, for easier testing and faster value.
- High-risk thresholds (e.g., >70%) should be tuned against your actual churn data to avoid alert fatigue or missed opportunities.

## Workflow process
1. Initiate the workflow with the Daily Schedule Trigger node (runs every day at 2 AM).
2. Query the customer database to fetch active user profiles, recent activity logs, login history, transaction records, and support ticket data.
3. Perform feature engineering: calculate metrics such as login frequency (daily/weekly), average spend, spend velocity, days since last activity, number of support tickets, NPS/sentiment if available, and other engagement signals.
4. Feed the engineered features into the prediction step: call an ML model endpoint, run a Python code node with a lightweight model, or use an AI agent/LLM to estimate churn probability (0-100%).
5. Classify each customer into a risk tier - HIGH RISK, MEDIUM RISK, or LOW RISK - based on configurable probability thresholds.
6. For at-risk customers (especially HIGH), trigger retention actions: create personalized campaigns, add to nurture sequences, generate discount codes, or create tasks in the CRM.
7. Store predictions, risk scores, features, and actions taken in an analytics database for historical tracking and model improvement.
8. Send summarized alerts (e.g., a list of high-risk customers with scores and recommended actions) via email and/or Slack to customer success or retention teams.

## Usage guide
1. Import the workflow into n8n and configure credentials for your customer database (PostgreSQL/MySQL), ML API (if external), analytics DB, Slack webhook, SMTP/email, and CRM/retention platform.
2. Define the feature extraction queries and thresholds carefully in the relevant nodes - test with a small customer subset first.
3. If using an AI/LLM for prediction, refine the prompt to include clear examples of churn signals.
4. Run manually via the Execute Workflow button with sample data to validate the data flow, scoring logic, and notifications.
5. Once confident, activate the daily schedule.

## Prerequisites
- Customer database with readable tables for users, activity logs, transactions, and support interactions
- ML integration option: an external ML API endpoint, a Python code node with scikit-learn or a simple model, or an LLM node (OpenAI, Claude, etc.) for probabilistic scoring
- Separate analytics database (or the same DB) with a table ready for churn predictions (customer_id, date, churn_prob, risk_level, etc.)
- SMTP credentials or an email service for alerts
- Slack webhook URL (optional but recommended for team notifications)
- CRM or marketing automation API access (e.g., HubSpot, ActiveCampaign, Klaviyo) for creating retention campaigns/tasks

## Customization options
- Adjust the daily trigger time, or make it hourly for near real-time monitoring of high-value accounts.
- Change the risk classification thresholds or add more tiers in the scoring logic node.
- Enhance the prediction step: switch from LLM-based scoring to a trained ML model (via Hugging Face, a custom endpoint, or a Code node).
- Personalize retention actions: use AI to generate custom email content/offers based on the customer's behavior profile.
- Add filtering (e.g., only high-value customers above a certain MRR) to focus retention efforts.
- Extend notifications: integrate with Microsoft Teams or Discord, or create tickets in Zendesk/Jira for follow-up.
- Build a feedback loop: after actual churn occurs, update a training dataset or adjust weights/rules in future runs.
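Steps 3 and 5 of the workflow process (feature engineering and risk tiering) can be sketched as follows. The raw record shape and the threshold values are assumptions to be tuned against your own churn data, as the Key Insights section advises; only the >70% high-risk cutoff comes from the text above.

```javascript
// Hypothetical feature engineering - input field names are illustrative.
function engineerFeatures(customer, now = new Date()) {
  const lastActive = new Date(customer.last_activity_at);
  return {
    daysSinceLastActivity: Math.floor((now - lastActive) / 86400000),
    loginsPerWeek: customer.logins_30d / (30 / 7),
    avgMonthlySpend: customer.spend_90d / 3, // average over the last quarter
    openTickets: customer.open_support_tickets,
  };
}

// Risk tiering on a 0-100% churn probability. The 70% high-risk cutoff
// matches the example in Key Insights; the 40% boundary is assumed.
function classifyRisk(churnProb) {
  if (churnProb > 70) return 'HIGH RISK';
  if (churnProb > 40) return 'MEDIUM RISK';
  return 'LOW RISK';
}
```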
by Rahul Joshi
## 📊 Description
Automate patient pre-arrival intake, AI risk assessment, and real-time doctor alerts in one seamless healthcare workflow. ⏰📧

This automation sends scheduled intake reminders before appointments, analyzes submitted patient data using AI, and flags high-risk cases instantly via Slack. Powered by Azure OpenAI, it evaluates symptoms, allergies, and medical history to generate structured risk levels and doctor preparation notes. Reduce manual triage work, improve patient safety, and ensure physicians are fully prepared before every visit. 🤖🚨

## 🔄 What This Template Does
- ⏰ Runs hourly to fetch upcoming appointments from Google Calendar.
- 📧 Sends pre-arrival intake emails to patients via Gmail.
- 📊 Monitors new intake form submissions in Google Sheets.
- 🤖 Analyzes symptoms and medical history using Azure OpenAI (GPT-4o-mini).
- 🧹 Parses and normalizes the structured AI output for consistency.
- 📝 Stores the risk level and doctor notes back in Google Sheets.
- 🚨 Sends instant Slack alerts if a patient is classified as high risk.

## ✅ Key Benefits
- ✅ Automates pre-visit intake collection with zero manual follow-up
- ✅ Uses AI-powered risk triage to support clinical decision-making
- ✅ Flags high-risk or urgent cases instantly
- ✅ Reduces front-desk workload and manual screening time
- ✅ Improves patient safety through structured early assessment
- ✅ Ensures doctors are prepared before appointments begin

## ⚙️ Features
- Google Calendar hourly appointment trigger
- Gmail automated intake reminder emails
- Google Sheets real-time form monitoring
- Azure OpenAI GPT-4o-mini risk assessment engine
- Structured JSON AI output with confidence scoring
- Automatic data normalization and formatting
- Conditional risk triage logic (Low / Medium / High / Emergency)
- Instant Slack doctor notifications for critical cases

## 🔐 Requirements
- Google Calendar OAuth2 credentials
- Gmail OAuth2 credentials
- Google Sheets OAuth2 credentials
- Azure OpenAI API credentials (GPT-4o-mini deployment)
- Slack OAuth2 credentials

## 🎯 Target Audience
- Private clinics and healthcare providers
- Telemedicine platforms
- Medical practices handling high patient volume
- Healthcare automation consultants and digital health startups
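The parse-and-normalize step could look roughly like this sketch. It assumes the Azure OpenAI node returns a JSON string with `risk_level`, `doctor_notes`, and `confidence` fields; the real schema is defined by the workflow's prompt, so treat these names as placeholders.

```javascript
// Hypothetical normalization of the structured AI output.
// Valid tiers mirror the triage logic listed under Features.
const LEVELS = ['Low', 'Medium', 'High', 'Emergency'];

function normalizeAssessment(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    // Fail closed: an unparseable answer is routed for manual review.
    return { riskLevel: 'High', doctorNotes: 'AI output unparseable - review manually', confidence: 0 };
  }
  const level = LEVELS.find(
    (l) => l.toLowerCase() === String(parsed.risk_level).trim().toLowerCase()
  );
  return {
    riskLevel: level || 'Medium', // unknown labels default to the middle tier
    doctorNotes: String(parsed.doctor_notes || '').trim(),
    confidence: Math.min(1, Math.max(0, Number(parsed.confidence) || 0)),
  };
}
```

Failing closed (unparseable output escalates rather than silently passing) is a deliberately conservative choice for a clinical triage context.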
by Rahul Joshi
## 📘 Description
This workflow automates daily re-engagement for HubSpot leads that were previously paused due to timing. It runs every 24 hours, fetches recent leads with activity data, and filters only those marked as “bad timing,” ensuring active opportunities are not disturbed. Qualified leads are processed in batches, and an AI agent generates a short, polite follow-up email designed to reopen the conversation without sounding salesy.

Instead of sending automatically, the workflow creates a Gmail draft for human review and notifies the assigned owner in Slack. If AI generation fails, the workflow sends a Slack alert with the lead details so manual follow-up can still happen. This ensures consistent reactivation of cold leads without silent failures or missed opportunities.

## ⚙️ What This Workflow Does (Step-by-Step)
1. ⏰ **Daily Trigger (Every 24h)** - Runs the workflow once per day.
2. 📥 **HubSpot: Fetch Recent Leads + Activity Fields** - Pulls up to 20 leads with the key activity and lifecycle properties needed for follow-up timing.
3. 🚦 **Filter: Lead Status = BAD_TIMING** - Keeps only leads that were previously paused due to timing (targets cold opportunities only).
4. 🔄 **Batch Leads (Split In Batches)** - Processes leads one by one / in batches to prevent rate-limit issues and keep execution stable.
5. ✍️ **AI: Generate Re-Engagement Follow-Up Email Body** - Creates a short, friendly follow-up message based on lead context (no CRM labels, no salesy tone).
6. 🧩 **Parse AI JSON** - Normalizes the AI output into usable fields (subject/body/toEmail/owner details).
7. 📩 **Gmail: Create Follow-Up Draft Email** - Creates a Gmail draft (human-in-the-loop; no auto-send).
8. 🔔 **Slack: Notify Owner to Re-Engage Lead** - Notifies the assigned owner that a draft is ready and the lead is worth revisiting.
9. ⚠️ **Build AI Error Payload** - Builds a clean error context (lead name/email + AI error message).
10. 🚨 **Slack: Alert AI Draft Failure** - Alerts the owner when the AI fails so the lead can be followed up manually.

## 🧩 Prerequisites
- n8n instance
- HubSpot app token (read access to leads + properties)
- Azure OpenAI credentials (GPT-4o) for email generation
- Gmail OAuth2 (draft creation permissions)
- Slack API token (DM user messaging permissions)

## 💡 Key Benefits
- ✔ Daily automated re-engagement for paused leads
- ✔ Prevents random outreach to active leads
- ✔ Human-in-the-loop safety via Gmail drafts (no auto-send)
- ✔ Owner notifications ensure follow-up execution
- ✔ Batch processing avoids rate-limit instability
- ✔ Strong failure handling prevents silent lead drops

## 👥 Perfect For
- SDR teams managing large HubSpot pipelines
- Founders doing lightweight outbound follow-ups
- Sales ops teams wanting structured reactivation loops
- Agencies re-engaging cold inbound/outbound leads
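Steps 6 and 9 of the workflow (parsing the AI's JSON and building the error payload on failure) can be sketched together. The expected fields (`subject`, `body`, `toEmail`) mirror the step description, but the exact names depend on your agent's prompt, so treat this as an illustration.

```javascript
// Hypothetical parse-or-error step: either a clean draft object for Gmail,
// or a clean error context for the Slack failure alert.
function parseDraftOrError(aiText, lead) {
  try {
    const { subject, body, toEmail } = JSON.parse(aiText);
    if (!subject || !body || !toEmail) throw new Error('missing fields');
    return { ok: true, draft: { subject, body, toEmail } };
  } catch (err) {
    return {
      ok: false,
      error: {
        leadName: lead.name,
        leadEmail: lead.email,
        message: `AI draft failed: ${err.message}`, // surfaced in the Slack alert
      },
    };
  }
}
```

Returning a single `ok` flag lets an IF node route cleanly to either the Gmail draft branch or the failure-alert branch, which is what prevents silent lead drops.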
by WeblineIndia
## Retail Price Sync Automation for Shopify & WooCommerce
This workflow automates the synchronization of product prices across Shopify and WooCommerce to keep retail pricing consistent. It triggers when a price change is detected in either system, applies platform-specific pricing rules (such as psychological rounding), and updates the other platform. The workflow also includes a threshold-based Gmail alerting system for major price drops and logs every change to a Google Sheets master file for auditing.

### Quick Implementation Steps
1. Set the priceChangeThreshold in the Shopify Configuration and WooCommerce Configuration nodes.
2. Connect your Shopify access token credentials to the Shopify trigger and update nodes.
3. Connect your WooCommerce API credentials to the WooCommerce trigger and update nodes.
4. Link your Google Sheets OAuth2 and Gmail OAuth2 credentials for logging and notifications.
5. Specify the documentId for your pricing log in the Log Price Changes node.

### What It Does
This workflow acts as a bridge between two major e-commerce platforms, ensuring that a price update in one is intelligently reflected in the other. It goes beyond simple mirroring by:
- **Threshold monitoring:** Detecting whether a price change exceeds a set limit (e.g., $150 or $500) to trigger immediate management alerts.
- **Platform-specific logic:** Automatically formatting prices for each environment - for example, rounding WooCommerce prices to the nearest .99 for psychological pricing while using standard rounding for Shopify.
- **Audit trail creation:** Maintaining a centralized record of all price migrations in Google Sheets, including SKUs, old vs. new prices, and timestamps.
- **Team communication:** Sending automated email notifications so the team is aware of successful syncs or critical price volatility.

### Who’s It For
- Multi-channel retailers who need to keep pricing in sync across Shopify and WooCommerce storefronts.
- Inventory managers looking to automate price adjustments without manual data entry.
- Finance & operations teams requiring an automated audit log of all pricing modifications.

### Technical Workflow Breakdown
**Entry points (triggers):**
- **Shopify Price Update:** Triggers on the orders/updated topic (or product updates) to capture new price data from Shopify.
- **WooCommerce Price Update:** Triggers on product.updated to capture changes originating from the WooCommerce store.

**Processing & logic:**
- **Configuration nodes:** Define the source system and set the specific threshold for what counts as a "major" price change.
- **Apply Platform-Specific Rules:** A custom code block that calculates psychological pricing (e.g., forcing a .99 ending) and ensures prices never drop below a minimum safety floor (e.g., $1.00).
- **Check Price Change Threshold:** An internal filter that routes the workflow based on the magnitude of the price shift.

**Output & integrations:**
- **Update nodes:** Push the formatted price data to the target platform (Shopify or WooCommerce).
- **Log Price Changes:** Appends a new row to a Google Sheet with detailed metadata.
- **Notifications:** Uses Gmail to send high-priority alerts for major drops and routine confirmations for successful syncs.

### Customization
- **Adjust the pricing strategy:** Modify the Apply Platform-Specific Rules (Code node) to change the rounding logic, add currency conversion factors, or implement different psychological pricing tiers.
- **Change alert thresholds:** Update the priceChangeThreshold value in the Shopify Configuration or WooCommerce Configuration nodes to make alerts more or less sensitive.
- **Expand logging:** The Log Price Changes node is set to autoMapInputData. You can add custom columns to your Google Sheet, and the workflow will automatically attempt to fill them if the data exists in the workflow.
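The Apply Platform-Specific Rules node described above might contain logic along these lines. This is a sketch under the stated rules (.99 ending for WooCommerce, standard rounding for Shopify, $1.00 safety floor), not the template's actual code; the exact rounding choices are illustrative.

```javascript
// Hypothetical platform-specific pricing rules.
const MIN_PRICE = 1.0; // safety floor - never sync a price below this

function applyPricingRules(price, targetPlatform) {
  let adjusted = Math.max(Number(price), MIN_PRICE);
  if (targetPlatform === 'woocommerce') {
    // Psychological pricing: round up to the next whole number, minus a cent
    adjusted = Math.ceil(adjusted) - 0.01;
  } else {
    // Shopify: standard rounding to two decimals
    adjusted = Math.round(adjusted * 100) / 100;
  }
  // Re-apply the floor in case the .99 adjustment dipped below it
  return Number(Math.max(adjusted, MIN_PRICE).toFixed(2));
}
```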
### Troubleshooting Guide

| Issue | Possible Cause | Solution |
| :--- | :--- | :--- |
| Sync not triggering | Webhook not registered correctly. | Check the webhookId in the Shopify/WooCommerce trigger nodes and ensure the apps have permission to send events. |
| Google Sheets error | Sheet ID or column names mismatched. | Verify the documentId and ensure the id column exists in your sheet for matching. |
| Prices not rounding as expected | Code node logic error. | Review the JavaScript in Apply Platform-Specific Rules to ensure the Math functions match your strategy. |
| Emails not sending | Gmail OAuth2 expired. | Re-authenticate your Gmail credentials in the n8n settings. |

### Need Help?
If you need assistance adjusting the psychological pricing code, adding more platforms (like Amazon or eBay), or setting up advanced Slack notifications, please reach out to our n8n automation experts at WeblineIndia. We can help scale this workflow to manage thousands of SKUs with high precision.